Category Archives: IoT

Pharma meets the Internet of Things

Pharma meets the Internet of Things: some commented references for this future trend. Use them to understand where it is heading.

[0] After the IoT comes Gaia

There are two realms of computation, which should and will become one: the IT technology and biochemistry.

General stuff

The notion of computation is now well known: we speak about what is computable and about various models of computation (i.e. how we compute), which have always turned out to be equivalent, in the sense that they give the same class of computable things (that is the content of the Church-Turing thesis).

It is interesting though how we compute, not only what is computable.

In IT perhaps the biggest (and socially relevant) problem is decentralized asynchronous computing. So far there is no really working model of computation which is:
– local in space (decentralized)
– local in time (asynchronous)
– with no pre-imposed hierarchy or external authority which forces coherence

In biochemistry, people know that we, anything living, are molecular assemblies which work:
– local in space (all chemical interactions are local)
– local in time (there is no external clock which synchronizes the reactions)
– random (everything happens without any external control)

Useful links for an aerial view on molecular computing, seen as the biochemistry side of computation:


Some history and details are provided. A quote from the end of the section “Biochemistry-based information technology”:

“Other experiments have shown that basic computations may be executed using a number of different building blocks (for example, simple molecular “machines” that use a combination of DNA and protein-based enzymes). By harnessing the power of molecules, new forms of information-processing technology are possible that are evolvable, self-replicating, self-repairing, and responsive. The possible applications of this emerging technology will have an impact on many areas, including intelligent medical diagnostics and drug delivery, tissue engineering, energy, and the environment.”


A detailed historical view (written in 2000) of the efforts towards “molecular electronics”. Mind that this is not the same subject as [1], because the effort here is to use biochemistry to mimic silicon computers. While [1] also contains such efforts (building logical gates with DNA, etc.), DNA computing also proposes a more general view: building structure from structure, as nature does.


Two easy-to-read articles about real applications of molecular computing:
– “Microscopic machine mimics the ribosome, forms molecular assembly line”
– “Biological computer can decrypt images stored in DNA”


Article about Craig Venter from 2016, found by looking for “Craig Venter Illumina”. Other informative searches would be “Digital biological converter” or anything “Craig Venter”


Interesting talk by an interesting researcher Lee Cronin

[6] The Molecular Programming Project

Worth browsing in detail to see the various trends and results.

Sitting in the middle, between biochemistry and IT:

[1] Algorithmic Chemistry (Alchemy) of Fontana and Buss

Walter Fontana today:

[2] The Chemical Abstract Machine by Berry and Boudol

[3] Molecular Computers (by me, part of an Open Science project, see also my homepage and the chemlambda github page )

On the IT side there’s a beautiful research field, starting of course with lambda calculus by Church. Later on this evolved in the direction of rewriting systems, then graph rewriting systems. I can’t even begin to list all that has been done in this direction, other than:

[1] Y. Lafont, Interaction Combinators

but see as well Alchemy, which uses lambda calculus!

However, it would be misleading to reduce everything to lambda calculus. I came to the conclusion that lambda calculus and Turing machines are only two among the vast possibilities, and not very important ones. My experience with chemlambda shows that the most relevant mechanism revolves around the triple of nodes FI, FO, FOE and their rewrites. Lambda calculus is obtained by the addition of a pair of A (application) and L (lambda) nodes, along with the standard compatible moves. One might as well use nodes related to a Turing machine instead, as explained in

Everything works just the same. The center, what makes things work, is not related to Logic or Computation as they are usually considered. More later.
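To fix notation, here is a small sketch of how such a molecule could be encoded, loosely in the style of the mol files from the chemlambda repository (one entry per node: node type, then port labels; two nodes sharing a port label are connected). The arities table and the sample wiring below are illustrative assumptions, not the normative format.

```python
# Sketch of a chemlambda-style molecule encoding.
# Node types and arities loosely follow chemlambda; the sample
# molecule and port names are illustrative assumptions.

ARITY = {"L": 3, "A": 3, "FI": 3, "FO": 3, "FOE": 3, "Arrow": 2, "T": 1}

# Each entry is (node_type, [port labels]); shared labels are edges.
molecule = [
    ("L", ["a", "b", "c"]),   # lambda node (ports illustrative)
    ("A", ["c", "d", "e"]),   # application node
    ("FO", ["b", "d", "e"]),  # fan-out node
]

def edges(mol):
    """Return port labels shared by exactly two node ports."""
    seen = {}
    for i, (_, ports) in enumerate(mol):
        for p in ports:
            seen.setdefault(p, []).append(i)
    return {p: nodes for p, nodes in seen.items() if len(nodes) == 2}

def check_arities(mol):
    """Every node must carry exactly as many ports as its type allows."""
    return all(len(ports) == ARITY[t] for t, ports in mol)

print(check_arities(molecule))   # True
print(sorted(edges(molecule)))   # the internal edges of the molecule
```

Note that nothing here says which graphs are "legal": any list of well-formed nodes is a molecule, which is exactly the point made above.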

Mol language and chemlambda gui instead of html and web browsers: a new Net service?

The WWW is an Internet system, based on the following ingredients:

  • web pages (written in html)
  • a (web) browser
  •  a web server (because of the choice of client-server architecture)

Tim Berners-Lee wrote those programs. Then the WWW appeared and exploded.

The force behind this explosion comes from the separation of the system into independent parts. Anybody can write a web page, anybody who has the browser program can navigate the web, anybody who wants to make a web server needs basically nothing more than the program for that (and the  previously existing  infrastructure).

In principle it works because of the lack of control over the structure and functioning.

It works because of the separation of form from content, among other clever separations.

It is so successful, it is under our noses, but apparently very few people think about the applications of the WWW ideas in other parts of the culture.

Separation of form from content means that you have to acknowledge that meaning is not what rules the world. Semantics has only a local, very fragile existence; you can’t go too far if you build on semantics.

Leave the meaning to the user, let the web client build his meaning from the web pages he can access via his browser. He can access and get the info because the meaning has been separated from the form.

How about another Net service, like the WWW, but which does something different, which goes to the roots of computation?

It would need:

  • artificial molecules instead of web pages; these are files written in a fictional language called “Mol”
  • a gui for the chemlambda artificial chemistry, instead of a web browser;  one should think about it as a Mol compiler & gui,
  • a chemical server which makes chemical soups, or broths, leaving the reduction algorithm to the users;

This Mol language is an idea which holds some potential, but which needs a lot of pondering, because the “language” idea has bad effects on computation.
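To make the idea concrete, here is a minimal sketch of the front end such a Mol compiler might need: parsing a mol-style text (one node per line, node type followed by port labels) into a soup of nodes. The grammar (whitespace-separated tokens, # comments) and the sample soup are illustrative assumptions.

```python
# A tiny sketch of a "Mol compiler" front end: turn mol-style text
# into a list of (node_type, ports) pairs. Grammar is an assumption.

def parse_mol(text):
    nodes = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and blanks
        if not line:
            continue
        parts = line.split()
        nodes.append((parts[0], parts[1:]))
    return nodes

soup = parse_mol("""
# two artificial molecules in one soup
A 1 2 3
L 3 4 5
FOE 5 1 2   # illustrative wiring
""")
print(soup)
```

A "chemical server" could then serve such soups as plain files, leaving the reduction algorithm entirely to the client, just as a web server leaves rendering to the browser.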



Updates on the program about artificial chemistry and decentralized computing

I pass directly to the matter.

UPDATE: now there are demos in D3 for some of the things described here.

Where I’m now.  I have a development of the initial chemlambda formalism (introduced in the Chemical concrete machine) which can be coupled with various algorithms for performing reductions, ranging from the most stupid

  • on one machine, run a program which advances in sequential steps, such that each step begins by finding all possible graph rewrites on a given molecule, then chooses, according to a criterion called “priority choice”, a maximal collection of graph rewrites which can be done simultaneously; after the application of all the graph rewrites there is a second stage, when the “COMB” moves are applied, which eliminate the supplementary Arrow elements

to more and more intelligent

  • distribute the stupid strategy to several machines, but keep synchronous global control over it
  • use one or more machines which maintain channels of communication (which pose synchronization problems) between them, i.e. use process algebra style models of computation over the chemlambda formalism
  • on one machine, use a Chemical Reaction Network model of computation by starting the computation with a multiset of molecules and then doing as in the stupid strategy, but with a random ingredient added, like for example choosing randomly a subset of graph rewrites from those possible, or randomly allowing moves performed in the opposite direction. Produce probabilistic results about the distribution of the numbers of molecules which appear. This is of course an extremely expensive model but, since the stupid strategy is very cheap in terms of IT resources, maybe it works.
  • same as previously, but on many machines, a CRN style on each machine, a process algebra style between machines.
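For concreteness, the sequential “stupid” step can be sketched as runnable toy code: find all rewrite sites, pick a maximal non-conflicting set by a priority choice (here simply left to right), apply them all “simultaneously”, then run the COMB stage which removes the leftover Arrow elements. The toy rewrite (an adjacent A, L pair reduces to an Arrow) only stands in for the real chemlambda graph rewrites; the molecule-as-list representation is an illustrative assumption.

```python
# Toy version of the sequential "stupid" reduction step:
# match -> priority choice of a maximal conflict-free set ->
# simultaneous application -> COMB stage removing Arrows.

def find_matches(mol):
    # every adjacent (A, L) pair is a possible rewrite site
    return [i for i in range(len(mol) - 1) if mol[i] == "A" and mol[i+1] == "L"]

def step(mol):
    matches = find_matches(mol)
    chosen, used = [], set()
    for i in sorted(matches):                 # "priority choice": left to right
        if not {i, i + 1} & used:             # keep the set non-overlapping
            chosen.append(i)
            used |= {i, i + 1}
    out = list(mol)
    for i in chosen:                          # apply all rewrites "at once"
        out[i], out[i + 1] = "Arrow", None
    out = [x for x in out if x is not None]
    return [x for x in out if x != "Arrow"]   # COMB stage: remove Arrows

print(step(["A", "L", "FO", "A", "L"]))   # ['FO']
```

The more “intelligent” strategies above keep exactly this step and only change who runs it, and when.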

I claim that all these models don’t add anything really interesting over the stupid model. These are like fashion ornamentation over a bad, but popular design.

None of these additions use in a significant way the advantages of chemlambda, which are:

  • is a purely local graph rewrite system
  • there are no correct molecules (graphs), nor wrong, or illegal ones
  • the graphs are almost never DAGs, nor do they represent a flowchart of a computation
  • hence there is no global “meaning” associated to them
  • the formalism does not work by passing values from here to there (so why one should think to couple chemlambda with something adapted to the sender-wire-receiver paradigm?)
  • the molecules encode a (local) meaning not by their number, but by their shape (therefore why would one want to use CRN for these? )
  • the molecules are not identified with their function, so from the point of view of chemlambda it does not matter if you use a functional programming paradigm or an imperative one

I hold that on the contrary, chemlambda is really close to some of Nature’s workings:

  • Nature does not need meaning to work; this meaning is simply a human point of view, a hash table which simplifies our explanations. To believe that viruses and cells have tasks, or functions, or that they have a representation of themselves, or that they need one: all this is a sterile and confusing, but pervasive, belief.
  • Nature is purely local, in the sense that everything happens by a chain (or a net, or other analogy that our weak minds hungry for meaning propose) of localized interactions (btw this has little to do with the problem of what is space, and more to do with the one of what is a degree of freedom)
  • Nature does not use functions; functionalism is an old and outdated idea which fell into oblivion a long time ago in chemistry, but it is still used everywhere in the hard sciences, especially after the formalization a la Bourbaki (and other blinds), who, significantly, was incapable of touching more natural fields like geometry
  • Nature rarely uses sender-wire-receiver settings, these are, I suppose, the scars of the WW2, when IT started to take shape.
  • Nature does not work by passing values, or numbers, or bits, we do use these abstractions for understanding and we build our computers like this
  • Nature does not have or need a global point of view, we do in our explanations.

However, there is a social side of research which makes the exploration of these familiar models in relation to chemlambda worth pursuing. People believe these models are interesting, so it would be good to see exactly how chemlambda looks with these ornaments on top.

Now let’s pass to really interesting models.

Why put randomness by hand into the model when there is enough randomness in the real world?

Instead of CRNs and process algebra (with its famous parallel composition operation, which is of course just another manifestation of a global point of view), let us simply try to understand how the world looks from the point of view of an individual molecule.

Forget about soups of multisets of anonymous molecules, let’s be individualistic. What happens with one molecule?

Well, it enters into chemical interactions with other molecules, depending on where it is relative to the others (oops, a global pov), depending on many external sources of randomness. From time to time the molecule enters into interaction with another one and, when it does, the chemical reaction is like a graph rewrite on the molecule and the other. This may happen with some randomness as well but, more importantly, it happens in certain definite ways, depending on the chemical composition and shape of the molecule.

That is more or less all, from the chemical point of view.

OK then, let’s ignore the randomness, because anyway in the real world there are many sources of randomness, there is plenty of it, and let’s try to make a model of one individual molecule which behaves in certain ways (does some graph rewrites on the complex formed by itself and the other molecule(s) from the chemical reaction).

In other words, let’s make an actor molecule.

Mind that from the point of view of one molecule there is no global state of the world.

This is what is proposed in section 3 of the article GLC actors, artificial chemical connectomes, topological issues and knots. Not for chemlambda, but for the GLC, the distant parent.
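A minimal sketch of such an actor molecule: each actor holds only its own piece of the graph and the set of its neighbours, and an interaction is a local, randomly fired rewrite on the pair, with no global state anywhere. The class layout and the toy rewrite (the shared pattern is consumed on both sides) are illustrative assumptions, not the formalism of the article.

```python
import random

# An "actor molecule": purely local state, purely local interactions.
# Structure and rewrite are illustrative assumptions.

class ActorMolecule:
    def __init__(self, name, piece):
        self.name = name
        self.piece = piece        # this actor's part of the big graph
        self.neighbours = set()   # actors it shares an arrow with

    def interact(self, other, rng):
        """A chemical reaction: a rewrite on the pattern split
        between the two actors, fired with some randomness."""
        if rng.random() < 0.5:    # the reaction may or may not fire
            return False
        if self.piece and other.piece:
            # toy rewrite: the shared pattern is consumed on both sides
            self.piece.pop()
            other.piece.pop()
            return True
        return False

rng = random.Random(0)            # seeded, so the run is reproducible
a = ActorMolecule("a", ["A", "FO"])
b = ActorMolecule("b", ["L", "FOE"])
a.neighbours.add(b); b.neighbours.add(a)
for _ in range(10):
    a.interact(b, rng)
print(a.piece, b.piece)           # [] []
```

Note that nothing in the loop inspects any global state: the molecule only ever sees its neighbour, which is the whole point.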

What is the relevance? You can use this for several purposes.

  • at once, this gives a decentralized computing system based on artificial chemistry which is very close to Nature’s way, therefore
  • it is good to try it on the Net
  • and it is good to try it in the real world (provided we identify real chemical reactions which are like the chemlambda graph rewrites, something I believe is true)
  • and moreover it is good to try it as an ingredient in the future IoT, where we can import the amazing idea of Craig Venter: send a printer of lifeforms to Mars, then send by radio any DNA encoding from Earth to Mars. Why not do the same entirely on Earth? Imagine: the real world has a chemistry, the virtual one has chemlambda, therefore we can pass smoothly from one to the other because they are based on the same principles. The technology behind the IoT would then be a giant, worldwide distributed Venter Printer, in one sense, coupled with a worldwide sensor network (phones, cameras, fridges and whatnot) which converts the data acquired from the real world into the chemlambda format of the virtual one.

That’s the first batch of uses. There are others, maybe less ambitious but more easily attainable.

  • would you want to make a realistic neural network? We may try it by making a fine-grained neuron and distributing it over the network. Indeed, each real neuron is the host of a collection of chemical reactions, from the synaptic cleft, to the soma, to the next synaptic cleft.
  • too ambitious maybe, so let’s restrict to something simpler: can we do Turing neurons in chemlambda? Sure, done already.

That is more of a principle of organizing computations, and a path to pursue.

  • or maybe we want to have a worldwide distributed universal computer. It will not be very fast, but the purpose is not to be fast; it is to be distributed. I call this “the Soup”. Imagine: everybody who wants to be part of it just downloads a collection of scripts and puts it somewhere which has an URI (like a web page). There is no sneaky behaviour there, no net bots or other freaky ideas which would transgress any privacy. Each participant will become like an actor molecule (or maybe 10, or 100, depending on the burden of the scripts on the computer). Anybody could launch a designed molecule into the Soup, starting a network of reactions with the other molecules from the Soup (i.e. with the ones which are stored as states of the actors in other computers). The communication between computers will be decentralized and will not make much sense to a snooper anyway, which brings me to other possible applications
  • the first one is something like a distributed homomorphic encryption service. Big words, but what it really means is that the Soup could offer this service by default. Some said that this would be a form of obfuscation, because you take a program (like a lambda term) and then execute it in this weird chemlambda. But this is not at all correct because, recall, chemlambda has only little to do with lambda terms (a thing which I cannot stress enough), there is no correct or illegal molecule (differently from any other formalism which superficially resembles chemlambda, like GOI or ZX), and there is no global meaning, nor passing of values which could be intercepted. No, the “encryption” works like in Nature where, for example, the workings of a living cell are “encrypted” by default, by the fact that they carry no meaning; they are purely local and decentralized.

The list of possible applications forked; let me go back to the previous one.

  • it is possible that not neurons, which are big chemical entities, but much smaller ones, like bio molecules, may be better understood this way. Maybe some chemlambda-style workings are relevant at the genetic level. More concretely, perhaps one can identify real molecules, or use DNA and enzymes, to do chemlambda in reality. Better still, and more realistic I think, maybe chemlambda is only a proof of principle that it is possible to understand life processes at the level of molecules, not in the usual way. The usual way consists largely in trying to make sense, attribute tasks and functions, and do probabilistic calculations on huge quantities of data collected in real chemistry. This new way would consist in the exploration of the molecules as embodied abstractions, embodied programs… Oh, it does not sound original enough, let me try again.  Compare with the Alchemy of Fontana and Buss. In that amazing research program they propose that molecules are like lambda terms, reactions are like the application operation and active chemical sites are like abstraction. In chemlambda, application and abstraction are not operations but atoms, or small parts of molecules. And the molecules are not only lambda terms, but much more varied. Chemical reactions are like reductions in lambda calculus or, more generally, they are graph rewrites. Therefore, even if we restrict to molecules which have to do with lambda terms, we see that in chemlambda application and abstraction are embodied: they are made of matter, be it atoms or molecules. In Alchemy the function of the molecule is the normal form of the lambda term it represents. In chemlambda there is no function, in the sense that there may actually be several, or none, depending on the interaction with other molecules.  I believe that this view is closer to Nature than the classical, pervasive one, and that it might help the understanding of the inner workings of bio molecules.

What needs to be done? Many things:

  • if we use chemlambda with a model of choice, ranging from stupid to intelligent, there are nevertheless new things to learn if you want to program with it. Just by looking at the exploration of the stupid model, started with the scripts made by this mathematician (see the github repo), one gets the feeling of being overwhelmed in front of a new continent. There are many questions to be answered, like: what is the exact rigorous relation between recursion and self-reproduction, seen in chemlambda? how to better program without passing values, as chemlambda proposes? how to geometrize this sort of computation, given that, when translated to chemlambda, many programs are made mostly of currying and uncurrying and other blind procedures which are really not necessary in this model? what is the exact relation between the various priority choices and the various evaluation strategies in usual programming? how to program without functions, i.e. without extensionality?

CS professionals are needed in order to answer such questions.

  • visualizations of the graphs (molecules) and their rewrites are not needed for the formalism to work, but they are helpful for learning how to use it. Like it or not, most of our brains process non-linguistic stuff and, as you know, an image is worth a thousand words. Finding good ways to visualize chemlambda and the various models helps in learning to use it, and offers as well a bridge towards less IT-sophisticated researchers, like chemists or biologists.

Help me to build a better chemlambda gui. Step by step, according to opportunistic needs of the explorers, there is no blueprint to execute here.

  • as for the decentralized computing fork of the project, this is not hard to do in principle, or at least this is how it appears to this mathematician. However, in practice there are certainly ways better than others to do this.

Again, CS guys are needed here. But leave at the door, please, process calculi, CRNs and other ornaments, and after that look at the body under the dress. Does the dress fall well over that body and, if not, what is to be done? Ornaments are only a cheap way to trick the customer.

  • for the real chemistry branch, real chemists are needed. This is way outside my competence, but it may be helpful for you, the biochemist. If so, then I have something to learn as well, and maybe you’ll see that a mathematician is much more useful than only as a source of equations and correct probabilistic calculations.

Real chemists, with labs and knowledge about this, are needed here! Let’s discuss less about making molecules do boolean algebra and more about making them into embodied programs.

What do I need for the program? Money, of course. Funds. Brain time. Code. Proofs. Experiments.


Why process calculi are old industrial revolution thinking: the example with the apple pies

I have strong ideological arguments against process calculi, exactly because of the parallel composition. I think that parallel composition is not realistic, because there is no meaning in the “parallel” unless you have a God view over the distributed computation.

This is a very brute argument, but I can make it detailed (and I did it, here and there in this open notebook).

In my opinion we are still in the process of letting go of the old ideas of the industrial revolution. The main idea which we need to exorcise out of every corner of the mind is that there is a benevolent (or not) dictator who organizes the process of the world (be it a factory, a government, a ministry, or a school class) in a way which is easy to lead, because it has well-placed bottlenecks which give a global meaning to the process.

Concretely, the very successful idea of organizing stuff, which comes from the industrial revolution, is that one has to abstract over the individuals, the subjects, then to stream the interactions between them (the individuals abstracted into functions) by creating a hierarchy of bottlenecks. The advantage is that the structure gives a meaning to what is happening.

A meaning is simply like a hash table.

The power of this system of organization is tremendous.  It led to the creation of the modern states, as well as to the creation of economic and ideological systems, like capitalism and communism, which are both alike in the way they treat individuals as abstractions.

This kind of organisation pervades everything, in very concrete and punctual ways, so much so that the material structure which holds together our society (in particular the server-client structure of the net, as a random example, but less so some of the net protocols) has grown the way it is not only because some universal laws and invisible hands constrain it, but also because this structure is an accumulation of a myriad of components which have been designed this way and not another because of the industrial-revolution ideology of control and abstraction.

The power of the industrial revolution’s main idea is that you can take any naturally occurring process (like apples growing in trees and people picking them and making pies), structure it in a meaningful way and transform it into a viral process (an apple pie factory). You just have to abstract apples and people into resources, synchronize the various parts, define the inputs and outputs, then optimize your control over it, and then you can make 10^9 evaluations of the abstract notion of “apple pie” and put them on the shelves of the supermarket, instead of the 10^3 individual apple pies grandmothers used to make in their kitchens.

Now, in the factory of apple pies, the notion of parallel processes makes perfect sense. Contrary to that, in the real world, with trees and apples and grandmothers with their ovens, P | Q makes sense only in retrospect.

If you were God then you could look from far above at all these grandmas and see lots of P | Q. But the grandmas don’t need the parallel composition to make their delicious apple pies. Moreover, the way of life is that generally there is no need for centralized control, no need for a meaning. Viruses and cells don’t know they are viruses and cells. They work very well without knowing that they perform some tasks inside an environment.

The life of one cell may be in parallel with the life of another from God’s point of view, but this relation is certainly not a part of, nor a need for, these life processes to function.
The big question for me is: how to replicate this by techne? It is clearly possible, as proved by the world we live in. It looks to me very promising to try to work under these self-imposed constraints: no meaning, no parallel composition in particular, no abstraction, no levels. It is surprising that chemlambda works at all already.



Visual tutorial for “the soup”

I started here a visual tutorial for chemlambda and its gui in the making. I call it a tutorial for the “soup” because it is about a soup of molecules. A living soup.

Hope that in the near future it will become THE SOUP. The distributed soup. The decentralized living soup.

Bookmark the page because content will be added on a daily basis!


List of Ayes/Noes of artificial chemistry chemlambda

List of noes

  • distributed (no unique place, no external passive space)
  • asynchronous (no unique time, no external global time)
  • decentralized (no unique boss, no external acyclic hierarchy)
  • no semantics (no unique meaning, no signal propagation, no values)
  • no functions (no vitalism)
  • no probability


List of ayes


Questions/Answers Session: on chemlambda and computing models

I open a session on no-nonsense chemlambda and distributed GLC.

This will NOT be made public, only by private mail messages.

If you want to hear more:

  • precise
  • structured
  • advanced (i.e. not presented at chorasimilarity)

then mail me at and let’s talk about  parts you don’t get clearly.

Looking forward to hearing from you,

Marius Buliga



Open notebook science for everyone, done by everyone

I am deeply impressed by the post:

Jean Claude Bradley Memorial Symposium; July 14th; let’s take Open Notebook Science to everyone

Here are some quotes:

Jean-Claude Bradley was one of the most influential open scientists of our time. He was an innovator in all that he did, from Open Education to bleeding edge Open Science; in 2006, he coined the phrase Open Notebook Science. His loss is felt deeply by friends and colleagues around the world.

“Science, and science communication is in crisis. We need bold, simple visions to take us out of this, and Open Notebook Science (ONS) does exactly that. It:

  • is inclusive. Anyone can be involved at any level. You don’t have to be an academic.
  • is honest. Everything that is done is Open, so there is no fraud, no misrepresentation.
  • is immediate. The science is available as it happens. Publication is not an operation, but an attitude of mind
  • is preserved. ONS ensures that the record, and the full record, persists.
  • is repeatable or falsifiable. The full details of what was done are there so the experiment can be challenged or repeated at any time
  • is inexpensive. We waste 100 Billion USD / year of science through bad practice so we save that immediately. But also we get rid of paywalls, lawyers, opportunity costs, nineteenth century publishing practices, etc.”

Every word is true!

This is the future of the research communication. Or at least the beginning of it. ONS has open, perpetual peer review as a subset.

Personal notes.  Look at the left upper corner of this page, it reads:

chorasimilarity | computing with space | open notebook.

Yay! The time is coming! The weirdos who write on arXiv, now figshare, who use open notebooks, all as a replacement for legacy publication, will soon be mainstream 🙂

Now, seriously, let’s put some gamification into it, so those who ask “what IS a notebook?”  can play too. They ARE the future. Hope that soon the Game of Research and Review, aka playing  MMORPG  games at the knowledge frontier, will emerge.

There are obvious reasons for that:

  • the smartphone frees us from staying in one physical place while we surf the virtual world
  • which has as an effect that we rediscover that physical space is important for our interactions, see  Ingress
  • gamification of human activities is replacing the industrial-era habits (pyramidal, static organizations, uniformization, identification of humans with their functions (worker, consumer, customer, student) and with their physical location (this or that country or city, students in the benches, professors at the chair, payment for working hours and for staying at the counter), legacy publishing).

See also Notes for “Internet of Things not Internet of Objects”.





The example with the marbles

In a discussion about the possible advantages of secure computing with the GLC actors model, I came up with this analogy, which I want to file here so it doesn’t get lost in the flow of exchanges:

Mind that this is only a thought experiment, which might not be accurate in all aspects of its representation of the kind of computation done with GLC or, more accurately, with chemlambda.

Imagine a large pipe, with a diameter of 1 m, say, and 3 m long, to have an image. It is full of marbles, all identical in shape. It is so full that if one forces a marble in at one end, then a marble (or sometimes more) has to come out at the other end.

Say Alice is at one end of the pipe and Bob is at the other. They agreed previously to communicate in the most primitive manner, namely by spilling a small number (say around ten) or a big number (for example around fifty) of marbles at their respective ends. The pipe contains maybe 10^5 or 10^6 marbles, so both these numbers are small.

There is also Claire who, for some reason, can’t see the ends of Alice and Bob, but the pipe has a window at the middle and Claire can see about 10% of the marbles from the pipe, those which are behind the window.

Let’s see how the marbles interact. Having the same shape, and because the pipe is full of them, they are in a local configuration which minimizes the volume (maybe not all of them, but here the analogy is mum about this). When a marble (or maybe several) is forced in at Alice’s end of the pipe, there are lots of movements which accommodate the new marbles with the old ones. The physics of marbles is known: it is the elastic contact between them, and there is a fact in the platonic sky which says that for any local portion of the pipe the momentum and energy are conserved, as well as the volume of the marbles. The global conservation of these quantities is an effect of the local ones (as anybody versed in continuum mechanics can confirm).

Now, Claire can’t get anything from looking through the window. At best Claire notices complex small movements, but there is no clear way to tell how this happens (other than, if she looks at a small number of them, she might figure out the local mechanical ballet imposed by the conservation laws), nor are Alice’s marbles marching towards Bob’s end.

Claire can easily destroy the communication, for example by opening her window and getting out some buckets of marbles, or even by breaking the pipe. But this is not getting Claire closer to understanding what Alice and Bob are talking about.

Claire could of course claim that if the whole pipe were transparent, she could film it and then reconstruct the communication. But in that case Claire would be the goddess of the pipe and nothing would be hidden from her. Alice and Bob would be her slaves, because Claire would be in a position equivalent to having a window at each end of the pipe.
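The pipe can be simulated as a toy, with sizes scaled down from the 10^5 to 10^6 marbles of the text. Since the marbles are identical, Claire's 10% window carries no information about Alice's pushes: it looks exactly the same before and after the spills, while Bob still receives the message as counts. All numbers below are illustrative.

```python
from collections import deque

# Toy pipe full of identical marbles: a bounded deque, where pushing
# a marble in at Alice's end forces one out at Bob's end.

PIPE = 1000
pipe = deque("o" * PIPE, maxlen=PIPE)   # full pipe of identical marbles

def alice_pushes(n):
    """Push n marbles in at Alice's end; n marbles fall out at Bob's."""
    out = []
    for _ in range(n):
        out.append(pipe[-1])            # the marble about to fall out
        pipe.appendleft("o")            # maxlen evicts from Bob's end
    return out                          # what Bob receives

def claire_window():
    """Claire sees only the middle ~10% of the pipe."""
    mid = PIPE // 2
    return list(pipe)[mid - PIPE // 20 : mid + PIPE // 20]

before = claire_window()
message = [10, 50, 10]                  # small spill / big spill / small
received = [len(alice_pushes(n)) for n in message]
after = claire_window()

print(received)          # [10, 50, 10] -- Bob counts the spills
print(before == after)   # True -- Claire's window looks unchanged
```

Of course the real pipe has the transient mechanical ballet, which this toy drops entirely; the point it keeps is that the observable middle carries no message.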



In this analogy:

  • each marble is a GLC actor
  • they interact locally, by known and simple rules
  • this is an example of signal transduction
  • the communication encrypts itself: more communication makes the decoding harder. It is the same problem which is encountered when observing a living system, for example a cell. You may freeze it (and therefore kill it) and look at it, but you won’t see how it functions. You can observe it alive, but it is complex by itself; you never see more than rare glimpses of meaning.
  • the space (of the pipe) is an emergent effect of the local, decentralized, asynchronous interactions.

Underneath there is just local interaction, via the moves which act on patterns of graphs split between actors. But this locality gives space, which is an emergent, global effect of these distinctions which communicate.

Two chemical molecules which react are one composite molecule which reduces itself, split between two actors (one per molecule). Saying that the molecules react when they are close is the same as saying that their associated actors interact when they are in the neighbouring relation. And the reaction modifies not only the respective molecules, but also the neighbouring relation between actors, i.e. the reaction makes the molecules move through space. The space is transformed, as well as the shape of the reactants, which looks from an emergent perspective as if the reactants move through some passive space.

Concretely, each actor has a piece of the big graph; two actors are neighbours if there is an arrow of the big graph which connects their respective pieces. The reduction moves can be applied only on patterns which are split between two actors and, as an effect, the reduction moves modify both the pieces and the arrows which connect the pieces, thus the neighbouring of actors.

What we do in the distributed GLC project is to use actors to transform the Net into a space. It works exactly because space is an effect of locality, on one side, and of universal simple interactions (moves on graphs) on the other side.
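A very rough sketch of this bookkeeping, assuming a toy encoding (actor-owned “pieces” as sets of ports, arrows of the big graph as pairs of ports) and a placeholder rewrite that is not one of chemlambda’s actual moves:

```python
# Toy bookkeeping: each actor owns a "piece" (a set of ports of the big
# graph); arrows are pairs of ports; two actors are neighbours when an
# arrow connects their pieces. The rewrite below is a placeholder move,
# NOT one of chemlambda's actual graph rewrites.

class Actor:
    def __init__(self, name, piece):
        self.name, self.piece = name, set(piece)

actors = {
    "a": Actor("a", {"a.x1", "a.x2"}),
    "b": Actor("b", {"b.y1"}),
    "c": Actor("c", {"c.z1"}),
}
# arrows of the big graph: a.x1 -> a.x2 -> b.y1 -> c.z1
arrows = {("a.x1", "a.x2"), ("a.x2", "b.y1"), ("b.y1", "c.z1")}

def port_actor(port):          # "a.x2" -> "a"
    return port.split(".")[0]

def neighbours(p, q):
    return any({port_actor(s), port_actor(t)} == {p, q} for s, t in arrows)

def toy_move(p, q):
    """Reduce the pattern split between neighbours p and q: erase the two
    ends of their shared arrow and wire their outer connections together."""
    global arrows
    s, t = next((s, t) for s, t in arrows
                if {port_actor(s), port_actor(t)} == {p, q})
    into = next(e for e in arrows if e[1] == s)
    out = next(e for e in arrows if e[0] == t)
    arrows = (arrows - {(s, t), into, out}) | {(into[0], out[1])}
    actors[port_actor(s)].piece.discard(s)
    actors[port_actor(t)].piece.discard(t)

assert neighbours("a", "b") and not neighbours("a", "c")
toy_move("a", "b")
# the move changed both the pieces and the neighbouring relation:
assert neighbours("a", "c") and not neighbours("a", "b")
```

The final assertion is the whole point: a purely local rewrite also rewires who is a neighbour of whom, which is what “the reaction makes the molecules move through space” means here.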


Morlocks and eloi in the Internet of Things

For any fan of Neal Stephenson and Cory Doctorow,  the contents of the following opinion piece on goals and applications of the Internet of Things (IoT) should be no great surprise.

I am using the post Technical Machine – Designing for Humans as a study case.

[ Technical Machine is the company which builds the Tessel. This is a product with great potential! I wish I could use Tessels for the purpose explained in the post Experimental alife IoT with Tessel. ]

This nice post is interesting in itself, but it is also an example of the shifting of the ideology concerning the Internet of Things.

I extract two contradictory quotes from the post and then I discuss them (and explain why they seem to me contradictory).

(1) ” A completely interactive tool, one that seamlessly incorporates humans as a piece of the system, is a tool that people don’t even think about. That’s the end goal: Ubiquitous Computing as Mark Weiser imagined it. Every object is an embedded device, and as the user, you don’t even notice the calm flow of optimization.
The Nest thermostat is a good example of this sort of calm technology. The device sits on your wall, and you don’t spend much time interacting with it after the initial setup. It learns your habits: when you’re home, when you’re not, what temperatures you want your house to be at various points in the day. So you, as the user, don’t think about it. You just live in a world that’s better attuned to you.”


(2) “I think that one of the most interesting things we’ll see in the near future is the creation of non-screen interfaces. Interacting with technology, we rely almost solely on screens and buttons. But in the physical world, we use so many other interfaces. […] there’s a lot of fascinating work going on to receive outputs from humans. […] The implications there are amazing: you can wire up your own body as an electrical input into any electrical system– like a computer, or a robot, or whatever else you might build. You can control physical and digital things just by thinking really hard or by twitching your fingers.”


Now the discussion. Why are (1) and (2) contradictory?

I shall explain this by using the morlocks/eloi evocative oversimplification.

For historical reasons, maybe, the morlocks (technical nerds) are trained/encouraged/selected to hate discussions, and human exchanges and interactions in general. Their dream technology is one like in (1), i.e. one which does not talk with the humans, but quietly optimizes (from the morlock pov) the eloi environment.

On the contrary, the eloi love to talk, love to interact with one another. In fact the social Net is a major misuse of morlock technology by the eloi. Instead of a tool for fast and massive sharing of data, as the morlocks designed it, the Net became a very important (most important?) fabric of human interactions, exchanging lolcat images and the sweet little nonsenses which form the basis of everyday empathic interaction with our fellow humans. And much more: the eloi prefer to use this (dangerous) tool for communicating, even if they know that the morlocks are sucking big data from them. The eloi would prefer by far not to be in bayesian bubbles, but that’s life: they opportunistically use things whose workings they don’t understand, despite being told to be more careful.

Quote (2) shows that people start to think about the IoT as an even more powerful tool of communication. OK, we have this nice technology which baby-sits us, and we live calm lives because the machine quietly optimizes the little details without asking us. But think that we can use the big IoT machine for more than conversations. We can use it as the bridge which unites the virtual and the meat spaces: we can make real things from discussions and we can discuss about real objects.

This is a much more impressive application of the IoT than the one which optimizes our daily life. It is something which would allow us to make our dreams come true, literally! And collaboratively.

I have argued about this before, noticing that “thing” means both an assembly and a discussion (idea taken via Kenneth Olwig) and that an object is nothing but the result, or outcome, of a discussion, or evidence for a discussion. See more at the post Notes for Internet of Things not Internet of objects.

It’s called “Internet of Things” and not “Internet of Objects” and it seems that morlocks start to realize this.





Experimental alife IoT with Tessel

Here is an idea for testing the mix between the IoT and chemlambda. This post almost qualifies as a What if? one, but not quite, because in principle it might be done right now, not in the future.

Experiments have to start from somewhere in order to arrive eventually at something like a Microbiome OS.

Take Tessel.

Tessel is a microcontroller that runs JavaScript.
It’s Node-compatible and ships with Wifi built in.

Imagine that there is one GLC actor per Tessel device. The interactions between GLC actors may be done partially via the Wifi.

The advantage is that one may overlap the locality of graph rewrites of chemlambda with space locality.


Each GLC actor has as data a chemlambda molecule, with its in and out free arrows (or free chemical bonds) tagged with the names of other actors.

A bond between two actors forms, in the chemlambda+Tessel world, if the actors are close enough to communicate via Wifi.

Look for example at the beta move, done via behaviour 1 of GLC actors. This may involve up to 6 actors, namely the two actors which communicate via the graphic beta move and at most 4 other actors which have to modify their tags with neighbouring actors’ names as a result of the move. If all are locally connected by Wifi then this becomes straightforward.

What would be the experiment, then? To perform distributed, decentralized computations with chemlambda (in particular to do functional programming in a decentralized way) which are also sprawled over the physical world. The Tessel devices involved in the computation don’t all have to be within Wifi reach of one another; on the contrary, only local connections would be enough.

Moreover, the results of the computations could have physical effects (in the sense that the states of the actors could produce effects in the real world) and, conversely, the physical world could be used as input for the computation (i.e. the sensors connected to the Tessel devices could modify the state of an actor via a core-mask mechanism).
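To make the setup concrete, here is a sketch in Python (standing in for the JavaScript that would actually run on a Tessel); the class, the range constant and the port tags are all invented for illustration. What it encodes is the overlap of the two localities: an interaction needs both a shared bond in the graph and Wifi reachability in physical space.

```python
# Python stand-in for the JavaScript that would run on a Tessel device;
# DeviceActor, WIFI_RANGE and the bond tags are invented for illustration.

import math

class DeviceActor:
    def __init__(self, name, pos, molecule, free_bonds):
        self.name = name
        self.pos = pos                 # physical (x, y) position of the device
        self.molecule = molecule       # the actor's chemlambda molecule (a stub)
        self.free_bonds = free_bonds   # free bond tag -> neighbour actor's name

WIFI_RANGE = 30.0   # metres, say

def in_wifi_range(a, b):
    return math.dist(a.pos, b.pos) <= WIFI_RANGE

def can_interact(a, b):
    """An interaction needs BOTH localities: a shared bond in the graph
    and Wifi reachability in physical space."""
    return b.name in a.free_bonds.values() and in_wifi_range(a, b)

a1 = DeviceActor("a1", (0, 0), "L[...]", {"out": "a2", "aux": "a3"})
a2 = DeviceActor("a2", (20, 0), "A[...]", {"in": "a1"})
a3 = DeviceActor("a3", (500, 0), "FO[...]", {"in": "a1"})

print(can_interact(a1, a2))   # True: graph neighbours, within Wifi reach
print(can_interact(a1, a3))   # False: graph neighbours, but out of reach
```

Sensors and actuators would then read and write the `molecule` field, which is where the real world enters the computation.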

That would play the role of a very primitive, but functional, experimental ancestor of a Microbiome OS.





FAQ: chemlambda in real and virtual worlds

Q1. Is chemlambda different from previous models, like the algorithmic chemistry of Fontana and Buss, or the CHAM of Berry and Boudol?

A1. Yes. In chemlambda we work with certain graphs made of nodes (atoms) and bonds (arrows); call such a graph a molecule. Then:

  • (A) We work with individual molecules, not with populations of molecules. The molecules encode information in their shape, not in their number.
  • (B) Unlike in algorithmic chemistry, the application and abstraction operations are atoms of the molecules.
  • (C) There are no variables in chemlambda and there is no need to introduce one species of molecules per variable, like in the previous models.
  • (D) chemlambda and its associated computing model (distributed GLC) work well in a decentralized world; there is no need to have a global space or a global time for the computation.

There are a number of more technical differences, including (non-exhaustively):

  • (E) molecules are not identified with their functions. Technically, chemlambda rejects eta reduction, so even those molecules which represent lambda terms are not identified with their function (as happens when we use eta reduction). This calls for an “extreme” functional programming style.
  • (F) only a small part of the chemlambda molecules correspond to lambda terms (there is a lambda calculus “sector”).
  • (G) there is no  global semantics.
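For illustration only, here is one minimal way such a molecule might be stored: typed atoms plus bonds, and no variables anywhere. The node types echo chemlambda’s (L for lambda/abstraction, A for application, FO for fan-out), but this encoding is a sketch, not the official one.

```python
# Illustration only: typed atoms plus bonds, no variables anywhere.
# Node types echo chemlambda's (L = lambda/abstraction, A = application,
# FO = fan-out), but this encoding is a sketch, not the official one.

molecule = {
    "nodes": {1: "L", 2: "A", 3: "FO"},   # atom id -> atom type
    "bonds": [(1, 2), (2, 3), (3, 1)],    # arrows between atoms
}

def atoms_of_type(mol, t):
    return [i for i, typ in mol["nodes"].items() if typ == t]

# Information lives in the shape of this single graph, not in how many
# copies of it exist -- property (A) above.
print(atoms_of_type(molecule, "A"))   # [2]
```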

Q2. Is chemlambda a kind of computing with chemical reaction networks (CRNs)?

A2. No. Superficially there are resemblances, and one may indeed imagine CRNs based on chemlambda, but this is not what we are doing.


Q3. Why do you think chemlambda has something to tell about the real or even about the living world?

A3. Because the real world, in its fundamental workings, does not seem to care about the 1D, language-based constructs which we cherish in our explanations. The real, and especially the living, world seems to be based on local, asynchronous interactions which are much more like signal transduction and much less like information passing. (See How is different signal transduction from information theory? )
Everything happens locally, in nontrivial but physical ways, leading to emergent complex behaviours. Nature does not care about coordinates and names of things or abstractions, unless they are somehow physically embodied. This is the way chemlambda functions.

Q4. Why do you think chemlambda has something to say about the virtual  world of the Net?

A4. Because it puts the accent on alife instead of AI, on decentralization instead of pyramidal constructs. A microbial-ecology-like internet is a much more realistic hope than one based on benevolent pyramidal AI constructs (be they clouds or other corporate constructs). Because real and virtual worlds are consistent only locally.

Q5. What about the Internet of Things?

A5. We hope to give the IoT the role of a bridge which unites two kinds of computation, real and virtual, under the same chemistry.

Q6. What would happen in your dream world?

A6. There are already some (fake) news about it here: what if





Autodesk releases SeaWater (another WHAT IF post)

[ This is another WHAT IF post which responds to the challenge formulated in Alife vs AGI. You are welcome to suggest another one or to make your own. ]

The following is a picture of a random splash of sea water, magnified 25 times [source]


It could just as well be a representation of the state of the IoT in a small neighbourhood of you, according to the press release describing SeaWater, the new product of Autodesk.

“SeaWater is a design tool for the artificial-life-based decentralized Internet of Things. Each of the tiny plankton beings which appear in the picture is actually a program, technically called a GLC actor. Each plankton being has its own umwelt, its own representation of the medium which surrounds it. Spatially close beings in the picture share the same surroundings and thus they can interact. Likewise, the tiny GLC actors interact locally with one another, not in real space, but on the Net. There is no real space in the Net; instead, SeaWater represents them closer when they do interact.

SeaWater is a tool for Net designers. We humans are visual beings. A lot of our primate brain’s powers can be harnessed for designing the alife decentralized computing which forms the basis of the Internet of Things.

It improves greatly on the primitive tools which give things like this picture [source]




Context. Recall that the IoT is only a bridge between two worlds: the real one, where life is ruled by real chemistry, and the artificial one, based on some variant of an artificial chemistry, aka chemlambda.

As Andrew Hessel points out, life is a programming language (chemically based),  as well as the virtual world. They are the same, sharing the same principles of computation. The IoT is a translation tool which unites these worlds and lets them be one.

This is the far-reaching goal. But in the meantime we have to learn how to design this. Imagine that we may import real beings, say microbes, into our own unique Microbiome OS. There is no fundamental difference between synthetic life, artificial life and real life, at least at this bottom level.

Instead of aiming for human or superhuman artificial intelligence, the alife decentralized computing community wants to build a world where humans are not treated like bayesian units by  pyramidal centralized constructs.  There is an immense computing power already at the bottom of alife, where synthetic biology offers many valuable lessons.


UPDATE.  This is real: Autodesk Builds Its Own Virus, as the Software Giant Develops Design Tools for Life Itself.

Microbes take over and then destroy the HAL 9000 prototype

Today was a big day for the AI specialists and their brainchild, the HAL 9000. Finally, the decision was made to open the isolation bubble which separated the most sophisticated artificial intelligence from the Net. They expected that somehow their brainchild would survive unharmed when exposed to the extremely dynamic medium of decentralized, artificial-life-based computation we all use every day.

As the video made by Jonathan Eisen shows, in about 9 seconds after the super-intelligence was taken out of quarantine and relayed to the Net, “microbes take over and then destroy the” HAL 9000 prototype.

After the experiment, one of the creators of the HAL 9000 told us: “Maybe we concentrated too much on higher level aspects of the mind. We aim for understanding intelligence and rational behaviour, but perhaps we should learn this lesson from Nature, namely that real life is a wonderful, complex tangle of local, low level interactions, and that rational mind is a very fragile epiphenomenon. We tend to take for granted the infrastructure of life which runs in the background.”

“I was expecting this result” said a Net designer. “The strong point of the Alife decentralized functioning of the Net is exactly this: as the microbes, the Net needs no semantics to function. This is what keeps us free from the All Seeing Eye, corporation clouds based Net which was the rule some years ago. This is what gives everybody the trust to use the Net.”



This is another post which responds to the challenge from Alife vs AGI. You are welcome to suggest another one or to make your own.


From a stain on the wall to five visual languages

Do you know about the “stain on the wall”  creativity technique of Leonardo da Vinci? Here is a quote [source used]:

I will not forget to insert into these rules, a new theoretical invention for knowledge’s sake, which, although it seems of little import and good for a laugh, is nonetheless, of great utility in bringing out the creativity in some of these inventions.    This is the case if you cast your glance on any walls dirty with such stains or walls made up of rock formations of different types.  If you have to invent some scenes, you will be able to discover them there in diverse forms, in diverse landscapes, adorned with mountains, rivers, rocks, trees, extensive plains, valleys, and hills. You can even see different battle scenes and movements made up of unusual figures,  faces with strange expressions,  and myriad things which you can  transform into a complete and proper form constituting part of similar walls and rocks. These are like the sound of bells, in whose tolling, you hear names and words that your imagination conjures up.

I propose to you  five graphical formalisms, or visual languages, towards the goal of “computing with space”.

They all come from a “stain on the wall”, reproduced here (it is the beginning of the article What is a space? Computations in emergent algebras and the front end visual system, arXiv:1009.5028), with some links to more detailed explanations and related material which I invite you to follow.

Or better, to treat them as a stain on the wall. To share, to dream about, to create, to discuss.

In mathematics “spaces” come in many flavours. There are vector spaces, affine spaces, symmetric spaces, groups and so on. We usually take such objects as the stage where the plot of reasoning is laid. But in fact what we use, in many instances, are properties of particular spaces which, I claim, can be seen as coming from a particular class of computations.

There is though a “space” which is “given” almost beyond doubt, namely the physical space where we all live. But as regards the perception of this space, we know now that things are not so simple. As I am writing these notes, here in Baixo Gavea, my eyes are attracted by the wonderful complexity of a tree near my window. The nature of the tree is foreign to me, as are the other smaller beings growing on or around the tree. I can make some educated guesses about what they are: some are orchids, there is a smaller, iterated version of the big tree. However, somewhere in my brain, at a very fundamental level, the visible space is constructed in my head, before the stage where I am capable of recognizing and naming the objects or beings that I see.


The five visual languages are to be used with the decentralized computing model called Distributed GLC.  They point to different aspects, or they try to fulfil different goals.

They are:



Microbiome OS

Your computer could be sitting alone and still be completely outnumbered, for your operating system is home to millions of tiny passengers – chemlambda molecules.

The programs making up the operating system of your computer consist of around ten million lines of code, but you harbour a hundred million artificial-life molecular beings. For every code line in your ancient windows OS, there are 100 virtual bacterial ones. This is your ‘microbiome’ OS and it has a huge impact on your social life, your ability to interact with the Internet of Things and more. The way you use your computer, in turn, affects them. Everything from the forums we visit to the way we use the Internet for our decentralized computations influences the species of bacteria that take up residence in our individual microbiome OS.


Text adapted from the article Microbiome: Your Body Houses 10x More Bacteria Than Cells, which I found by reading this G+ post by Lacerant Plainer.

This is a first example of a post which responds to the challenge from Alife vs AGI. For the convenience of the reader I reproduce it here:

In this post I want to propose a challenge. What I have in mind, rather vague but possibly fun, would be to develop through exchanges a “what if” world where, for example, not AI is the interesting thing when it comes to computers, but artificial biology. Not consciousness, but metabolism; not problem solving, but survival. Also related to the IoT, which is a bridge between two worlds. Now, the virtual world could be as alive as the real one. Alive in the Avida sense, in the sense that it might be like a jungle, with self-reproducing, metabolic artificial beings occupying all virtual niches, beings which are designed by humans, for various purposes. The behaviour of these virtual creatures is not limited to the virtual, due to the IoT bridge. Think: if I can play a game in a virtual world (i.e. interact both ways with a virtual world), then why couldn’t a virtual creature interact with the real world? Humans and social manipulations included.

If you start to think about this possibility, then it looks a bit like this. OK, let’s write such autonomous, decentralized, self-sustained computations to achieve a purpose. It may be any purpose which can be achieved by computation, be it secure communications, money replacements, or low-level AI city management. What stops others from writing their own creatures, for example, for the fun of it, one which writes the name Justin across half the world by building, at the right GPS coordinates, sticks with small mirrors on top, so that from orbit they all shine as the pixels of that name? Recall the IoT bridge and the many effects in the real world which can be achieved by really distributed, but cooperative, computations and human interactions. Next: why not write a virus to get rid of all these distributed joke programs which run at a low level in all phones, antennas and fridges? A virus to kill those viruses. A super-quick self-reproducer to occupy as much as possible of the cheap computing capabilities. A killer of it. And so on. A seed, like in Neal Stephenson, only that the seed is not real, but virtual, and it does not work on nanotechnology, but on any technology connected to the net via the IoT.

Stories? Comics? Fake news? Jokes? Should be fun!



Chemlambda, universality and self-multiplication

Louis Kauffman and I submitted the following article:

M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,   arXiv:1403.8046


The article abstract is:

We present chemlambda (or the chemical concrete machine), an artificial chemistry with the following properties: (a) is Turing complete, (b) has a model of decentralized, distributed computing associated to it, (c) works at the level of individual (artificial) molecules, subject of reversible, but otherwise deterministic interactions with a small number of enzymes, (d) encodes information in the geometrical structure of the molecules and not in their numbers, (e) all interactions are purely local in space and time.

This is part of a larger project to create computing, artificial chemistry and artificial life in a distributed context, using topological and graphical languages.

Please enjoy the nice text and the 21 figures!

In this post I want to explain in a few words what this larger project is, because it is something which is open to anybody to contribute to and play with.

We look at the real living world as something ruled by chemistry. Everywhere in the real world there are local chemical interactions, local in space and time. There is no global, absolute point of view which is needed to give meaning to this alive world. Viruses, bacteria and even prebiotic chemical entities formed the scaffold of this world until very recently, when the intelligent armchair philosophers appeared and invented what they call “semantics”. Before the meaning of objects, there was life.

Likewise, we may imagine a near future where the virtual world of the Internet is seamlessly interlinked with the real world, by means of the Internet of Things and artificial chemistry.

Usually we are presented with a future where artificial intelligences and rational machines give expert advice or make decisions based on statistics of real-life interactions between us humans or between us and the objects we manipulate. This future is one of gadgets, useful objects and virtual bubbles for the generic bayesian human. Marginally, in this future, we humans may chit-chat and ask corporations for better gadgets or for more useful objects. This is the future of cloud computing, that is, centralized distributed computing.

This future world does not look at all like the real world.

Because the real world is not centralized. Because individual entities which participate in the real world do live individual lives and have individual interactions.

Because we humans want to discuss and interact with others more than we want better gadgets.

We think then about a future of a virtual world based on decentralized computing with an artificial chemistry, a world where individual entities, real or virtual, interact by means of an artificial chemistry, instead of being baby-sat by statistically benevolent artificial intelligences.

Moreover, the Internet of Things, the bridge between the real and the virtual world, should be designed as a translation tool between real chemistry and artificial chemistry. Translation of what? Of  decentralized purely local computations.

This is the goal of the project, to see if such a future is possible.

It is a fun goal and there is much to learn and play with. It is by no means something which appeared from nowhere; instead it is a natural project, based on lots of bits of past and present research.


The first thread of the Moirai

I just realized that maybe +Carl Vilbrandt  put the artificial life thread of ideas in my head, with this old comment at the post Ancient Turing machines (I): the three Moirai:

Love the level of free association  between computation and Greek philosophy. Very creative.
In this myth computation = looping cycles of life. by the Greek goddess of Mnemosyne (one of the least rememberer of the gods requires the myth of the Moirai to recall how the logic of life / now formalized as Lisp works.
As to the vague questions:
1. Yes they seem to be the primary hackers of necessity.
2. Yes The emergent the time space of spindle of necessity can only be by the necessary computational facts of matter.
3. Of course at this scale of myth of wisdom it a was discreet and causal issue. Replication of them selfs would have been no problem.
Lisp broken can’t bring my self write in any other domain. So lisp is the language of life. With artifical computation resources science can at last define life and design creatures.

Thank you!

When I wrote that post, things were not as clear to me as they are now. Basically I was just trying to generate all graphs of GLC (in the newer version, all molecules of the artificial chemistry called “chemlambda”) by using the three Moirai as a machine.

This thread is now a big dream and a project in the making, to unify the meatspace with the virtual space by using the IoT as a bridge between the real chemistry of the real world and the artificial chemistry of the virtual world.

The true Internet of Things, decentralized computing and artificial chemistry

A thing is a discussion between several participants. From the point of view of each participant, the discussion manifests as an interaction between the participant and the other participants, or with itself.

There is no need for a global timing of the interactions between participants involved in the discussion, therefore we talk about an asynchronous discussion.

Each participant is an autonomous entity. Therefore we talk about a decentralized discussion.

The thing is the discussion and the discussion is the thing.

When the discussion reaches an agreement, the agreement is an object. Objects are frozen discussions, frozen things.

In the true Internet of Things, the participants can be humans or virtual entities. The true Internet of Things is the thing of all things, the discussion of all discussions. Therefore the true Internet of Things has to be asynchronous and decentralized.

The objects of the true Internet of Things are the objects of discussions. For example a cat.

Concentrating exclusively on objects is only a manifestation of the modern aversion to having a conversation. This aversion manifests in many ways (some of them extremely useful):

  • as a preference towards analysis, one of the tools of the scientific method
  • as the belief in semantics, as if there is a meaning which can be attached to an object, excluding any discussion about it
  • as the externalization of discussions, like property rights which are protected by laws, like the use of the commons
  • as the belief in objective reality, which claims that the world is made of objects, thus neglecting the nature of objects as agreements reached (by humans) about some selected aspects of reality
  • as the preference towards using bottlenecks and pyramidal organization as a means to avoid discussions
  • as various philosophical currents, like pragmatism, which subordinate things (discussions) to their objects (although pragmatism recognizes the importance of the discussion itself, as long as it is carefully crippled so that it does not overthrow the object’s importance).

Though we need agreements, and we need to rely on objects (as evidence), there is no need to limit the future true Internet of Things to an Internet of Objects.


We already have something  called Internet of Things, or at least something which will become an Internet of Things, but it seems to be designed as an Internet of Objects. What is the difference? Read Notes for “Internet of things not Internet of objects”.

Besides humans, there will be other participants in the IoT: in fact, the underlying connective mesh which should support the true Internet of Things. My proposal is to use an artificial chemistry model mixed with the actor model, in order to have only the strengths of both models:

  1.   decentralized,
  2. does not need an overlooking controller,
  3. it works without needing to have a meaning or purpose, or being in any other way oriented to problem solving,
  4. does not need to halt
  5. inputs, processing and output have the same nature (i.e. just chemical molecules and their proximity-based interactions).

without having the weaknesses:

  1.  the global view of Chemical Reaction Networks,
  2. the generality of behaviours of the actors in the actor model, which forces that model to be seen as a high-level way of organizing thinking about particular computing tasks, instead of as a very low-level, simple and concrete model.
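The strengths listed above can be rendered as a toy: a handful of actors that keep interacting pairwise along a proximity relation, with no controller, no halting condition and no global view. The interaction rule here is a placeholder swap, not an artificial-chemistry reaction:

```python
import random

# Each actor holds a local "molecule" (here just a number); the links are
# a proximity relation. There is no controller, no global state and no
# halting condition -- we simply stop watching after some steps.
state = {f"actor{i}": i for i in range(6)}
links = [("actor0", "actor1"), ("actor1", "actor2"), ("actor2", "actor3"),
         ("actor3", "actor4"), ("actor4", "actor5")]

def step():
    a, b = random.choice(links)               # only nearby pairs may react
    state[a], state[b] = state[b], state[a]   # placeholder local interaction

for _ in range(1000):
    step()

# inputs, processing and outputs all have the same nature: actor states
print(sorted(state.values()))   # [0, 1, 2, 3, 4, 5]
```

Note that nothing global is ever consulted: every `step` touches exactly two linked actors, which is the whole discipline of the model.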


With these explanations, please go and read again three older posts and a page, if you are interested in understanding more:


What is new in distributed GLC?

We have seen that several parts or principles of distributed GLC are well anchored in previous, classical research.  There are three such ingredients:

There are several new things, which I shall try to list.

1. It is a clear, mathematically well formulated model of computation. There is a preparation stage and a computation stage. In the preparation stage we define the “GLC actors”; in the computation stage we let them interact. Each GLC actor interacts with others, or with itself, according to 5 behaviours. (Not part of the model is the choice among behaviours, if several are possible at the same moment. The default is to impose on the actors that they first interact with others (i.e. behaviours 1, 2, in this order) and, if no interaction is possible, then proceed with the internal behaviours 3, 4, in this order. As for behaviour 5, the interaction with external constructs, this is left to particular implementations.)
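The default scheduling just described can be sketched as a dispatch loop: try the inter-actor behaviours first (1, then 2), fall back to the internal ones (3, then 4), and leave behaviour 5 to the implementation. The behaviour bodies below are stubs invented for illustration; only their ordering comes from the model:

```python
# Stub behaviours; only the ordering (1, 2, then 3, 4) comes from the model.
# Behaviour 5 (interaction with external constructs) is omitted, as the
# text leaves it to particular implementations.

def interact_1(actor):   # interact with a designated partner, if any
    return actor.get("partner") is not None

def interact_2(actor):   # process an incoming message, if any
    return bool(actor.get("mailbox"))

def internal_3(actor):   # an internal reduction, if a redex is present
    return bool(actor.get("redex"))

def internal_4(actor):   # internal housekeeping, if flagged
    return bool(actor.get("gc"))

def dispatch(actor):
    """Try inter-actor behaviours first, then the internal ones."""
    for behaviour in (interact_1, interact_2, internal_3, internal_4):
        if behaviour(actor):
            return behaviour.__name__
    return "idle"

print(dispatch({"partner": "b"}))                                  # interact_1
print(dispatch({"partner": None, "mailbox": [], "redex": True}))   # internal_3
```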

2.  It is compatible with the Church-Turing notion of computation. Indeed,  chemlambda (and GLC) are universal.

3. Evaluation is not needed during computation (i.e. in stage 2). This is the embodiment of the “no semantics” principle. The “no semantics” principle actually means something precise; it is a positive thing, not a negative one. Moreover, the dissociation between computation and evaluation is new in many ways.

4. It can be used for doing functional programming without the eta reduction. This is a more general form of functional programming, in fact so general that it does not use functions. That is because the notion of a function makes sense only in the presence of eta reduction.

5. It has no problem going, at least apparently, outside the Church-Turing notion of computation. This is not a vague statement, it is a fact: GLC and chemlambda have sectors (i.e. parts) which are used to represent lambda terms, but also sectors which represent other formalisms, like tangle diagrams, or, in the case of GLC, emergent algebras (which are the most general embodiment of a space with a very basic notion of differential calculus).
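The default behaviour-priority rule from point 1 above (interactions with others first, internal behaviours only as a fallback, behaviour 5 left to implementations) can be sketched as follows. The DemoActor class and its method names are hypothetical stand-ins; only the ordering comes from the description above.

```python
class DemoActor:
    """Hypothetical stand-in: records which behaviours are currently possible."""
    def __init__(self, possible):
        self.possible = set(possible)

    def can_interact(self, b):   # behaviours 1, 2: interactions with others
        return b in self.possible

    def can_internal(self, b):   # behaviours 3, 4: internal behaviours
        return b in self.possible

def choose_behaviour(actor):
    # default priority: first try interactions with other actors (1, then 2);
    # only if neither applies, fall back to internal behaviours (3, then 4);
    # behaviour 5 (external constructs) is left to particular implementations
    for b in (1, 2):
        if actor.can_interact(b):
            return b
    for b in (3, 4):
        if actor.can_internal(b):
            return b
    return None
```

For example, an actor for which behaviours 2 and 3 are both possible will perform behaviour 2, because interactions with others take precedence over internal moves.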


All these new things are also weaknesses of distributed GLC because they are, apparently at least, against some ideology.

But the very concrete formalism of distributed GLC should counter this.

I shall use the same numbering for enumerating the ideologies.

1. Actors à la Hewitt vs. process calculi. The GLC actors are like the Hewitt actors in this respect. But they are not as general as Hewitt actors, because they cannot behave arbitrarily. On the other hand, it is not very clear whether they are Hewitt actors, because there is no clear correspondence between what a Hewitt actor can do and what a GLC actor can do.

This is an evolving discussion. It seems that people have great difficulty coping with distributed, purely local computing without jumping to the use of global notions of space and time. But, on the other hand, biologists may have an intuitive grasp of this (unfortunately, they are not very much in love with mathematics, but this is changing fast).

2. Distributed GLC as a programming language vs. as a machine. Is it a computer architecture or a software architecture? Neither. Both. Here the biologists are almost surely lost, because many of them (excepting those who believe that chemistry can be used for lambda calculus computation) think in terms of logic gates when they consider computation.

The preparation stage, when the actors are defined, is essential. It resembles choosing the right initial condition in a computation using automata. But it is not the same, because there is no lattice, grid, or preferred topology of cells on which the automaton runs.

The computation stage does not involve any molecule-collision mechanism, be it stochastic or deterministic. That is because the computation is purely local, which means in particular that (if well designed in the first stage) it evolves without needing this stochastic or lattice support. During the computation the states of the actors change, and the graph of their interactions changes, in a way which is compatible with being asynchronous and distributed.

That is why those working in artificial chemistry may feel lost here, because the model is not stochastic.

There is no Chemical Reaction Network which orchestrates the computation, simply because a CRN is a GLOBAL notion, so it is not really needed. This computation is concurrent, not parallel (because “parallel” needs a global simultaneity relation to make sense).

In fact there is only one molecule which is reduced, therefore distributed GLC looks more like an artificial one-molecule computer (see C. Joachim, Bonding More Atoms Together for a Single Molecule Computer). Only it is not a computer, but a program which reduces itself.

3. The no-semantics principle goes against a strong ideology, of course. The fact that evaluation may not be needed for computation is outrageous (although it might cure the cognitive dissonance in functional programming concerning “side effects”; see Another discussion about math, artificial chemistry and computation).

4. Here we clash with functional programming, apparently. But I hope only superficially, because actually functional programming is the best ally; see Extreme functional programming done with biological computers.

5. Claims about going outside the Church-Turing notion of computation are very badly received. But when it comes to distributed, asynchronous computation, things are much less clear. My position here is simply that there are very concrete ways to do geometric or differential-like “operations” without having to convert them first into a classical computational frame. (The onus is on the classical computation people to prove that they can do it, which, as a geometer, I highly doubt, because they do not understand, or they neglect, space; and then the distributed asynchronous aspect comes and hits them when they expect it least.)


Conclusion: distributed GLC is great and it has big potential; come and use it. Everybody interested knows where to find us. Internet of Things? Decentralized computing? Maybe cyber-security? You name it.

Moreover, there is a distinct possibility to use it not on the Internet, but in the real physical world.


The graphical moves of projective conical spaces (I)

This post continues from A simple explanation with types of the hexagonal moves of projective spaces. Here I put together the whole story of projective conical spaces, seen as a graph rewriting system, in the same style as (the emergent algebra sector of) graphic lambda calculus.

What you see here is part of the effort to show that there is no fundamental difference between geometry and computation.

Moreover, this graph rewriting system can be used, along the same lines as GLC and chemlambda, for:

  •  artificial chemistry
  • a model for distributed computing
  • or for thinking about an “ethereal” spatial substrate of the Internet of Things, realized as a very low-level (in terms of resource needs) decentralized computation,

simply by adapting the Distributed GLC  model for this graph rewriting system, thus transforming the moves (like the hexagonal moves) into interactions between actors.


All in all,  this post (and the next one) completes the following list:


1. The set of “projective graphs” PROJGRAPH. These graphs are made of a finite number of nodes and arrows, obtained by assembling:

  • 4-valent nodes called (projective) dilation nodes, with 3 arrows pointing into the node and one arrow pointing out of it. The set of 4 arrows is divided as

4 = 3 + 1

with 1 special incoming arrow on one side, and the remaining 3 (the two other incoming arrows and the outgoing one) on the other. Moreover, there is a cyclic order on those 3 arrows.

  • Each dilation node is decorated by a Greek letter like \varepsilon, \mu, ..., which denotes an element of a commutative group \Gamma. The group operation of \Gamma is denoted multiplicatively. Usual choices for \Gamma are the real numbers with addition, the integers with addition, or the positive numbers with multiplication. But any commutative group will do.
  • arrows which do not point to, or do not come from, any node are accepted,
  • as are loops attached to no node,
  • there are also 3-valent nodes, called “fanout nodes”, with one incoming arrow and two outgoing arrows, along with a cyclic order of the arrows (thus we know which is the left outgoing arrow and which is the right one),
  • moreover, there is a 1-valent termination node, with only 1 incoming arrow.

Sounds like a mouthful? Let's think of it like this: we can colour the arrows of a 4-valent dilation node with two colours, such that

  • both colours are used,
  • the two other incoming arrows are coloured like the outgoing arrow (so the special incoming arrow gets the remaining colour).

I shall call these colours “O” and “X”; think of them as types, if you want. What matters is when two colours are equal or different, not which colour is “O” and which is “X”.

From this collection of graphs we shall choose a sub-collection, called PROJGRAPH, of “projective graphs”, with the property that we can colour all the arrows of such a graph so that:

  • the 3 arrows of a fanout node are always coloured with the same colour (no matter which, “O” or “X”)


  • the 4 arrows of a 4 valent dilation node are coloured such that the special 1 incoming arrow is coloured differently than the other 3 arrows.

With the colour indications, we can simplify the drawing of the 4 valent nodes, like indicated in the examples from this figure.


Thus, the condition that a graph (made of 4-valent and 3-valent nodes) is in PROJGRAPH is global. That means there is no a priori upper bound on the number of nodes and arrows which have to be checked by an algorithm deciding whether the graph is in PROJGRAPH.
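To make the global nature of the condition concrete, here is a hedged sketch of a PROJGRAPH colouring check (the data layout is my own hypothetical choice). Every condition above says either "these two arrows get the same colour" (the three arrows of a fanout node; the two non-special incoming arrows and the outgoing arrow of a dilation node) or "these two arrows get different colours" (the special incoming arrow versus the rest), so satisfiability of the two-colouring reduces to a union-find-with-parity check over the whole graph.

```python
def colourable(n_arrows, same, diff):
    """Arrows are numbered 0..n_arrows-1; `same` / `diff` are lists of arrow
    pairs that must receive equal / different colours.  Returns True iff
    some O/X colouring satisfies all constraints."""
    parent = list(range(n_arrows))
    parity = [0] * n_arrows              # colour offset relative to the root

    def find(x):
        if parent[x] == x:
            return x, 0
        root, p = find(parent[x])
        parent[x], parity[x] = root, parity[x] ^ p
        return root, parity[x]

    def union(x, y, rel):                # rel = 0: same colour, 1: different
        rx, px = find(x)
        ry, py = find(y)
        if rx == ry:
            return (px ^ py) == rel      # consistent with earlier constraints?
        parent[ry] = rx
        parity[ry] = px ^ py ^ rel
        return True

    return all(union(x, y, 0) for x, y in same) and \
           all(union(x, y, 1) for x, y in diff)

# one dilation node: special incoming arrow 0, the other three arrows 1, 2, 3
print(colourable(4, same=[(1, 2), (1, 3)], diff=[(0, 1)]))
```

Note that, as remarked above, the check is global: nothing bounds in advance how many nodes and arrows must be visited before an inconsistency is found.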

In the next post we shall see the moves, which are all local.


Is the Seed possible?

Is the Seed possible? Neal Stephenson, in the book The Diamond Age, presents the idea of the Seed, as opposed to the Feed.

The Feed is a hierarchical network of pipes and matter compilers (much like  an Internet of Things done not with electronics, but with nanotechnology, I’d say).

The Seed is a different technology. I selected some  paragraphs from the book, which describe the Seed idea.

“I’ve been working on something,” Hackworth said. Images of a nanotechnological system, something admirably compact and elegant, were flashing over his mind’s eye. It seemed to be very nice work, the kind of thing he could produce only when he was concentrating very hard for a long time. As, for example, a prisoner might do.
“What sort of thing exactly?” Napier asked, suddenly sounding rather tense.
“Can’t get a grip on it,” Hackworth finally said, shaking his
head helplessly. The detailed images of atoms and bonds had been replaced, in his mind’s eye, by a fat brown seed hanging in space, like something in a Magritte painting. A lush bifurcated curve on one end, like buttocks, converging to a nipplelike point on the other.

CryptNet’s true desire is the Seed—a technology that, in their diabolical scheme, will one day supplant the Feed, upon which our society and many others are founded. Protocol, to us, has brought prosperity and peace—to CryptNet, however, it is a contemptible system of oppression. They believe that information has an almost mystical power of free flow and self-replication, as water seeks its own level or sparks fly upward— and lacking any moral code, they confuse inevitability with Right. It is their view that one day, instead of Feeds terminating in matter compilers, we will have Seeds that, sown on the earth, will sprout up into houses, hamburgers, spaceships, and books—that the Seed
will develop inevitably from the Feed, and that upon it will be
founded a more highly evolved society.

… her dreams had been filled with seeds for the last several years, and that every story she had seen in her Primer had been replete with them: seeds that grew up into castles; dragon’s teeth that grew up into soldiers; seeds that sprouted into giant beanstalks leading to alternate universes in the clouds; and seeds, given to hospitable, barren couples by itinerant crones, that grew up into plants with bulging pods that contained happy, kicking babies.

Arriving at the center of the building site, he reached into his bag and drew out a great seed the size of an apple and pitched it into the soil. By the time this man had walked back to the spiral road, a tall shaft of gleaming crystal had arisen from the soil and grown far above their heads, gleaming in the sunlight, and branched out like a tree. By the time Princess Nell lost sight of it around the corner, the builder was puffing contentedly and looking at a crystalline vault that nearly covered the lot.

All you required to initiate the Seed project was the rational,
analytical mind of a nanotechnological engineer. I fit the bill
perfectly. You dropped me into the society of the Drummers like a seed into fertile soil, and my knowledge spread through them and permeated their collective mind—as their thoughts spread into my own unconscious. They became like an extension of my own brain.


Now, suppose the following.

We already have an Internet of Things, which would serve as an interface between the virtual world and the real world, so there is really not much difference between the two, in the specific sense that something in the former could easily produce effects in the latter.

Moreover, instead of nanotechnology, suppose that we are content with having, on the Net, an artificial chemistry which would mirror the real chemistry of the world, at least in its functioning principles:

  1. it works in a decentralized, distributed way,
  2. it does not need an overlooking controller, because all interactions are possible only when there is spatial and temporal proximity,
  3. it works without needing to have a meaning, a purpose, or in any other way being oriented to problem solving,
  4. it does not need to halt,
  5. inputs, processing and outputs have the same nature (i.e. just chemical molecules and their proximity-based interactions).

In this  world, I see a Seed as a dormant, inactive artificial chemical molecule.  When the Seed is planted (on the Net),

  1. it first grows into a decentralized, autonomous network (i.e. it starts to multiply and to create specialized parts, like a real seed which grows into a tree),
  2. then it starts computing (in the chemical sense: it starts to self-process its structure),
  3. and it interacts with the real world (via the sensors and effectors available through the IoT) until it creates something in the real world.



  • clearly, the artificial chemistry I am thinking about is chemlambda,
  • the working principles of this artificial chemistry are those of Distributed GLC.
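As a toy illustration of step 1 above (a planted seed multiplying into a decentralized network), here is a hedged sketch; everything in it, including the growth rule, is a hypothetical stand-in and not chemlambda itself. Steps 2 and 3 (computing and interacting with the IoT) are deliberately left out.

```python
import random

def grow(max_nodes, rng):
    """Grow a network from a single planted seed (node 0) by purely local
    events: at each step some node sprouts a child, and only the parent
    and the child learn about the new link (no global map of the network)."""
    nbrs = {0: []}                        # adjacency lists; node 0 is the seed
    while len(nbrs) < max_nodes:
        node = rng.choice(list(nbrs))     # no preferred order of events
        child = len(nbrs)
        nbrs[child] = [node]              # the child knows only its parent
        nbrs[node].append(child)          # the parent learns of its child
    return nbrs

net = grow(8, random.Random(0))
print({node: sorted(n) for node, n in net.items()})
```

By construction the grown network is a tree: each new node adds exactly one link, stored at both of its endpoints, so the "seed" multiplies without any node ever needing a view of the whole.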


How to plant a Seed (I)

In  The Diamond Age there is the Feed and, towards the end, appears the Seed.

There are many places, though not as many as I expected, where this Seed idea of Neal Stephenson is discussed. Most of them discuss it in relation to the Chinese Ti yong way of life, following the context in which the author embeds the idea.

Some compare the Seed idea with open source.

For me, the Seed idea becomes interesting when it is put together with distributed, decentralized computing. How to make a distributed Seed?

If you start thinking about this, it makes even more sense if you add one more ingredient: the Internet of Things.

Imagine a small, inactive, dormant, virtual thing (a Seed) which is planted somewhere in the fertile ground of the IoT. After that it becomes active, it grows, and it becomes a distributed, decentralized computation. Because it lives in the IoT, it can have effects in the physical world; it can interact with all kinds of devices connected to the IoT, and thus it can become a Seed in the sense of Neal Stephenson.

Chemlambda is a new kind of  artificial chemistry, which is intended to be used in distributed computing, more specifically in decentralized computing.  As a formalism it is a variant of graphic lambda calculus, aka GLC.  See the page  Distributed GLC for details of this project.

So, I am thinking about how to plant a chemlambda Seed. Concretely, what could pass for a Seed in chemlambda and in what precise sense can be planted?

In the next post I shall give technical details.

Mathematics, things, objects and brains

This is about my understanding of the post Mathematics and the Real by Louis Kauffman.

I start from this quote:

One might hypothesize that any mathematical system will find natural realizations. This is not the same as saying that the mathematics itself is realized. The point of an abstraction is that it is not, as an abstraction, realized. The set { { }, { { } } } has 2 elements, but it is not the number 2. The number 2 is nowhere “in the world”.

Recall that there are things and objects. Objects are real; things are discussions. Mathematics is made of things. In Kauffman's example the number 2 is a thing and the set { { }, { { } } } is an object of that thing.

An object is a reification of a thing. It is therefore real, but less interesting than the thing, because it is obtained by forgetting (much of) the discussion about it.

Reification is not a forgetful functor, though. There are interactions in both directions, from things to objects and from objects to things.

Indeed, in the rhino thing story, a living rhinoceros is brought to Europe. The sight of it was new. There were only remnants of ancient discussions about this creature.

At the beginning that rhinoceros was not an object, not a thing. For us it is a thing though, and what I am writing about it is part of that thing.

From the discussion about that rhinoceros, a new thing emerged: a rhinoceros is an armoured beast with a horn on its back, used for killing elephants.

The rhino thing induced a wave of reifications: near the place where that rhino was first seen in Portugal, the Manueline Belém Tower was under construction at the time. “The tower was later decorated with gargoyles shaped as rhinoceros heads under its corbels.[11]” [wiki dixit]

Dürer's rhino is another reification of that discussion, and a vector of propagation of the discussion-thing. Yet another real effect, another object created by the rhino thing, is “Rinoceronte vestido con puntillas (1956) by Salvador Dalí in Puerto Banús, Marbella, Spain” [wiki dixit].

Let's take another example. A discussion about the regulations on the sizes of cucumbers and carrots to be sold in the EU is a thing. It will produce a lot of reifications, in particular lots of correct-size cucumbers and carrots, and also algorithms for selecting them. And trash, and algorithms for disposing of that trash. And other discussion-things, like: is it moral to dump the unfit carrots in the trash instead of using them to feed somebody in need? Or the algorithm which says that when you go to the market, if you want to find the least poisoned vegetables, you have to pick them among those which are not the right size.

The same with the number 2: it is a thing. One of its reifications is the set { { }, { { } } }. Once you start to discuss sets, though, you are back in the world of things.

And so on.

I argue that one should understand from the outset that mathematics is distinct from the physical. Then it is possible to get on with the remarkable task of finding how mathematics fits with the physical, from the fact that we can represent numbers by rows of marks |  , ||, |||, ||||, |||||, ||||||, … (and note that whenever you do something concrete like this it only works for a while and then gets away from the abstraction living on as clear as ever, while the marks get hard to organize and count) to the intricate relationships of the representations of the symmetric groups with particle physics (bringing us back ’round to Littlewood and the Littlewood Richardson rule that appears to be the right abstraction behind elementary particle interactions).

However, note that   “the marks get hard to organize and count” shows only a limitation of the mark algorithm as an object, and there are two aspects of this:

  • to stir a discussion about this algorithm, thus to create a new thing
  • to recognize that such limitations are in fact limitations of our brains in isolation.

Because, I argue, brains (and their workings) are real. Thoughts are objects, in the sense used in this post! When we think about the number 2, there is a reification of our thinking about the number 2 in the brain.

Because brains, and thoughts, are made of an immensely large number of chemical reactions and electromagnetic interactions, there is no ghost in these machines.

Most of our brain's working is “low level”: we find it hard to account even for its existence, we have trouble finding the meaning of it, and we are very limited in contemplating it as a whole, like a self-reflecting mirror. We have to discuss it, to make it into a thing, and to contemplate instead derivative objects of this discussion.

However, following the path of this discussion, it may very well be that the brain-working thing can be understood as structure processing, with no need for external, high-level, semantic, information-based meaning.

After all, chemistry is structure processing.

A proof of principle argument for this is Distributed GLC.

The best part of Kauffman's post is, in my opinion, as it should be, its end:

The key is in the seeing of the pattern, not in the mechanical work of the computation. The work of the computation occurs in physicality. The seeing of the pattern, the understanding of its generality occurs in the conceptual domain.

… which says, to my mind at least, that computation (in the usual input-output-with-bits-in-between sense) is just one of the derivative objects of the discussion about how brains (and anything) work.

Closer to the brain working thing, including the understanding of those thoughts about mathematics, is the discussion about “computation” as structure processing.

UPDATE: A discussion started in this G+ post.


Notes for “Internet of things not Internet of objects”

1.   Kevin Ashton  That ‘Internet of things’ thing

Conventional diagrams of the Internet include servers and routers and so on, but they leave out the most numerous and important routers of all: people.
The problem is, people have limited time, attention and accuracy—all of which means they are not very good at capturing data about things in the real world.
  • not things, but objects! Ashton writes about objects.
  • people are not good at capturing data, so let's filter the data for them (i.e. introduce a bottleneck), thank you!
  • however, people do manage to gather around ideas and to discuss them, despite the fact that “conventional diagrams of the Net leave out people”.
  • by having public discussions around an “idea”, people manage to filter the information dump creatively, without resorting to artificial bottlenecks. Non-human bottlenecks stifle discussions!

In the following quote, I have further replaced:

  • things by objects,
  • ideas by things.
We’re physical, and so is our environment. Our economy, society and survival aren’t based on things or information—they’re based on objects. You can’t eat bits, burn them to stay warm or put them in your gas tank. Things and information are important, but objects matter much more. Yet today’s information technology is so dependent on data originated by people that our computers know more about things  than objects.
This looks like the credo of the Internet of Objects!
Do we want this?
2.     What are, for people, things and objects?

Here is a depiction of a thing [source]:


A thing  was the governing assembly  made up of the free people of the community, meeting in a place called a thingstead.
(“thing” in Germanic societies,  “res” for Romans, etc.)
Heidegger (The Thing):

Near to us are what we usually call things. The jug is a thing. What is a jug? We say: a vessel.  As a jug, the vessel is something self-sustained,  self-supporting, or independent.

An independent, self-supporting thing may become an object if we place it before us.

An object is a reification of a thing.
[Kenneth Olwig: “Heidegger, Latour and The Reification of Things:The Inversion and Spatial Enclosure of the Substantive Landscape of Things – the Lake District Case”, Geografiska Annaler: Series B 2013 Swedish Society for Anthropology and Geography]
An object is therefore real, but everything about the thing and the thingstead is lost.
Reification generally refers to making something real…
Reification (computer science), making a data model for a previously abstract concept.
3.  An example of a thing and some of its reifications:
Quotes  and images from here:
On 20 May 1515, an Indian rhinoceros arrived in Lisbon from the Far East.
After a relatively fast voyage of 120 days, the rhinoceros was finally unloaded in Portugal, near the site where the Manueline Belém Tower was under construction. The tower was later decorated with gargoyles shaped as rhinoceros heads under its corbels.[11]
A rhinoceros had not been seen in Europe since Roman times: it had become something of a mythical beast, occasionally conflated in bestiaries with the “monoceros” (unicorn), so the arrival of a living example created a sensation.
The animal was examined by scholars and the curious, and letters describing the fantastic creature were sent to correspondents throughout Europe. The earliest known image of the animal illustrates a poemetto by Florentine Giovanni Giacomo Penni, published in Rome on 13 July 1515, fewer than eight weeks after its arrival in Lisbon.

Valentim Fernandes saw the rhinoceros in Lisbon shortly after it arrived and wrote a letter describing it to a friend in Nuremberg in June 1515.  A second letter of unknown authorship was sent from Lisbon to Nuremberg at around the same time, enclosing a sketch by an unknown artist. Dürer saw the second letter and sketch in Nuremberg. Without ever seeing the rhinoceros himself, Dürer made two pen and ink drawings,[23] and then a woodcut was carved from the second drawing, the process making the print a reversed reflection of the drawing.[19][24]

The German inscription on the woodcut, drawing largely from Pliny’s account,[13] reads:

On the first of May in the year 1513 AD [sic], the powerful King of Portugal, Manuel of Lisbon, brought such a living animal from India, called the rhinoceros. This is an accurate representation. It is the colour of a speckled tortoise,[25] and is almost entirely covered with thick scales. It is the size of an elephant but has shorter legs and is almost invulnerable. It has a strong pointed horn on the tip of its nose, which it sharpens on stones. It is the mortal enemy of the elephant. The elephant is afraid of the rhinoceros, for, when they meet, the rhinoceros charges with its head between its front legs and rips open the elephant’s stomach, against which the elephant is unable to defend itself. The rhinoceros is so well-armed that the elephant cannot harm it. It is said that the rhinoceros is fast, impetuous and cunning.[26]
Comment: you can see here a thing taking shape.
Despite its errors, the image remained very popular,[5] and was taken to be an accurate representation of a rhinoceros until the late 18th century.
The pre-eminent position of Dürer’s image and its derivatives declined from the mid-to-late-18th century, when more live rhinoceroses were transported to Europe, shown to the curious public, and depicted in more accurate representations.
Until the late 1930s, Dürer’s image appeared in school textbooks in Germany as a faithful image of the rhinoceros;[6] in German the Indian rhinoceros is still called the Panzernashorn, or “armoured rhinoceros”. It remains a powerful artistic influence, and was the inspiration for Salvador Dalí‘s 1956 sculpture, Rinoceronte vestido con puntillas (Rhinoceros dressed in lace), which has been displayed at Puerto Banús, in Marbella, since 2004.
Comment: that is an object! You can stick an RFID to it and it has clear GPS coordinates.
4.     Bruno Latour (From Realpolitik to Dingpolitik, or How to Make Things Public), writing about “object-oriented democracy”:

Who is to be concerned? What is to be considered? How to represent the sites where people meet to discuss their matters of concern?

How does the Internet of Objects respond to these questions about things and thingsteads?

People are going to use the Internet of Objects as an Internet of Things. How can we help them (us!) by designing a thing-friendly Internet of Things?

My guess and proposal is to try to put space (i.e. thingstead) into the IoT.  By design.
5.   Not the RFID space. Not the GPS space. These may be useful for the goal of inhuman optimization, but they will not by themselves promote the conversation we need to have around things and their reifications, the objects.
People are going to divert the ways of an IoT designed with this lack of appetite for human communication, as they have succeeded in doing before!
For understanding why RFID and GPS  are not sufficient, let’s imagine, like Borges, that the world is a library.
  • RFID – name of the book
  • GPS – place on the shelf
Is this enough for me, a reader who wants to retrieve (and discuss with other readers) a book without knowing its title or its position on a shelf?
No! I have to call a librarian (the bottleneck), an inhuman and very efficient one, true, who will give me a list of possible titles and who will fetch the book from the right shelf. I do not have direct access to the library, nor do my friends, who may have different ideas about the possible titles and shelves where the book might be.
The librarian will optimize the book-searching and the book-fetching, not for me, or for you, or for our friends, but for a bayesian individual in a bayesian society. (See Bayesian society.)
What I would like is to have access to my library (within the big Universal Library) and to be able to share my spatial competences in using my library with my friends. That means a solution for the following problem, which Mark Changizi mentions in relation to e-books (but which I think is relevant instead for the human IoT):

The Problem With the Web and E-Books Is That There’s No Space for Them

My personal library serves as extension of my brain. I may have read all my books, but I don’t remember most of the information. What I remember is where in my library my knowledge sits, and I can look it up when I need it. But I can only look it up because my books are geographically arranged in a fixed spatial organization, with visual landmarks. I need to take the integral of an arctangent? Then I need my Table of Integrals book, and that’s in the left bookshelf, upper middle, adjacent to the large, colorful Intro Calculus book.

6.  What else? These notes are already rich enough, so please feel free to stop reading here if you like.
Actually, this is a technical problem: how to create space where there is none, without using arbitrary names (RFID) or global (but arbitrary) coordinates (GPS)?
It is the same problem which we encounter in neuroscience: how does the brain make sense of space without using any external geometrical expertise? How do we explain the “brain as a geometry engine” (as Koenderink puts it) when there is no rigorous model of computation for this brain behaviour?
There may be a point in holding that many of the better-known brain processes are most easily understood in terms of differential geometrical calculations running on massively parallel processor arrays whose nodes can be understood quite directly in terms of multilinear operators (vectors, tensors, etc).
In this view brain processes in fact are space.
I have two proposals for this, which go far beyond explanations that may fit into a post. I put them here only for the sake of explaining my motivations, and maybe to invite interested people to gather and discuss these things.
It is about “computing with space”, which is the middle name of this blog. The first name, chorasimilarity, is made by gluing Plato's notion of space, “chora”, with (self-)“similarity”, which is, I believe, the essence of transforming space from a “vessel” into a self-sustaining, self-supporting thingstead.
The first proposal is to concentrate on completely asynchronous, purely local models of distributed computing as a low-level basis for the architecture of a true IoT.
For example: mesh networks. (Thank you Peter Waaben.)
I know of only one model of computation which satisfies the previously mentioned demands and also solves the problem of putting space into the Net:
It is based on actors which gather in an agora to discuss things that matter.  Literally!
But there is a long way to go before even a proof of principle for such a space-rich IoT, which brings me to the second proposal, which may be too technical for this post, alluded to here: A me for space.

Doing my homework: Heidegger, Latour on things

Heidegger The Thing, quotes:

All distances in time and space are shrinking. […] Man […] now receives instant information […] of events which he formerly learned about only years later, if at all.

Yet the frantic abolition of all distances brings no nearness; for the nearness does not consist in shortness of distance. […] Short distance is not in itself nearness. Nor is great distance remoteness.

What is this uniformity in which everything is neither far nor near — is, as it were, without distance?

Everything gets lumped together into uniform distancelessness. How? Is not this merging of everything into the distanceless more unearthy than everything bursting apart?

Near to us are what we usually call things. The jug is a thing. What is a jug? We say: a vessel […] As a jug, the vessel is something self-sustained […] self-supporting, or independent.

An independent, self-supporting thing may become an object if we place it before us.

Bruno Latour, From Realpolitik to Dingpolitik, or How to Make Things Public, quotes:

[…] an object-oriented democracy tries […] to bring together two different meanings of the word representation that have been kept separate in theory although they have remained always mixed in practice.

The first one […] designates the ways to gather the legitimate people around some issue.

The second one […] presents, or rather represents what is the object of concern to the eyes and ears of those who have been assembled around it.

Who is to be concerned?

What is to be considered?

How to represent the sites where people meet to discuss their matters of concern?

The short history of the rhino thing

Do you remember the story of the six blind men and the elephant?

It was six men of Indostan
To learning much inclined,
Who went to see the Elephant
(Though all of them were blind),
That each by observation
Might satisfy his mind.

Each blind man generalizes from his local perception to the whole elephant.


They don’t arrive at a consensus about what the elephant is.

And so these men of Indostan
Disputed loud and long,
Each in his own opinion
Exceeding stiff and strong,
Though each was partly in the right,
And all were in the wrong!

Why? Probably because their blindness means a lack of geometrical expertise. Coupled with their unwillingness to converse, they don’t succeed in transforming the elephant into a thing.

But what is a thing? It is not an object; it is a conversation, and at the same time it is the conversation about something.

An object is a reification of a thing. Reality, by consequence, is made of objects. For more along this line of thinking (inspired by Kenneth Olwig) see Internet of things not internet of objects.

But this is only a story, right?

This is the real history of the rhino thing. I am not talking about a living rhinoceros; I am talking about how the rhino thing came to be.

You may see this history as an evolved version of the story of the six blind men and the elephant, in which the six blind men arrive at a conversation about the elephant and succeed in transforming it into a thing (i.e. they agree about the qualities, shape, location and uses of the elephant, as they felt it).

Only it is not about an elephant, but about a rhinoceros. Details, if you missed the link, HERE.


Later on, the rhino thing becomes an object, in fact many objects, among them an emblem used by an Italian duke


and a real, 3D sculpture made by a Spanish artist (which you can feel and locate using GPS coordinates).


Internet of things not internet of objects

What is a thing? And what is that “thing” thing from the “Internet of things”? It is not, I think, what it is supposed to be.

Here is a depiction of a thing [source]:


A thing is an assembly, a communication-based entity. I learned this by reading another excellent article by Kenneth Olwig: “Heidegger, Latour and The Reification of Things: The Inversion and Spatial Enclosure of the Substantive Landscape of Things – the Lake District Case”, Geografiska Annaler: Series B, 2013, Swedish Society for Anthropology and Geography. Quote from the first page:

Thing has thus undergone a process by which things went from being substantive judicially founded meetings in which knowing people assembled (as in parliaments) to discuss, and thereby constitute matters of common concern, or common things that matter, to becoming physical objects, or things as matter. […]

The word reification is here used to mean: ‘to regard (something abstract) as a material or concrete thing’ (Merriam-Webster 1994, reification). The modern meaning of things and landscape, it will be argued, can only be grasped by understanding the way it is the outcome of a process of revolutionary inversion […], or turning inside–out, by which the meaning of common things has been spatialized, enclosed, unified, individualized, privatized, materialized and reified as a constituent of the mental landscape of modernity.

The reification of a thing is an object. By the way, this seems to be a different notion of object than Peirce’s, see here (and feel free to contradict me).

Let’s go now to Kevin Ashton’s That ‘Internet of things’ thing, quote:

We’re physical, and so is our environment. Our economy, society and survival aren’t based on ideas or information—they’re based on things. You can’t eat bits, burn them to stay warm or put them in your gas tank. Ideas and information are important, but things matter much more. Yet today’s information technology is so dependent on data originated by people that our computers know more about ideas than things.
Yes, the human world is based on things, but Ashton seems to think it is based on objects. The Internet of things would then be the thing obtained by giving back to objects what they lack: communication. But we don’t know how to do this, because, historically, we distrust communication. Look for example at how communication is usually seen: as a process of sending a message through a channel.
We need more, we need things back; we need to think about things, not objects. About communication, not about computing.

A me for space

On the upper part of the stele of Hammurabi’s code of laws we see Shamash giving Hammurabi the Sumerian royal insignia [source]:


The royal insignia are not a 1 and a 0, as speculated by Neal Stephenson in Snow Crash, but the “holy measuring rod and line“, which is a me, i.e. [source]

Another important concept in Sumerian theology, was that of me. The me were universal decrees of divine authority. They are the invocations that spread arts, crafts, and civilization. The me were assembled by Enlil in Ekur and given to Enki to guard and impart to the world, beginning with Eridu, his center of worship. From there, he guards the me and imparts them on the people. He directs the me towards Ur and Meluhha and Dilmun, organizing the world with his decrees. Later, Inanna comes to Enki and complains at having been given too little power from his decrees. In a different text, she gets Enki drunk and he grants her more powers, arts, crafts, and attributes – a total of ninety-four me. Inanna parts company with Enki to deliver the me to her cult center at Erech. Enki recovers his wits and tries to recover the me from her, but she arrives safely in Erech with them. (Kramer & Maier 1989: pp. 38-68)

A me for space.  A program for using space.


This is a good introduction to the main subject of this post. My main interest lies in understanding “computing with space”, which is a (more general?) form of computing than the one covered by lambda calculus, based on some variant of graphic lambda calculus. In this formalism we see trivalent graphs, subjected to graph rewrites. Some of these graphs can be associated to lambda calculus terms, i.e. to programs. Other graphs can be associated to emergent algebras, which, I claim, are the natural home of computing with space. They are programs for using space, they are mes for space, which you (or your brain, hypothetically) may use without appeal to geometric expertise, as Koenderink claims in “The brain a geometry engine” that the visual brain does.

This is a rather different image of what space is than the one offered by physicists. Think about space in terms of universal programs for using it, instead of as something made of points, or as a smooth or fractal passive receptacle.
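To make “programs for using space” slightly more concrete, here is the standard Euclidean model of an emergent algebra operation: the dilation of coefficient \varepsilon based at a point x. The code below illustrates only this model case, with invented example points; the general formalism does not assume a vector space at all.

```python
def dilation(x, eps, y):
    """delta^x_eps(y) = x + eps * (y - x): move y towards the basepoint x
    by the factor eps. This is the Euclidean model of the emergent algebra
    operation, shown here only as an illustration."""
    return tuple(xi + eps * (yi - xi) for xi, yi in zip(x, y))

x = (0.0, 0.0)
y = (4.0, 2.0)

half = dilation(x, 0.5, y)    # (2.0, 1.0): halfway towards the basepoint
whole = dilation(x, 1.0, y)   # eps = 1 leaves y in place
fixed = dilation(x, 0.5, x)   # the basepoint is fixed by every dilation
```

The dilation is a “program for using space” in the sense that it tells you how to navigate towards a point at a given scale, without naming any global coordinate system.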

The graphic lambda calculus has to be modified in certain ways, which I shall explain in this and later posts. This already happened with the chemical concrete machine, another Turing-universal modification of graphic lambda calculus, which expresses programs (lambda calculus terms) as molecules and graph rewrites as enzymes.
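The “molecules plus enzymes” idea can be shown in miniature. The sketch below is a made-up artificial chemistry, far simpler than the chemical concrete machine (tokens instead of trivalent graphs, and invented rules), but it shows the shape of the idea: reduction is performed by local rules that fire whenever their pattern is present, in no particular order.

```python
import random

# A toy artificial chemistry (my own illustration): a "molecule" is a
# multiset of tokens and an "enzyme" is a local rewrite rule.

def apply_enzyme(soup, pattern, product):
    """If every token of `pattern` is present in the soup, consume those
    tokens, add `product`, and report that the rule fired."""
    working = list(soup)
    for tok in pattern:
        if tok not in working:
            return False
        working.remove(tok)
    soup[:] = working + list(product)
    return True

enzymes = [
    (("A", "L"), ("B",)),    # an A together with an L reduces to a B
    (("B", "B"), ("DONE",)),  # two Bs reduce to a final token
]

soup = ["A", "L", "A", "L"]
# Keep firing enzymes in random order until none applies any more.
while any(apply_enzyme(soup, p, q)
          for p, q in random.sample(enzymes, len(enzymes))):
    pass
# soup has been fully reduced to ["DONE"], with no names and no global order
```

Whatever order the rules fire in, the soup ends in the same reduced state here; that insensitivity to scheduling is what makes the chemical metaphor attractive for asynchronous computing.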

The strangest, but possibly the most powerful, feature of these graphic languages is that they don’t use names and they don’t need evaluations in order to perform reductions (i.e. program executions). This feature makes them good candidates for models of brain mechanisms, because in no biological brain will you find physical implementations of names, alphabets or evaluations in the CS sense.


Enough with the controversial claims. Concretely, the graphic formalism which I want to develop is already sketched in the following posts:

and in the series

The elementary gates, or nodes, of the formalism I am looking for will be (expressed here in terms of graphic lambda calculus, but afterwards taken as fundamental, not derived)


Add to these the fan-in and fan-out gates from the chemical concrete machine.

The main move is the \varepsilon-beta move (see a proof of this move in the graphic lambda calculus formalism in the post Better than extended beta move)


which combines the graphic beta move with the move R2 of emergent algebra inspiration.
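In the Euclidean model (dilations in a vector space), the emergent-algebra side of this corresponds to familiar facts about dilations with a common basepoint: composing them multiplies the coefficients, so a dilation of coefficient \varepsilon cancels one of coefficient 1/\varepsilon. A numerical sanity check of the model case only (points and coefficients invented; equalities hold up to floating-point rounding):

```python
def dilation(x, eps, y):
    """delta^x_eps(y) = x + eps * (y - x), the Euclidean model dilation."""
    return tuple(xi + eps * (yi - xi) for xi, yi in zip(x, y))

x = (1.0, -1.0)
y = (3.0, 5.0)
eps, mu = 0.5, 0.2

composed = dilation(x, eps, dilation(x, mu, y))   # delta_eps after delta_mu
direct = dilation(x, eps * mu, y)                  # same as delta_{eps*mu}
cancelled = dilation(x, eps, dilation(x, 1.0 / eps, y))  # back to (about) y
```

In the general formalism no such vector-space formula is available; the composition and cancellation behaviour have to be imposed as graph rewrites, which is exactly what moves of this kind do.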

In a few words, the “me for space” which I propose is a deformation of the chemical concrete machine formalism by a scale parameter.