Chemical actors

UPDATE: Clearly a mix of Carl Hewitt's ActorScript with GLC and chemlambda is needed. More will follow in the months to come!


Thinking out loud about a variant of the actor model (chapter 3 here), which uses graphic lambda calculus or the chemical concrete machine. The goal is to arrive at a good definition of an Artificial Chemical Connectome.

Have remarks? Happy to read them!

A chemical actor is the following structure:

  • a graph A \in GRAPH  (or a molecule A \in MOLECULES)
  • with a unique name ID
  • with a numbering (tagging) of a (possibly empty) part of its free arrows
  • with a behaviour, to be specified further.


A communication is:

  • a graph  B \in GRAPH  (or a molecule)
  • with a source ID and a target ID
  • with a part of its free arrows tagged with tags compatible (i.e. the same) with the ones of the graph of the actor with the source ID
  • with another part of its free arrows tagged with tags compatible with the ones of the graph of the actor with the target ID


The actor with the target ID receives a communication from the actor with the source ID and becomes:


At this point the actor with the target ID exhibits the following behaviour:

  • performs one or several graph rewrites (only the unidirectional (+) moves), possibly in a given order, each involving at least one arrow between A and B
  • following a given algorithm, splits into a new actor and a communication B’, which has as target arrows the ones from the lower part of the previous figure (but with another target ID)
  • or creates a new actor by using only (-) moves

Remark: the numbers n, m could be uniformly bounded by 2, or 4, or 6, according to the user’s wishes. Take a look at the Ackermann machine for inspiration.
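A minimal sketch of the two structures above, in Python. The class and field names (ChemicalActor, Communication, tags, etc.) are hypothetical, the graphs are left as opaque objects, and the behaviour is not encoded; this is only a data-structure view of the definitions:

```python
from dataclasses import dataclass, field

@dataclass
class ChemicalActor:
    """A graph (or molecule) with a unique ID and tagged free arrows."""
    actor_id: str
    graph: object                              # a graph in GRAPH, or a molecule
    tags: dict = field(default_factory=dict)   # tag -> free arrow of the graph

@dataclass
class Communication:
    """A graph with a source ID, a target ID, and two tagged parts of its free arrows."""
    graph: object
    source_id: str
    target_id: str
    source_tags: dict = field(default_factory=dict)  # must match the source actor's tags
    target_tags: dict = field(default_factory=dict)  # must match the target actor's tags
```

Compatibility of tags then amounts to equality of the relevant tag dictionaries, e.g. `communication.source_tags == source_actor.tags` (restricted to the tagged part).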

Chemical concrete machine on figshare

Just published the Chemical concrete machine on figshare.  Should be cited as

Chemical concrete machine. Marius Buliga. figshare.

The description is a bit more telling than the abstract of the article (see why by reading the suggested posts below):
This article introduces an artificial, algorithmic chemistry based on local graph rewrites applied to a class of trivalent graphs named molecules. The graph rewrites are seen as chemical reactions with enzymes. The enzymes are tagged with the name of the respective move.  This artificial chemistry is Turing complete, because it allows the implementation of untyped lambda calculus with beta reduction (but without eta reduction). This opens the possibility  to use this artificial chemistry for constructing artificial chemical connectomes.
On this open notebook/blog, you may read more about it here:
The article is available also as  arXiv:1309.6914 [cs.FL].

The Y combinator in graphic lambda calculus and in the chemical concrete machine

A simpler computation than the one with the Ackermann machine concerns the Y combinator. Seen as a chemical reaction network, it looks like this in graphic lambda calculus. In the figure, “A” is any graph in GRAPH which has only one exit arrow, for example a combinator graph.


One might prefer not to use the GLOBAL FAN-OUT move. That’s possible, by passing to the chemical concrete machine formalism. The chemical reaction network is a bit different. (Notice the move PROP+, a composite move defined in the post Chemical concrete machine, detailed (VI).)


Lots of interesting things happen; among them we notice the appearance of stacks of “elementary units” from the last post. So in the chemical concrete machine version of the behaviour of the Y combinator, the machine counts the number of times A should be replicated, if known (a kind of lazy evaluation, if evaluation made sense in this formalism).
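For comparison, the replication behaviour of the fixed point combinator can be seen already in plain untyped lambda calculus. A sketch in Python, using the strict variant Z = λf.(λx.f(λv.x x v))(λx.f(λv.x x v)) (Python evaluates eagerly, so the classical Y would loop forever; the graph “A” with one exit arrow corresponds here to a step function to be replicated):

```python
# Z combinator: a strict variant of Y, usable in an eagerly evaluated language
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# The role of the graph "A" is played by a step function that Z replicates
# on demand, e.g. the step of the factorial:
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
fact = Z(fact_step)
```

Each recursive call forces one more copy of `fact_step`, which is the analogue of one more replication of A in the chemical reaction network picture.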


A machine for computing the Ackermann function in graphic lambda calculus

The Ackermann function is a computable function which is not primitive recursive. It can be expressed in untyped lambda beta  calculus (which is the version that I use).

UPDATE (April 2015): see the much better demos for this in chemlambda, like the Ackermann(2,2)=7 computation. It is moreover random, and it is the pet example for a proof-of-principle experiment with a hypothetical molecular computer.

Let’s see how it looks in graphic lambda calculus. I shall explain how its computation unfolds in at least one more post. In this one I shall concentrate on defining a sort of machine which does the computation of this function.

Very briefly, I introduce some macros (i.e. notations) for the relevant pieces which are used in this Ackermann machine.


These pacman macros are used for defining the Ackermann machine:


Some explanations and remarks are in order.

1. As you see, the Ackermann machine looks like a pacman munching pacmans, some of them controlled by other pacmans.

2. The Ackermann machine as presented here is not the graph obtained from the lambda calculus term which represents the Ackermann function (explanations in a future post).

3. The coloured rectangles can be any graph in GRAPH with two entries and two exits. Depending on what these graphs are, the Ackermann machine does different things, one of them being the computation of the Ackermann function. The fact that such graphs are taken with two entries and two exits is reminiscent of the modelling of B-type neural networks with graphic lambda calculus, where synapses are also such graphs (this needs more explanation, left for later).

4. In particular, the coloured rectangles can be chosen in a way related to the Church encoding of the numerals. Indeed, look at the following figure, which contains examples of graphs which can be used as the coloured rectangles in the Ackermann machine.


It’s clear what they are: a kind of stack which accumulates copies of the same “elementary unit”, or otherwise is empty (i.e. the first graph). I use these to obtain the Church numerals:
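These stacks behave exactly like Church numerals: the numeral n applies an “elementary unit” f to an argument n times, and the empty stack is the numeral 0. A quick Python sketch (the helper `decode` is mine, for reading a numeral back as an ordinary integer):

```python
def church(n):
    """The Church numeral n: takes f and returns the n-fold composition of f."""
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def decode(numeral):
    """Count the accumulated applications of the elementary unit."""
    return numeral(lambda k: k + 1)(0)
```

Applying church(3) to an elementary unit f builds the stack f(f(f(x))), which is the shape of the third graph in the figure above.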


5. The same can be done with the chemical concrete machine, of course. This observation is relevant because the way the machine functions (by local graph rewrites), when expressed in the chemical concrete machine formalism, will look like a chemical reaction network.


Model of computation vs programming language in molecular computing

An interesting discussion started in the comments of this John Baez G+ post, concerning the differences between the “model of computation” and “programming language” denominations (in that post, in particular, for Petri nets).

I reproduce here what I think are the relevant bits for this discussion and later, after possibly several updates, I shall try to write something useful by using these bits.

1. The Turing machine and lambda calculus are both models of computation, but lambda calculus is also a programming language, while the Turing machine is not.

2. Zachariah Hargis makes the point of comparing this model of computation vs programming language distinction with the one made by Leibniz between calculus ratiocinator and lingua characteristica. (Among other references; note to self to explore further.)

3. Chemical reaction networks (CRNs) are one fundamental ingredient of molecular computing, no matter which formalization of CRNs is used. Don’t believe that the whole “computational part” of a CRN is a Petri net, because it is often very important which molecules and reactions are concretely involved in the CRN, not only the abstract number of species and the reaction rates between them.

4. Petri nets, as used in chemistry, are a tool for getting quantitative information from CRNs, once the CRNs are designed by other means. Petri nets might be useful for thinking about CRNs, but they are not necessary for designing CRNs.
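To make the distinction concrete, here is a minimal token-game sketch of a Petri net for a single hypothetical reaction A + B → C. It delivers exactly the quantitative information (token counts and whether a transition is enabled) and nothing about which concrete molecules A, B, C are:

```python
# A Petri net marking: places -> token counts; a transition: (consume, produce)
marking = {"A": 2, "B": 3, "C": 0}
transition = ({"A": 1, "B": 1}, {"C": 1})   # the reaction A + B -> C

def enabled(marking, consume):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= k for p, k in consume.items())

def fire(marking, transition):
    """Fire the transition if enabled, returning the new marking."""
    consume, produce = transition
    if not enabled(marking, consume):
        return marking
    m = dict(marking)
    for p, k in consume.items():
        m[p] -= k
    for p, k in produce.items():
        m[p] += k
    return m
```

Starting from two A tokens and three B tokens, the transition fires twice and then blocks: the token bookkeeping, not the chemistry, is what the Petri net captures.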

5. CRNs which are designed by using, or in some sense embody, lambda calculus are a much more interesting path towards a “programming language”, and a classical one (Fontana and Buss, Algorithmic chemistry), than the more studied engineering-style implementation of imperative programming and the forced copy-paste of TM thinking into bio computing.

6. Among the specifications for a “programming language for chemistry” are the following:

  • (a) geometrical (in particular no use of currying, along with more obvious features such as being parallel and asynchronous, the latter being achievable already by many well known, non-exotic programming languages),
  • (b) purely syntactic (in particular no names management, nor input-output signalling),
  • (c) (maybe an aspect of (b), but not sure) no use of evaluation (strategy) in computation.

(The chemical concrete machine satisfies these requirements and moreover it contains lambda calculus, thus showing that such a “language” is possible. However, the chemical concrete machine is based on a made-up, artificial chemistry, thus it provides only a proof of principle for the existence of such a “language”. Or is it? Biochemists’ help is needed to identify, or search for, real chemical reactions which could be used for implementing the chemical concrete machine in reality.)

7. The problem of tacit axioms in the history of the Church-Turing thesis might be especially relevant for biochemists, and it could also be mentioned as an argument in favour of making a distinction between “model of computation” and “programming language”: any model of computation uses some tacit axioms, while a programming language does not (such tacit axioms appear only at the semantic level, when making (human) sense of the results and of the ways of functioning of the programming language). For biochemists, not using such tacit axioms is a must when they try to find scientifically valid explanations. CS does well when ignoring them, in most cases.


Computing with space in the internet of things

My primary interest is “computing with space”. Some remarks by Allen Francom make me believe that there might be a way to give a “space” to the internet of things by using graphic lambda calculus. By that I mean the following: the internet of things is a huge collection of objects which are somewhere on Earth, each carrying a very small processor. So the objects are in the physical space, but they change their places all the time, and their processors are tiny, so one had better use tiny programs and simple executions for synchronizing their relative places with respect to one another.

That is a kind of “computing with space” and it fits well with the following problem with e-books, raised by Mark Changizi, who says that e-books lack space and gives as an example his physical library. He knows that book A is the one close to the calculus book he uses when he needs some formulae, and that’s the way he is going to find book A. In a virtual library, by contrast, there is no space: the relative position of an e-book with respect to the others is irrelevant, because when you click a link it is as if you teleport (ignoring space) from one place to another. My interpretation of this is: the way Changizi finds a book in his library is different from the way a robot Changizi would find it, because the robot Changizi would need the coordinates of book A with respect to a corner of the bookshelf, or some information like this, and then it would go and fetch the book. On the contrary, the human Changizi(’s brain) acquired a geometric competence by accumulating simple bits of knowledge about his library, namely that book A is near the calculus book, the calculus book is used when he needs formulae, and so on.

Is this a problem which the internet of things will have? Surely. Is it solved? I have no idea, but I know how to solve it, while also satisfying the “tiny programs and executions” constraint. This involves exactly that part (called by me the “emergent algebra sector”) which gives the kind of programs for using space in a very concentrated form, not needing “geometric expertise” to generate and manipulate them. Then, imagine that each object from the internet of things has a relative representation of some of its neighbours, under the form of a small graph, like the graphs from graphic lambda calculus or the molecules from the chemical concrete machine. These graphs are generated from simple rules concerning proximity relations (like: the keys thing is on the couch thing, or the socks thing is in the drawer thing; more on that later), and changes in proximity relations are like moves acting on such tiny graph molecules. Then, if you want to know where you lost the keys thing, instead of endowing the keys with a GPS, you just need to evaluate (i.e. decorate, as in the discussions about decorations of graphic lambda calculus graphs) these tiny graphs and find out that the keys thing is in the drawer thing.
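A toy version of this evaluation step, with the graphs of proximity relations flattened into simple (thing, where-it-is) pairs; the relations in the dictionary are invented for illustration. Locating the keys is then a decoration, a walk through stored relations, not a GPS lookup:

```python
# Hypothetical proximity relations, one per pair of things: thing -> where it is
proximity = {"keys": "couch", "couch": "living room",
             "socks": "drawer", "drawer": "bedroom"}

def locate(thing, relations):
    """Decorate: follow the proximity relations until nothing more is known."""
    path = [thing]
    while path[-1] in relations:
        path.append(relations[path[-1]])
    return path
```

When a proximity relation changes (the keys move from the couch to the drawer), only that one entry is updated; this is the dictionary analogue of one move applied to one tiny graph molecule.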
Here is a more mathematical explanation. Suppose you have a set T of things, of objects. You want to describe how they sit in space without using any “geometric expertise” (like a GPS coordinate system, for example).

As you know, the set of things T is the same as the trivial groupoid of things T^{2}, with arrows being pairs of things (a,b) and with composition of arrows given by (a,b)(b,c) = (a,c). Now, let us first define what the “space” around one object from T is; call the object “e”. (The same works for each object.)

For defining this you take a commutative group (it could be (0,\infty), or it could be the free commutative group over an alphabet containing for example “NEAR” and “FAR”, such that NEAR^{-1} = FAR, etc.). Call this group the “scale group”. Then to any pair of objects (a,b) and any scale \varepsilon, associate a “virtual pair at scale \varepsilon” (a,b)_{\varepsilon} = (b(\varepsilon)a, b), where b(\varepsilon)a is like a term in lambda calculus constructed from the variable names “a” and “b”, only that it does not satisfy the lambda calculus definition, but the emergent algebra one. Of course you can see such terms as graphs in graphic lambda calculus made from the epsilon gates, along with the emergent algebra moves from graphic lambda calculus, as in

Moreover, you can mix them with lambda calculus terms, as it is possible in graphic lambda calculus.
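In the simplest, Euclidean model (an assumption made only for illustration; the whole point of emergent algebras is that no such model is needed), the dilation b(\varepsilon)a can be realized as b + \varepsilon(a - b), and the virtual pair at scale \varepsilon and the trivial groupoid composition look like this:

```python
# Euclidean illustration of the emergent algebra dilation: b(eps)a = b + eps * (a - b)
def dilation(b, eps, a):
    return tuple(bi + eps * (ai - bi) for ai, bi in zip(a, b))

def virtual_pair(a, b, eps):
    """(a, b)_eps = (b(eps)a, b): the pair (a, b) seen at scale eps."""
    return (dilation(b, eps, a), b)

def compose(p, q):
    """Composition in the trivial groupoid: (a, b)(b, c) = (a, c)."""
    (a, b), (b2, c) = p, q
    assert b == b2, "arrows must be composable"
    return (a, c)
```

As eps goes to 0 the virtual pair collapses onto (b, b); in the formal version eps need not be a number at all, it can be a symbol like NEAR from the scale group, with the moves of graphic lambda calculus doing the work that arithmetic does here.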

So we have (in our platonic universe, because in practice we can’t “have” an infinite set of all possible things) a collection of term-like objects made from a, b, etc. and from scales epsilon, mu, etc. We may imagine them visually either as binary trees, like when we represent graphically compositions of operations, or as graphs in graphic lambda calculus, if we apply to them the algorithm for eliminating variable names. This is a kind of collection of all programs for using a space (each one is a me for space). (Mind you that there is no constraint, like euclidean space, or other.) The space around one object, say e, is obtained from this construction applied to the groupoid of pairs of pairs, with arrows ((a,e),(b,e)), on which the scale acts like this: ((a,e),(b,e))_{\varepsilon} = ((a,e)_{\varepsilon}, (b,e)_{\varepsilon}). (Remark: in the epsilon-deformed groupoid the composition of arrows is transported by the epsilon deformation, so the composition changes from trivial to something more interesting.)
If you are still reading and this is too long, the shorter description is that the space around an object e is defined in terms of virtual geometric relations (mes for space) between all the other things from the set T (which is finite, of course), with respect to e, and that each such virtual geometric relation is a small graph in graphic lambda calculus. More than this, suppose you want to translate from one virtual geometric relation to another (or from one virtual place to another). Such a “translation” is itself a small graph, actually made of only 4 nodes.
And here comes the final part: in order to know where the things from T are, one with respect to the others, it is enough to attribute to each pair of objects (a,b) a “virtual geometric relation” in the space around b, i.e. a graph. If the things move one with respect to the other, they can update their relative geometric relations by moves in graphic lambda calculus.
If you want to know where a thing is in the physical space, then you have to start from somewhere (some object) and decorate the virtual space relations, as they appear as graphs. This is like the evaluation of lambda calculus terms.

In conclusion, I am very interested in practical applications of this, and I can at least formulate a list of needs, some of them with an IT flavour, others applied math. As for the pure math needs, they are pure fun and play. All this together would lead to really interesting things. One of the things to keep in mind is that this would also endow the internet of things with a synthetic chemical connectome, as an extension of The chemical connectome of the internet.


Competitivity and creativity in academia

Via Mark Changizi, I arrived at this post at It’s OK to be smart, which has as a conclusion: [for PhD students] “… faculty jobs ARE the alternative career”. Went from there to The Tenure Games, which a lot of people know. The problem is big, one hears about particular cases all the time, and a natural question is: is academia facing a real problem, or is it just about people who can’t stand the heat of the fierce competition?

I think there is a more important side question: is academia delivering? It should, because it selects the best of the best, right?

You don’t have to believe my suggestion to the contrary. Let’s see another, older academia in action, more than a century ago: [source]

The most famous art competition for students was the Prix de Rome. The winner of the Prix de Rome was awarded a fellowship to study at the Academie francaise’s school at the Villa Medici in Rome for up to five years. To compete, an artist had to be of French nationality, male, under 30 years of age, and single. He had to have met the entrance requirements of the Ecole and have the support of a well-known art teacher. The competition was grueling, involving several stages before the final one, in which 10 competitors were sequestered in studios for 72 days to paint their final history paintings. The winner was essentially assured a successful professional career.

The ultimate achievement for the professional artist was election to membership in the Academie  francaise and the right to be known as an academician.

All this led, a hundred years later, to decorating chocolate boxes. Art followed another, much more creative path.


This is part of the same problem as academic publishing; what is intriguing is that the same parallel works.