Tag Archives: actor model

Actors, space and Movable Feast Machine at ALIFE14


  1. David H. Ackley, Trent R. Small, Indefinitely Scalable Computing = Artificial Life Engineering
  2. Lance R. Williams, Self-Replicating Distributed Virtual Machines

presented at ALIFE14.

Both articles look great and the ideas are very close to my current interests. Here is why:

  • it is about distributed computing
  • using actors
  • some things which fall under functional programming
  • and space.

The chemlambda and distributed GLC project also has a paper there: M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication; see the arXiv version, arXiv:1403.8046, which is better because it has links inside.

The resemblance between the mentioned papers and the chemlambda and distributed GLC project is that our (fully theoretical, alas) model is also about distributed computing, using actors, lambda calculus and space.

The differences are many, though, and I hope that these might lead to interesting interactions.

Further on I describe the main difference, with the understanding that all this is very new to me, a mathematician, so I might be wrong in my grasp of the MFM (please correct me if so).

In the MFM the actors are atoms in a passive (i.e. predefined) space. In the distributed GLC the actors have as states graphs called molecules (more precisely g-patterns).

[Here is the moment to thank, first, Stephen P. King, who pointed me to the two articles. Second, Stephen works on something which may be very similar to the MFM, as far as I understand, but I have to stress strongly that the distributed GLC does NOT use a botnet, nor are the actors nodes in a chemlambda graph!]

In distributed GLC the actors interact by passing messages to other actors with a known ID. Such message passing provokes a change in the states of the actors which corresponds to one of the graph rewrites (moves of chemlambda). As an effect, the connectivities between the actors change (where connectivity between actors :alice and :bob means that :alice has as its state a g-pattern with one of the free ports decorated with :bob’s ID). Here the space is represented by these connectivities, and it is not passive but an effect of the computation.
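To make the connectivity idea concrete, here is a toy sketch in Python of how “space as an effect of the computation” could be read off actor states. Everything here (the Actor class, the port_tags field, the node-term strings) is my own illustration, not part of the distributed GLC formalism.

```python
# Toy model: an actor's state is a molecule (g-pattern), kept here as a
# list of node terms; free ports are tagged with the IDs of other actors.
class Actor:
    def __init__(self, name, g_pattern, port_tags):
        self.name = name            # e.g. ":alice"
        self.g_pattern = g_pattern  # the molecule, as a list of node terms
        self.port_tags = port_tags  # free port -> ID of the neighbouring actor

def connectivity(actors):
    """The 'space': the neighbour relation read off from the free-port
    decorations, rather than from a predefined grid of positions."""
    return {(a.name, other) for a in actors for other in a.port_tags.values()}

alice = Actor(":alice", ["L[a,x,b]"], {"b": ":bob"})
bob = Actor(":bob", ["A[b,c,d]"], {"b": ":alice"})
print(sorted(connectivity([alice, bob])))
# [(':alice', ':bob'), (':bob', ':alice')]
```

When a rewrite changes the free-port decorations, `connectivity` changes too: the space is recomputed from the states, never stored separately.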

In the future I shall use and cite, of course, this great research subject which was unknown to me. For example, the article Lance R. Williams, Robust Evaluation of Expressions by Distributed Virtual Machines, already uses actors! What else am I not aware of? Please tell me, thanks!




Lambda calculus and the fixed point combinator in chemlambda (VI)

This is the 6th post (continuing from part I, part II, part III, part IV and part V) in a series of expository posts where we put together in one place the pieces from various places about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation self-contained. (However, look at this page: there are links to online tutorials, as well as many posts on the general subject, which you may discover either by clicking on the tag cloud at the left or by searching by keywords in this open notebook.)


This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which has been accepted at the ALIFE 14 conference, 7/30 to 8/2, 2014, Javits Center / SUNY Global Center, New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.


In this post I want to concentrate on the mechanism of self-multiplication for g-patterns coming from lambda terms (see  part IV   where the algorithm of translation from lambda terms to g-patterns is explained).

Before that, please notice that there is a lot to say about an important problem which shall be described later in detail. But here it is, to keep an eye on it.

Chemlambda in itself is only a graph rewriting system. In part V it is explained that the beta reduction from lambda calculus needs an evaluation strategy in order to be used. We noticed that in chemlambda self-multiplication is needed in order to prove that one can do beta reduction as the beta move.

We arrive at the obvious conclusion that in chemlambda reduction (i.e. the beta move) and self-multiplication are just names for parts of the computation. Indeed, the clear conclusion is that there is a computation which can be done with chemlambda, which has some parts where we use the beta move (and possibly some COMB, CO-ASSOC, CO-COMM, LOC PRUNING moves) and some other parts where we use DIST and FAN-IN (and possibly, again, some of the moves COMB, CO-ASSOC, CO-COMM, LOC PRUNING). These two parts are named reduction and self-multiplication respectively, but in the big computation they mix into a whole. There are only moves, graph rewrites applied to a molecule.

Which brings up the problem: chemlambda by itself is not sufficient for having a model of computation. We need to specify how, where and when the reductions apply to molecules.

There may be many variants, roughly described as: sequential, parallel, concurrent, decentralized, random, based on chemical reaction network models, etc.

Each model of computation (which can be made compatible with chemlambda) gives a different whole when used with chemlambda. Until now, in this series there has been no mention of a model of computation.

There is another aspect of this. It is obvious that chemlambda graphs form a larger class than lambda terms, and also that the graph rewrites apply to more general situations than beta reduction (possibly combined with an evaluation strategy). It means that the important problem of defining a model of computation over chemlambda will influence the way chemlambda molecules “compute” in general.

The model of computation which I prefer is not based on chemical reaction networks, nor on process calculi, but on a new model, inspired by the Actor Model, called the distributed GLC. I shall explain why I believe that Hewitt’s Actor Model is superior to those mentioned previously (with respect to decentralized, asynchronous computation on the real Internet, and also in the real world), I shall explain my understanding of that model, and eventually the distributed GLC proposal by me and Louis Kauffman will be exposed in all its details.

4.  Self-multiplication of a g-pattern coming from a lambda term.

For the moment we concentrate on the self-multiplication phenomenon for g-patterns which represent lambda terms. In the following there is a departure from the ALIFE 14 article. I shall not take the path which consists in going to combinator patterns, nor shall I discuss in this post why the self-multiplication phenomenon is not confined to the world of g-patterns coming from lambda terms. This is for a future post.

In this post I want to give an image of how these g-patterns self-multiply, in the sense that most of the self-multiplication process can be explained independently of the computing model. Later on we shall come back to this; we shall look outside lambda calculus as well, and we shall also explore the combinator molecules.

OK, let’s start. In part V it was noticed that after an application of the beta rule to the g-pattern

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

we obtain (via COMB moves)

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

and the problem is that we have a g-pattern which is not coming from a lambda term, because it has a FOTREE in the middle of it. It looks like this (recall that FOTREEs are figured in yellow and the syntactic trees are figured in light blue)

[image: structure_12]

The question is: what can happen next? Let’s simplify the setting by taking the FOTREE in the middle to be a single fanout node; then we ask what moves can be applied further to the g-pattern

C[x] FO[x,a,b]

Clearly we can apply DIST moves. There are two DIST moves, one for the application node, the other for the lambda node.

There is a chain of propagation of DIST moves through the syntactic tree of C which is independent of the model of computation chosen (i.e. of the rules about which, when and where moves are applied), because the syntactic tree is a tree.
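As a rough illustration, here is one DIST step for the application node, written over g-patterns stored as lists of (node, ports) pairs. The exact port wiring below is my guess from the pictures, so treat it as a sketch of the mechanism, not as the official chemlambda convention.

```python
import itertools

_fresh = itertools.count()

def new_port():
    # fresh port names for the duplicated nodes
    return f"p{next(_fresh)}"

def dist_app(gp):
    """Apply one DIST move: an A node whose output feeds a FO node gets
    duplicated, with its two inputs fanned out (the wiring is my guess)."""
    for (n1, p1) in gp:
        for (n2, p2) in gp:
            if n1 == "A" and n2 == "FO" and p1[2] == p2[0]:
                u, v, _ = p1            # inputs and output of the A node
                _, a, b = p2            # the two outputs of the fanout
                c1, c2, d1, d2 = (new_port() for _ in range(4))
                rest = [t for t in gp if t not in ((n1, p1), (n2, p2))]
                return rest + [("FO", (u, c1, c2)), ("FO", (v, d1, d2)),
                               ("A", (c1, d1, a)), ("A", (c2, d2, b))]
    return gp  # no DIST pattern found

g = [("A", ("u", "v", "x")), ("FO", ("x", "a", "b"))]
print(dist_app(g))
```

Iterating `dist_app` is exactly the propagation described above: each step pushes the fanout one level down the syntactic tree, and since the tree has no cycles the order of the steps does not matter.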

Look at what happens. We have the propagation of DIST moves (for the application nodes, say) first, which produces two copies of a part of the syntactic tree which contains the root.


At some point we arrive at a pattern which allows the application of a DIST move for a lambda node. We apply the rule:


We see that fanins appear! … and then the propagation of DIST moves through the syntactic tree continues until eventually we get this:

[image: structure_9]

So the syntactic tree has self-multiplied, but the two copies are still connected by FOTREEs which connect to the left.out ports of the lambda nodes which are part of the syntactic tree (only one is figured in the previous image).

Notice that now (or even earlier; it does not actually matter, and why will be explained rigorously when we talk about the computing model, since for the moment we only want to see whether it is possible) we are in a position to apply the FAN-IN move. Also, it is clear that by using CO-COMM and CO-ASSOC moves we can shuffle the arrows of the FOTREE, which is “conjugated” with a fanin at the root and with fanouts at the leaves, so that eventually we get this.


The self-multiplication is achieved! It looks strikingly like anaphase [source]

[image: 800px-Anaphase.svg]

followed by telophase [source]




Synapse servers: plenty of room at the virtual bottom

The Distributed GLC is a decentralized asynchronous model of computation which uses chemlambda molecules. In the model, each molecule is managed by an actor, which has the molecule as its state, and the free ports of the molecule are tagged with the names of other actors. Reductions between molecules (like chemical reactions) happen only for those molecules whose actors know one another, i.e. only between molecules managed by actors :Alice and :Bob, say, such that:

  • the state of :Alice (i.e. her molecule) contains one half of the graph_{1} pattern of a move, along with a free port tagged with :Bob’s name;
  • the state of :Bob contains the other half of the graph_{1} pattern, with a free port tagged with :Alice’s name;
  • there is a procedure, based exclusively on communication by packets, TCP style [UPDATE: watch this recent video of an interview of Carl Hewitt!], which allows the reduction to be performed on both sides and which later informs the other actors which are neighbours (i.e. appear as tags in :Alice’s or :Bob’s state) about possible new tags in their states, due to the reduction which happened (this can be done, for example, by performing the move either via the introduction of new invisible nodes in the chemlambda molecules, or via the introduction of Arrow elements, followed by combing moves).
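A minimal sketch of this handshake, with the message passing collapsed into a synchronous check; the state update is taken from the beta rewrite L[a,x,b] A[b,c,d] → Arrow[c,x] Arrow[a,d] used earlier on this page, and all the encodings are my own toy choices.

```python
actors = {
    ":Alice": {"molecule": ("L", ("a", "x", "b")), "tags": {"b": ":Bob"}},
    ":Bob":   {"molecule": ("A", ("b", "c", "d")), "tags": {"b": ":Alice"}},
}

def try_beta(left, right):
    """The move fires only when each actor holds one half of graph_1 and
    tags the shared free port with the other actor's name."""
    la, lb = actors[left], actors[right]
    n1, (a, x, b) = la["molecule"]
    n2, (b2, c, d) = lb["molecule"]
    if (n1, n2) == ("L", "A") and b == b2 \
            and la["tags"].get(b) == right and lb["tags"].get(b2) == left:
        # beta: L[a,x,b] A[b,c,d] -> Arrow[c,x] Arrow[a,d], split between the two
        la["molecule"] = ("Arrow", (c, x))
        lb["molecule"] = ("Arrow", (a, d))
        return True
    return False

print(try_beta(":Alice", ":Bob"))    # True
print(actors[":Alice"]["molecule"])  # ('Arrow', ('c', 'x'))
```

In the real proposal this check would of course be split into messages travelling both ways, with the neighbour-notification step on top; the point of the sketch is only the firing condition.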

Now, here is a possibly better idea, to explore, one which connects to a thread which is not developed for the moment (anybody interested? contact me): neural-type computation with chemlambda and GLC.

The idea is that once the initial configuration of actors and their initial states is set, why not move the actors around and allow the possible reductions only if the actors :Alice and :Bob are in the same synapse server.

Because the actor IS the state of the actor, the rest of the stuff a GLC actor knows how to do is so trivially easy that it is not worth dedicating one program per actor, running in some fixed place. This way, a synapse server can do thousands of reductions on different actor datagrams (see further) at the same time.


  • be bold and use connectionless communication, UDP-like, to pass the actor states (as datagrams) between servers called “synapse servers”
  • and let a synapse server check the datagrams to see if by chance there is a pair which allows a reduction, then perform the reduction in one place, then let them walk on, modified.
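A toy sketch of the second bullet: the synapse server scans a batch of actor datagrams for a pair forming the graph_1 of a move, reduces it in place, and releases the modified datagrams. The encoding and the beta-like matching rule are invented for the example.

```python
def reducible(m1, m2):
    # a beta-like pair: an L node whose third port meets the first port of an A node
    return m1[0] == "L" and m2[0] == "A" and m1[1][2] == m2[1][0]

def synapse_step(datagrams):
    """One server pass: find a reducible pair and rewrite both datagrams
    (here with the beta rewrite L[a,x,b] A[b,c,d] -> Arrow[c,x] Arrow[a,d])."""
    for i, d1 in enumerate(datagrams):
        for j, d2 in enumerate(datagrams):
            if i != j and reducible(d1, d2):
                a, x, b = d1[1]
                _, c, d = d2[1]
                datagrams[i] = ("Arrow", (c, x))
                datagrams[j] = ("Arrow", (a, d))
                return datagrams  # one reduction per pass, then let them walk
    return datagrams

inbox = [("L", ("a", "x", "b")), ("A", ("b", "c", "d"))]
print(synapse_step(inbox))
# [('Arrow', ('c', 'x')), ('Arrow', ('a', 'd'))]
```

Nothing here needs a connection or a fixed per-actor process: the server only ever sees whatever datagrams happen to be passing through it, which is the point of the idea.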

There is so much room for the artificial chemistry chemlambda at the bottom of the Internet layers that one can then add some learning mechanisms to the synapse servers. Here is one, for example: suppose that a synapse server matches two actor datagrams and finds that there is more than one possible reduction between them. Then the synapse server asks its neighbour synapse servers (which perhaps correspond to a virtual neuroglia) whether they encounter this configuration. It then chooses (according to a simple algorithm) which reduction to make, based on the info coming from its neighbours in the same glial domain, and tags the packets which result after the reduction (i.e. adds to them, in some field, a code for the move which was made). Successful choices are those which have descendants which are still active, say after more than $n$ reductions.

Plenty of possibilities, plenty of room at the bottom.

Distributed GLC discussion

This is my contribution to a discussion about the distributed GLC model of computation and about the associated gui.

Read carefully.  It has a fractal structure.

Basics about chemlambda graphs and the GUI

The chemlambda graphs (molecules) are not flowcharts. One just has to specify certain (known) graphs with at most 4 nodes and how to replace them with other known simple graphs. That is all.

That means that one needs:

– a file which specifies what kind of graphs are used (by giving the type of nodes and arrows)

– a file which specifies which are the patterns (i.e. graphs) and which are the rewrite moves

– and a program which takes these files and a graph as input and does things: it checks whether the graph is of the kind described in file 1, finds whether there are any patterns from file 2 and does the rewrite at a place the user chooses, does a sequence of rewrites until it forks if the user wants, or takes as input a lambda expression given by the user and transforms it into a graph.

– then there are the graph visualization program(s); that is the hard part, but it has already been done in multiple places. This means that one only has to write the conversions of file formats from and to the viz tool.

That is the minimal configuration.
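For concreteness, here is one way the two specification files could look, written as Python dicts (JSON would do just as well); the file layout and the arity numbers are my own guesses at a minimal encoding.

```python
node_types = {  # "file 1": the kinds of nodes and their arities
    "L": 3, "A": 3, "FO": 3, "FI": 3, "Arrow": 2,
}

moves = {       # "file 2": graph_1 -> graph_2 patterns, with ports as variables
    "beta": {
        "graph_1": [("L", ("a", "x", "b")), ("A", ("b", "c", "d"))],
        "graph_2": [("Arrow", ("c", "x")), ("Arrow", ("a", "d"))],
    },
}

def well_formed(graph):
    """The first job of the program: check that a graph is of the kind
    described in file 1 (known node types, declared arities)."""
    return all(node in node_types and len(ports) == node_types[node]
               for node, ports in graph)

print(well_formed(moves["beta"]["graph_1"]))  # True
```

Keeping the node types and the moves in data files, as described above, means that new nodes and new (macro) moves can be added later without touching the program.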


There are various reasons why one wants to be able to decorate the graphs, locally, as a supplementary thing, but in no way is this needed for the basic process.

Concerning decorations, one needs a file which specifies how to decorate the arrows and which relations come from the nodes. But this is not part of the computation.

If we want to make it more powerful then it gets more complex, because if we want to do symbolic computations with the decorations (like the elimination of a relation coming from a node) then it is probably better to output a file of decorations of arrows and relations from nodes and input it into symbolic software, like Mathematica or something free; there is no need to reinvent the wheel.

After the graph rewrite you lose the decoration; that’s part of the fun which makes decorations less interesting and supposedly makes the computation more secure.

But that depends on the choice of decoration rules.

For example, if you decorate with types then you don’t lose the decoration after the move. Yes, there are nodes and arrows which disappear, but outside of the site where the move was applied the decorations don’t change.

In the particular case of using types as decorations there is another phenomenon, though. If you use the decoration with types for graphs which don’t represent lambda terms then you will get relations between types, relations which are supplementary. A way of saying this is that some graphs are not well typed, meaning that the types form an algebra which is not free (you can’t eliminate all the relations). But the algebra you get, albeit not free, is still an interesting object.

So the decoration procedure and the computation (reduction) procedure are orthogonal. You may decorate a fixed graph and do symbolic algebraic computations with the decorations, in an algebra generated by the graph, in the same way as a knot generates an algebra called a quandle. Or you may reduce the graph, irrespective of the decorations, and get another graph. Decorations of the first graph don’t persist, a priori, after the reduction.

An exception is the decoration with types, which persists outside the place where the reduction is performed. But there is another problem: even if the decoration with types satisfies locally (i.e. at the arrows of each node) what is expected from types, many (most) of the graphs don’t generate a free algebra, as would be expected from algebras of types.

The first chemical interpretation

There is the objection that the reductions can’t be like chemical reactions because the nodes (atoms) can’t appear or disappear; there should be a conservation law for them.

Correct! What then?

The reduction, let’s pick one, say the beta move, is a chemical reaction of the graph (molecule) with an enzyme which in the formalism appears only under the name “beta enzyme” and is not specified as a graph in chemlambda. Then, during the reaction, some nodes may disappear, in the sense that they bond to the beta enzyme and make it inactive afterwards.

So, the reduction A –>beta B appears as the reaction

A + beta = B + garbage

How moves are performed

Let’s get a bit more detailed about what moves (graph rewrites) mean and how they are done. Every move says: replace graph_1 with graph_2, where graph_1 and graph_2 are graphs with a small number of nodes and arrows (and a “graph” may well be made of only two arrows, as is the case for the graph_2 of the beta move).

So, now you have a graph G. The program looks for graph_1 chunks in G and adds some annotation (perhaps in an annotation file it produces). Then there may be a script which inputs the graph G and the annotation file into the graph viz tool, with the effect, for example, that the graph_1 chunk appears phosphorescent on the screen. Or, say, when you hover with the mouse over the graph_1 chunk it changes color, or there is an ellipse which encircles it and a tag saying “beta”.
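The annotation step could be sketched like this: scan G for graph_1 chunks and emit records a viz tool could turn into highlighting. The matcher below is deliberately naive (it only knows the beta chunk) and the annotation format is invented.

```python
def find_beta_chunks(G):
    """Return an annotation for every beta graph_1 chunk in G: an L node
    whose third port feeds the first port of an A node."""
    annotations = []
    for i, (n1, p1) in enumerate(G):
        for j, (n2, p2) in enumerate(G):
            if i != j and n1 == "L" and n2 == "A" and p1[2] == p2[0]:
                annotations.append({"move": "beta", "nodes": [i, j]})
    return annotations

G = [("L", ("u", "v", "w")), ("A", ("w", "p", "q")), ("FO", ("v", "r", "s"))]
print(find_beta_chunks(G))  # [{'move': 'beta', 'nodes': [0, 1]}]
```

The annotation file then only needs to list which nodes of G belong to which chunk; everything visual (the phosphorescent highlight, the ellipse, the tag) is the viz tool's job.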

Suppose that the user clicks, giving his OK for performing the move. Then on the screen the graph changes, but the previous version is kept in memory, in case the user wants to go back (the moves are all reversible, but sometimes, as in the case of the beta move, the graph_2 is too common, it is everywhere, so the use of both senses of some moves is forbidden in the formalism, unless they are used in a predefined sequence of moves, called a “macro move”).

Another example would be that the user clicks on a button which says “go along with the reductions as long as you can do it before you find a fork in the reduction process”. Then, of course it would be good to keep the intermediate graphs in memory.

Yet another example would be that of a node or arrow of the graph G which turns out to belong to two different interesting chunks. Then the user should be able to choose which reduction to do.

It would be good to have:

– the possibility to perform each move upon request,

– the possibility to perform a sequence of moves which starts from a first one chosen by the user (or from the only one available in the graph, as is the case for many graphs coming from lambda terms obtained by repeated currying and nesting),

– the possibility to define new, composed moves. For example, you notice that there is a chunk graph_3 which contains graph_1 and, after the reduction of graph_1 to graph_2 inside graph_3, graph_3 becomes graph_4; graph_4 now contains a chunk graph_1 of another move, which can be reduced, and graph_4 becomes graph_5. Now you may want to say: I save this sequence of two moves from graph_3 to graph_5 as a new move. The addition of this new move does not change the formalism, because you may always replace the new move with the sequence of two old moves.

Practically, the last possibility means the ability to add the new chunks graph_3 and graph_5 to the file which describes the moves and to define the new move under a name chosen by the user.
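The bookkeeping for such a composed move fits in a few lines; the dictionary stands in for the moves file, and all the names are of course invented.

```python
moves = {                       # each move: (source chunk, target chunk)
    "m1": ("graph_3", "graph_4"),
    "m2": ("graph_4", "graph_5"),
}

def save_composite(name, first, second):
    """Record a user-named shorthand for first;second. The formalism is
    unchanged: the new entry can always be expanded back into two old moves."""
    src, mid = moves[first]
    mid2, dst = moves[second]
    if mid != mid2:
        raise ValueError("the two moves do not chain")
    moves[name] = (src, dst)

save_composite("m1_then_m2", "m1", "m2")
print(moves["m1_then_m2"])  # ('graph_3', 'graph_5')
```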


Finally, you may want to be able either to select a chunk of the input graph by clicking on nodes and arrows, or to construct a graph and then say (i.e. click a button) that from now on that graph will be represented as a new type of node, with a certain arity. That means writing into the file which describes the types of nodes.

You may combine the last two procedures by saying that you select or construct a graph G. Then you notice that you may reduce it in an interesting way (for whatever further purposes) which looks like this:

– before the chain of reductions you may see the graph G as being made of two chunks A and B, with some arrows between some nodes from chunk A and some nodes from chunk B. After the reduction you look at the graph as being made of chunks C, D, E, say.

– Then you “save” your chunks A, B, C, D, E as new types of nodes (some of them may of course be made of just one old node, so no need to save those) and you define a new move which transforms the chunk AB into the chunk CDE (written like this only because of the 1D constraints of writing, but you see what I mean, right?).

The addition of these new nodes and moves doesn’t change the formalism, because there is a dictionary which transforms each new type of node into a graph made of old nodes and each new move into a succession of old moves.

How can this be done:

– use the definition of the new nodes for the separation of G into A, B and for the separation of G’ (the graph after the sequence of reductions) into C, D, E

– save the sequence of moves from G to G’ as new composite move between G and G’

– produce a new move which replaces AB with CDE

It is an interesting question how this should work properly; probably one should keep both the move AB to CDE and the move G to G’, as well as the translations of G into AB and of G’ into CDE.

We’re getting close to actors, but the first purpose of the gui is not to be a sandbox for the distributed computation; that would be another level on top of it.

The value of the sequence of moves saved as a composite move is multiple:

– the graph_3 which is the start of the sequence contains the graph_1 which is the start of another move, so it always leads to forks: one may apply the whole sequence or only the first move

– there may be a possible fork after you do the first reduction in graph_3, in the sense that there may be another chunk of another move which could be applied

GLC actors

The actors are a special kind of decoration which transforms (some) moves (at the choice of the user) into interaction devices.

You decorate the nodes of a graph G with actor names (they are just names, for the moment, at your choice). As a convention, let’s denote actor names by :a, :b, etc.

You also decorate the arrows with pairs of names of actors, those coming from the decorations of the nodes, with the convention that (:a, :a) is identified (in the user’s mind) with :a (nothing surprising here: think about the groupoid over a set X, which is the set of “arrows” X \times X; X appears as the set of objects of the groupoid and identifies with the set of pairs (x,x) with x \in X).

Now, say you have a move from graph_1 to graph_2. Then, as in the boldfaced previous description, but somehow in the opposite sense, you define graphs A, B such that graph_1 is AB and graphs C,D such that graph_2 is CD.

Then you say that you can perform the reduction from graph_1 to graph_2 only if the nodes of A are all decorated with :a and the nodes of B are all decorated with :b, a different name from :a.

After reduction you decorate the nodes of C with :a and the nodes of D with :b .

In this way the actors with identities :a and :b change their states during the reduction (i.e. the graph made of the nodes decorated with :a and the arrows decorated with (:a,:a) changes; same for :b).

The reduction can be done for the graph G only at chunks graph_1 which are decorated as explained.
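The firing condition just described fits in a few lines; the decoration table and the node names are illustrative.

```python
def may_fire(A_nodes, B_nodes, deco):
    """graph_1 = AB may be reduced only if all of A belongs to one actor,
    all of B to another, and the two names differ."""
    a_names = {deco[n] for n in A_nodes}
    b_names = {deco[n] for n in B_nodes}
    return len(a_names) == 1 and len(b_names) == 1 and a_names != b_names

deco = {"n1": ":a", "n2": ":a", "n3": ":b"}
print(may_fire(["n1", "n2"], ["n3"], deco))  # True: A is all :a, B is all :b
print(may_fire(["n1"], ["n2"], deco))        # False: both chunks belong to :a
```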

To explain what actor :Bob is doing, it matters from which point of view we look. Also, what is the relation between actors and the chemical interpretation, and how do they fit there?

So let’s take it methodically.

The point of view of the GUI

If we discuss from the point of view of playing with the gui, then the user of the gui has a global, God’s-eye view over what happens. That means that the user of the gui can see the whole graph at any moment, and the user has a clock which is like a global notion of time. So from this point of view the user of the gui is the master of space and time. He sees the fates of :Bob, :Alice, :Claire, :Dan simultaneously. The user has the right, in the gui world, to talk about parallel stuff happening (i.e. “at the same time”) and sequential stuff happening (to the same actor or actors). The user may notice that some reductions are independent, in the sense that with respect to the user’s clock the result is the same if first :Bob interacts with :Alice and then :Claire interacts with :Dan, or conversely, which makes the user think that there is some notion more general than parallelism, i.e. concurrency.

If we discuss from the point of view of :Bob, it looks different. More later.

Let’s stay with the point of view of the user of the gui and think about what actors do. We shall use the user’s clock for reference to time and the user’s point of view about space (what is represented on the screen via the viz tool) to speak about states of actors.

What the user does:

– he defines the graph types and the rules of reduction

– he inputs a graph

– he decorates it with actors names

– he clicks some buttons from time to time (deus ex machina quote: “a plot device whereby a seemingly unsolvable problem is suddenly and abruptly resolved by the contrived and unexpected intervention of some new event, character, ability or object”)

At any moment the actor :Bob has a state.

Definition: the state of :Bob at the moment t is the graph formed by the nodes decorated with the name :Bob, the arrows decorated by (:Bob, :Bob) and the arrows decorated by (:Bob, :Alice), etc .

Because each node is decorated by one actor name, it follows that there are never shared nodes between different actors, but there may be shared arrows, like an arrow decorated (:Bob, :Alice), which belongs to both :Bob and :Alice.

The user thinks about an arrow (:Bob, :Alice) as being made of two half arrows:

– one which starts at a node decorated with :Bob and has a free end, decorated with :Alice ; this half arrow belongs to the state of :Bob

– one which ends at a node decorated with :Alice and has a free start, decorated with :Bob ; this half arrow belongs to the state of :Alice
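Reading the definition and the half-arrow convention back off a globally decorated graph could look like this (a sketch with invented node names):

```python
def state_of(actor, node_deco, arrows):
    """The state of `actor`: its nodes, plus every arrow with the actor on
    at least one end. A shared arrow like (:Bob, :Alice) shows up in both
    states, as the two half-arrows described above."""
    nodes = [n for n, who in node_deco.items() if who == actor]
    halves = [(s, t) for (s, t) in arrows
              if node_deco[s] == actor or node_deco[t] == actor]
    return nodes, halves

node_deco = {"n1": ":Bob", "n2": ":Bob", "n3": ":Alice"}
arrows = [("n1", "n2"), ("n2", "n3")]
print(state_of(":Bob", node_deco, arrows))
# (['n1', 'n2'], [('n1', 'n2'), ('n2', 'n3')])
```

Note that the arrow ("n2", "n3") also appears in the state of :Alice, which is exactly the shared-arrow situation of the definition.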

The user also thinks that the arrow decorated by (:Bob, :Alice) shows that :Bob and :Alice are neighbours there. What does “there” mean? It is like you, Bob, want to park your car (the state of Bob) and your front left tyre is close to the concrete margin (i.e. :Alice), but you may also consider that your back is close to the trash bin (i.e. :Elvis).

We may represent the neighbouring relations between the actors as a new graph, obtained by thinking of :Bob, :Alice, … as nodes and letting an arrow decorated (:Bob, :Alice) appear as an arrow from the node which represents :Bob to the node which represents :Alice (of course, there may be several such arrows decorated (:Bob, :Alice)).

This new graph is called “actors diagram” and is something used by the gui user to put order in his head and to explain to others the stuff happening there.
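Building the actors diagram from the arrow decorations is a one-liner; arrows decorated (:a, :a) stay inside one actor, so only mixed pairs become edges of the diagram. Again, a toy encoding.

```python
def actors_diagram(arrow_decorations):
    """Collapse the decorated molecule to a graph over actor names: every
    arrow decorated (:Bob, :Alice) becomes an arrow :Bob -> :Alice."""
    return [(src, dst) for (src, dst) in arrow_decorations if src != dst]

arrows = [(":Bob", ":Bob"), (":Bob", ":Alice"), (":Alice", ":Claire")]
print(actors_diagram(arrows))
# [(':Bob', ':Alice'), (':Alice', ':Claire')]
```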

The user calls the actors diagram “space”, because he thinks that space is nothing but the neighbouring relation between actors at a moment in time (the user’s time). He is aware that there is a problem with this view, which supposes that there is a global notion of time and a global simultaneous view of the actors (states), but says: “what the heck, I have to use some way to discuss with others about what’s happening in the gui world, but I will show great caution and restraint by trying to keep track of the effects of this global view on my explanation”.

Suppose now that there is an arrow decorated (:Bob, :Alice) and this arrow, along with the node from the start (decorated with :Bob) and the node from the end (decorated with :Alice) is part of the graph_1 of one of the graph rewrites which are allowed.

More generally, suppose that there is a graph_1 chunk which has the form AB, with the sub-chunk A belonging to :Alice and the sub-chunk B belonging to :Bob.

Then the reduction may happen there. (Who initiates it? Alice, Bob, the user’s click? Let’s not care about this for the moment, although if we use the user’s point of view then Alice and Bob are passive and the user has the decision to click or not to click.)

This is like a chemical reaction which also takes space into consideration. How?

Denote by Alice(t) and Bob(t) the respective states of :Alice and :Bob at the moment t. Think about the states as being two chemical molecules, instead of one as previously.

Each molecule has a reaction site: for Alice(t) the reaction site is A and for Bob(t) the reaction site is B.

They enter in the reaction if two conditions are satisfied:

– there is an enzyme (say the beta enzyme, if the reduction is the beta move) which can facilitate the reaction (via the user’s click)

– the molecules are close in space, i.e. there is an arrow from A to B, or from B to A

So you see that it may happen that Alice(t) has inside a chunk graph which looks like A and Bob(t) has a chunk graph which looks like B, but if the chunks A, B are not connected in such a way that AB forms a chunk like the graph_1 of the beta move, then they can’t react, because (in a physical interpretation, say) they are not close in space.

The reaction sites of Alice(t) and Bob(t) may be close in space, but if the user does not click then they can’t react because there is no beta enzyme roaming around to facilitate the reaction.

If they are close and if there is a beta enzyme around then the reaction appears as

Alice(t) + Bob(t) + beta = Alice(t+1) + Bob(t+1) + garbage

Let’s see now who Alice(t+1) and Bob(t+1) are. The beta rewrite replaces graph_1 (which is AB) by graph_2 (which is CD). C will belong to Alice(t+1) and D will belong to Bob(t+1). The rest of Alice(t) and Bob(t) is inherited unchanged by Alice(t+1) and Bob(t+1).

Is this true? What about the actors diagram, will it change after the reaction?

Actually graph_1, which is AB, may have (and usually does have) other arrows besides the ones decorated with (:Bob, :Alice). For example, A may have arrows from A to the rest of Alice(t), i.e. decorated with (:Alice, :Alice); same for B, which may have other arrows from B to the rest of Bob(t), decorated with (:Bob, :Bob).

After the rewrite (chemical reaction) these arrows will be rewired by the replacement of AB by CD, but nevertheless the new arrows which replace them will be decorated by (:Alice, :Alice) (because they will become arrows from C to the rest of Alice(t+1)) and (:Bob, :Bob) (same argument). All in all, we see that after the chemical reaction the molecule :Alice and the molecule :Bob may lose or gain some nodes (atoms) and may suffer some internal rewiring (bonds), so it looks like :Alice and :Bob have changed their chemical composition.

But they also moved as an effect of the reaction.

Indeed, graph_1, which is AB, may have other arrows besides the ones decorated with (:Bob, :Alice), (:Bob, :Bob) or (:Alice, :Alice). The chunk A (which belongs to Alice(t)) may have arrows which connect it with :Claire, i.e. there may be arrows from A to another actor, Claire, decorated with (:Alice, :Claire), for example.

After the reaction, which consists in the replacement of AB by CD, some rewiring has happened, which may have as an effect the appearance of arrows decorated (:Bob, :Claire), for example. In such a case we say that Bob has moved close to Claire. The molecules move in this way (i.e. in the sense that the neighbouring relations change in this concrete way).

Pit stop

Let’s stop here for the moment, because there is already a lot. In the next message I hope to talk about why the idea of using a chemical reaction network (CRN) image is good, but still global: it is a way to replace the user’s deus ex machina clicks by the random availability of enzymes, but it still uses a global time and a global space (i.e. the actors diagrams). The model will also be better than the usual CRN based models, where the molecules are supposed to be part of a “well stirred” solution (i.e. let’s neglect space effects on the reactions), or are supposed to diffuse in a fixed space (i.e. let’s make the space passive). The model will allow the introduction of global notions of entropy.

Such a CRN based model deserves a study in itself, because it is unusual in the way it describes the space and the chemical reactions of the molecules-actors as aspects of the same thing.

But we want to go even further, towards renouncing the global point of view.

The example with the marbles

In a discussion about the possible advantages of the GLC actors model for secure computing, I came up with this analogy, which I want to file here so it does not get lost in the flow of exchanges:

Mind that this is only a thought experiment, which might not be accurate in all aspects of its representation of the kind of computation done with GLC or, more accurately, with chemlambda.

Imagine a large pipe, with a diameter of 1 m, say, and 3 m long, to fix an image. It is full of marbles, all identical in shape. It is so full that if one forces a marble in at one end, then a marble (or sometimes more than one) has to get out at the other end.

Say Alice is at one end of the pipe and Bob is at the other end. They agreed previously to communicate in the most primitive manner, namely by spilling a small number (say around ten) or a big number (for example around 50) of marbles at their respective ends. The pipe contains maybe 10^5 or 10^6 marbles, so both these numbers are small.
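The protocol can be caricatured as a FIFO queue: forcing k marbles in at Alice’s end makes k marbles fall out at Bob’s end, and Bob decodes only the size of the spill. The threshold separating "small" from "big" is an assumption of this sketch.

```python
from collections import deque

PIPE_SIZE = 100_000                    # the pipe is full of identical marbles
pipe = deque(["marble"] * PIPE_SIZE)

def alice_sends(k):
    # force k marbles in at Alice's end; k marbles fall out at Bob's end
    spilled = 0
    for _ in range(k):
        pipe.appendleft("marble")
        pipe.pop()
        spilled += 1
    return spilled

def bob_decodes(count):
    # hypothetical convention: around ten means "small", around fifty "big"
    return "small" if count <= 30 else "big"

print(bob_decodes(alice_sends(10)))   # -> small
print(bob_decodes(alice_sends(50)))   # -> big
```

Note that the pipe’s contents never carry the message; only the counts at the ends do, which is why an observer of the middle sees nothing.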

There is also Claire who, for some reason, can’t see the ends of Alice and Bob, but the pipe has a window at the middle and Claire can see about 10% of the marbles from the pipe, those which are behind the window.

Let’s see how the marbles interact. Having the same shape, and because the pipe is full of them, they are in a local configuration which minimizes the volume (maybe not all of them, but here the analogy is mum about this). When a marble (or maybe several) is forced in at Alice’s end of the pipe, there are lots of movements which accommodate the new marbles with the old ones. The physics of marbles is known: it is the elastic contact between them, and there is a fact in the platonic sky which says that for any local portion of the pipe the momentum and energy are conserved, as well as the volume of the marbles. The global conservation of these quantities is an effect of the local ones (as anybody versed in continuum mechanics can confirm).

Now, Claire can’t get anything from looking through the window. At best Claire notices complex small movements, but there is no clear way to tell how this happens (other than, if she looks at a small number of marbles, she might figure out the local mechanical ballet imposed by the conservation laws), nor does she see Alice’s marbles marching towards Bob’s end.

Claire can easily destroy the communication, for example by opening her window and taking out some buckets of marbles, or even by breaking the pipe. But this does not get Claire any closer to understanding what Alice and Bob are talking about.

Claire could of course claim that if the whole pipe were transparent, she could film the pipe and then reconstruct the communication. But in this case Claire would be the goddess of the pipe and nothing would be hidden from her. Alice and Bob would be her slaves, because Claire would be in a position equivalent to having a window at each end of the pipe.



  • each marble is a GLC actor
  • they interact locally, by known and simple rules
  • this is an example of signal transduction
  • which encrypts itself; more communication makes the decoding harder. It is the same problem which is encountered when observing a living system, for example a cell. You may freeze it (and therefore kill it) and look at it, but you won’t see how it functions. You can observe it alive, but it is complex by itself; you never see meaning, or only rare glimpses of it.
  • the space (of the pipe) represents  an effect of the local, decentralized, asynchronous interactions.

Beneath it all there is just local interaction, via the moves which act on patterns of graphs split between actors. But this locality gives space, which is an emergent, global effect of these distinctions which communicate.

Two chemical molecules which react are one composite molecule which reduces itself, split between two actors (one per molecule). Saying that the molecules react when they are close is the same as saying that their associated actors interact when they are in the neighbouring relation. And the reaction modifies not only the respective molecules, but also the neighbouring relation between actors, i.e. the reaction makes the molecules move through space. The space is transformed, as well as the shape of the reactants, which looks from an emergent perspective as if the reactants move through some passive space.

Concretely, each actor has a piece of the big graph; two actors are neighbours if there is an arrow of the big graph which connects their respective pieces; the reduction moves can be applied only on patterns which are split between two actors and, as an effect, the reduction moves modify both the pieces and the arrows which connect the pieces, thus the neighbouring relation of actors.
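Assuming a concrete representation (invented here) where each node of the big graph is mapped to the actor which owns it, the actors diagram can be computed exactly as described: two actors are neighbours when some arrow crosses the boundary between their pieces.

```python
# node -> owning actor (names illustrative)
node_actor = {"n1": ":a", "n2": ":a", "n3": ":b", "n4": ":c"}
# arrows of the big graph
big_graph_arrows = [("n1", "n2"), ("n2", "n3"), ("n3", "n4")]

def actors_diagram(arrows, owner):
    # one edge of the actors diagram per arrow split between two actors
    edges = set()
    for s, t in arrows:
        if owner[s] != owner[t]:
            edges.add(frozenset((owner[s], owner[t])))
    return edges

print(sorted(sorted(e) for e in actors_diagram(big_graph_arrows, node_actor)))
# -> [[':a', ':b'], [':b', ':c']]
```

A reduction move which rewires arrows changes the output of actors_diagram, which is the sense in which the moves modify the neighbouring of actors.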

What we do in the distributed GLC project is to use actors to transform the Net into a space. It works exactly because space is an effect of locality, on one side, and of universal simple interactions (moves on graphs) on the other side.


Quantum experimenters Alice and Bob are actors (NTC vs TC, II)

… i.e. if you turn the diagrams which explain quantum protocols by 90 deg, then the graph rewrites which are used in the NTC (topology does not compute) categorical quantum mechanics transform into TC (topology computes) diagrams. (See NTC vs TC part 1 for the terminology.)

And Alice and Bob become GLC actors. Indeed, they behave like actors from Hewitt’s Actor model and, moreover, communicate by means of graph rewrites.

I shall come back to this with many details, for the moment I can’t do decent drawings, they will appear soon.

It is intriguing, right? Yes, because when you turn the diagrams by 90 deg, the time and space “directions” change places. Moreover, the Alice from experiment 1 (described by what appears in categorical quantum mechanics as process 1, a decorated graph) is united with the Alice from experiment 2 (described by what appears as process 2, which is equivalent by graph rewrites with process 1). That is, if you turn the diagrams by 90 deg, the two instances of Alice (or all the instances of Alice from all the intermediary steps of the graph rewrites) form the GLC actor Alice. Same for Bob, of course.

What this equivalence means is intriguing. Very much so.



How is signal transduction different from information theory?

They look different.

“Signal transduction occurs when an extracellular signaling[1] molecule activates a specific receptor located on the cell surface or inside the cell. In turn, this receptor triggers a biochemical chain of events inside the cell, creating a response.[2] Depending on the cell, the response alters the cell’s metabolism, shape, gene expression, or ability to divide.[3] The signal can be amplified at any step. Thus, one signaling molecule can cause many responses.[4]

Signal transduction involves the binding of extracellular signalling molecules and ligands to cell-surface receptors that trigger events inside the cell. The combination of messenger with receptor causes a change in the conformation of the receptor, known as receptor activation. This activation is always the initial step (the cause) leading to the cell’s ultimate responses (effect) to the messenger. Despite the myriad of these ultimate responses, they are all directly due to changes in particular cell proteins. Intracellular signaling cascades can be started through cell-substratum interactions; examples are the integrin that binds ligands in the extracellular matrix and steroids.[13]”

[wiki source]


It seems that signal transduction involves:

  • messenger molecule
  • receptor molecule
  • the messenger reacts with the receptor, which changes conformation (receptor activation)
  • receptor activation triggers other chemical reactions

Let’s condense further:

  • molecule M (messenger) binds to molecule R (receptor)
  • the complex MR changes shape (a spatial notion, in a generalized sense)
  • which triggers other reactions between (the new) R with other neighboring molecules.

It is interesting to me because that is how distributed GLC works:

  • the actor M reacts with the actor R
  • after the interaction both molecules may change: the one which belongs to the actor M and the one which belongs to the actor R,
  • but the actors’ adjacencies may also change, due to the reductions involving the reaction between M and R (this corresponds to the receptor activation)
  • which triggers other interactions between GLC actors.
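The four steps above can be mimicked by a toy Actor class; the molecules are reduced to opaque strings and the adjacencies to sets of names, so none of this is the actual chemlambda machinery, only the shape of the interaction.

```python
class Actor:
    def __init__(self, name, molecule):
        self.name = name
        self.molecule = molecule
        self.neighbours = set()

def interact(m, r):
    # both molecules change (binding, then conformation change)
    m.molecule = m.molecule + "*"
    r.molecule = r.molecule + "-activated"
    # the adjacencies may change too: R gains a neighbour of M, which
    # can trigger further interactions (the "receptor activation")
    r.neighbours |= m.neighbours - {r.name}

M = Actor("M", "messenger")
R = Actor("R", "receptor")
M.neighbours = {"R", "S"}     # S: a hypothetical downstream actor
R.neighbours = {"M"}

interact(M, R)
print(R.molecule)             # -> receptor-activated
print(sorted(R.neighbours))   # -> ['M', 'S']
```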


Information theory, on the other hand, concerns a sender, a channel and a receiver. The sender sends messages through the channel to the receiver.

Completely different frames.

One may, of course, partially ignore the mechanism (signal transduction) and look instead at the environment as a sender, at the cell as a receiver and at the cell’s membrane as a channel (just an example).

But sender, channel and receiver look (to me) like mind constructs which are useful for the human trying to make sense of what is happening, from outside. What is happening, though, is signal transduction.




Autodesk releases SeaWater (another WHAT IF post)

[ This is another  WHAT IF  post  which  responds to the challenge formulated in  Alife vs AGI.  You are welcome to suggest another one or to make your own.]

The following is a picture of a random splash of sea water, magnified 25 times [source]


It could just as well be a representation of the state of the IoT in a small neighbourhood of yours, according to the press release describing SeaWater, the new product of Autodesk.

“SeaWater is a design tool for the artificial life based decentralized Internet of Things. Each of the tiny plankton beings which appear in the picture is actually a program, technically called a GLC actor. Each plankton being has its own umwelt, its own representation of the medium which surrounds it. Spatially close beings in the picture share the same surroundings and thus they can interact. Likewise, the tiny GLC actors interact locally one with another, not in real space, but on the Net. There is no real space in the Net; instead, SeaWater represents actors closer together when they do interact.

SeaWater is a tool for Net designers. We humans are visual beings. A lot of our primate brain’s powers can be harnessed for designing the alife decentralized computing which forms the basis of the Internet of Things.

It improves very much on the primitive tools which give things like this picture [source]




Context. Recall that IoT is only a bridge between two worlds: the real one, where life is ruled by real chemistry and the artificial one, based on some variant of an artificial chemistry, aka  chemlambda.

As Andrew Hessel points out, life is a programming language (chemically based),  as well as the virtual world. They are the same, sharing the same principles of computation. The IoT is a translation tool which unites these worlds and lets them be one.

This is the far-reaching goal. But in the meantime we have to learn how to design this. Imagine that we may import real beings, say microbes, into our own unique Microbiome OS. There is no fundamental difference between synthetic life, artificial life and real life, at least at this bottom level.

Instead of aiming for human or superhuman artificial intelligence, the alife decentralized computing community wants to build a world where humans are not treated like Bayesian units by pyramidal centralized constructs. There is an immense computing power already at the bottom of alife, where synthetic biology offers many valuable lessons.


UPDATE.  This is real: Autodesk Builds Its Own Virus, as the Software Giant Develops Design Tools for Life Itself.

Bacterial conjugation is beta reduction

I come back to the idea from the post Click and zip with bacterial conjugation, with a bit more detail. It is strange, maybe, but perhaps it is less strange than many other ideas circulating on the Net around brains and consciousness.


The thing is that bacteria can’t act based on semantics; they are more stupid than us. They have physical or chemical mechanisms which obviate the need for semantic filters.

Bacteria are much simpler than brains, of course, but the discussion is relevant to brains as collections of cells.

The idea: bacterial conjugation is a form of  beta reduction!

On one side we have a biological phenomenon, bacterial conjugation. On the other side we have a logic world concept, beta reduction, which is the engine that moves lambda calculus, one of the two pillars of computation.

What is the relation between semantics, bacterial conjugation and beta reduction?

Lambda calculus is a rewrite system, with the main rewrite being beta reduction. A rewrite system, basically, says that whenever you see a certain pattern in front of you, you can replace it by another.

Graphic lambda calculus is a graph rewrite system which is more general than lambda calculus. A graph rewrite system is like a rewrite system which uses graphs instead of lines of text, or words. If you see certain graphical patterns, then you can replace them by others.

Suppose that Nature uses (graphical) rewrite systems in the biological realm; for example, suppose that bacterial interactions can be modeled by a graph rewrite system. Then there has to be a mechanism which performs the recognition of a pattern involving two bacteria in interaction.

When two bacteria interact there are at least two ingredients:  spatial proximity (SP) and chemical interaction (CI).

SP is something we can describe and think about easily, but from the point of view of a microbe our easy description is void. Indeed, two bacteria in SP can’t be described as pairs of coordinates which are numerically close, unless each of the microbes has an internal representation of a coordinate system, which is absurd to suppose. Moreover, I think it is too much to suppose that each microbe has an internal representation of itself and of its neighbouring microbes. This would be a kind of bacterial Cartesian theater.

You see, even trying to describe what could be SP for a pair of bacteria does not make much sense.

CI happens when SP is satisfied (i.e. for bacteria in spatial proximity). There is of course a lot of randomness in this, which has to be taken into account, but it does not change the fact that SP is something hard to make sense of from the point of view of bacteria.

In Distributed GLC we think about bacteria as actors (and not agents) and about SP as connections between actors. Those connections between actors change in a local, asynchronous way during the CI (which is the proper graph rewrite, performed after the pattern split between two actors in SP is identified).

In this view, SP between actors, this mysterious, almost philosophical relation which is forced upon us after we renounce the God’s eye point of view, is described as an edge in the actors diagram.

Such an edge, in Distributed GLC, is always related to an oriented edge (arrow) in the GLC (or chemlambda) graph which is doing the actual computation. Therefore we see that arrows in GLC or chemlambda graphs (may) have more interpretations than being chemical bonds in (artificial) chemistry molecules.

Actually, this is very nice, but hard to grasp: there is no difference between CI and SP!

Now, according to the hypothesis from this post and from the previous one, the mechanism which is used by bacteria for graph rewrite is to grow pili.

The following image (done with the tools I have access to right now) explains more clearly how bacterial conjugation may be (graphic) beta reduction.


In the upper part of the figure we see the lambda abstraction node (red) and the application node (green) as encoded by crossings. They are strange crossings, see the post Zipper logic and knot diagrams. Here the crossings are represented with a half of the upper passing thread erased.

Now, suppose that the lambda node is (or is managed by) a bacterial cell and that the application node is (managed by) another bacterial cell. The fact that they are in SP is represented in the first line under the blue separation line in the picture. At the left of the first row (under the blue horizontal line), SP is represented by the arrow which goes from the lambda node (of the bacterium at left) to the application node (of the bacterium at right). At the right of the first row, this SP arrow is represented as the upper arc which connects the two crossings.

Now the process of pattern recognition begins. In Nature, it is asymmetric: one of the bacterial cells grows a pilus. In this diagrammatic representation, things are symmetric (maybe a weakness of the description). The pilus growth is represented as the CLICK move.

This brings us to the last row of the image. Once the pattern is recognized (or in place), the graph reduction may happen by the ZIP move. In the crossing diagram this is represented by an R2 move, which is itself one of the ways to represent (graphic) beta moves.
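The CLICK then ZIP sequence can be caricatured with half-zippers reduced to lists of "teeth" (purely illustrative, not the actual zipper logic formalism): CLICK matches the teeth of the lambda half against those of the application half, and ZIP turns each matched pair into a direct wire.

```python
def click(lam_teeth, app_teeth):
    # pattern recognition: the two half-zippers come into contact
    # (the pilus bringing the two cells together)
    assert len(lam_teeth) == len(app_teeth)
    return list(zip(lam_teeth, app_teeth))

def zip_move(pairs):
    # graph reduction: each matched pair of teeth becomes a direct wire
    return [f"{a}->{b}" for a, b in pairs]

print(zip_move(click(["x1", "x2"], ["y1", "y2"])))
# -> ['x1->y1', 'x2->y2']
```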

Remark that in this process we have two arcs: the upper arc from the RHS crossing diagram (i.e. the arc which represents the SP) and the lower arc which appeared after the CLICK move (i.e. the pilus connecting the two bacteria).

After the ZIP move we get two (physical) pili; this corresponds to the last row in the diagram of bacterial conjugation. Let me reproduce it again here from the wiki source:



After the ZIP move, the arc which represents SP represents a pilus as well!


Click and zip with bacterial conjugation

Bacterial conjugation may be a tool for doing the CLICK and ZIP moves in the real world. Alternatively, it may serve as inspiration for designing behaviour 1 of a GLC actor in distributed GLC.

The description of bacterial conjugation, as taken from the linked wiki page:


Conjugation diagram 1- Donor cell produces pilus. 2- Pilus attaches to recipient cell and brings the two cells together. 3- The mobile plasmid is nicked and a single strand of DNA is then transferred to the recipient cell. 4- Both cells synthesize a complementary strand to produce a double stranded circular plasmid and also reproduce pili; both cells are now viable donors.

Step 2 looks like  a CLICK move from zipper logic:


Step 4 looks like a ZIP move:


Not convinced? Look then at the CLICK move as seen when zippers are made of crossing diagrams:


On the other side, the ZIP move is a form of graphic beta move, which is involved in behaviour 1 of GLC actors.

Imagine that each bacterium is an actor. You have a pair of (bacteria/actors) which (are in proximity/are connected in the actor diagram) and they proceed to (bacterial conjugation/behaviour 1). In the most general form, which actually involves up to 6 actors, the bacteria :a and :b interact like this:



(In this figure we see only two nodes, each one belonging to one actor.) The case of bacterial conjugation is when there are only two actors, i.e. :a = :c = :f and :b = :d = :e. Each of the new arrows which appeared after the graphic beta move could be seen as a pilus.

It is easy to describe, but the mechanism of bacterial conjugation is fascinating. Can it be harnessed for (this type of) computation?

UPDATE: searching for “plasmid computing”, I found Computing with DNA by operating on plasmids by T. Head, G. Rozenberg, R.S. Bladergroen, C.K.D. Breek, P.H.M. Lommerse, H.P. Spaink, BioSystems 57 (2000) 87-93.

They have a way to compute with plasmids. In this post it is argued that bacterial conjugation itself (regardless of the plasmid involved) can be used as the main graph rewrite for doing (a graphic version of) lambda calculus, basically.



Why Distributed GLC is different from Programming with Chemical Reaction Networks

I use the occasion to bookmark the post at Azimuth Programming with Chemical Reaction Networks, most of all because of the beautiful bibliography which contains links to great articles which can be freely downloaded. Thank you John Baez for putting in one place such an essential list of articles.

Also, I want to explain very briefly why CRNs are not used in Distributed GLC.

Recall that Distributed GLC is a distributed computing model based on an artificial chemistry called chemlambda, itself a (slightly different) variant of graphic lambda calculus, or GLC.

There are two stages of the computation:

  1. define the initial participants in the computation, each one called an “actor”. Each actor is in charge of a chemlambda molecule. Molecules of different actors may be connected, each such connection being interpreted as a connection between actors. If we put together all the molecules of all actors then we can glue them into one big molecule. Imagine this big molecule as a map of the world and the actors as countries, each coloured with a different colour. Neighbouring countries correspond to connected actors. This big molecule is a graph in the chemlambda formalism. The graph which has the actors as nodes and the neighbouring relations as edges is called the “actors diagram” and is a different graph than the big molecule graph.
  2. Each actor has a name (like a mail address, or like the colour of a country). Each actor knows only the names of the neighbouring actors. Moreover, each actor behaves only as a function of the molecule it holds and according to its knowledge of its neighbouring actors’ behaviour. From this point, the proper part of the computation, each actor is by itself. So, from this point on we use the way of seeing of the Actor Model of Carl Hewitt, not the point of view of Process Algebra. (See Actor model and process calculi.) Each actor has 5 behaviours, most of them consisting of graph rewrites of its own molecule or between molecules of neighbouring actors. These graph rewrites are like chemical reactions between molecules and enzymes, one enzyme per graph rewrite. Finally, the connections between actors (may) change as a result of these graph rewrites.
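The "one enzyme per graph rewrite" idea can be caricatured as a lookup from enzyme names to local rules. Everything below is invented for illustration: real chemlambda rewrites act on graphs, not on strings, and the token names are not part of the formalism.

```python
# Each enzyme triggers exactly one kind of rewrite (toy string stand-ins).
REWRITES = {
    "beta": lambda mol: mol.replace("(lam.app)", "(wire)", 1),
    "fan":  lambda mol: mol.replace("(fanout)", "(copy)", 1),
}

def actor_step(molecule, enzyme):
    rule = REWRITES.get(enzyme)
    return rule(molecule) if rule else molecule   # no matching enzyme: no reaction

print(actor_step("x(lam.app)y", "beta"))     # -> x(wire)y
print(actor_step("x(lam.app)y", "unknown"))  # -> x(lam.app)y
```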

That is the Distributed GLC model, very briefly.

It is different from Programming with CRN because of several reasons.

1. Participants in the computation are individual molecules. This may be unrealistic for real chemistry and lab measurements of chemical reactions, but that is not the point, because the artificial chemistry chemlambda is designed to be used on the Internet. (However, see the research project on the single molecule quantum computer.)

2. There is no explicit stochastic behaviour. Each actor in charge of its molecule behaves deterministically. (Or not; there is nothing which stops the model from being modified by introducing some randomness into the behaviour of each actor, but that is not the point here.) There are no huge numbers of actors, nor some average behaviour of those.

That is because of point 1 (we stay at the level of individual molecules and their puppeteers, the actors) and also because we use the Actor Model style, not Process Algebra.

So, there is an implicit randomness, coming from the fact that the computation is designed Actor Model style, i.e. such that it may work differently depending on the particular physical timing of the messages which are sent between actors.

3. The computation is purely local. It is also decentralized. There is no corporate point of view of counting the number of identical molecules, or their proportion in a global-space, global-time solution. This is something reasonable from the point of view of a distributed computation over the Internet.


All this being said, of course it would be interesting to see what happens with CRNs of reactions of chemlambda molecules. That may be very instructive, but it would be a different model.

That is why Distributed GLC does not use the CRN point of view.


The true Internet of Things, decentralized computing and artificial chemistry

A thing is a discussion between several participants.  From the point of view of each participant, the discussion manifests as an interaction between the participant with the other participants, or with itself.

There is no need for a global timing of the interactions between participants involved in the discussion, therefore we talk about an asynchronous discussion.

Each participant is an autonomous entity. Therefore we talk about a decentralized discussion.

The thing is the discussion and the discussion is the thing.

When the discussion reaches an agreement, the agreement is an object. Objects are frozen discussions, frozen things.

In the true Internet of Things, the participants can be humans or virtual entities. The true internet of Things is the thing of all things, the discussion of all discussions. Therefore the true Internet of Things has to be asynchronous and decentralized.

The objects of the true Internet of Things are the objects of discussions. For example a cat.

Concentrating exclusively on objects is only a manifestation of the modern aversion to having a conversation. This aversion manifests in many ways (some of them extremely useful):

  • as a preference towards analysis, one of the tools of the scientific method
  • as the belief in semantics, as if there is a meaning which can be attached to an object, excluding any discussion about it
  • as externalization of discussions, like property rights which are protected by laws, like the use of the commons
  • as the belief in objective reality, which claims that the world is made of objects, thus neglecting the nature of objects as agreements reached (by humans) about some selected aspects of reality
  • as the preference towards using bottlenecks and pyramidal organization as a means to avoid discussions
  • as various philosophical currents, like pragmatism, which subordinate things (discussions) to their objects (although pragmatism recognizes the importance of the discussion itself, as long as it is carefully crippled so that it does not overthrow the object’s importance).

Though we need agreements, and we need to rely on objects (as evidence), there is no need to limit the future true Internet of Things to an Internet of Objects.


We already have something  called Internet of Things, or at least something which will become an Internet of Things, but it seems to be designed as an Internet of Objects. What is the difference? Read Notes for “Internet of things not Internet of objects”.

Besides humans, there will be  the other participants in the  IoT,  in fact the underlying connective mesh which should support the true Internet of Things.  My proposal is to use an artificial chemistry model mixed with the actor model, in order to have only the strengths of both models:

  1.   decentralized,
  2. does not need an overlooking controller,
  3. it works without needing to have a meaning or purpose, or in any other way being oriented towards problem solving
  4. does not need to halt
  5. inputs, processing and output have the same nature (i.e. just chemical molecules and their proximity-based interactions).

without having the weaknesses:

  1.  the global view of Chemical Reaction Networks,
  2. the generality of the behaviours of the actors in the actor model, which forces that model to be seen as a high level way of organizing the thinking about particular computing tasks, instead of being a very low level, simple and concrete model.


With these explanations, please go and read again three older posts and a page, if interested in understanding more:


What is new in distributed GLC?

We have seen that several parts or principles of distributed GLC are well anchored in previous, classical research.  There are three such ingredients:

There are several new things, which I shall try to list.

1. It is a clear, mathematically well formulated model of computation. There is a preparation stage and a computation stage. In the preparation stage we define the “GLC actors”; in the computation stage we let them interact. Each GLC actor interacts with others, or with itself, according to 5 behaviours. (Not part of the model is the choice among behaviours, if several are possible at the same moment. The default is to impose on the actors to first interact with others (i.e. behaviours 1, 2, in this order) and, if no interaction is possible, then to proceed with the internal behaviours 3, 4, in this order. As for behaviour 5, the interaction with external constructs, this is left to particular implementations.)

2.  It is compatible with the Church-Turing notion of computation. Indeed,  chemlambda (and GLC) are universal.

3. Evaluation is not needed during the computation (i.e. in stage 2). This is the embodiment of the “no semantics” principle. The “no semantics” principle actually means something precise; it is a positive thing, not a negative one. Moreover, the dissociation between computation and evaluation is new in many ways.

4. It can be used for doing functional programming without the eta reduction. This is a more general form of functional programming, which in fact is so general that it does not use functions. That is because the notion of a function makes sense only in the presence of eta reduction.

5. It has no problem going, at least apparently, outside the Church-Turing notion of computation. This is not a vague statement, it is a fact: GLC and chemlambda have sectors (i.e. parts) which are used to represent lambda terms, but also sectors which represent other formalisms, like tangle diagrams, or, in the case of GLC, also emergent algebras (which are the most general embodiment of a space with a very basic notion of differential calculus).
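The default behaviour choice from point 1 can be sketched as a priority scan (the representation of the applicable behaviours as a set of numbers is hypothetical): interactions with others are tried first, then the internal behaviours, while behaviour 5 is left to implementations.

```python
def choose_behaviour(possible):
    # possible: subset of {1, 2, 3, 4}, the behaviours applicable right now
    for b in (1, 2, 3, 4):     # 1, 2 before 3, 4, in this order
        if b in possible:
            return b
    return None                # nothing applicable

print(choose_behaviour({3, 2}))  # -> 2
print(choose_behaviour(set()))   # -> None
```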


All these new things are also weaknesses of distributed GLC, because they go, apparently at least, against some ideology.

But the very concrete formalism of distributed GLC should counter this.

I shall use the same numbering to enumerate the ideologies.

1. Actors a la Hewitt vs Process Calculi. The GLC actors are like the Hewitt actors in this respect. But they are not as general as Hewitt actors, because they can’t behave in arbitrary ways. On the other side, it is not very clear whether they are Hewitt actors, because there is no clear correspondence between what an actor can do and what a GLC actor can do.

This is an evolving discussion. It seems that people have very big problems coping with distributed, purely local computing without jumping to the use of global notions of space and time. But, on the other side, biologists may have an intuitive grasp of this (unfortunately, they are not very much in love with mathematics, but this is changing very fast).

2. Is distributed GLC a programming language or a machine? Is it a computer architecture or a software architecture? None. Both. Here the biologists are almost surely lost, because many of them (excepting those who believe that chemistry can be used for lambda calculus computation) think in terms of logic gates when they consider computation.

The preparation stage, when the actors are defined, is essential. It resembles choosing the right initial condition in a computation using automata. But it is not the same, because there is no lattice, grid, or preferred topology of cells where the automaton performs.

The computation stage does not involve any molecule-collision mechanism, be it stochastic or deterministic. That is because the computation is purely local, which means in particular that (if well designed in the first stage) it evolves without needing such stochastic or lattice support. During the computation the states of the actors change, and the graph of their interactions changes, in a way which is compatible with being asynchronous and distributed.
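The claim about not needing stochastic or lattice support can be illustrated with a deliberately oversimplified sketch in Python. All names here are mine, and the local moves are reduced to commutative counter updates so that any interleaving gives the same result (real graph rewrites need not commute like this):

```python
import random

# Toy sketch (not the chemlambda formalism itself): each "actor" holds a
# private counter, and a "move" is a purely local interaction between two
# named actors.  There is no lattice and no global scheduler, only a bag
# of pending interactions consumed in arbitrary order.

def run(actors, interactions, seed=None):
    rng = random.Random(seed)
    pending = list(interactions)
    while pending:
        a, b = pending.pop(rng.randrange(len(pending)))  # any order: asynchronous
        actors[a] += 1    # the move changes the local states of a and b
        actors[b] += 1    # and of nobody else
    return actors

init = {":a": 0, ":b": 0, ":c": 0}
moves = [(":a", ":b"), (":b", ":c"), (":a", ":c")]
r1 = run(dict(init), moves, seed=1)
r2 = run(dict(init), moves, seed=2)
print(r1 == r2, r1)   # True {':a': 2, ':b': 2, ':c': 2}
```

Two runs with different interleavings reach the same final states, because each move touches only the two actors involved.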

That is why those working in artificial chemistry may feel lost here, because the model is not stochastic.

There is no chemical reaction network which orchestrates the computation, simply because a CRN is a GLOBAL notion, so it is not really needed. This computation is concurrent, not parallel (because parallelism needs a global simultaneity relation to make sense).

In fact there is only one molecule which is reduced, therefore distributed GLC looks more like an artificial one-molecule computer (see C. Joachim, Bonding More Atoms Together for a Single Molecule Computer).  Only it is not a computer, but a program which reduces itself.

3.  The no-semantics principle goes against a strong ideology, of course.  The fact that evaluation may not be needed for computation is outrageous (although it might cure the cognitive dissonance in functional programming concerning “side effects”, see Another discussion about math, artificial chemistry and computation).

4.  Here we clash with functional programming, apparently. But I hope only superficially, because functional programming is actually the best ally, see Extreme functional programming done with biological computers.

5.  Claims about going outside the Church-Turing notion of computation are very badly received. But when it comes to distributed, asynchronous computation, the situation is much less clear. My position here is simply that there are very concrete ways to do geometric or differential-like “operations” without first converting them into a classical computational frame (and the onus is on the classical-computation people to prove that they can do the same, which, as a geometer, I highly doubt, because they neglect or misunderstand space; and then the distributed, asynchronous aspect comes and hits them when they expect it least).


Conclusion:  distributed GLC is great and it has a big potential, come and use it. Everybody  interested knows where to find us.  Internet of things?  Decentralized computing? Maybe cyber-security? You name it.

Moreover, there is a distinct possibility to use it not on the Internet, but in the real physical world.


Notes for “Internet of things not Internet of objects”

1.   Kevin Ashton  That ‘Internet of things’ thing

Conventional diagrams of the Internet include servers and routers and so on, but they leave out the most numerous and important routers of all: people.
The problem is, people have limited time, attention and accuracy—all of which means they are not very good at capturing data about things in the real world.
  • not things, objects!  Ashton writes about objects.
  • people are not good at capturing data, so let’s filter (i.e. introduce a bottleneck) the data for them, thank you!
  • however, people do gather around ideas and discuss them, despite the fact that “conventional diagrams of the Net leave out people”.
  • by having public discussions around an “idea”, people manage to filter the information dump creatively, without resorting to artificial bottlenecks.  Non-human bottlenecks stifle discussions!

In the following quote I replaced:

  • things by objects
  • ideas by things.
We’re physical, and so is our environment. Our economy, society and survival aren’t based on things or information—they’re based on objects. You can’t eat bits, burn them to stay warm or put them in your gas tank. Things and information are important, but objects matter much more. Yet today’s information technology is so dependent on data originated by people that our computers know more about things  than objects.
This looks like the credo of the Internet of Objects!
Do we want this?
2.     What are, for people, things and objects?

Here is a depiction of a thing [source]:


A thing  was the governing assembly  made up of the free people of the community, meeting in a place called a thingstead.
(“thing” in Germanic societies,  “res” for Romans, etc.)
Heidegger (The Thing):

Near to us are what we usually call things. The jug is a thing. What is a jug? We say: a vessel.  As a jug, the vessel is something self-sustained,  self-supporting, or independent.

An independent, self-supporting thing may become an object if we place it before us.

An object is a reification of a thing.
[Kenneth Olwig: “Heidegger, Latour and The Reification of Things:The Inversion and Spatial Enclosure of the Substantive Landscape of Things – the Lake District Case”, Geografiska Annaler: Series B 2013 Swedish Society for Anthropology and Geography]
An object is therefore real, but everything about the thing and the thingstead is lost.
Reification generally refers to making something real…
Reification (computer science), making a data model for a previously abstract concept.
3.  An example of a thing and some of its reifications:
Quotes  and images from here:
On 20 May 1515, an Indian rhinoceros arrived in Lisbon from the Far East.
After a relatively fast voyage of 120 days, the rhinoceros was finally unloaded in Portugal, near the site where the Manueline Belém Tower was under construction. The tower was later decorated with gargoyles shaped as rhinoceros heads under its corbels.[11]
A rhinoceros had not been seen in Europe since Roman times: it had become something of a mythical beast, occasionally conflated in bestiaries with the “monoceros” (unicorn), so the arrival of a living example created a sensation.
The animal was examined by scholars and the curious, and letters describing the fantastic creature were sent to correspondents throughout Europe. The earliest known image of the animal illustrates a poemetto by Florentine Giovanni Giacomo Penni, published in Rome on 13 July 1515, fewer than eight weeks after its arrival in Lisbon.

Valentim Fernandes saw the rhinoceros in Lisbon shortly after it arrived and wrote a letter describing it to a friend in Nuremberg in June 1515.  A second letter of unknown authorship was sent from Lisbon to Nuremberg at around the same time, enclosing a sketch by an unknown artist. Dürer saw the second letter and sketch in Nuremberg. Without ever seeing the rhinoceros himself, Dürer made two pen and ink drawings,[23] and then a woodcut was carved from the second drawing, the process making the print a reversed reflection of the drawing.[19][24]

The German inscription on the woodcut, drawing largely from Pliny’s account,[13] reads:

On the first of May in the year 1513 AD [sic], the powerful King of Portugal, Manuel of Lisbon, brought such a living animal from India, called the rhinoceros. This is an accurate representation. It is the colour of a speckled tortoise,[25] and is almost entirely covered with thick scales. It is the size of an elephant but has shorter legs and is almost invulnerable. It has a strong pointed horn on the tip of its nose, which it sharpens on stones. It is the mortal enemy of the elephant. The elephant is afraid of the rhinoceros, for, when they meet, the rhinoceros charges with its head between its front legs and rips open the elephant’s stomach, against which the elephant is unable to defend itself. The rhinoceros is so well-armed that the elephant cannot harm it. It is said that the rhinoceros is fast, impetuous and cunning.[26]
Comment: you can see here a thing taking shape.
Despite its errors, the image remained very popular,[5] and was taken to be an accurate representation of a rhinoceros until the late 18th century.
The pre-eminent position of Dürer’s image and its derivatives declined from the mid-to-late-18th century, when more live rhinoceroses were transported to Europe, shown to the curious public, and depicted in more accurate representations.
Until the late 1930s, Dürer’s image appeared in school textbooks in Germany as a faithful image of the rhinoceros;[6] in German the Indian rhinoceros is still called the Panzernashorn, or “armoured rhinoceros”. It remains a powerful artistic influence, and was the inspiration for Salvador Dalí‘s 1956 sculpture, Rinoceronte vestido con puntillas (Rhinoceros dressed in lace), which has been displayed at Puerto Banús, in Marbella, since 2004.
Comment: that is an object! You can stick an RFID to it and it has clear GPS coordinates.
4.     Bruno Latour (From Realpolitik to Dingpolitik, or How to Make Things Public), writing about “object-oriented democracy”:

Who is to be concerned? What is to be considered? How to represent the sites where people meet to discuss their matters of concern?

How does the Internet of Objects respond to these questions about things and thingsteads?

People are going to use the Internet of Objects as an Internet of Things. How can we help them (us!) by designing a thing-friendly Internet of Things?

My guess and proposal is to try to put space (i.e. thingstead) into the IoT.  By design.
5.   Not the RFID space.  Not the GPS space.  These may be useful for the goal of inhuman optimization, but they will not by themselves promote the conversation we need to have around things and their reifications, the objects.
People are going to divert the ways of an IoT designed with this lack of appetite for human communication, as they have succeeded in doing before!
For understanding why RFID and GPS  are not sufficient, let’s imagine, like Borges, that the world is a library.
  • RFID – name of the book
  • GPS – place on the shelf
Is this enough for me, the reader, who wants to retrieve (and discuss with other readers) a book without knowing its title, nor its position on a shelf?
No!  I have to call a librarian (the bottleneck), an inhuman and very efficient one, true, who will give me a list of possible titles and who will fetch the book from the right shelf. I don’t have direct access to the library, nor do my friends, who may have different ideas about the possible titles and shelves where the book might be.
The librarian will optimize the book-searching and book-fetching, not for me, or for you, or for our friends, but for a Bayesian individual in a Bayesian society (see Bayesian society).
What I would like is to have access to my library (in the big Universal Library) and to be able to share my spatial competences of using my library with my friends. That means a solution for the following problem, which Mark Changizi mentions in relation to e-books (but which I think is relevant instead for the human IoT):

The Problem With the Web and E-Books Is That There’s No Space for Them

My personal library serves as extension of my brain. I may have read all my books, but I don’t remember most of the information. What I remember is where in my library my knowledge sits, and I can look it up when I need it. But I can only look it up because my books are geographically arranged in a fixed spatial organization, with visual landmarks. I need to take the integral of an arctangent? Then I need my Table of Integrals book, and that’s in the left bookshelf, upper middle, adjacent to the large, colorful Intro Calculus book.

6.  What else?  These notes are already rich enough, so feel free to stop reading here if you like.
Actually, this is a technical problem: how to create space where there is none, without using arbitrary names (RFID) or global (but arbitrary) coordinates (GPS)?
It is the same problem which we encounter in neuroscience: how does the brain make sense of space without using any external geometrical expertise? How to explain the “brain as a geometry engine” (as Koenderink puts it) when there is no rigorous model of computation for this brain behaviour?
There may be a point in holding that many of the better-known brain processes are most easily understood in terms of differential geometrical calculations running on massively parallel processor arrays whose nodes can be understood quite directly in terms of multilinear operators (vectors, tensors, etc).
In this view brain processes in fact are space.
I have two proposals for this, which go far beyond explanations that can fit into a post.  I put them here only to explain my motivations, and maybe to invite interested people to gather and discuss these things.
It is about “computing with space”, which is the middle name of this blog.  The first name, chorasimilarity, is made by gluing Plato’s notion of space, “chora”, with (self-)“similarity”, which is, I believe, the essence of transforming space from a “vessel” into a self-sustaining, self-supporting thingstead.
The first proposal is to concentrate on completely asynchronous, purely local  models of distributing computing as a low-level basis for the architecture of a true IoT.
For example: mesh networks. (Thank you Peter Waaben.)
I know of only one model of computation which satisfies the previously mentioned demands and also solves the problem of putting space into the Net:
It is based on actors which gather in an agora to discuss things that matter.  Literally!
But there is a long way to go before arriving at even a proof of principle for such a space-rich IoT, which brings me to the second proposal, which may be too technical for this post; it is alluded to here: A me for space.

What’s a GLC actor?

A GLC actor is an entity which has some data and 5 possible behaviours. See section 3 from

GLC actors, artificial chemical connectomes, topological issues and knots

for details and examples.

Data.  Every actor has a name. The notation used is that :a is the name of the actor a.  Every actor also has, as data, a GLC graph which has some of its half-arrows decorated with names of other actors.

You can use these decorations in order to assemble a big GLC graph from the pieces which reside in all the actors’ data, by gluing the half-arrows along the given decorations.

But mind you that the behaviours of actors are not a consequence of the existence of this global object, the big GLC graph.
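To make the gluing concrete, here is a minimal sketch under a hypothetical toy encoding (the names, fields and link identifiers are my own, not from the paper): each half-arrow carries a link identifier shared with exactly one other half-arrow, together with the decoration naming the actor believed to hold the matching half. The glued object is computed only for inspection; no behaviour of the actors depends on it.

```python
# Toy encoding (mine, for illustration): an actor's state is a list of
# half-arrows, each a pair (link id, name of the partner actor).

def glue(actors):
    """Assemble the global graph by pairing half-arrows with the same link id."""
    halves = {}   # link id -> name of the actor holding the first half seen
    edges = []    # full arrows of the assembled big GLC graph
    for name, state in actors.items():
        for link, _partner in state:
            if link in halves:
                edges.append((halves.pop(link), name))
            else:
                halves[link] = name
    return edges, halves   # leftover halves are the free ports

actors = {
    ":alice": [("x", ":bob"), ("y", ":carol")],
    ":bob":   [("x", ":alice")],
    ":carol": [("y", ":alice"), ("z", ":?")],   # "z" remains a free port
}
edges, free = glue(actors)
print(edges)   # [(':alice', ':bob'), (':alice', ':carol')]
print(free)    # {'z': ':carol'}
```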

Behaviours. The main idea is that the graph rewrites of the big GLC graph can be done asynchronously, by purely local interactions between the actors which have the small pieces of the big GLC graph.

There are two types of graph rewrites, which are treated differently.

  • the graphic beta move (and the FAN-IN move if we use chemlambda instead of GLC) which is performed between two actors, as a form of interaction between those. This is behaviour 1.
  • FAN-OUT and pruning moves, which are performed internally in one actor. This is behaviour 3.

Besides this, we have:

Behaviour 2: name change. An actor can pass one node to another actor. It has to be an actor it knows, i.e. one whose address appears on one of the half-arrows of the mentioned node.

Behaviour 4: new actors. An actor can split into two, thus creating a new actor, provided its piece of GLC graph has two disconnected parts.
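Behaviour 4 can be sketched under a toy representation (my own, for illustration) where the actor's piece of GLC graph is reduced to an undirected graph given as an adjacency dict; the split is enabled exactly when the graph is disconnected:

```python
# Toy sketch of behaviour 4 (new actors): compute the connected
# components of the actor's graph; each component becomes the state
# of a separate actor.

def components(adj):
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v])
        seen |= comp
        comps.append(comp)
    return comps

def split_actor(adj):
    """Behaviour 4: return one sub-graph per connected component."""
    return [{v: adj[v] for v in comp} for comp in components(adj)]

# Two disconnected pieces -> the actor may split into two new actors.
g = {1: [2], 2: [1], 3: [4], 4: [3]}
parts = split_actor(g)
print(len(parts))   # 2
```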

Behaviour 5: interaction with cores. Suppose you use external IT constructs, like databases, counters or others. Call these “cores”.  You make an actor from such a core by connecting it to a “mask” (which is a GLC graph) and by defining moves which are performed between the mask and the core. Basically, such a move means a translation from the external format into GLC graphs. This can always be done, because GLC is Turing universal. The paper gives the example of a counter as a core.


Some of the behaviours of GLC actors resemble those of a Hewitt actor, like the creation of new actors. It is not clear, though, whether GLC actors are actors in the sense of Hewitt.

What is clear is that the distributed computing with GLC actors is kind of like a play.


Open peer-review call for arXiv:1312.4333

Open peer-review is a healthy alternative of the classical peer-review.  If there is any value in the peer-review process — and there is — it comes from it being open and dynamically changing.

Peer-review should be used for communication, for improving one’s and others work.  Not for authority stamps, nor for alpha – omega male monkey business.

With all this in mind, but also with a clear, declared interest into communication of research, I make this experimental call for open peer review to the readers of this blog. The inspiration comes from this kind post by David Roberts.


Useful material for the discussion:

Coming from a collaboration which was previously mentioned (Louis Kauffman, a team from ProvenSecure Solutions, me), we want to develop and also explore the possibilities given by the GLC actor model of distributed computing.

A real direction of research is the one of endowing the Net (or parts of it, or, … there are even more strange variants, like no part of it especially) with an artificial chemical connectome, thus mimicking the way real brains (we think that they) work.

If you think “consciousness” then hold your horses, cowboy! Before that, we really have to understand (and exploit to our benefit) all kinds of much more basic aspects: all kinds of (hundreds of) autonomous mechanisms and processes which are part of how the brain works, which structure our world (view), which help and also limit our thinking, and which are, most of them, ignored by logicians but already explored by neuroscience and cognitive science.

So, yes, indeed, we want to change the way we think about distributed computing, to make it more biology-like, but we don’t want to fall into the trap of thinking we have the ultimate solution toward consciousness, nor do we want to build (or believe we can build) a Skynet. Instead, we want to take it slowly, in a scientific way.

Here we need your help! The research, up to now reported in arXiv:1312.4333 (with links to other sources) and in this open notebook, is based on some nontrivial ideas which are easy to formulate, but hard to believe.

Peer-review them, please! Show us where we need to improve, contradict us where we are wrong, contribute in an open way! By being open, you will automatically be acknowledged.

Suggestions about how this peer-review can be done are welcome!

UPDATE: Refurio Anachro linked the article to the spnetwork.  And moreover started a thread, with this post, about lambda calculus! Thank you!

Theatrical distributed computing

…looks like an apt name for several threads from this blog, which are now converging to a common point.

To make it more clear, here is a modification of the figure which appears in the post Theatron as an eye.


The red decorations changed. Let’s see.

  • DESIGNERS are the new programmers. They no longer write programs; instead they design well-behaved distributed GLC computations. They deserve the front seats for observing the actors.
  • ACTORS occupy their rightful place in the Greek theater: the orchestra.  They are, of course, GLC actors, which are released into the wild after the preparation stage, effected by the designers. Each actor is an autonomous, reactive entity, which lives by exchanging very short, highly schematized messages with other actors (like the script one actor has, given by the director-designer; it is not very important what the actor says, only that it communicates a repertoire of basic (emotional, in real theater) universal messages). There are many actors in the Greek theater (assimilated here with the members of the chorus, to be clear); they are almost indiscernible from one another, they dress the same, they look the same, they have a very limited range of expression, but together they form the powerful chorus. Like neurons in a brain, they come in a limited variety, but they “compute” in a distributed and asynchronous way, without exchanging messages which need external competences to make sense.
  • the USERS of this distributed computation, aka theatrical performance, are the beneficiary of the show.
  • ACTORS CREATION: new actors enter through the PARODOS; this relates to the fact that the creation of new actors is a process which looks like the binary fission of cells. This is implemented in chemlambda; look for example at how the B, C, K, W combinators multiply in this post.
  • CORES: behind the skene is what should not be visible to the public or to the actors. This is an interface with other computing devices, like, for example, the coloured rectangles representing various stacks in the post A machine for computing the Ackermann function in graphic lambda calculus.  Recall that each core has to be embedded in a mask (which IS the actor which communicates with that core); put differently, the masks decorate the skene.
  • Finally, all this happens in the NET. Worldwide, circled by the sun [analemma].

Rewiring of neurons, seen as actors

This is a continuation of the thread concerning the mix of the Actor Model (AM) with the graphic lambda calculus (GLC) and/or the chemical concrete machine (chemlambda).

A GLC neuron was first defined here, in the freedom sector of GLC. I shall use the term “neuron” in a more restrictive sense than previously: a GLC actor (example here) which has only one exit half-arrow, possibly continued by a tree of fan-out nodes, which is seen as the axon of the neuron.

I don’t know how to constrain an AM computation with GLC actors so that they are all neurons at the preparation stage and they remain neurons during the computation.

Instead, I wonder if there is any chance to model real neurons this way. That is why I try to obtain a prediction concerning a rewiring pattern.  If this model of computation is pertinent for real neural networks, then we should see this rewiring pattern in the wild.


Let’s contemplate the following chain of reductions, as seen from the GLC actors point of view. I shall use in fact the chemlambda formalism, but I shall change a bit the notation from the coloured nodes to nodes marked by letters, as in GLC. The fan-in node will be denoted by the letter \phi. Recall that the fan-in node  from  chemlambda replaces the dilation node from GLC (actually we reuse the dilation node of GLC and change it into a fan-in node by replacing the emergent algebra moves with the FAN-IN and DIST moves of chemlambda).

The first reduction involves doing an internal DIST move in the actor a.

In red, we see the actor diagram, it does not change during this internal interaction.

The next step involves a graphic beta move:

As you see, the actors diagram changed as an effect of the interaction between actors a and b.

The next step is a name change interaction between the actors a and c, namely a gives a fan-out node to c.


That’s it. What is this having to do with neurons?

We may see the fan-out node from the initial configuration, belonging to the actor a, as the axon of a. Further, we may identify actors a and d, and also actors b and g. In this way, the whole sequence of three interactions can be rewritten like this:


Notice how I changed the convention of drawing the actors diagram:

  • I took care to indicate the orientation of the links between actors
  • the fan-out node from the initial configuration was externalized, to look more like an axon, and likewise for the final configuration.

This gives a prediction: if neurons can be modelled as GLC actors, then we should observe in reality the following change of wiring between neurons.


It involves the rewiring of 5 neurons! Are there, somewhere in the neuroscience literature, wiring patterns of 5 neurons like the ones in this figure? Is there any evidence of a rewiring like the one in this figure?

Actor model distributed computing with graphic lambda calculus, angel needed

Seriously! It’s done. An actor-like model which uses graphic lambda calculus for doing distributed computing (i.e. parallel, asynchronous, with no controller). No evaluation is needed; everything is local. There is no signal circulating far through the wires, only small local exchanges. In this respect it resembles what happens in chemical reaction networks, but it goes much further than this.

Looks perfectly feasible, needs angel investor.

UPDATE: Let me add a bit more detail about this. The awesome thing is not in the actor model side alone, nor in graphic lambda calculus or chemlambda alone. It is in the combination of the two. Graphic lambda calculus brings the possibility to do distributed computing (and even far away from the usual logic realm, into the field of computing with space, see A me for space). The space which is needed by graphic lambda calculus is, of course, the network. But the network lacks an essential ingredient of space: the proximity relations. This is where the actor model is useful.

Secondly, it is well known that in the usual sense computation needs two ingredients: reduction and evaluation. They appear in one form or another in any computation model I am aware of, except graphic lambda calculus + actors! Here, only local reduction moves are used (local if we rely purely on chemlambda instead of graphic lambda calculus), after an initial stage of preparation of the computation.

This has implications for understanding how brains “compute”. Indeed, there are (at least, but let’s simplify) two things which happen in a brain: one is (electrical) signal transmission (chemically mediated at the synapses), the other is neural network rewiring (a form of “learning”). The computation model I propose may be used in two places:

  • in the understanding of the computational aspects of the chemical connectome of a brain (by looking for real embodiments of the chemlambda formalism, i.e. by identifying some real chemical molecules which interact as the molecules of chemlambda)
  • in the understanding of the rewiring mechanism. Graphic lambda calculus + actors proposes that the rewiring alone is a form of computation, one related to pure local reduction, while the signal transmissions are related more to evaluation aspects.  In this respect, neurons (or maybe families of neurons nurtured by one glial cell) are actors and the physical neural network is like an actors diagram.

Un-currying with Actors

This is an illustration of a sequential computation with actors and graphic lambda calculus, based on the ideas from  Actors for the Ackermann machine .  It describes the inverse of the currying process described in  Currying by using zippers and an allusion to the cartesian theater.

Without further ado, let’s watch the play.

The preparation consists in introducing the cast and the initial actors diagram.


The play starts. The actor c introduces a to b and dies, or maybe goes playing elsewhere.


We see the states of the actors a and b; the other main character, d, is waiting for his line, while the figuration b1, ..., bN waits at the back of the stage, in artful positions.

The rest of the process consists in unzipping a graphic lambda calculus zipper. Zippers are very useful, because the unzipping process is sequential, i.e. it can happen in only one way. Therefore, by using zippers we can force a naturally distributed parallel computation to happen in a sequential way!
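The forced order can be illustrated with a toy sketch (my own encoding, not the actual GLC graphs): the two halves of the zipper are stacks, and a graphic beta move is enabled only at the tops of the stacks, so the moves can fire in exactly one order.

```python
# Toy zipper: each half is a stack of ports; a graphic beta move pairs
# the two topmost ports, which is the only enabled move at each step,
# so the reduction trace is necessarily sequential.

def unzip(lambda_half, app_half):
    wires = []
    while lambda_half and app_half:
        # the unique enabled beta move: topmost lambda meets topmost app
        wires.append((lambda_half.pop(), app_half.pop()))
    return wires

trace = unzip(["l1", "l2", "l3"], ["a1", "a2", "a3"])
print(trace)   # [('l3', 'a3'), ('l2', 'a2'), ('l1', 'a1')]
```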


At the end of the computation, the two actors a and b, which represent the two parts of the zipper, cancel one another. In the end, the graph D is connected through its output to the address :out and through its inputs to the (addresses of the) figuration actors b1, ..., bN.

The graph D corresponds to the graph A from  the post   Currying by using zippers and an allusion to the cartesian theater  and the computation just described is a reversed computation of the one described in the mentioned previous post.


As a side remark, graphic lambda calculus suggests that currying and un-currying should be eliminated at the preparation stage of a distributed computation. However, I took this example of computation with actors and GLC as a simple illustration and also for stressing that zippers can impose, if needed, an order of reduction moves in GLC.


Actors for the Ackermann machine (II) Pure communication.

Continues from Actors for the Ackermann machine . In this post we see the first interaction between the actors.

Notations: each actor a has an address :a and a stack a^{*}.

Let the play begin. The cast is the following:


At the beginning of the performance, the actors are in the following configuration


Only a and b can interact (i.e. the only moves in GLC which may happen between actors are those between these two). Their interaction is a form of pure communication (via the graphic beta move). Two such moves are possible, in succession. This is described in the next figure, along with the changes in the actors configuration.


The performance is thrilling: the actor a is almost exhausted after forcing  the actor b to drop down his mask.  In the process  a lost his friends  d and f (with his buddy e) in favour of b.  Only c is still taking the side of a.   What will happen next?

At this stage, no other interaction is possible without revealing what b is really thinking (what’s in his stack b^{*}). The first act is over.


Actors for the Ackermann machine

Continues A machine for computing the Ackermann function in graphic lambda calculus , by a preliminary analysis of the proposed Ackermann machine in GLC from the Actor Model point of view.


Very brief notes:

  • see Hewitt Actor Model, lambda calculus and graphic lambda calculus   for the actor notation (or decoration)
  • binary fission is an apt name for the replacement of the GLOBAL FAN-OUT from GLC by the DIST moves from chemlambda (because eventually I am interested in using the purely local formalism of chemlambda for the Ackermann machine).
  • “mask” is a new name instead of the “pacman” name used in the first post on the Ackermann machine.
  • “stack” might correspond to the “mailbox” in the actor model, but it is internal to the actor and interacts only with the mask.

All this is on a working-hypothesis basis; I’m thinking out loud. By the way, if you are looking, you can also talk here 🙂 . Everything is subject to change; remember, this is an open notebook (which can also be cited, as explained at the right side of the post).


Hewitt Actor Model, lambda calculus and graphic lambda calculus

In Actor Model of Computation: Scalable Robust Information Systems, arXiv:1008.1459, Hewitt writes, on p. 7, that “lambda calculus cannot implement concurrency”. On pp. 13-14 he writes:

… the semantics of the lambda calculus were expressed using variable substitution in which the values of parameters were substituted into the body of an invoked lambda expression. The substitution model is unsuitable for concurrency because it does not allow the capability of sharing of changing resources.

Correct. However, graphic lambda calculus does not have this problem, because reductions don’t need evaluations. See The graphic beta move, with details, to understand this better.
All the arguments listed below also apply verbatim to graphic lambda calculus! (Hewitt, loc. cit., p. 14)

Inspired by the lambda calculus, the interpreter for the programming language Lisp [McCarthy et. al. 1962] made use of a data structure called an environment so that the values of  parameters did not have to be substituted into the body of an invoked lambda expression.
Note that in the definition in ActorScript [Hewitt 2011] of the lambda-calculus below:

  • All operations are local.
  • The definition is modular in that each lambda calculus programming language construct is an Actor.
  • The definition is easily extensible since it is easy to add additional programming language constructs.
  • The definition is easily operationalized into efficient concurrent implementations.
  • The definition easily fits into more general concurrent computational frameworks for many-core and distributed computation.

Hewitt continues:

That Actors which behave like mathematical functions exactly correspond with those definable in the lambda calculus provides an intuitive justification for the rules of the lambda  calculus:

  • Lambda identifiers: each identifier is bound to the address of an Actor. The rules for free and bound identifiers correspond to the Actor rules for addresses.
  • Beta reduction: each beta reduction corresponds to an Actor receiving a message. Instead of performing substitution, an Actor receives addresses of its arguments.
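Hewitt's second suggestion can be sketched in a few lines of Python (all names here are my own, assumed for illustration; this is not ActorScript): instead of substituting the argument into the body of a lambda expression, the "lambda" actor receives the *address* of the argument actor and works with that address.

```python
# Toy actor registry: actors are known only by their addresses.
class Actor:
    registry = {}          # address -> actor
    _next = 0

    def __init__(self, behaviour):
        self.behaviour = behaviour
        self.addr = Actor._next
        Actor._next += 1
        Actor.registry[self.addr] = self

    def send(self, message):
        # synchronous toy delivery; a real actor system queues messages
        return self.behaviour(message)

def constant(value):
    """An actor behaving like a mathematical constant function."""
    return Actor(lambda _msg: value)

def lam(body):
    """An actor behaving like a lambda abstraction: on receiving the
    address of its argument it runs the body with that address -- no
    substitution into the body ever happens."""
    return Actor(body)

# (\x. x) applied to the constant-42 actor: beta reduction is a message
# carrying the argument's address.
identity = lam(lambda arg_addr: Actor.registry[arg_addr].send(None))
arg = constant(42)
print(identity.send(arg.addr))   # -> 42
```

The point of the sketch is only the shape of the exchange: the identifier is bound to an address, and reduction is the receipt of a message containing an address.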

Further I try to understand this, as a particular case of the  Chemical actors   sketch.

We have Actors, we have addresses and we have lambda identifiers (it’s about the usual lambda calculus, not the better graphic lambda calculus). The idea, as far as I understand, is to “bind” lambda identifiers to the address of an Actor. The second idea is that “each beta reduction corresponds to an Actor receiving a message”.

I want a way to identify Actors with graphs in GRAPH, or with molecules in MOLECULES if I wish to use chemlambda. Actually, after some thinking, it is better to think about Actors as a sort of decoration of the arrows of a graph, almost like a decoration with types. (I shall not use the epsilon gates further, concentrating only on the pure logic side.)


We shall have actors A, B, etc., with actor names :A, :B. There are other names, like :? which is a placeholder for indetermination (that’s just a name; if you don’t like this one then replace it with 😕 , or whatever), and there are also the names :talk and :listen. These are not names of actors.

By using these rules we can decorate all half-arrows (meaning that we may have arrows with two decorations). For concreteness, look at the example of the combinator K.
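As a toy encoding (mine, not part of the calculus): each arrow has two halves, so it can carry up to two decorations, for instance an actor name on each half, with :? standing for an undetermined one.

```python
# An arrow represented by the decorations of its two half-arrows.
arrow = (":K", ":?")   # one half owned by actor :K, the other undetermined

def decorated_twice(a):
    """True when both half-arrows carry a determinate decoration."""
    return all(d != ":?" for d in a)

print(decorated_twice((":K", ":A")))   # -> True: both halves decorated
print(decorated_twice(arrow))          # -> False: one half is still :?
```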


I shall take another Actor named A and make it interact with the Actor K, in order to mimic the KUV \rightarrow U couple of reductions.


Here “1” and “2” mean anything you want, from “they can be names of actors” to “I don’t care what follows next”.

Now, they are in position to communicate:


What happens next? The :listen:talk decoration indicates that a graphic beta reduction can be applied (cf. Hewitt’s 2nd suggestion). But we need a rule for decorations with names before and after the graphic beta move. Here is what I propose:


We apply this for our example:


Is that nice or what?  The actor K lost one of its “lambda identifiers”, the actor A also simplified a bit, but they still talk to one another:

They did their job and disappeared. What happens next is a matter of other actors.
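The triggering condition used in this example can be sketched in my toy encoding from above (an assumption of mine, not a definition from the calculus): a move fires exactly on an arrow whose two half-arrow decorations are :talk and :listen.

```python
# An arrow as the pair of its two half-arrow decorations.
def beta_ready(arrow):
    """True when the arrow's decorations signal that a graphic beta
    move may be applied between the two actors owning its endpoints."""
    return set(arrow) == {":talk", ":listen"}

print(beta_ready((":talk", ":listen")))   # -> True: the move can fire
print(beta_ready((":A", ":listen")))      # -> False: no :talk half
```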


Remark that graphic lambda calculus easily did what Hewitt wants. This example does not show any concurrency, though, because, properly speaking, none is involved here. I only wanted to implement Hewitt’s two suggestions with graphic lambda calculus. But take examples where concurrency does appear, like here, to see that everything works really nicely. This will be explained in another post.


Video: The Actor Model (everything you wanted to know, but were afraid to ask)

This is a repost of a  very informative discussion: Hewitt, Meijer and Szyperski: The Actor Model (everything you wanted to know, but were afraid to ask). Enjoy!

The following is a note to myself. There are so many interesting things which could be relevant for chemical actors, but among these I only mention the moment when Hewitt draws an arbiter, as a box with two inputs and two outputs. Look at the Ackermann machine, or look at Pair of synapses, one controlling the other. From the video I see that actors are the primitive units of computation and addresses are the only real capabilities. Moreover, there are these arbiters; I really need to think about whether chemlambda and arbiters are sufficient for this. Time will tell if this is reasonable or stupid.

Chemical actors

UPDATE: Clearly a mix of Carl Hewitt’s ActorScript with GLC and chemlambda is needed. It will follow in the months to come!


Thinking out loud about a variant of the actor model (chapter 3 here), which uses graphic lambda calculus or the chemical concrete machine. The goal is to arrive at a good definition of an Artificial Chemical Connectome.

Have remarks? Happy to read them!

A chemical actor is the following structure:

  • a graph A \in GRAPH  (or a molecule A \in MOLECULES)
  • with a unique ID name ID
  • with a numbering (tagging) of a (possibly empty) part of its free arrows
  • with a behaviour, to be specified further.
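The structure above can be sketched as a Python dataclass (all field names are mine, assumed for illustration; the graph is kept abstract as a set of edges):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, FrozenSet, Optional, Tuple

@dataclass
class ChemicalActor:
    # A in GRAPH (or a molecule A in MOLECULES), as an abstract edge set
    graph: FrozenSet[Tuple[str, str]]
    # unique ID name
    actor_id: str
    # tagging of a (possibly empty) part of its free arrows: arrow -> tag
    free_arrow_tags: Dict[str, int] = field(default_factory=dict)
    # behaviour, to be specified further (here: an opaque callable)
    behaviour: Optional[Callable] = None

# example instance
a = ChemicalActor(graph=frozenset({("lam", "app")}),
                  actor_id=":A",
                  free_arrow_tags={"x": 1})
print(a.actor_id)   # -> :A
```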


A communication is:

  • a graph  B \in GRAPH  (or a molecule)
  • with a source ID and a target ID
  • with a part of its free arrows tagged with tags compatible (i.e. the same) with the ones from the graph of the source ID
  • with another part of its free arrows tagged with tags compatible with the ones from the graph of the target ID
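In the same toy encoding (assumed names), a communication mirrors the actor structure, carrying a graph plus the two sets of tagged free arrows:

```python
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, Tuple

@dataclass
class Communication:
    # B in GRAPH (or a molecule), as an abstract edge set
    graph: FrozenSet[Tuple[str, str]]
    source_id: str
    target_id: str
    # free arrows tagged compatibly with the source actor's graph
    source_tags: Dict[str, int] = field(default_factory=dict)
    # free arrows tagged compatibly with the target actor's graph
    target_tags: Dict[str, int] = field(default_factory=dict)

c = Communication(graph=frozenset(), source_id=":A", target_id=":B")
print(c.source_id, "->", c.target_id)   # -> :A -> :B
```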


The actor with the target ID receives a communication from the actor with the source ID, and it becomes:


At this point the actor which has the target ID exhibits the following behaviour:

  • performs one or several graph rewrites (only the (+) unidirectional moves), possibly in a given order, which involve at least an arrow between A and B
  • following a given algorithm, splits into a new actor and a communication B’ which has as target arrows the ones from the lower part of the previous figure (but with another target ID)
  • or creates a new actor by using only (-) moves
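The reception step above can be sketched schematically (all names are mine, assumed; the rewrites themselves are kept opaque as a callable standing for the chosen sequence of (+) unidirectional moves):

```python
def receive(actor_graph, actor_tags, comm_graph, comm_target_tags, rewrite):
    """Glue the communication's graph to the actor's graph along the
    arrows whose tags match, then apply the given rewrites.

    Graphs are abstract edge sets; tags map free-arrow names to numbers;
    `rewrite` stands for the chosen (+) unidirectional moves, which must
    involve at least one glued arrow.
    """
    # arrows glued: the actor's free arrows whose tags also occur in
    # the communication's target tags
    glued = {a for a, t in actor_tags.items()
             if t in comm_target_tags.values()}
    merged = set(actor_graph) | set(comm_graph)
    return rewrite(merged, glued)

# trivial "rewrite" that just reports what it was given
result = receive({("a", "b")}, {"x": 1},
                 {("c", "d")}, {"y": 1},
                 lambda g, glued: (len(g), sorted(glued)))
print(result)   # -> (2, ['x']): two edges merged, arrow "x" glued
```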

Remark: the numbers n, m could be uniformly bounded to 2, or 4, or 6, according to user’s wishes. Take a look at the Ackermann machine, for inspiration.