Tag Archives: actor model

Actors, space and Movable Feast Machine at ALIFE14

See:

  1. David H. Ackley, Trent R. Small, Indefinitely Scalable Computing = Artificial Life Engineering
  2. Lance R. Williams, Self-Replicating Distributed Virtual Machines

presented at ALIFE14.

Both articles look great and the ideas are very close to my current interests. Here is why:

  • it is about distributed computing
  • using actors
  • some things which fall under functional programming
  • and space.

The chemlambda and distributed GLC project also has a paper there: M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication; see the arXiv version, which is better because it has links inside: arXiv:1403.8046.

The resemblance between the mentioned papers and the chemlambda and distributed GLC project is that our (fully theoretical, alas) model is also about distributed computing, using actors, lambda calculus and space.

The differences are many, though, and I hope that these might lead to interesting interactions.

In the following I describe the main difference, with the understanding that all this is very new to me, a mathematician, so I might be wrong in my grasp of the MFM (please correct me if so).

In the MFM the actors are atoms in a passive (i.e. predefined) space. In the distributed GLC the actors have as states graphs called molecules (more precisely g-patterns).

[Here is the moment to thank, first, Stephen P. King, who pointed the two articles out to me. Second, Stephen works on something which may be very similar to the MFM, as far as I understand, but I have to stress strongly that the distributed GLC does NOT use a botnet, nor are the actors nodes in a chemlambda graph!]

In distributed GLC the actors interact by message passing to other actors with a known ID. Such message passing provokes a change in the states of the actors which corresponds to one of the graph rewrites (moves of chemlambda). As an effect, the connectivities between the actors change (where connectivity between actors :alice and :bob means that :alice has as state a g-pattern with one of the free ports decorated with :bob’s ID). Here the space is represented by these connectivities, and it is not passive, but an effect of the computation.
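To make this concrete, here is a minimal sketch in Python (the class and field names are invented for illustration, not taken from any implementation): an actor's state is its g-pattern, and a free port tagged with another actor's ID is exactly what "connectivity" means, so the set of tags plays the role of space.

    # Minimal sketch (invented names) of "space as connectivity" in distributed GLC.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        kind: str    # e.g. "L" (lambda), "A" (application), "FO" (fanout)
        ports: list  # port names, e.g. ["a", "x", "b"]

    @dataclass
    class Actor:
        name: str                                       # e.g. ":alice"
        molecule: list = field(default_factory=list)    # the g-pattern which is the state
        free_ports: dict = field(default_factory=dict)  # free port name -> ID of the actor it points to

        def neighbours(self):
            # "space" is nothing but this set of tags; it changes when a rewrite changes free_ports
            return set(self.free_ports.values())

    alice = Actor(":alice", [Node("L", ["a", "x", "b"])], {"b": ":bob"})
    bob = Actor(":bob", [Node("A", ["b", "c", "d"])], {"b": ":alice"})
    print(alice.neighbours())  # {':bob'}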

In the future I shall use and cite, of course, this great research subject, which was unknown to me. For example, the article Lance R. Williams, Robust Evaluation of Expressions by Distributed Virtual Machines, already uses actors! What else am I not aware of? Please tell me, thanks!

_______________________________________________________________

 

 

Lambda calculus and the fixed point combinator in chemlambda (VI)

This is the 6th post (continuing from part I, part II, part III, part IV and part V) in a series of expository posts where we put together in one place the pieces from various places about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation self-contained. (However, have a look at this page: there are links to online tutorials, as well as many posts already on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which was accepted at the ALIFE 14 conference, 7/30 to 8/2, 2014, Javits Center / SUNY Global Center, New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

In this post I want to concentrate on the mechanism of self-multiplication for g-patterns coming from lambda terms (see part IV, where the algorithm of translation from lambda terms to g-patterns is explained).

Before that, please notice that there is a lot to say about an important problem which will be described later in detail. But here it is, to keep an eye on it.

Chemlambda in itself is only a graph rewriting system. In part V it is explained that the beta reduction from lambda calculus needs an evaluation strategy in order to be used. We noticed that in chemlambda self-multiplication is needed in order to prove that one can do beta reduction as the beta move.

We go towards the obvious conclusion that in chemlambda reduction (i.e. the beta move) and self-multiplication are just names used for parts of the computation. Indeed, the clear conclusion is that there is a computation which can be done with chemlambda, which has some parts where we use the beta move (and possibly some COMB, CO-ASSOC, CO-COMM, LOC PRUNING moves) and some other parts where we use DIST and FAN-IN (and possibly some of the moves COMB, CO-ASSOC, CO-COMM, LOC PRUNING). These two parts have as names reduction and self-multiplication respectively, but in the big computation they mix into a whole. There are only moves, graph rewrites applied to a molecule.

Which brings the problem: chemlambda in itself is not sufficient for having a model of computation. We need to specify how, where, when the reductions apply to molecules.

There may be many variants, roughly described as: sequential, parallel, concurrent, decentralized, random, based on chemical reaction network models, etc

Each model of computation (which can be made compatible with chemlambda) gives a different whole when used with chemlambda. Until now, in this series there has been no mention of a model of computation.

There is another aspect of this. It is obvious that chemlambda graphs form a larger class than lambda terms, and also that the graph rewrites apply to more general situations than beta reduction (together with, possibly, an evaluation strategy). This means that the important problem of defining a model of computation over chemlambda will influence the way chemlambda molecules “compute” in general.

The model of computation which I prefer is not based on chemical reaction networks, nor on process calculi, but on a new model, inspired by the Actor Model, called the distributed GLC. I shall explain why I believe that the Actor Model of Hewitt is superior to those mentioned previously (with respect to decentralized, asynchronous computation in the real Internet, and also in the real world), I shall explain my understanding of that model, and eventually the distributed GLC proposal by me and Louis Kauffman will be presented in full detail.

4.  Self-multiplication of a g-pattern coming from a lambda term.

For the moment we concentrate on the self-multiplication phenomenon for g-patterns which represent lambda terms. What follows is a departure from the ALIFE 14 article. I shall not take the path which consists of going to combinator patterns, nor shall I discuss in this post why the self-multiplication phenomenon is not confined to the world of g-patterns coming from lambda terms. That is for a future post.

In this post I want to give a picture of how these g-patterns self-multiply, in the sense that most of the self-multiplication process can be explained independently of the computing model. Later on we shall come back to this; we shall look outside lambda calculus as well, and we shall also explore the combinator molecules.

OK, let’s start. In part V it was noticed that after an application of the beta rule to the g-pattern

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

we obtain (via COMB moves)

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

and the problem is that we have a g-pattern which does not come from a lambda term, because it has a FOTREE in the middle of it. It looks like this (recall that FOTREEs are drawn in yellow and the syntactic trees in light blue)

structure_12

The question is: what can happen next? Let’s simplify the setting by taking the FOTREE in the middle to be a single fanout node, and ask what moves can be applied further to the g-pattern

C[x] FO[x,a,b]

Clearly we can apply DIST moves. There are two DIST moves, one for the application node, the other for the lambda node.

There is a chain of propagation of DIST moves through the syntactic tree of C, which is independent of the model of computation chosen (i.e. of the rules about which moves are used, and when and where), because the syntactic tree is a tree.
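As an aside, the g-pattern notation used above translates directly into data. Here is a minimal sketch in Python (all names are invented, and the node kinds C, FOTREE, B stand in for whole sub-patterns, just as they do in the text): a g-pattern is a list of nodes with port names, and the beta move from the beginning of this section simply reconnects ports. Running it reproduces the before/after pair written above.

    # Sketch: a g-pattern as a list of (node_kind, [port_names]); the beta move
    # L[a,x,b] A[b,c,d] reconnects a with d and c with x, following the convention
    # used in this post. Helper names are illustrative only.

    def beta_move(gpattern):
        """Apply one beta move, if an L node and an A node share the L's third port."""
        for i, (kind_l, pl) in enumerate(gpattern):
            if kind_l != "L":
                continue
            a, x, b = pl
            for j, (kind_a, pa) in enumerate(gpattern):
                if kind_a == "A" and pa[0] == b:
                    _, c, d = pa
                    rest = [n for k, n in enumerate(gpattern) if k not in (i, j)]
                    # reconnect: the body's root a goes to the output d,
                    # the argument port c goes to the bound-variable port x
                    rename = {a: d, c: x}
                    return [(kind, [rename.get(p, p) for p in ports]) for kind, ports in rest]
        return gpattern  # no beta redex found

    g = [("L", ["a", "x", "b"]), ("A", ["b", "c", "d"]),
         ("C", ["c"]), ("FOTREE", ["x", "a1", "a2"]), ("B", ["a1", "a2", "a"])]
    print(beta_move(g))
    # [('C', ['x']), ('FOTREE', ['x', 'a1', 'a2']), ('B', ['a1', 'a2', 'd'])]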

Look what happens. First we have the propagation of DIST moves (for the application nodes, say), which produces two copies of a part of the syntactic tree which contains the root.

structure_7

At some point we arrive at a pattern which allows the application of a DIST move for a lambda node. We apply the rule:

structure_8

We see that fanins appear! … and then the propagation of DIST moves through the syntactic tree continues until eventually we get this:

structure_9

So the syntactic tree has self-multiplied, but the two copies are still connected by FOTREEs which connect to the left.out ports of the lambda nodes which are part of the syntactic tree (only one such FOTREE is shown in the previous image).

Notice that now (or even earlier; it does not matter, actually; why will be explained rigorously when we talk about the computing model, for the moment we only want to see that it is possible) we are in a position to apply the FAN-IN move. Also, it is clear that by using CO-COMM and CO-ASSOC moves we can shuffle the arrows of the FOTREE, which is “conjugated” with a fanin at the root and with fanouts at the leaves, so that eventually we get this.

structure_10

The self-multiplication is achieved! It looks strikingly like the anaphase [source]

800px-Anaphase.svg

followed by telophase [source]

Telophase.svg

____________________________________________________

 

 

Synapse servers: plenty of room at the virtual bottom

The Distributed GLC is a decentralized asynchronous model of computation which uses chemlambda molecules. In the model, each molecule is managed by an actor, which has the molecule as its state, and the free ports of the molecule are tagged with other actors’ names. Reductions between molecules (like chemical reactions) happen only for those molecules whose actors know one another, i.e. only between molecules managed by actors :Alice and :Bob, say, such that:

  • the state of :Alice (i.e. her molecule) contains one half of the graph_1 pattern of a move, along with a free port tagged with :Bob’s name.
  • the state of :Bob contains the other half of the graph_1 pattern, with a free port tagged with :Alice’s name
  • there is a procedure based exclusively on communication by packets, TCP style [UPDATE: watch this recent video of an interview with Carl Hewitt!], which allows the reduction to be performed on both sides and which later informs any other actors which are neighbors (i.e. appear as tags in :Alice’s or :Bob’s state) about possible new tags in their states, due to the reduction which happened (this can be done, for example, by performing the move either via the introduction of new invisible nodes in the chemlambda molecules, or via the introduction of Arrow elements, followed by combing moves). A sketch of such a procedure is given below.
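Here is a hedged sketch, in Python, of what such a procedure could look like: :Alice holds one half of the pattern with a free port tagged :Bob, :Bob holds the other half, the reduction is agreed upon by an exchange of messages, and afterwards the other neighbours are told that some of their tags may have changed. The class, the message tags and the method names are invented for illustration, and the pattern check itself is stubbed out.

    # Illustrative sketch of a reduction performed purely by message passing between
    # two actors; message tags and method names are invented.

    class GLCActor:
        registry = {}  # stands in for the network

        def __init__(self, name, half_pattern, free_ports):
            self.name = name
            self.half = half_pattern      # e.g. ("L", ["a", "x", "b"])
            self.free_ports = free_ports  # free port -> neighbour actor name
            GLCActor.registry[name] = self

        def send(self, target, message):
            GLCActor.registry[target].receive(self.name, message)

        def propose_beta(self, port):
            # :Alice proposes a beta move across the port tagged with :Bob's name
            self.send(self.free_ports[port], ("beta?", port, self.half))

        def receive(self, sender, message):
            tag = message[0]
            if tag == "beta?":
                # a real actor would check here that the two halves complete the
                # graph_1 of the beta move; the check is omitted in this sketch
                print(f"{self.name}: my half and {sender}'s half match, rewriting my state")
                self.send(sender, ("beta!",))
                self.notify_neighbours()
            elif tag == "beta!":
                print(f"{self.name}: {sender} confirmed, rewriting my state")
                self.notify_neighbours()
            elif tag == "retag":
                print(f"{self.name}: updating a free-port tag after {sender}'s reduction")

        def notify_neighbours(self):
            # tell the neighbours that some of their tags may now point elsewhere
            for port, neighbour in self.free_ports.items():
                self.send(neighbour, ("retag", port))

    alice = GLCActor(":Alice", ("L", ["a", "x", "b"]), {"b": ":Bob"})
    bob = GLCActor(":Bob", ("A", ["b", "c", "d"]), {"b": ":Alice", "d": ":Claire"})
    claire = GLCActor(":Claire", None, {})
    alice.propose_beta("b")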

Now, here is possibly a better idea, to explore, one which connects to a thread which is not developed for the moment (anybody interested? contact me): neural type computation with chemlambda and GLC.

The idea is that once the initial configuration of actors and their initial states is set, why not move the actors around and perform the possible reductions only if the actors :Alice and :Bob are in the same synapse server?

Because the actor IS the state of the actor, the rest of what a GLC actor knows how to do is so trivially easy that it is not worth dedicating one program per actor, running in some fixed place. This way, a synapse server can do thousands of reductions on different actors’ datagrams (see further) at the same time.

Instead:

  • be bold and use connectionless communication, UDP-like, to pass the actors’ states (as datagrams) between servers called “synapse servers”
  • and let a synapse server check datagrams to see if by chance there is a pair which allows a reduction, perform the reduction in one place, then let them walk, modified (see the sketch after this list).
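A minimal sketch of such a synapse server loop, in Python, with the matching and the reduction stubbed out (the helper names and the textual encoding of the datagrams are invented):

    # Sketch of a "synapse server": actor states arrive as datagrams, the server
    # matches a pair that completes a rewrite pattern, reduces it, and forwards
    # the modified states. Matching and reduction are stubs.

    def completes_a_move(x, y):
        # stub: stands in for finding the graph_1 of some move split between x and y
        return ("L[" in x and "A[" in y) or ("A[" in x and "L[" in y)

    def reduce_pair(x, y):
        # stub: stands in for performing the move on both halves
        return x + " (reduced)", y + " (reduced)"

    def synapse_server(incoming):
        buffer = []
        for datagram in incoming:              # connectionless: datagrams just arrive
            match = next((held for held in buffer if completes_a_move(held, datagram)), None)
            if match is None:
                buffer.append(datagram)        # hold it and wait for a partner
            else:
                buffer.remove(match)
                a, b = reduce_pair(match, datagram)
                yield a                        # "let them walk, modified"
                yield b

    for out in synapse_server(["L[a,x,b]", "A[b,c,d]", "FO[x,y,z]"]):
        print(out)

The point is only the shape of the loop: datagrams arrive connectionless, the server holds them briefly, and a matched pair is reduced in one place and sent on its way.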

There is so much room for the artificial chemistry chemlambda at the bottom of the Internet layers that one can then add some learning mechanisms to the synapse servers. One is, for example, this: suppose that a synapse server matches two actors’ datagrams and finds that there is more than one possible reduction between them. Then the synapse server asks its neighbouring synapse servers (which perhaps correspond to a virtual neuroglia) if they encounter this configuration. It then chooses (according to a simple algorithm) which reduction to make, based on the info coming from its neighbours in the same glial domain, and tags the packets which result after the reduction (i.e. adds to them, in some field, a code for the move which was made). Successful choices are those which have descendants which are active, say after more than $n$ reductions.

Plenty of possibilities, plenty of room at the bottom.

Distributed GLC discussion

This is my contribution to a discussion about the distributed GLC model of computation and about the associated gui.

Read carefully.  It has a fractal structure.

Basics about chemlambda graphs and the GUI

The chemlambda graphs (molecules) are not flowcharts. One just has to specify certain (known) graphs with at most 4 nodes and how to replace them with other known simple graphs. That is all.

That means that one needs:

– a file which specifies what kind of graphs are used (by giving the type of nodes and arrows)

– a file which specifies which are the patterns (i.e. graphs) and which are the rewrite moves

– and a program which takes these files and a graph as input and does things like: checking if the graph is of the kind described in file 1; checking if there are any patterns from file 2 and doing the rewrite in a place chosen by the user; doing a sequence of rewrites until it forks, if the user wants; or taking as input a lambda expression given by the user and transforming it into a graph.

– then there is the graph visualization program (or programs); that is the hard part, but it has already been done in multiple places. This means that one only has to write the conversions of file formats from and to the viz tool.

That is the minimal configuration.
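As an illustration of the minimal configuration, the two specification files and the checking part of the program could be as simple as the following sketch (Python; the node kinds, arities and the move table are schematic, not an authoritative chemlambda specification):

    # Sketch of the two specification files as Python data (in practice they could be
    # plain text or JSON files read by the program).

    NODE_KINDS = {          # "file 1": node types and their arities
        "L": 3,             # lambda node
        "A": 3,             # application node
        "FO": 3,            # fanout
        "FI": 3,            # fanin
        "Arrow": 2,
    }

    MOVES = {               # "file 2": patterns (graph_1) and replacements (graph_2)
        "beta": {"graph_1": [("L", ["1", "2", "3"]), ("A", ["3", "4", "5"])],
                 "graph_2": [("Arrow", ["1", "5"]), ("Arrow", ["4", "2"])]},
    }

    def well_formed(graph):
        """Check that a graph uses only declared node kinds, each with the right arity."""
        return all(kind in NODE_KINDS and len(ports) == NODE_KINDS[kind]
                   for kind, ports in graph)

    g = [("L", ["a", "x", "b"]), ("A", ["b", "c", "d"])]
    print(well_formed(g))   # True: g is of the kind described in "file 1"
    print(list(MOVES))      # ['beta']: a driver would now search g for the graph_1 patterns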

Decorations

There are various reasons why one wants to be able to decorate the graphs, locally, as a supplementary thing, but in no way is this needed for the basic process.

Concerning decorations, one needs a file which specifies how to decorate arrows and which are the relations coming from nodes. But this is not part of the computation.

If we want to make it more powerful then it gets more complex, because if we want to do symbolic computations with the decorations (like the elimination of a relation coming from a node) then it is probably better to output a file of decorations of arrows and relations from nodes and feed it into symbolic software, like Mathematica or something free; there is no need to reinvent the wheel.

After the graph rewrite you lose the decoration; that is part of the fun, which makes decorations less interesting and supposedly makes the computation more secure.

But that depends on the choice of decoration rules.

For example, if you decorate with types then you don’t lose the decoration after the move. Yes, there are nodes and arrows which disappear, but outside of the site where the move was applied, the decorations don’t change.

In the particular case of using types as decorations, there is another phenomenon, though. If you use the decoration with types for graphs which don’t represent lambda terms then you will get relations between types, relations which are supplementary. A way of saying this is that some graphs are not well typed, meaning that the types form an algebra which is not free (you can’t eliminate all relations). But the algebra you get, albeit not free, is still an interesting object.

So the decoration procedure and the computation (reduction) procedure are orthogonal. You may decorate a fixed graph and do symbolic algebraic computations with the decorations, in an algebra generated by the graph, in the same way as a knot generates an algebra called a quandle. Or you may reduce the graph, irrespective of the decorations, and get another graph. Decorations of the first graph don’t persist, a priori, after the reduction.

An exception is decoration with types, which persists outside the place where the reduction is performed. But there is another problem: even if the decoration with types satisfies locally (i.e. at the arrows of each node) what is expected from types, many (most) of the graphs don’t generate a free algebra, as would be expected of algebras of types.

The first chemical interpretation

There is the objection that the reductions can’t be like chemical reactions because the nodes (atoms) can’t appear or disappear, there should be a conservation law for them.

Correct! What then?

The reduction, let’s pick one – the beta move, say – is a chemical reaction of the graph (molecule) with an enzyme which in the formalism appears only under the name “beta enzyme” but is not specified as a graph in chemlambda. Then, during the reaction, some nodes may disappear, in the sense that they bond to the beta enzyme and make it inactive from then on.

So, the reduction A –>beta B appears as the reaction

A + beta = B + garbage

How moves are performed

Let’s get a bit more detailed about what moves (graph rewrites) mean and how they are done. Every move says: replace graph_1 with graph_2, where graph_1, graph_2 are graphs with a small number of nodes and arrows (and a “graph” may well be made of only two arrows, as is the case for graph_2 of the beta move).

So, now you have a graph G. Then the program looks for graph_1 chunks in G and adds some annotation (perhaps in an annotation file it produces). Then there may be a script which inputs the graph G and the annotation file into the graph viz tool, with the effect, for example, that the graph_1 chunk appears phosphorescent on the screen. Or, say, when you hover with the mouse over the graph_1 chunk it changes color, or there is an ellipse which encircles it and a tag saying “beta”.

Suppose that the user clicks, giving his OK for performing the move. Then on the screen the graph changes, but the previous version is kept in memory, in case the user wants to go back (the moves are all reversible, but sometimes, as in the case of the beta move, the graph_2 is too common, it is everywhere, so the use of both senses of some moves is forbidden in the formalism, unless it is used in a predefined sequence of moves, called a “macro move”).

Another example would be that the user clicks on a button which says “go along with the reductions as long as you can do it before you find a fork in the reduction process”. Then, of course it would be good to keep the intermediate graphs in memory.

Yet another example would be that of a node or arrow of the graph G which turns out to belong to two different interesting chunks. Then the user should be able to choose which reduction to do.

It would be good to have the possibility to perform each move upon request,

plus

the possibility to perform a sequence of moves which starts from a first one chosen by the user (or from the only one available in the graph, as is the case for many graphs coming from lambda terms which are obtained by repeated currying and nesting)

plus

the possibility to define new, composite moves at once. For example, you notice that there is a chunk graph_3 which contains graph_1 and, after reduction of graph_1 to graph_2 inside graph_3, the graph_3 becomes graph_4; graph_4 now contains a chunk graph_1 of another move, which can be reduced, and graph_4 becomes graph_5. Now, you may want to say: I save this sequence of two moves from graph_3 to graph_5 as a new move. The addition of this new move does not change the formalism because you may always replace the new move with the sequence of two old moves.

Practically the last possibility means the ability to add new chunks graph_3 and graph_5 in the file which describes the moves and to define the new move with a name chosen by the user.

plus

Finally, you may want to be able to either select a chunk of the input graph by clicking on nodes and arrows, or to construct a graph and then say (i.e. click a button) that from now on that graph will be represented as a new type of node, with a certain arity. That means writing in the file which describes the type of nodes.

You may combine the last two procedures by saying that you select or construct a graph G. Then you notice that you may reduce it in an interesting way (for whatever further purposes) which looks like this:

– before the chain of reduction you may see the graph G as being made by two chunks A and B, with some arrows between some nodes from chunk A and some nodes from chunk B. After the reduction you look at the graph as being made by chunks C, D, E, say.

– Then you “save” your chunks A, B, C, D, E as new types of nodes (some of them may be of course just made by an old node, so no need to save them) and you define a new move which transforms the chunk AB into the chunk CDE (written like this only because of the 1D constraints of writing, but you see what I mean, right?).

The addition of these new nodes and moves doesn’t change the formalism, because there is a dictionary which transforms each new type of node into a graph made of old nodes and each new move into a succession of old moves.

How can this be done:

– use the definition of new nodes for the separation of G into A, B and for the definition of G’ (the graph after the sequence of reductions) into C,D,E

– save the sequence of moves from G to G’ as new composite move between G and G’

– produce a new move which replaces AB with CDE

It is interesting to work out how this should be done properly; probably one should keep both the move AB to CDE and the move G to G’, as well as the translations of G into AB and of G’ into CDE.
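As an illustration, the dictionary mentioned above can be as simple as a table from each new move name to the sequence of old moves it abbreviates; applying a composite move is then just expanding it. A minimal sketch (Python, with invented names and a stubbed rewriting step):

    # Sketch: composite ("macro") moves as a dictionary from new names to sequences
    # of old move names; the formalism is unchanged because expansion is always possible.

    OLD_MOVES = {"beta", "DIST", "FAN-IN", "COMB", "CO-ASSOC", "CO-COMM", "LOC PRUNING"}

    MACRO_MOVES = {
        # user-chosen name -> the sequence of old moves it abbreviates
        "my_composite": ["beta", "COMB", "DIST"],
    }

    def apply_move(graph, move_name):
        # stub: here a real program would rewrite the graph
        print(f"applying {move_name}")
        return graph

    def apply(graph, move_name):
        if move_name in OLD_MOVES:
            return apply_move(graph, move_name)
        for step in MACRO_MOVES[move_name]:   # expand the macro into old moves
            graph = apply(graph, step)
        return graph

    apply([], "my_composite")   # prints: applying beta / COMB / DIST, in that order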

We’re getting close to actors, but the first purpose of the gui is not to be a sandbox for the distributed computation; that would be another level on top of it.

The value of the sequence of moves saved as a composite move is multiple:

– the graph_3 which is the start of the sequence contains graph_1 which is the start of another move, so it always leads to forks: one may apply the sequence or only the first move

– there may be a possible fork after you do the first reduction in graph_3, in the sense that there may be another chunk of another move which could be applied

GLC actors

The actors are a special kind of decoration which transforms (some) moves (at the choice of the user) into interaction devices.

You decorate the nodes of a graph G with actors’ names (they are just names, for the moment, at your choice). As a convention, let’s say that we denote actor names by :a, :b, etc.

You also decorate arrows with pairs of names of actors, those coming from the decorations of the nodes, with the convention that (:a, :a) is identified (in the user’s mind) with :a (nothing surprising here: think about the pair groupoid over a set X, whose set of “arrows” is X \times X; X appears as the set of objects of the groupoid and identifies with the set of pairs (x,x) with x \in X).

Now, say you have a move from graph_1 to graph_2. Then, as in the boldfaced previous description, but somehow in the opposite sense, you define graphs A, B such that graph_1 is AB and graphs C,D such that graph_2 is CD.

Then you say that you can perform the reduction from graph_1 to graph_2 only if the nodes of A are all decorated with :a and the nodes of B are all decorated with :b, a different name from :a.

After reduction you decorate the nodes of C with :a and the nodes of D with :b .

In this way the actors with identities :a and :b change their state during the reduction (i.e. the graph made by the nodes decorated with :a and the arrows decorated with (:a,:a) change, same for :b).

The reduction can be done for the graph G only at chunks graph_1 which are decorated as explained.

To explain what actor :Bob is doing, it matters from which point of view we look. Also, what is the relation between actors and the chemical interpretation; how do they fit there?

So let’s take it methodically.

The point of view of the GUI

If we discuss from the point of view of playing with the gui, then the user of the gui has a global, God’s-eye view of what happens. That means that the user of the gui can see the whole graph at one moment, and the user has a clock which is like a global notion of time. So from this point of view the user of the gui is the master of space and time. He sees the fates of :Bob, :Alice, :Claire, :Dan simultaneously. The user has the right, in the gui world, to talk about parallel stuff happening (i.e. “at the same time”) and sequential stuff happening (to the same actor or actors). The user may notice that some reductions are independent, in the sense that with respect to the user’s clock the result is the same if first :Bob interacts with :Alice and then :Claire interacts with :Dan, or conversely, which makes the user think that there is some notion more general than parallelism, i.e. concurrency.

If we discuss from the point of view of :Bob, it looks different. More later.

Let’s stay with the point of view of the user of the gui and think about what actors do. We shall use the user’s clock as the reference for time and the user’s point of view about space (what is represented on the screen via the viz tool) to speak about states of actors.

What the user does:

– he defines the graph types and the rules of reduction

– he inputs a graph

– he decorates it with actors names

– he clicks some buttons from time to time (deus ex machina, which, quote, “is a plot device whereby a seemingly unsolvable problem is suddenly and abruptly resolved by the contrived and unexpected intervention of some new event, character, ability or object.”)

At any moment the actor :Bob has a state.

Definition: the state of :Bob at the moment t is the graph formed by the nodes decorated with the name :Bob, the arrows decorated by (:Bob, :Bob) and the arrows decorated by (:Bob, :Alice), etc .

Because each node is decorated with an actor name, it follows that there are never shared nodes between different actors, but there may be shared arrows, like an arrow decorated (:Bob, :Alice), which belongs to both :Bob and :Alice.

The user thinks about an arrow (:Bob, :Alice) as being made of two half arrows:

– one which starts at a node decorated with :Bob and has a free end, decorated with :Alice ; this half arrow belongs to the state of :Bob

– one which ends at a node decorated with :Alice and has a free start, decorated with :Bob ; this half arrow belongs to the state of :Alice

The user also thinks that the arrow decorated by (:Bob, :Alice) shows that :Bob and :Alice are neighbours there. What does “there” mean? It is like when you, Bob, want to park your car (the state of Bob) and your front left tyre is close to the concrete curb (i.e. :Alice), but you may also consider that your back is close to the trash bin (i.e. :Elvis).

We may represent the neighboring relations given by these arrows as a new graph, obtained by thinking of :Bob, :Alice, … as nodes and of an arrow decorated (:Bob, :Alice) as an arrow from the node which represents :Bob to the node which represents :Alice (of course there may be several such arrows decorated (:Bob, :Alice)).

This new graph is called the “actors diagram” and is something used by the gui user to put order in his head and to explain to others what is happening there.
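The two definitions above (the state of an actor and the actors diagram) translate directly into code. Here is a minimal sketch in Python, with an arbitrary graph given as a list of arrows (pairs of nodes) and a decoration mapping nodes to actor names; all names and numbers are illustrative:

    # Sketch: compute actor states and the actors diagram from a node decoration.

    def arrow_decoration(arrow, deco):
        u, v = arrow
        return (deco[u], deco[v])          # e.g. (":Bob", ":Alice")

    def actor_state(graph, deco, actor):
        """Nodes decorated with the actor's name, plus all arrows touching them
        (the shared arrows are the 'half arrows' described above)."""
        nodes = [n for n, a in deco.items() if a == actor]
        arrows = [a for a in graph if actor in arrow_decoration(a, deco)]
        return nodes, arrows

    def actors_diagram(graph, deco):
        """One arrow per (:a, :b) decoration with :a != :b, i.e. the neighbouring relation."""
        return [(x, y) for x, y in (arrow_decoration(a, deco) for a in graph) if x != y]

    graph = [(1, 2), (2, 3), (3, 4)]
    deco = {1: ":Bob", 2: ":Bob", 3: ":Alice", 4: ":Claire"}
    print(actor_state(graph, deco, ":Bob"))   # ([1, 2], [(1, 2), (2, 3)])
    print(actors_diagram(graph, deco))        # [(':Bob', ':Alice'), (':Alice', ':Claire')]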

The user calls the actors diagram “space”, because he thinks that space is nothing but the neighboring relation between actors at a moment in time (user’s time). He is aware that there is a problem with this view, which supposes that there is a global time notion and a global simultaneous view on the actors (states), but says “what the heck, I have to use a way to discuss with others about what’s happening in the gui world, but I will show great caution and restraint by trying to keep track of the effects of this global view on my explanation”.

Suppose now that there is an arrow decorated (:Bob, :Alice) and this arrow, along with the node from the start (decorated with :Bob) and the node from the end (decorated with :Alice) is part of the graph_1 of one of the graph rewrites which are allowed.

Even more generally, suppose that there is a graph_1 chunk which has the form AB, with the sub-chunk A belonging to :Alice and the sub-chunk B belonging to :Bob.

Then the reduction may happen there. (Who initiates it? Alice, Bob, the user’s click ? let’s not care about this for a moment, although if we use the user’s point of view then Alice, Bob are passive and the user has the decision to click or not to click.)

This is like a chemical reaction which also takes space into consideration. How?

Denote by Alice(t) and Bob(t) the respective states of :Alice and :Bob at the moment t. Think about the states as being two chemical molecules, instead of one as previously.

Each molecule has a reaction site: for Alice(t) the reaction site is A and for Bob(t) the reaction site is B.

They enter in the reaction if two conditions are satisfied:

– there is an enzyme (say the beta enzyme, if the reduction is the beta) which can facilitate the reaction (by the user’s click)

– the molecules are close in space, i.e. there is an arrow from A to B, or from B to A

So you see that it may happen that Alice(t) has inside a chunk which looks like A and Bob(t) has a chunk which looks like B, but if the chunks A, B are not connected such that AB forms a chunk which is like the graph_1 of the beta move, then they can’t react because (in the physical interpretation, say) they are not close in space.

The reaction sites of Alice(t) and Bob(t) may be close in space, but if the user does not click then they can’t react because there is no beta enzyme roaming around to facilitate the reaction.

If they are close and if there is a beta enzyme around then the reaction appears as

Alice(t) + Bob(t) + beta = Alice(t+1) + Bob(t+1) + garbage

Let’s see now who Alice(t+1) and Bob(t+1) are. The beta rewrite replaces graph_1 (which is AB) by graph_2 (which is CD). C will belong to Alice(t+1) and D will belong to Bob(t+1). The rest of Alice(t) and Bob(t) is inherited unchanged by Alice(t+1) and Bob(t+1).

Is this true? What about the actors diagram, will it change after the reaction?

Actually graph_1, which is AB, may have (and usually does have) other arrows besides the ones decorated with (:Bob, :Alice). For example A may have arrows from A to the rest of Alice(t), i.e. decorated with (:Alice, :Alice), and the same for B, which may have other arrows from B to the rest of Bob(t), which are decorated by (:Bob, :Bob).

After the rewrite (chemical reaction) these arrows will be rewired by the replacement of AB by CD, but nevertheless the new arrows which replace them will be decorated by (:Alice, :Alice) (because they will become arrows from C to the rest of Alice(t+1)) and (:Bob, :Bob) (same argument). All in all, we see that after the chemical reaction the molecule :Alice and the molecule :Bob may lose or gain some nodes (atoms) and they may suffer some internal rewiring (bonds), so this looks as if :Alice and :Bob changed their chemical composition.

But they also moved as an effect of the reaction.

Indeed, graph_1, which is AB, may have other arrows besides the ones decorated with (:Bob, :Alice), (:Bob, :Bob) or (:Alice, :Alice). The chunk A (which belongs to Alice(t)) may have arrows which connect it with :Claire, i.e. there may be arrows from A to another actor, Claire, decorated with (:Alice, :Claire), for example.

After the reaction, which consists in the replacement of AB by CD, there are rewirings, which may have as an effect the appearance of arrows decorated (:Bob, :Claire), for example. In such a case we say that Bob moved close to Claire. The molecules move in this way (i.e. in the sense that the neighboring relations change in this concrete way).
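Before the pit stop, here is a minimal sketch (Python, with invented node numbers, an invented rewiring, and a one-line version of the actors diagram helper from the earlier sketch) of the bookkeeping in the last few paragraphs: the rewrite AB -> CD gives C to :Alice and D to :Bob, the rest keeps its decoration, and the rewiring can change the actors diagram, i.e. :Bob can end up next to :Claire.

    # Illustrative sketch: re-decoration and "movement" caused by a rewrite AB -> CD.

    def actors_diagram(graph, deco):
        return [(deco[u], deco[v]) for u, v in graph if deco[u] != deco[v]]

    # before the reaction: node 2 is the chunk A (in :Alice), node 3 is the chunk B (in :Bob)
    graph_t = [(1, 2), (2, 3), (3, 4)]
    deco_t = {1: ":Claire", 2: ":Alice", 3: ":Bob", 4: ":Bob"}

    # the move replaces AB = nodes {2, 3} by CD = nodes {5, 6}, with some rewiring
    graph_t1 = [(1, 6), (5, 6), (6, 4)]
    deco_t1 = {n: a for n, a in deco_t.items() if n not in (2, 3)}   # inherited unchanged
    deco_t1.update({5: ":Alice", 6: ":Bob"})                         # C -> :Alice, D -> :Bob

    print(actors_diagram(graph_t, deco_t))    # [(':Claire', ':Alice'), (':Alice', ':Bob')]
    print(actors_diagram(graph_t1, deco_t1))  # [(':Claire', ':Bob'), (':Alice', ':Bob')] -- :Bob moved close to :Claire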

Pit stop

Let’s stop here for the moment, because there is already a lot. In the next message I hope to talk about why the idea of using a chemical reaction network (CRN) image is good, but still global: it is a way to replace the user’s deus ex machina clicks by the random availability of enzymes, but it still uses a global time and a global space (i.e. the actors diagrams). The model will also be better than the usual CRN based model, where the molecules are supposed to be part of a “well stirred” solution (i.e. let’s neglect space effects on the reaction), or are supposed to diffuse in a fixed space (i.e. let’s make the space passive). The model will allow the introduction of global notions of entropy.

Such a CRN based model deserves a study of its own, because it is unusual in the way it describes the space and the chemical reactions of the molecule-actors as aspects of the same thing.

But we want to go even further, towards renouncing the global point of view.

The example with the marbles

In a discussion about the possible advantages for secure computing with the GLC actors model, I came up with this analogy, which I want to file here so that it does not get lost in the flow of exchanges:

Mind that this is only a thought experiment, which might not be accurate in all aspects in its representation of the kind of computation done with GLC or, more accurately, with chemlambda.

Imagine a large pipe, with a diameter of 1 m say, and 3 m long, to have an image. It is full of marbles, all identical in shape. It is so full that if one forces a marble in at one end then a marble (or sometimes more) has to come out at the other end.

Say Alice is at one end of the pipe and Bob is at the other end. They agreed previously to communicate in the most primitive manner, namely by spilling a small (say around ten) or a big (for example around 50) number of marbles at their respective ends. The pipe contains maybe 10^5 or 10^6 marbles, so both these numbers are small.

There is also Claire who, for some reason, can’t see the ends of Alice and Bob, but the pipe has a window at the middle and Claire can see about 10% of the marbles from the pipe, those which are behind the window.

Let’s see how the marbles interact. Having the same shape, and because the pipe is full of them, they are in a local configuration which minimizes the volume (maybe not all of them, but here the analogy is mum about this). When a marble (or maybe several) is forced in at Alice’s end of the pipe, there are lots of movements which accommodate the new marbles with the old ones. The physics of marbles is known: it is the elastic contact between them, and there is a fact in the platonic sky which says that for any local portion of the pipe the momentum and energy are conserved, as well as the volume of the marbles. The global conservation of these quantities is an effect of the local ones (as anybody versed in continuum mechanics can confirm).

Now, Claire can’t get anything from looking through the window. At best Claire notices complex small movements, but there is no clear picture of how this happens (other than that, if she looks at a small number of them, she might figure out the local mechanical ballet imposed by the conservation laws), nor are Alice’s marbles marching towards Bob’s end.

Claire can easily destroy the communication, for example by opening her window and taking out some buckets of marbles, or even by breaking the pipe. But this does not get Claire closer to understanding what Alice and Bob are talking about.

Claire could of course claim that if the whole pipe were transparent, she could film the pipe and then reconstruct the communication. But in this case Claire would be the goddess of the pipe and nothing would be hidden from her. Alice and Bob would be her slaves, because Claire would be in a position which is equivalent to having a window at each end of the pipe.

__________________________

Comments:

  • each marble is a GLC actor
  • they interact locally, by known and simple rules
  • this is an example of signal transduction
  • which encrypts itself; more communication makes the decoding harder. It is the same problem encountered when observing a living system, for example a cell. You may freeze it (and therefore kill it) and look at it, but you won’t see how it functions. You can observe it alive, but it is complex by itself; you never see the meaning, or catch only rare glimpses of it.
  • the space (of the pipe) represents an effect of the local, decentralized, asynchronous interactions.

Underneath there is just local interaction, via the moves which act on patterns of graphs split between actors. But this locality gives space, which is an emergent, global effect of these distinctions which communicate.

Two chemical molecules which react are one composite molecule which reduces itself, split between two actors (one per molecule). Saying that the molecules react when they are close is the same as saying that their associated actors interact when they are in the neighboring relation. And the reaction modifies not only the respective molecules, but also the neighboring relation between actors, i.e. the reaction makes the molecules move through space. The space is transformed, as well as the shape of the reactants, which looks, from an emergent perspective, as if the reactants move through some passive space.

Concretely, each actor has a piece of the big graph; two actors are neighbours if there is an arrow of the big graph which connects their respective pieces; the reduction moves can be applied only on patterns which are split between two actors; and, as an effect, the reduction moves modify both the pieces and the arrows which connect the pieces, thus the neighbouring relation of the actors.

What we do in the distributed GLC project is to use actors to transform the Net into a space. It works exactly because space is an effect of locality, on one side, and of universal simple interactions (moves on graphs) on the other side.

__________________________________________

Quantum experimenters Alice and Bob are actors (NTC vs TC, II)

… i.e. if you turn the diagrams which explain quantum protocols by 90 deg, then the graph rewrites which are used in the NTC (topology does not compute) categorical quantum mechanics transform into TC (topology computes) diagrams. (See NTC vs TC part 1 for the terminology.)

And Alice and Bob become GLC actors. Indeed, they behave like actors from Hewitt’s Actor model and, moreover, communicate through graph rewrites.

I shall come back to this with many details; for the moment I can’t do decent drawings, but they will appear soon.

It is intriguing, right? Yes, because when you turn the diagrams by 90 deg, the time and space “directions” change places. Moreover, the Alice from experiment 1 (described by what appears in categorical quantum mechanics as process 1, a decorated graph) is united with the Alice from experiment 2 (described by what appears as process 2, which is equivalent by graph rewrites to process 1). That is, if you turn the diagrams by 90 deg, the two instances of Alice (or all the instances of Alice from all the intermediate steps of the graph rewrites) form the GLC actor Alice. Same for Bob, of course.

What this equivalence means is intriguing. Very intriguing.

________________________________________

 

How is signal transduction different from information theory?

They look different.

“Signal transduction occurs when an extracellular signaling[1] molecule activates a specific receptor located on the cell surface or inside the cell. In turn, this receptor triggers a biochemical chain of events inside the cell, creating a response.[2] Depending on the cell, the response alters the cell’s metabolism, shape, gene expression, or ability to divide.[3] The signal can be amplified at any step. Thus, one signaling molecule can cause many responses.[4]

Signal transduction involves the binding of extracellular signalling molecules and ligands to cell-surface receptors that trigger events inside the cell. The combination of messenger with receptor causes a change in the conformation of the receptor, known as receptor activation. This activation is always the initial step (the cause) leading to the cell’s ultimate responses (effect) to the messenger. Despite the myriad of these ultimate responses, they are all directly due to changes in particular cell proteins. Intracellular signaling cascades can be started through cell-substratum interactions; examples are the integrin that binds ligands in the extracellular matrix and steroids.[13]

[wiki source]

 

It seems that signal transduction involves:

  • messenger molecule
  • receptor molecule
  • the messenger reacts with the receptor, which changes conformation (receptor activation)
  • receptor activation triggers other chemical reactions

Let’s condense further:

  • molecule M (messenger) binds to molecule R (receptor)
  • the complex MR changes shape (a spatial notion, in a generalized sense)
  • which triggers other reactions between (the new) R with other neighboring molecules.

It is interesting for me because that is how distributed GLC works:

  • the actor M reacts with the actor R
  • after interaction both molecules may change, the one which belongs to the actor M and the one which belongs to the actor R,
  • but the actors’ adjacencies may change as well, due to the reductions involved in the reaction between M and R (this corresponds to receptor activation)
  • which triggers other interactions between GLC actors.

 

Information theory, on the other hand, concerns a sender, a channel and a receiver. The sender sends messages through the channel to the receiver.

Completely different frames.

One may, of course, partially ignore the mechanism (signal transduction) and look instead at the environment as a sender, the cell as a receiver and the cell’s membrane as a channel (just an example).

But sender, channel and receiver look (to me) like mental constructs which are useful for a human trying to make sense of what is happening, from the outside. What is happening, though, is signal transduction.

_________________________________