
Distributed GLC discussion

This is my contribution to a discussion about the distributed GLC model of computation and about the associated gui.

Read carefully.  It has a fractal structure.

Basics about chemlambda graphs and the GUI

The chemlambda graphs (molecules) are not flowcharts. One just has to specify certain (known) graphs with at most 4 nodes and how to replace them with other known simple graphs. That is all.

That means that one needs:

– a file which specifies what kind of graphs are used (by giving the type of nodes and arrows)

– a file which specifies the patterns (i.e. graphs) and the rewrite moves

– and a program which takes these files and a graph as input and does things like: checking that the graph is of the kind described in file 1; finding any patterns from file 2 and doing the rewrite at a place the user chooses; doing a sequence of rewrites until it forks, if the user wants; or taking as input a lambda expression given by the user and transforming it into a graph.

– then there are the visualization program(s) for the graphs; that is the hard part, but it is already done in multiple places. This means that one only has to write the conversions of file formats from and to the viz tool.

That is the minimal configuration.
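To fix ideas, here is a minimal sketch in Python of what such files and such a program could look like; the node types, arities and the “pattern” below are illustrative placeholders, not the actual chemlambda specification.

```python
# "file 1": the kinds of graphs allowed -- node types and their arities
# (the types and arities are illustrative placeholders, not the official chemlambda ones)
NODE_TYPES = {"L": 3, "A": 3, "FO": 3}

def check_graph(nodes, edges):
    """Check that a graph is of the kind described in file 1.
    nodes: {node_id: node_type}; edges: list of (node_id, port, node_id, port)."""
    for ntype in nodes.values():
        if ntype not in NODE_TYPES:
            return False
    for (a, pa, b, pb) in edges:
        if a not in nodes or b not in nodes:
            return False
        if pa >= NODE_TYPES[nodes[a]] or pb >= NODE_TYPES[nodes[b]]:
            return False
    return True

def find_candidates(nodes, edges):
    """A stand-in for the pattern search of file 2: here a 'pattern' is simply
    an L node wired to an A node, returned as a candidate for a beta-like rewrite."""
    return [(a, b) for (a, _, b, _) in edges
            if nodes.get(a) == "L" and nodes.get(b) == "A"]
```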

Decorations

There are various reasons why one wants to be able to decorate the graphs, locally, as a supplementary thing, but in no way is this needed for the basic process.

Concerning decorations, one needs a file which specifies how to decorate arrows and what relations come from nodes. But this is not part of the computation.

If we want to make it more powerful then it gets more complex. If we want to do symbolic computations with decorations (like eliminating a relation coming from a node), then it is probably better to output a file of arrow decorations and node relations and feed it into symbolic software, like Mathematica or something free; there is no need to reinvent the wheel.

After the graph rewrite you lose the decoration; that’s part of the fun which makes decorations less interesting and supposedly makes the computation more secure.

But that depends on the choice of decoration rules.

For example, if you decorate with types then you don’t lose the decoration after the move. Yes, there are nodes and arrows which disappear, but outside of the site where the move was applied, the decorations don’t change.

In the particular case of using types as decorations, there is another phenomenon though. If you use the decoration with types for graphs which don’t represent lambda terms then you will get relations between types, relations which are supplementary. A way of saying this is that some graphs are not well typed, meaning that the types form an algebra which is not free (you can’t eliminate all relations). But the algebra you get, albeit not free, is still an interesting object.

So the decoration procedure and the computation (reduction) procedure are orthogonal. You may decorate a fixed graph and you may do symbolic algebraic computations with the decorations, in an algebra generated by the graph, in the same way a knot generates an algebra called a quandle. Or you may reduce the graph, irrespective of the decorations, and get another graph. Decorations of the first graph don’t persist, a priori, after the reduction.

An exception is decoration with types, which persists outside the place where the reduction is performed. But there is another problem: even if the decoration with types satisfies locally (i.e. at the arrows of each node) what is expected from types, many (most) of the graphs don’t generate a free algebra, as would be expected from algebras of types.
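As a sketch of the decoration procedure (with placeholder relations; the real rule attached to a node depends on its type), one could decorate each arrow with a fresh variable, collect one relation per node, and hand the list of relations to a symbolic tool to see whether the generated algebra is free:

```python
import itertools

_fresh = (f"t{i}" for i in itertools.count())

def decorate(nodes, edges):
    """Decorate every arrow with a fresh type variable and collect one relation per node.
    The form of each relation is a placeholder; the real typing rules depend on the node type."""
    arrow_type = {e: next(_fresh) for e in edges}
    relations = []
    for nid, ntype in nodes.items():
        incident = [arrow_type[e] for e in edges if nid in (e[0], e[2])]
        relations.append((ntype, incident))   # e.g. ('A', ['t0', 't1', 't2'])
    return arrow_type, relations
```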

The first chemical interpretation

There is the objection that the reductions can’t be like chemical reactions because the nodes (atoms) can’t appear or disappear, there should be a conservation law for them.

Correct! What then?

The reduction, let’s pick one – the beta move say – is a chemical reaction of the graph (molecule) with an enzyme which in the formalism appears only under the name “beta enzyme” and is not specified as a graph in chemlambda. Then, during the reaction, some nodes may disappear, in the sense that they bond to the beta enzyme and make it inactive from then on.

So, the reduction A –beta–> B appears as the reaction

A + beta = B + garbage
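A tiny sketch of this bookkeeping, with a stub standing in for the actual rewrite: the atoms removed by the move are not destroyed, they end up bonded to the spent enzyme, so the count of atoms is conserved.

```python
def apply_rewrite(molecule, enzyme):
    """Stub for the actual graph rewrite: here it just removes the first two atoms,
    the way the beta move deletes one lambda node and one application node."""
    return molecule[2:], molecule[:2]

def react(molecule, enzyme="beta"):
    """The reduction A --beta--> B written as the reaction  A + beta = B + garbage."""
    new_molecule, removed = apply_rewrite(molecule, enzyme)
    # the removed atoms are not destroyed: they stay bonded to the now-inactive enzyme
    garbage = {"spent_enzyme": enzyme, "bonded_atoms": removed}
    return new_molecule, garbage

# atoms are conserved: len(A) == len(B) + len(garbage["bonded_atoms"])
```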

How moves are performed

Let’s get a bit more detailed about what moves (graph rewrites) mean and how they are done. Every move says: replace graph_1 with graph_2, where graph_1 and graph_2 are graphs with a small number of nodes and arrows (and a “graph” may well be made only of two arrows, as is the case for graph_2 of the beta move).

So, now you have a graph G. The program looks for graph_1 chunks in G and adds some annotation (perhaps in an annotation file it produces). Then there may be a script which feeds the graph G and the annotation file into the graph viz tool, with the effect, for example, that the graph_1 chunk appears phosphorescent on the screen. Or, say, when you hover with the mouse over the graph_1 chunk it changes color, or there is an ellipse which encircles it and a tag saying “beta”.

Suppose that the user clicks, giving his OK for performing the move. Then on the screen the graph changes, but the previous version is kept in memory, in case the user wants to go back (the moves are all reversible, but sometimes, as in the case of the beta move, graph_2 is too common, it is everywhere, so using some moves in both senses is forbidden in the formalism, unless they appear inside a predefined sequence of moves, called a “macro move”).

Another example would be that the user clicks on a button which says “go along with the reductions as long as you can before you find a fork in the reduction process”. Then, of course, it would be good to keep the intermediate graphs in memory.

Yet another example would be that of a node or arrow of the graph G which turns out to belong to two different interesting chunks. Then the user should be able to choose which reduction to do.
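A sketch of how the program could handle such forks, assuming candidate chunks are given as tuples of node ids: detect the conflicting candidates, let the user choose, and keep the previous graphs so the user can go back.

```python
def overlapping(chunk_1, chunk_2):
    """Two candidate chunks conflict if they share a node."""
    return bool(set(chunk_1) & set(chunk_2))

def forks(candidates):
    """Pairs of candidate reduction sites that cannot both be applied: the user must choose."""
    return [(c1, c2) for i, c1 in enumerate(candidates)
            for c2 in candidates[i + 1:] if overlapping(c1, c2)]

history = []  # previous versions of the graph, so the user can go back (moves are reversible)

def apply_and_record(graph, chunk, rewrite):
    """Apply one move (rewrite is whatever function performs it) and remember the old graph."""
    history.append(graph)
    return rewrite(graph, chunk)
```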

It would be good to have the possibility to perform each move upon request,

plus

the possibility to perform a sequence of moves which starts from a first one chosen by the user (or from the only one available in the graph, as is the case for many graphs coming from lambda terms which are obtained by repeated currying and nesting)

plus

the possibility to define new, composed moves: for example, you notice that there is a chunk graph_3 which contains graph_1, and after reduction of graph_1 to graph_2 inside graph_3, graph_3 becomes graph_4; graph_4 now contains a chunk graph_1 of another move, which can be reduced, and graph_4 becomes graph_5. Now you may want to say: I save this sequence of two moves from graph_3 to graph_5 as a new move. The addition of this new move does not change the formalism, because you may always replace the new move with the sequence of two old moves.

Practically, the last possibility means the ability to add the new chunks graph_3 and graph_5 to the file which describes the moves and to define the new move under a name chosen by the user.

plus

Finally, you may want to be able either to select a chunk of the input graph by clicking on nodes and arrows, or to construct a graph and then say (i.e. click a button) that from now on that graph will be represented as a new type of node, with a certain arity. That means writing in the file which describes the types of nodes.

You may combine the last two procedures by saying that you select or construct a graph G. Then you notice that you may reduce it in an interesting way (for whatever further purposes) which looks like this:

– before the chain of reductions you may see the graph G as being made of two chunks A and B, with some arrows between some nodes from chunk A and some nodes from chunk B. After the reduction you look at the graph as being made of chunks C, D, E, say.

– Then you “save” your chunks A, B, C, D, E as new types of nodes (some of them may of course just be made of an old node, so no need to save those) and you define a new move which transforms the chunk AB into the chunk CDE (written like this only because of the 1D constraints of writing, but you see what I mean, right?).

The addition of these new nodes and moves doesn’t change the formalism, because there is a dictionary which transforms each new type of node into a graph made of old nodes and each new move into a succession of old moves.

How can this be done:

– use the definition of new nodes for the separation of G into A, B and for the separation of G’ (the graph after the sequence of reductions) into C, D, E

– save the sequence of moves from G to G’ as a new composite move between G and G’

– produce a new move which replaces AB with CDE

It is interesting to work out how this should function properly; probably one should keep both the move AB to CDE and the move G to G’, as well as the translations of G into AB and of G’ into CDE.
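A sketch of this machinery (move and node names are illustrative): a dictionary of macro moves which always expand back into old moves, and a dictionary of new node types which expand into graphs of old nodes.

```python
MACRO_MOVES = {}      # name -> sequence of (macro or elementary) move names
NODE_DICTIONARY = {}  # new node type -> graph made of old nodes

def define_macro(name, sequence):
    """Save a sequence of moves as a single new move; the formalism is unchanged,
    because the macro can always be expanded back into old moves."""
    MACRO_MOVES[name] = list(sequence)

def define_node(name, graph_of_old_nodes):
    """Register a new node type as an abbreviation of a graph made of old nodes."""
    NODE_DICTIONARY[name] = graph_of_old_nodes

def expand(move_name):
    """Expand a (possibly macro) move into the underlying elementary moves."""
    if move_name not in MACRO_MOVES:
        return [move_name]
    out = []
    for m in MACRO_MOVES[move_name]:
        out.extend(expand(m))
    return out

# example (illustrative move names): the composite move from graph_3 to graph_5
define_macro("G3_to_G5", ["beta", "FAN-IN"])
```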

We’re getting close to actors, but the first purpose of the gui is not to be a sandbox for the distributed computation; that would be another level on top of it.

The sequence of moves saved as a composite move is valuable for several reasons:

– the graph_3 which is the start of the sequence contains graph_1, which is the start of another move, so it always leads to forks: one may apply the sequence or only the first move

– there may be a possible fork after you do the first reduction in graph_3, in the sense that there may be another chunk of another move which could be applied

GLC actors

The actors are a special kind of decoration which transforms (some) moves (at the choice of the user) into interaction devices.

You decorate the nodes of a graph G with actor names (they are just names, for the moment, at your choice). As a convention, let’s say that we denote actor names by :a, :b, etc.

You also decorate arrows with pairs of names of actors, those coming from the decorations of nodes, with the convention that (:a, :a) is identified (in the user’s mind) with :a (nothing surprising here: think about the pair groupoid over a set X, whose set of “arrows” is X \times X; X appears as the set of objects of the groupoid and it identifies with the set of pairs (x,x) with x \in X).

Now, say you have a move from graph_1 to graph_2. Then, as in the boldfaced previous description, but somehow in the opposite sense, you define graphs A, B such that graph_1 is AB and graphs C,D such that graph_2 is CD.

Then you say that you can perform the reduction from graph_1 to graph_2 only if the nodes of A are all decorated with :a and the nodes of B are all decorated with :b, a different name than :a.

After reduction you decorate the nodes of C with :a and the nodes of D with :b .

In this way the actors with identities :a and :b change their state during the reduction (i.e. the graph made of the nodes decorated with :a and the arrows decorated with (:a,:a) changes, and the same for :b).

The reduction can be done for the graph G only at chunks graph_1 which are decorated as explained.
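A sketch of this precondition and of the re-decoration after the move (actor_of maps node ids to actor names; chunks are lists of node ids):

```python
def may_reduce(actor_of, chunk_A, chunk_B):
    """The chunk AB may be rewritten only if the nodes of A all carry one actor name
    and the nodes of B all carry a single, different actor name."""
    owners_A = {actor_of[n] for n in chunk_A}
    owners_B = {actor_of[n] for n in chunk_B}
    return len(owners_A) == 1 and len(owners_B) == 1 and owners_A != owners_B

def redecorate(actor_of, chunk_C, chunk_D, a, b):
    """After the move, the new chunk C is decorated with :a and the new chunk D with :b."""
    for n in chunk_C:
        actor_of[n] = a
    for n in chunk_D:
        actor_of[n] = b
    return actor_of
```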

To explain what actor :Bob is doing, it matters from which point of view we look. Also, what is the relation between actors and the chemical interpretation? How do they fit there?

So let’s take it methodically.

The point of view of the GUI

If we discuss from the point of view of playing with the gui, then the user of the gui has a global, God’s-eye view over what happens. That means that the user of the gui can see the whole graph at any moment, and the user has a clock which is like a global notion of time. So from this point of view the user of the gui is the master of space and time. He sees the fates of :Bob, :Alice, :Claire, :Dan simultaneously. The user has the right, in the gui world, to talk about parallel stuff happening (i.e. “at the same time”) and sequential stuff happening (to the same actor or actors). The user may notice that some reductions are independent, in the sense that with respect to the user’s clock the result is the same if first :Bob interacts with :Alice and then :Claire interacts with :Dan, or conversely, which makes the user think that there is some notion more general than parallelism, i.e. concurrency.

If we discuss from the point of view of :Bob, it looks different. More later.

Let’s stay with the point of view of the user of the gui and think about what actors do. We shall use the user’s clock as the reference for time and the user’s point of view about space (what is represented on the screen via the viz tool) to speak about states of actors.

What the user does:

– he defines the graph types and the rules of reduction

– he inputs a graph

– he decorates it with actor names

– he clicks some buttons from time to time (deus ex machina quote: “is a plot device whereby a seemingly unsolvable problem is suddenly and abruptly resolved by the contrived and unexpected intervention of some new event, character, ability or object.”)

At any moment the actor :Bob has a state.

Definition: the state of :Bob at the moment t is the graph formed by the nodes decorated with the name :Bob, the arrows decorated by (:Bob, :Bob) and the arrows decorated by (:Bob, :Alice), etc .

Because each node is decorated by an actor name, it follows that there are never shared nodes between different actors, but there may be shared arrows, like an arrow decorated (:Bob, :Alice), which belongs to both :Bob and :Alice.

The user thinks about an arrow (:Bob, :Alice) as being made of two half arrows:

– one which starts at a node decorated with :Bob and has a free end, decorated with :Alice ; this half arrow belongs to the state of :Bob

– one which ends at a node decorated with :Alice and has a free start, decorated with :Bob ; this half arrow belongs to the state of :Alice
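A sketch of this definition, under the assumption that the graph is given as a node-to-actor map plus a list of arrows between nodes; the shared arrows show up as the two kinds of half-arrows:

```python
def actor_state(name, actor_of, edges):
    """The state of an actor: its nodes, its internal arrows and its half-arrows.
    actor_of: node_id -> actor name; edges: list of (source_node, target_node)."""
    nodes = [n for n, owner in actor_of.items() if owner == name]
    internal = [(u, v) for (u, v) in edges
                if actor_of[u] == name and actor_of[v] == name]
    # half-arrow starting at one of my nodes, free end labelled by the neighbour's name
    half_out = [(u, actor_of[v]) for (u, v) in edges
                if actor_of[u] == name and actor_of[v] != name]
    # half-arrow ending at one of my nodes, free start labelled by the neighbour's name
    half_in = [(actor_of[u], v) for (u, v) in edges
               if actor_of[u] != name and actor_of[v] == name]
    return {"nodes": nodes, "arrows": internal, "half_out": half_out, "half_in": half_in}
```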

The user also thinks that the arrow decorated by (:Bob, :Alice) shows that :Bob and :Alice are neighbours there. What does “there” mean? It is as if you, Bob, want to park your car (the state of Bob) and your front left tyre is close to the concrete margin (i.e. :Alice), but you may also consider that your back is close to the trash bin (i.e. :Elvis).

We may represent the neighbouring relations between actors as a new graph, obtained by thinking of :Bob, :Alice, … as nodes and by letting an arrow decorated (:Bob, :Alice) appear as an arrow from the node which represents :Bob to the node which represents :Alice (of course there may be several such arrows decorated (:Bob, :Alice)).

This new graph is called “actors diagram” and is something used by the gui user to put order in his head and to explain to others the stuff happening there.

The user calls the actors diagram “space”, because he thinks that space is nothing but the neighbouring relation between actors at a moment in time (user’s time). He is aware that there is a problem with this view, which supposes that there is a global time notion and a global simultaneous view of the actors (states), but says: “what the heck, I have to use some way of discussing with others what’s happening in the gui world, but I will show great caution and restraint by trying to keep track of the effects of this global view on my explanation”.
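A sketch of the actors diagram built from the same data (kept as a multigraph, since there may be several arrows decorated with the same pair):

```python
from collections import Counter

def actors_diagram(actor_of, edges):
    """Actors become nodes; every arrow decorated (:a, :b) with :a != :b becomes an edge.
    Several arrows between the same pair are kept as a multiplicity (a multigraph)."""
    diagram = Counter()
    for (u, v) in edges:
        a, b = actor_of[u], actor_of[v]
        if a != b:
            diagram[(a, b)] += 1
    return diagram   # e.g. Counter({(':Bob', ':Alice'): 2})
```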

Suppose now that there is an arrow decorated (:Bob, :Alice) and this arrow, along with the node from the start (decorated with :Bob) and the node from the end (decorated with :Alice) is part of the graph_1 of one of the graph rewrites which are allowed.

Even more generally, suppose that there is a graph_1 chunk which has the form AB, with the sub-chunk A belonging to :Alice and the sub-chunk B belonging to :Bob.

Then the reduction may happen there. (Who initiates it? Alice, Bob, the user’s click? Let’s not care about this for the moment, although if we use the user’s point of view then Alice and Bob are passive and the user has the decision to click or not to click.)

This is like a chemical reaction which also takes space into consideration. How?

Denote by Alice(t) and Bob(t) the respective states of :Alice and :Bob at the moment t. Think about the states as being two chemical molecules, instead of one as previously.

Each molecule has a reaction site: for Alice(t) the reaction site is A and for Bob(t) the reaction site is B.

They enter in the reaction if two conditions are satisfied:

– there is an enzyme (say the beta enzyme, if the reduction is the beta) which can facilitate the reaction (by the user’s click)

– the molecules are close in space, i.e. there is an arrow from A to B, or from B to A

So you see that it may happen that Alice(t) has inside a chunk which looks like A and Bob(t) has a chunk which looks like B, but if the chunks A, B are not connected such that AB forms a chunk like the graph_1 of the beta move, then they can’t react because (in the physical interpretation, say) they are not close in space.

The reaction sites of Alice(t) and Bob(t) may be close in space, but if the user does not click then they can’t react because there is no beta enzyme roaming around to facilitate the reaction.

If they are close and if there is a beta enzyme around then the reaction appears as

Alice(t) + Bob(t) + beta = Alice(t+1) + Bob(t+1) + garbage

Let’s see now who Alice(t+1) and Bob(t+1) are. The beta rewrite replaces graph_1 (which is AB) by graph_2 (which is CD). C will belong to Alice(t+1) and D will belong to Bob(t+1). The rest of Alice(t) and Bob(t) is inherited unchanged by Alice(t+1) and Bob(t+1).

Is this true? What about the actors diagram, will it change after the reaction?

Actually graph_1, which is AB, may have (and usually does have) other arrows besides the ones decorated with (:Bob, :Alice). For example A may have arrows from A to the rest of Alice(t), i.e. decorated with (:Alice, :Alice), and the same for B, which may have other arrows from B to the rest of Bob(t), decorated by (:Bob, :Bob).

After the rewrite (chemical reaction) these arrows will be rewired by the replacement of AB by CD, but nevertheless the new arrows which replace them will be decorated by (:Alice, :Alice) (because they will become arrows from C to the rest of Alice(t+1)) and (:Bob, :Bob) (same argument). All in all we see that after the chemical reaction the molecule :Alice and the molecule :Bob may lose or gain some nodes (atoms) and they may suffer some internal rewiring (bonds), so it looks like :Alice and :Bob changed their chemical composition.

But they also moved as an effect of the reaction.

Indeed, graph_1, which is AB, may have other arrows besides the ones decorated with (:Bob, :Alice), (:Bob, :Bob) or (:Alice, :Alice). The chunk A (which belongs to Alice(t)) may have arrows which connect it with :Claire, i.e. there may be arrows from A to another actor, Claire, decorated with (:Alice, :Claire), for example.

After the reaction, which consists in the replacement of AB by CD, some rewiring has happened, which may have as an effect the appearance of arrows decorated (:Bob, :Claire), for example. In such a case we say that Bob moved close to Claire. The molecules move this way (i.e. in the sense that the neighbouring relations change in this concrete way).
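A sketch of how this “movement” could be read off: compare the neighbouring pairs before and after the rewrite; the new pairs are the actors that moved close to each other.

```python
def neighbour_pairs(actor_of, edges):
    """The set of ordered pairs (:a, :b), with :a != :b, that have at least one decorated arrow."""
    return {(actor_of[u], actor_of[v]) for (u, v) in edges
            if actor_of[u] != actor_of[v]}

def moved_close(actor_of_before, edges_before, actor_of_after, edges_after):
    """Actors that became neighbours as an effect of the reaction: pairs present after the
    rewrite but absent before, e.g. {(':Bob', ':Claire')} means Bob moved close to Claire."""
    return (neighbour_pairs(actor_of_after, edges_after)
            - neighbour_pairs(actor_of_before, edges_before))
```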

Pit stop

Let’s stop here for the moment, because there is already a lot. In the next message I hope to talk about why the idea of using a chemical reaction network image is good, but still global: it is a way to replace the user’s deus ex machina clicks by random availability of enzymes, but it still uses a global time and a global space (i.e. the actors diagrams). The model will also be better than the usual CRN based model, where the molecules are supposed to be part of a “well stirred” solution (i.e. let’s neglect space effects on the reaction), or are supposed to diffuse in a fixed space (i.e. let’s make the space passive). The model will allow the introduction of global notions of entropy.

Such a CRN based model deserves a study in itself, because it is unusual in the way it describes the space and the chemical reactions of the molecules-actors as aspects of the same thing.

But we want to go even further, towards renouncing the global point of view.


Why Distributed GLC is different from Programming with Chemical Reaction Networks

I use the occasion to bookmark the Azimuth post Programming with Chemical Reaction Networks, most of all because of the beautiful bibliography, which contains links to great articles which can be freely downloaded. Thank you John Baez for putting such an essential list of articles in one place.

Also, I want to explain very briefly why CRNs are not used in Distributed GLC.

Recall that Distributed GLC is a distributed computing model based on an artificial chemistry called chemlambda, itself a (slightly different) variant of graphic lambda calculus, or GLC.

There are two stages of the computation:

  1. define the initial participants in the computation, each one called an “actor”. Each actor is in charge of a chemlambda molecule. Molecules of different actors may be connected, each such connection being interpreted as a connection between actors. If we put together all molecules of all actors then we can glue them into one big molecule. Imagine this big molecule as a map of the world and actors as countries, each coloured with a different colour. Neighbouring countries correspond to connected actors. This big molecule is a graph in the chemlambda formalism. The graph which has the actors as nodes and neighbouring relations as edges is called the “actors diagram” and is a different graph from the big molecule graph.
  2. Each actor has a name (like a mail address, or like the colour of a country). Each actor knows only the names of neighbouring actors. Moreover, each actor will behave only as a function of the molecule it has and according to the knowledge of its neighbouring actors’ behaviour. From this point, the proper part of the computation, each actor is by itself. So, from this point on we use the way of seeing of the Actor Model of Carl Hewitt, not the point of view of Process Algebra. (See Actor model and process calculi.) OK, each actor has 5 behaviours, most of them consisting of graph rewrites of its own molecule or between molecules of neighbouring actors. These graph rewrites are like chemical reactions between molecules and enzymes, one enzyme per graph rewrite. Finally, the connections between actors (may) change as a result of these graph rewrites.

That is the Distributed GLC model, very briefly.
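A toy, Actor-Model-style sketch of the above (it does not reproduce the actual 5 behaviours, and apply_rewrite is a placeholder): each actor holds only its own molecule and the names of its neighbours, and rewrites that span two actors are carried by messages.

```python
class Actor:
    """Toy sketch of an actor: a name, the molecule it is in charge of, and the names
    of its neighbours. It does not reproduce the actual 5 behaviours of distributed GLC."""
    def __init__(self, name, molecule, neighbours):
        self.name = name
        self.molecule = molecule            # this actor's chunk of the big molecule
        self.neighbours = set(neighbours)   # only names are known, not the neighbours' molecules

    def propose(self, other_name, rewrite):
        """Send a message proposing a graph rewrite across a connection with other_name."""
        return {"from": self.name, "to": other_name, "rewrite": rewrite}

    def receive(self, message, apply_rewrite):
        """React to a proposal: apply the local part of the rewrite to the own molecule;
        the connections (neighbour names) may change as a result. apply_rewrite is a placeholder."""
        if message["to"] != self.name or message["from"] not in self.neighbours:
            return
        self.molecule, new_neighbours = apply_rewrite(self.molecule, message["rewrite"])
        self.neighbours |= set(new_neighbours)
```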

It is different from Programming with CRNs for several reasons.

1. Participants in the computation are individual molecules. This may be unrealistic for real chemistry and lab measurements of chemical reactions, but this is not the point, because the artificial chemistry chemlambda is designed to be used on the Internet. (However, see the research project on a single molecule quantum computer.)

2. There is no explicit stochastic behaviour. Each actor in charge of its molecule behaves deterministically. (Or not: there is nothing which stops the model from being modified by introducing some randomness into the behaviour of each actor, but that is not the point here.) There are no huge numbers of actors, nor any average behaviour over them.

That is because of point 1 (we stay at the level of individual molecules and their puppeteers, their actors) and also because we use the Actor Model style, and not Process Algebra.

So, there is an implicit randomness, coming from the fact that the computation is designed Actor Model style, i.e. such that it may work differently depending on the particular physical timing of the messages which are sent between actors.

3. The computation is purely local. It is also decentralized. There is no corporate point of view of counting the number of identical molecules, or their proportion in a global-space, global-time solution. This is something reasonable from the point of view of a distributed computation over the Internet.

__________________________________

All this being said, of course it would be interesting to see what happens with CRNs of reactions of molecules in chemlambda. It may be very instructive, but this would be a different model.

That is why Distributed GLC does not use the CRN point of view.

__________________________________

The Y combinator in graphic lambda calculus and in the chemical concrete machine

A simpler computation than the one with the Ackermann machine concerns the Y combinator.  Seen as a chemical reaction network, it looks like this for graphic lambda calculus. In the figure “A” is any graph in GRAPH which has only one exit arrow, for example a combinator graph.

[Figure: ycombi_2 – the Y combinator as a chemical reaction network in graphic lambda calculus]

One might prefer not to use the GLOBAL FAN-OUT move. That’s possible, by passing to the chemical concrete machine formalism. The chemical reaction network is a bit different. (Notice the move PROP+, which is a composite move defined in the post Chemical concrete machine, detailed (VI).)

[Figure: ycombi_3 – the Y combinator chemical reaction network in the chemical concrete machine, without GLOBAL FAN-OUT]

Lots of interesting things happen; among them we notice the appearance of stacks of “elementary units” from the last post. So in the chemical concrete machine version of the behaviour of the Y combinator, the machine counts the number of times A should be replicated, if known (that’s a kind of lazy evaluation, if evaluation would make sense in this formalism).

___________________________

Neurons know nothing, however …

… they know surprisingly much, according to the choice of definition of “neural knowledge”. The concrete definition which I adopt is the following: the knowledge of a neuron at a given moment is the collection (multiset) of molecules it contains. The knowledge of a synapse is the collection of molecules present in the respective axon, dendrite and synaptic cleft.
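This definition is literally a multiset, so a sketch is short (the molecule and enzyme names below are made up, just to fix ideas):

```python
from collections import Counter

# hypothetical molecule and enzyme names, just to fix ideas
neuron_knowledge = Counter({"zipper_2": 3, "combinator_K": 1, "beta_enzyme": 5})

def synapse_knowledge(axon, cleft, dendrite):
    """Knowledge of a synapse: the molecules in the axon, synaptic cleft and dendrite together.
    Counters add as multisets, so this is just a multiset union with multiplicities."""
    return axon + cleft + dendrite
```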

I take the following hypotheses for a wet neural network:

  • the neural network is physically described as a graph with nodes being the neurons and arrows being the axon-dendrite synapses. The network is built from two ingredients: neurons and synapses. Each synapse involves three parts: an axon (associated to a neuron), a synaptic cleft (associated to a local environment) and a dendrite (associated to a neuron).
  • Each of the two ingredients of a neural network, i.e. neurons and synapses, as described previously, function by associated chemical reaction networks, involving the knowledge of the respective ingredients.
  • (the most simplifying hypothesis)  all molecules from the knowledge of a neuron, or of a synapse, are of two kinds: elements of MOLECULES or enzyme  names from the chemical concrete machine.

The last hypothesis seems to introduce knowledge with a more semantic flavour by the backdoor. That is because, as explained in arXiv:1309.6914, some molecules (i.e. trivalent graphs from the chemical concrete machine formalism) represent lambda calculus terms. So, terms are programs; moreover the chemical concrete machine is Turing universal, therefore we end up with a rather big chunk of semantic knowledge in a neuron’s lap. I intend to show you this is not the case; in fact a neuron, or a synapse, does not have (or need) this kind of knowledge.

__________________________

Before giving this explanation, I shall explain in just a bit more detail how a wet neural network which satisfies those hypotheses works. A physical neuron’s behaviour is ruled by the chemistry of its metabolic pathways. By the third hypothesis these metabolic pathways can be seen as graph rewrites of the molecules (more about this later). As an effect of its metabolism, the neuron has an electrical activity which in turn alters the behaviour of the other ingredient, the synapse. In the synapse act other chemical reaction networks, which are amenable, again by the third hypothesis, to computations with the chemical concrete machine. As an effect of the action of these metabolic pathways, a neuron communicates with another neuron. In the process the knowledge of each neuron (i.e. its collection of molecules) is modified, and the same is true of a synapse.

As concerns chemical reactions between molecules, in the chemical concrete machine formalism there is only one type of reaction which is admissible, namely the reaction between a molecule and an enzyme. Recall that if (some of the) molecules are like lambda calculus terms, then (some of the) enzymes are like names of reduction steps, and the chemical reaction between a molecule and an enzyme is assimilated to the respective reduction step applied to the respective lambda calculus term.

But, in the post SPLICE and SWITCH for tangle diagrams in the chemical concrete machine, I proved that in the chemical concrete machine formalism there is a local move, called SWITCH,

[Figure: chem_switch – the SWITCH move]

which is the result of 3 chemical reactions with enzymes, as follows:

[Figure: chem_switch_2 – the SWITCH move obtained from 3 chemical reactions with enzymes]

Therefore, the chemical concrete machine formalism with the SWITCH move added is equivalent to the original formalism. So, we can safely add the SWITCH move to the formalism and use it for defining chemical reactions between molecules (maybe also by adding an enzyme, or more, for the SWITCH move; let’s call them \sigma). This mechanism gives chemical reactions between molecules of the form

A + B + \sigma \rightarrow C + D + GARB

where A and B are molecules such that by taking an arrow from A and another arrow from B we may apply the \sigma enzyme and produce the SWITCH move for this pair of arrows, which results in new molecules C and D (and possibly some GARBAGE, such as loops).

In conclusion, for this part concerning possible chemical reactions between molecules, we have enough raw material for constructing any chemical reaction network we like. Let me pass to the semantic knowledge part.
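A sketch of one step of such a chemical reaction network, assuming molecules are records with an “arrows” list and switch_pair stands in for the SWITCH rewrite:

```python
import random

def crn_step(molecules, enzymes, switch_pair):
    """One step of a toy chemical reaction network built from SWITCH reactions:
    if a sigma enzyme is available, pick two molecules and one arrow from each, then
    perform A + B + sigma -> C + D + GARB. switch_pair stands in for the SWITCH rewrite
    and is assumed to return the two rewired molecules plus any loops produced."""
    if enzymes.get("sigma", 0) == 0 or len(molecules) < 2:
        return molecules, enzymes, []
    A, B = random.sample(molecules, 2)
    arrow_a, arrow_b = random.choice(A["arrows"]), random.choice(B["arrows"])
    C, D, loops = switch_pair(A, B, arrow_a, arrow_b)
    enzymes["sigma"] -= 1                                  # the enzyme is consumed
    remaining = [m for m in molecules if m is not A and m is not B]
    return remaining + [C, D], enzymes, loops              # loops are the GARB part
```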

__________________________

Semantic knowledge of molecules. This is related to evaluation, and it is maybe the least understood part of the chemical concrete machine. As background, see the post Example: decorations of S,K,I combinators in simply typed graphic lambda calculus, where the same phenomenon is explained (without any relation to chemical metaphors) for the parent of the chemical concrete machine, the graphic lambda calculus.

Let us consider the following rules of decorations with names and types:

[Figure: chem_decor_1 – rules of decoration with names and types]

If we consider decorations of combinator molecules, then we obtain the right type and identification of the corresponding combinator, like in the following example.

[Figure: chem_decor_2 – decoration of a combinator molecule]

For combinator molecules, the “semantic knowledge”, i.e. the identification of the lambda calculus term from the associated molecule, is possible.

In general, though, this is not possible. Consider for example a 2-zipper molecule.

[Figure: chem_decor_3 – decoration of a 2-zipper molecule]

We obtain the decoration F as a nested expression of A, D, E, which is enough for performing two beta reductions, without knowing what A, D, E mean (without the need to evaluate A, D, E). This is equivalent to the property of zippers of allowing only a certain sequence of graphic beta moves (in this case two such moves).

Here is the tricky part: if we look at the term F, then all that we can write after beta reductions is only formal, i.e. F reduces to (A[y:=D])[x:=E], with all the possible problems related to variable name clashes and the order of substitutions. We can write this reduction but we don’t get anything from it; it still needs further info about the relations between the variables x, y and the terms A, D, E.
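For concreteness, one consistent reading of the decorated 2-zipper (a reconstruction on my part, since the decoration itself is given only in the figure above) is:

```latex
% a possible reading of the 2-zipper decoration; a reconstruction, not the original figure
\begin{align*}
F &= \big(\lambda x.\,((\lambda y.\,A)\,D)\big)\,E \\
  &\to_{\beta} \big(\lambda x.\,A[y:=D]\big)\,E \\
  &\to_{\beta} \big(A[y:=D]\big)[x:=E]
\end{align*}
```

Under that reading the two beta steps land exactly on the formal result written above.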

However, the graphic beta reductions can be done without any further complication, because they involve no names, neither of variables, like x, y, nor of terms, like A, D, E, F.

Remark that the decoration is made such that:

  • the type decorations of arrows are left unchanged after any move
  • the term or variable decorations (named “places” elsewhere) change globally.

We indicate this global change as in the following figure, which shows the result of the sequence of the two possible \beta^{+} moves.

[Figure: chem_decor_4 – decorations after the two \beta^{+} moves]

Therefore, after the first graphic beta reduction, we write A' = A[y:=D] to indicate that A' is the new, globally obtained decoration (i.e. with respect to the whole graph of which the 2-zipper is a part) which replaces the older A when we replace y by D. After the second graphic beta reduction we use the same notation.

But such indications are even misleading if, for example, there is a path made of arrows outside the 2-zipper which connects the arrow decorated by D with the arrow decorated by y. We should, in order to have a better notation, replace D by D[y:=D], which gives rise to a notation for a potentially infinite process of modifying D. So, once we use graphs (molecules) which do not correspond to combinators (or to lambda calculus terms), we are in big trouble if we try to reduce the graphic beta move to term rewriting, or to reductions in lambda calculus.

In conclusion for this part: the decorations considered here, which add a semantic layer to the pure syntax of graph rewriting, cannot be used as a replacement for the graphic molecules, nor should reductions be equated with chemical reactions, except in the cases where we have access to the whole molecule and, moreover, the whole molecule is one which represents a combinator.

So, in this sense, the syntactic knowledge of the neuron, consisting in the list of its molecules, is not enough for deducing the semantic knowledge, which is global, coming from the simultaneous decoration of the whole chemical reaction network associated to the whole neural network.

A sea of possibilities

… opens when I look at graphic lambda calculus as a graph rewriting system (see also the foundational Term graph rewriting by Barendregt et al.). The first, most obvious one is that by treating graphic lambda calculus as a particular GRS, I might USE already written software for applications, like the chemical concrete machine or the B-type neural networks (more about this further on). There are other possibilities, much, much more interesting from my point of view.

The reason for writing this post is that I feel a bit like the character Lawrence Pritchard Waterhouse from Neal Stephenson’s Cryptonomicon, more specifically as described by the character Alan Turing in a discussion with Rudolf von Hacklheber. Turing (the character in the book) describes science as a train with several locomotives, called “Newton”, etc. (Hacklheber suggests there’s a “Leibniz” locomotive as well), with today’s scientists in the railroad cars and finally with Lawrence running after the train with all his forces, trying to keep the pace.

When you change research subjects as much as I did, this feeling is natural, right? So, as for the lucky Lawrence from the book (lucky because he had the chance to be friends with Turing), there is only one escape for keeping the pace: collaboration. Why run after the GRS train when there is already amazing research done? My graphic lambda calculus is a particular GRS, designed so that it has applications in real (non-silicon) life, like biological vision (hopefully), chemistry, etc. In real life, I believe, geometry rules, not term rewriting, not types, not any form of the arbor porphyriana. These are extremely useful tools for getting predictions out of models on (silicon) computers. Nature has a way of being massively (and geometrically) parallel, extremely fast and unpreoccupied with the cartesian method. On the other side, in order to get predictions from geometrically interesting models (like graphic lambda calculus and eventually emergent algebras) there is no tool for simulations better than the computer.

Graphic lambda calculus is not just any GRS, but one which has very interesting properties, as I hope is shown in this open notebook/blog. So, I am not especially interested in the way graphic lambda calculus falls into the general formalism of GRS, for various reasons, like (I might be naive to think) the heavy underlying machinery which seems to be used to reduce the geometrically beautiful formalism to some linear writing formalism dear to logicians. But how to write a (silicon) computer program without this reduction? Mind that Nature does not need this step; for example a chemical concrete machine may be (I hope) implemented in reality just by well mixing the right choice of substances (a gross simplification which serves to describe the fact that reality is effortlessly parallel).

All this to call the attention of eventual GRS specialists to the subject of graphic lambda calculus and its software implementation (in particular), which is surely more productive than becoming such a specialist myself and then using GRS for graphic lambda calculus.

Please don’t let me run after this train 🙂 The king character from Brave says it better than me (00:40 in this clip).

Now, back to the other possibilities. I’ll give you evidence for these, so that you can judge for yourself. I started from the following idea, which is related to the use of graphic lambda calculus for neural networks. I wrote previously that the NNs which I propose have the strange property that there’s nothing circulating through the wires of the network. Indeed, a more correct view is the following: say you have a network of real neurons which is doing something. Whatever the network does, it does it by physical and chemical mechanisms. Imagine the network, then imagine, superimposed over the network, a dynamical chemical reaction network which explains what the real network does. The same idea rules the NNs which I am beginning to describe in some of the previous posts. Instead of the neurons there are graphs, which link to others through synapses, which are also graphs. The “computation” consists in graph rewriting moves, so at the end of the computation the initial graphs and synapses are “consumed”. This image fits well not with the image of the physical neural network, but with the image of the chemical reaction network which is superimposed.

I imagine this idea is not new, so I started to google for it. I have not found (yet; thank you for sending me any hints) exactly the same idea, but here is what I have found:

That’s already too much to process on one’s plate, right? But I think I have made my point: a sea of possibilities. I can think about other things, instead of “running after the train”.

Research is so much fun!