Chemical actors

UPDATE: Clearly needed a mix of the ActorScript of Carl Hewitt with GLC and chemlambda. Will follow in the months to come!

______________________

Thinking out loud about a variant of the actor model (chapter 3 here), which uses graphic lambda calculus or the chemical concrete machine. The goal is to arrive at a good definition of an Artificial Chemical Connectome.

Have remarks? Happy to read them!

A chemical actor is the following structure:

  • a graph A \in GRAPH  (or a molecule A \in MOLECULES)
  • with a unique ID name ID
  • with a numbering (tagging) of a (possibly empty) part of its free arrows
  • with a behaviour, to be specified further.

actor_1

A communication is:

  • a graph  B \in GRAPH  (or a molecule)
  • with a source ID and a target ID
  • with a part of its free arrows tagged with tags compatible (i.e. the same) with the ones from the graph of the actor with the source ID
  • with another part of its free arrows tagged with tags compatible with the ones from the graph of the actor with the target ID

actor_3
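The two definitions above can be sketched as plain data structures. Here is a hedged Python illustration (the field names and types are my own choices, not part of the formalism, and the behaviour part is left out):

```python
from dataclasses import dataclass, field

@dataclass
class ChemicalActor:
    actor_id: str                 # the unique ID name
    graph: set                    # a graph in GRAPH (or a molecule in MOLECULES)
    tags: dict = field(default_factory=dict)  # numbering of part of the free arrows
    # behaviour: to be specified further

@dataclass
class Communication:
    graph: set                    # the graph B
    source_id: str
    target_id: str
    source_tags: dict = field(default_factory=dict)  # must match the source actor's tags
    target_tags: dict = field(default_factory=dict)  # must match the target actor's tags
```

A communication is well-formed when its source tags agree with the tags of the actor it comes from, which here is just a dictionary comparison.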

The actor with the target ID receives a communication from the actor with the source ID and becomes:

actor_4

At this point the actor which has the target ID exhibits one of the following behaviours:

  • it performs one or several of the graph rewrites (only the unidirectional “+” moves), possibly in a given order, each involving at least one arrow between A and B,
  • following a given algorithm, it splits into a new actor and a communication B’ which has as target arrows the ones from the lower part of the previous figure (but with another target ID),
  • or it creates a new actor by using only “-” moves.

Remark: the numbers n, m could be uniformly bounded by 2, or 4, or 6, according to the user’s wishes. Take a look at the Ackermann machine for inspiration.

Chemical concrete machine on figshare

Just published the Chemical concrete machine on figshare. It should be cited as


Chemical concrete machine. Marius Buliga. figshare.
http://dx.doi.org/10.6084/m9.figshare.830457

The description is a bit more telling than the abstract of the article (see why by reading the suggested posts below):
This article introduces an artificial, algorithmic chemistry based on local graph rewrites applied to a class of trivalent graphs named molecules. The graph rewrites are seen as chemical reactions with enzymes. The enzymes are tagged with the name of the respective move.  This artificial chemistry is Turing complete, because it allows the implementation of untyped lambda calculus with beta reduction (but without eta reduction). This opens the possibility  to use this artificial chemistry for constructing artificial chemical connectomes.
Here, on this open notebook/blog, you may read more about this.
The article is available also as  arXiv:1309.6914 [cs.FL].
_________________________

The Y combinator in graphic lambda calculus and in the chemical concrete machine

A simpler computation than the one with the Ackermann machine concerns the Y combinator. Seen as a chemical reaction network, it looks like this in graphic lambda calculus. In the figure, “A” is any graph in GRAPH which has only one exit arrow, for example a combinator graph.
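For readers who want to play with the replication behaviour at the term level, here is the strict fixed-point combinator Z in Python (Python is eager, so the plain Y combinator would loop forever; this is an ordinary lambda calculus illustration, not the graph machinery):

```python
# Z combinator: the call-by-value fixed-point combinator.
# Z f reduces to f (lambda v. Z f v), so f receives a handle to its own fixed point.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# "A" in the figure plays the role of the functional we want a fixed point of;
# factorial is the standard toy choice.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

The eta-expansion `lambda v: x(x)(v)` is what delays the self-application just enough to avoid divergence under eager evaluation, a small-scale analogue of the "replicate only as needed" behaviour described below for the chemical concrete machine version.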

ycombi_2

One might prefer not to use the GLOBAL FAN-OUT move. That’s possible, by passing to the chemical concrete machine formalism. The chemical reaction network is a bit different. (Notice the move PROP+, which is a composite move defined in the post Chemical concrete machine, detailed (VI).)

ycombi_3

Lots of interesting things happen; among them we notice the appearance of stacks of “elementary units” from the last post. So in the chemical concrete machine version of the behaviour of the Y combinator, the machine counts the number of times A should be replicated, if known (that’s a kind of lazy evaluation, if evaluation would make sense in this formalism).

___________________________

A machine for computing the Ackermann function in graphic lambda calculus

The Ackermann function is a computable function which is not primitive recursive. It can be expressed in untyped lambda-beta calculus (which is the version that I use).

UPDATE (April 2015): see the much better demos for this in chemlambda, like Ackermann(2,2)=7 computation. It is moreover random and the pet example for a proof of principle experiment with a hypothetical molecular computer.
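For reference, the function itself is short when written recursively; a direct Python transcription of the usual Ackermann-Peter definition (this is the plain recursive function, not the graph machine constructed below):

```python
def ackermann(m, n):
    """Ackermann-Peter function: total, computable, not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

In particular ackermann(2, 2) gives the value 7 mentioned in the update above; values grow so fast that already ackermann(4, 2) is astronomically large.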

Let’s see how it looks in graphic lambda calculus. I shall explain how its computation unfolds in at least one more post. In this one I shall concentrate on defining a sort of machine which does the computation of this function.

Very briefly, I introduce some macros (i.e. notations) for the relevant pieces which are used in this Ackermann machine.

ack_1

These pacman macros are used for defining the Ackermann machine:

ack_2

Some explanations and remarks are in order.

1. As you see, the Ackermann machine looks like a pacman munching pacmans, some of them controlled by other pacmans.

2. The Ackermann machine as presented here is not the graph obtained from the lambda calculus term which represents the Ackermann function (explanations in a future post).

3. The coloured rectangles can be any graph in GRAPH with two entries and two exits. Depending on what these graphs are, the Ackermann machine does different things, one of them being the computation of the Ackermann function. The fact that such graphs are taken with two entries and two exits is reminiscent of the modelling of B-type neural networks with graphic lambda calculus, where synapses are also such graphs (this needs more explanations, which are left for later).

4. In particular, the coloured rectangles can be chosen in a way related to the Church encoding of the numerals. Indeed, look at the following figure, which contains examples of graphs which can be used as the coloured rectangles in the Ackermann machine.

ack_3

It’s clear what they are: a kind of stack which accumulates copies of the same “elementary unit”, or otherwise is empty (i.e. the first graph). I use these to obtain the Church numerals:

ack_5
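The stack picture matches the usual functional encoding: a Church numeral n is just n-fold application of a function. A small Python analogue (an illustration in ordinary lambda terms, not the graphic formalism):

```python
# Church numerals: n = lambda f: lambda x: f(f(...f(x)...)), with n applications of f.
# The "stack of elementary units" in the figure corresponds to the nested f's.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting how many times it applies f."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
```

The empty stack is the first graph, i.e. zero; each stacked elementary unit adds one application of f, which is exactly what succ does.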

5. The same can be done with the chemical concrete machine, of course. This observation is relevant because the way the machine functions (by local graph rewrites), when expressed in the chemical concrete machine formalism, will look like a chemical reaction network.

____________________________

Model of computation vs programming language in molecular computing

An interesting discussion (?) started in the comments of this John Baez G+ post, concerning the difference between the “model of computation” and “programming language” denominations (in that post, in particular, for Petri nets).

I reproduce here what I think are relevant bits of this discussion and later, after possibly several updates, I shall try to write something useful by using these bits.

1. The Turing machine and lambda calculus are both models of computation, but lambda calculus is also a programming language and the Turing machine is not.

2. Zachariah Hargis makes the point of comparing this “model of computation” vs “programming language” distinction with the one made by Leibniz between calculus ratiocinator and lingua characteristica. (Among other references; note to self to explore further.)

3. Chemical reaction networks (CRNs) are a fundamental ingredient of molecular computing, no matter which formalization of CRNs is used. One should not believe that the whole “computational part” of a CRN is a Petri net (because it is often very important which molecules and reactions are concretely involved in the CRN, not only the abstract number of species and the reaction rates between them).

4. Petri nets, as used in chemistry, are a tool for getting quantitative information from CRNs, once the CRNs are designed by other means. Petri nets might be useful for thinking about CRNs, but they are not necessary for designing CRNs.

5. CRNs which are designed by using, or which embody in some sense, lambda calculus are a much more interesting path towards a “programming language”, and a classical one (Fontana and Buss, Algorithmic chemistry), than the more studied engineering-style implementation of imperative programming and the forced copy-paste of TM thinking into bio-computing.

6. Among the specifications for a “programming language for chemistry” are the following:

  • (a) geometrical (in particular no use of currying, along with more obvious features such as being parallel and asynchronous, the latter already achievable by many well-known, non-exotic programming languages),
  • (b) purely syntactic (in particular no names management, nor input-output signalling),
  • (c) (maybe an aspect of (b), but not sure) no use of evaluation in the computation (strategy).

(The chemical concrete machine satisfies these requirements and moreover it contains lambda calculus, thus showing that such a “language” is possible. However, the chemical concrete machine is based on a made-up, artificial chemistry, thus it provides only a proof of principle for the existence of such a “language”. Or is it? Biochemists’ help is needed to identify, or search for, real chemical reactions which could be used for implementing the chemical concrete machine in reality.)

7. The problem of tacit axioms in the history of the Church-Turing thesis might be especially relevant for biochemists, and it could also be mentioned as an argument in favour of making a distinction between “model of computation” and “programming language”: any model of computation uses some tacit axioms, while a programming language does not (such tacit axioms are used only at the semantic level, when making (human) sense of the results and workings of the programming language). For biochemists, not using such tacit axioms is a must when they try to find scientifically valid explanations. CS does well by ignoring them, in most cases.

___________________________

Computing with space in the internet of things

My primary interest is “computing with space”. Some remarks by Allen Francom make me believe that it might be a way to give a “space” to the internet of things by using graphic lambda calculus. By that I mean the following: the internet of things is a huge collection of objects which are somewhere on Earth, each carrying a very small processor. So the objects are in the physical space, but they change their places all the time, and their processors are tiny, so one had better use tiny programs and simple executions for synchronizing their relative places with respect to one another.

That is a kind of “computing with space” and it fits well with the following problem with e-books raised by Mark Changizi, who says that e-books lack space, giving as an example his (physical) library. He knows that book A is the one close to that calculus book he uses when he needs some formulae, and that’s the way he is going to find book A. In a virtual library, by contrast, there is no space; the relative position of an e-book with respect to others is irrelevant, because clicking a link is like teleporting (ignoring space) from one place to another. My interpretation of this is: the way Changizi finds a book in his library is different from the way a robot Changizi would find it, because the robot Changizi would need the coordinates of book A with respect to a corner of the bookshelf, or some similar information, then it would go and fetch the book. On the contrary, the human Changizi(’s brain) acquired a geometric competence by accumulating these simple bits of knowledge about his library, namely that book A is near the calculus book, and the calculus book is used when he needs formulae, etc.

Is this a problem which the internet of things will have? Surely. Is it solved? I have no idea, but I know how to solve it, while also satisfying the “tiny programs and executions” constraint. This involves exactly that part (called by me the “emergent algebra sector”) which gives the kind of programs for using space in a very concentrated form, not needing “geometric expertise” to generate and manipulate them. Then, imagine that each object from the internet of things has a relative representation of some of its neighbours, under the form of a small graph, like the graphs from graphic lambda calculus or the molecules from the chemical concrete machine. These graphs are generated from simple rules concerning proximity relations (like: the keys thing is on the couch thing, or the socks thing is in the drawer thing), more on that later, and changes in proximity relations are like moves acting on such tiny graph molecules. Then, if you want to know where you lost the keys thing, instead of endowing the keys with a GPS, you just need to evaluate (i.e. decorate, as in the discussions about decorations of graphic lambda calculus graphs) these tiny graphs and find out that the keys thing is in the drawer thing.
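The keys-and-couch idea can be mocked up with an ordinary data structure: each thing stores only a small local relation to one other thing, never global coordinates, and "decoration" amounts to following these relations. A minimal Python sketch (the relation names and the locate() helper are my own illustrations, not the graph formalism):

```python
# Each thing knows only one local proximity relation to another thing,
# the analogue of a tiny graph molecule; there are no global coordinates.
relations = {
    "keys":  ("ON", "couch"),
    "socks": ("IN", "drawer"),
    "couch": ("IN", "living room"),
}

def locate(thing):
    """Follow the chain of local relations until a place with no further
    relation is reached -- the 'decoration' / evaluation step."""
    path = [thing]
    while thing in relations:
        _rel, thing = relations[thing]
        path.append(thing)
    return path
```

When a thing moves, only its one local entry changes (the analogue of a move acting on a tiny graph molecule); the chain is re-evaluated only when someone asks where the thing is.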
Here is a more mathematical explanation. Suppose you have a set T of things, objects. You want to describe how they are placed in space without using any “geometric expertise” (like a GPS coordinate system, for example).

As you know, the set of things T is the same as the trivial groupoid of things T^{2}, with arrows being pairs of things (a,b) and with composition of arrows being (a,b)(b,c) = (a,c). Now, let us first define the “space” around one object from T; let’s call the object “e”. (The same works for each object.)
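As a quick sanity check, the composition rule of the trivial groupoid can be written out directly (a throwaway sketch, not part of the formalism):

```python
# Arrows of the trivial groupoid over T are ordered pairs of things;
# two arrows compose exactly when the middle objects match: (a,b)(b,c) = (a,c).
def compose(ab, bc):
    (a, b), (b2, c) = ab, bc
    assert b == b2, "arrows not composable"
    return (a, c)
```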

For defining this you take a commutative group (it could be (0,\infty), or it could be the free commutative group over an alphabet containing for example “NEAR” and “FAR”, such that NEAR^{-1} = FAR, etc.). Call this group the “scale group”. Then to any pair of objects (a,b) and any scale \varepsilon, associate a “virtual pair at scale epsilon” (a,b)_{\varepsilon} = (b(\varepsilon)a, b), where b(\varepsilon)a is like a term in lambda calculus constructed from the variable names “a” and “b”, only that it does not satisfy the lambda calculus definition, but the emergent algebra one. Of course you can see such terms as graphs in graphic lambda calculus made with the epsilon gates, along with the emergent algebra moves from graphic lambda calculus.

Moreover, you can mix them with lambda calculus terms, as it is possible in graphic lambda calculus.

So we have (in our platonic universe, because in practice we can’t “have” an infinite set of all possible things) a collection of term-like objects made from a, b, etc. and from scales epsilon, mu, etc. We may imagine them visually either as binary trees, like when we represent compositions of operations graphically, or as graphs in graphic lambda calculus, if we apply to them the algorithm for eliminating variable names. This is a kind of collection of all programs for using a space (each one is a me for space). (Mind you that there is no constraint, like euclidean space, or other.) The space around one object, say e, is obtained from this construction applied to the groupoid of pairs of pairs, with arrows ((a,e),(b,e)), on which the scale acts like this: ((a,e),(b,e))_{\varepsilon} = ((a,e)_{\varepsilon}, (b,e)_{\varepsilon}). (Remark: in the epsilon-deformed groupoid the composition of arrows is transported by the epsilon deformation, so the composition changes from trivial to more interesting.)
If you are still reading and this is too long, the shorter description is that the space around an object e is defined in terms of virtual geometric relations (mes for space) between all the other things from the set T (which is finite, of course) with respect to e, and that each such virtual geometric relation is a small graph in graphic lambda calculus. More than this, suppose you want to translate from one virtual geometric relation to another (or from one virtual place to another). Such a “translation” is itself a small graph, actually made of only 4 nodes.
And here comes the final part: in order to know where the things from T are, one with respect to the others, it is enough to attribute to each pair of objects (a,b) a “virtual geometric relation” in the space around b, i.e. a graph. If the things move with respect to one another, they can update their relative geometric relations by moves in graphic lambda calculus.
If you want to know where a thing is in the physical space, then you have to start from somewhere (some object) and decorate the virtual space relations, as they appear as graphs. This is like the evaluation of lambda calculus terms.

In conclusion, I am very interested in practical applications of this, and I can at least formulate a list of needs, some of them with an IT flavour, others applied math. As for the pure math needs, they are pure fun and play. All this together would lead to really interesting things. One thing to keep in mind is that this would also endow the internet of things with a synthetic chemical connectome, as an extension of The chemical connectome of the internet.

___________________________

Competitivity and creativity in academia

Via Mark Changizi, I arrived at this post at It’s OK to be smart, which has as a conclusion: [for PhD students] “… faculty jobs ARE the alternative career”. Went from there to The Tenure Games, which a lot of people know. The problem is big, one hears about particular cases all the time, and a natural question is: is academia facing a real problem, or is it just about people who can’t stand the heat of fierce competition?

I think there is a more important side question: is academia delivering? It should, because it selects the best of the best, right?

Don’t just believe my suggestion to the contrary. Let’s see another, older academia in action, more than a century ago: [source]

The most famous art competition for students was the Prix de Rome. The winner of the Prix de Rome was awarded a fellowship to study at the Academie francaise’s school at the Villa Medici in Rome for up to five years. To compete, an artist had to be of French nationality, male, under 30 years of age, and single. He had to have met the entrance requirements of the Ecole and have the support of a well-known art teacher. The competition was grueling, involving several stages before the final one, in which 10 competitors were sequestered in studios for 72 days to paint their final history paintings. The winner was essentially assured a successful professional career.

The ultimate achievement for the professional artist was election to membership in the Academie  francaise and the right to be known as an academician.

All this led, a hundred years later, to the result of decorating chocolate boxes. Art followed another, much more creative path.

______________________

This is part of the same problem as academic publishing; what is intriguing is that the same parallel works.

______________________

Metabolism of loops (a chemical reaction network in the chemical concrete machine)

In the following figure you see a chemical reaction network in the chemical concrete machine which involves only “+” enzymes.

chem_eval_2

Comments:

  • the molecule from the left upper corner corresponds to the combinator \Omega = (\lambda x . (xx)) (\lambda x . (xx)),
  • for each molecule I marked the available reaction sites with dashed red closed curves, along with the name, in red, of the enzyme which is interested in the respective reaction site,
  • for the chemical reactions, figured with blue arrows, I put the name of the move which corresponds to the respective chemical reaction, with an added “+” to indicate that the reaction is unidirectional, as if only “+” enzymes are available (that is, I wrote “DIST^{+}” instead of “\delta^{+}” and so on; see the notations used in the chemical concrete machine for details),
  • whenever there are non-overlapping reaction sites, we can perform in parallel the moves (reactions),
  • but in some places we have overlapping reaction sites, like in the case of the molecule from the 3rd row, left, where a \beta reaction site and a DIST reaction site overlap.
  • In case of overlapping reaction sites there are multiple possibilities, which produce branches in the chemical reaction network,
  • I have not used any elimination of loops,
  • several molecules from the figure don’t correspond to lambda calculus terms, thus what  is figured is not a representation of the usual fact that the combinator \Omega does not have a normal form (one which cannot be reduced further),
  • in the middle of the figure we see a loop-producing cycle, hence the name,
  • curiously, by going outside lambda calculus, we can reduce \Omega (at the left upper corner) to a “normal” form (right lower corner), i.e. a loop and a molecule which does not have any “+” reaction sites. (The green fork node of that molecule is a fan-out node and the molecule is like a fan-out gate with one output curling back to the input of the fan-out, but remember that no signals circulate through the arrows 🙂 )
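The fact that \Omega reduces to itself under the usual beta move can be checked at the term level; here is a small Python sketch on syntax trees (this illustrates plain lambda calculus, not the molecule version above, and it ignores name clashes, which cannot occur in this tiny example):

```python
# Terms as nested tuples: ('var', x), ('lam', x, body), ('app', f, a).
def subst(term, x, val):
    """Capture-naive substitution term[x := val]; fine here, no name clashes."""
    kind = term[0]
    if kind == 'var':
        return val if term[1] == x else term
    if kind == 'lam':
        return term if term[1] == x else ('lam', term[1], subst(term[2], x, val))
    return ('app', subst(term[1], x, val), subst(term[2], x, val))

def beta_root(term):
    """One beta step at the root: (lam x. b) a  ->  b[x := a]."""
    assert term[0] == 'app' and term[1][0] == 'lam'
    _, (_, x, body), arg = term
    return subst(body, x, arg)

d = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))   # lambda x . (xx)
omega = ('app', d, d)                                    # Omega
```

A single beta step sends \Omega back to \Omega, which is the term-level counterpart of the loop-producing cycle in the middle of the figure.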

____________________________


UPDATE:  For clarity, here is the usual cycle of reductions of the combinator \Omega, seen in graphic lambda calculus (but with the drawing conventions of the chemical concrete machine):

chem_eval_4

(A similar figure appears after the proof of Theorem 3.1 in arXiv:1305.5786v2 [cs.LO].)

In graphic lambda calculus we have the GLOBAL FAN-OUT move, which, as the name says, is a global move. In the chemical concrete machine we have only local moves.  Here is how the same cycle is achieved in the chemical concrete machine.

chem_eval_3

You see a move (reaction) which is labelled “(multiple) CO-ASSOC”. The reason is that we need several CO-ASSOC moves to pass from the molecule at the right of the 2nd row to the one from the 3rd row. Alternatively, the same can be done with a SWITCH move, which is a succession of FAN-IN(+), CO-COMM(+) and FAN-IN(-), therefore unpleasant as well. Moreover, if we invent a SWITCH enzyme (i.e. if we accept SWITCH as a new move, which does not modify the whole chemical machine, because SWITCH is a consequence of the other moves), then we have an explosion of places where the SWITCH enzyme could act.

In conclusion, the usual endless reduction of \Omega in lambda calculus is possible, but highly unlikely, in the chemical concrete machine: it happens only in the presence of CO-ASSOC or SWITCH enzymes, and moreover under tight control of the places where these enzymes act.

____________________________

A me for space

On the upper part of the stele of Hammurabi’s code of laws we see Shamash giving to Hammurabi the Sumerian royal insignia [source]:

code_Hammurabi_bas_relief_rwk

The royal insignia are not a 1 and a 0, as speculated by Neal Stephenson in Snow Crash, but the “holy measuring rod and line”, which is a me, i.e. [source]

Another important concept in Sumerian theology, was that of me. The me were universal decrees of divine authority. They are the invocations that spread arts, crafts, and civilization. The me were assembled by Enlil in Ekur and given to Enki to guard and impart to the world, beginning with Eridu, his center of worship. From there, he guards the me and imparts them on the people. He directs the me towards Ur and Meluhha and Dilmun, organizing the world with his decrees. Later, Inanna comes to Enki and complains at having been given too little power from his decrees. In a different text, she gets Enki drunk and he grants her more powers, arts, crafts, and attributes – a total of ninety-four me. Inanna parts company with Enki to deliver the me to her cult center at Erech. Enki recovers his wits and tries to recover the me from her, but she arrives safely in Erech with them. (Kramer & Maier 1989: pp. 38-68)

A me for space.  A program for using space.

_____________________

This is a good introduction to the main subject of this post. My main interest lies in understanding “computing with space”, which is a (more general?) form of computing (than the one covered by lambda calculus), based on some variant of graphic lambda calculus. In this formalism we see trivalent graphs, subjected to graph rewrites. Some of these graphs can be associated to lambda calculus terms, i.e. to programs. Other graphs can be associated to emergent algebras, which, I claim, are the natural house of computing with space. They are programs for using space, they are mes for space, which you (or your brain, hypothetically) may use without appeal to geometric expertise, as Koenderink claims the visual brain does in Brain a geometry engine.

This is a rather different image of what space is than the one offered by physicists. Think about space in terms of universal programs for using it, instead of space as made of points, or as a smooth or fractal passive receptacle.

The graphic lambda calculus has to be modified in certain ways, which I shall explain in this and later posts, as happened with the chemical concrete machine, which is another modification of graphic lambda calculus, Turing universal, which expresses programs (lambda calculus terms) as molecules and graph rewrites as enzymes.

The strangest, but possibly the most powerful, feature of these graphic languages is that they don’t use names and they don’t need evaluations for being able to perform reductions (i.e. program executions). This feature makes them good candidates for use as models of brain mechanisms, because in no biological brain will you find physical implementations of names, alphabets or evaluations in the CS sense.

_____________________

Enough with the controversial claims. Concretely, the graphic formalism which I want to develop is already sketched in the following posts:

and in the series

The elementary gates, or nodes, of the formalism I am looking for will be (expressed in terms of graphic lambda calculus, but really taken afterwards as fundamental, not derived):

space_gates

Add to these the fan-in and fan-out gates from the chemical concrete machine.

The main move is the \varepsilon-beta move (see a proof of this move in the graphic lambda calculus formalism, in the post Better than extended beta move),

space_main

which combines the graphic beta move with the move R2 of emergent algebra inspiration.

In a few words, the “me for space” which I propose is a deformation of the chemical concrete machine formalism with a scale parameter.

The chemical connectome of the internet

This is a continuation of the post WWW with Metabolism. That post, in a nutshell, proposes to endow the internet connectome with a chemical connectome, by using an artificial chemistry for this artificial network, namely the graphic form of the lambda calculus formalism of the chemical concrete machine (please go read the mentioned post for details).

I’m going to do a weird, but maybe instructive thing and pretend that it already happened. Why? Because I intend to argue that the internet with a chemical connectome might be a good laboratory for testing ideas about real brains.

The name “chemical connectome” is taken from the article The BRAIN Initiative: Toward a Chemical Connectome.  It means the global chemical reaction network associated with the “connectome”, which is the physical neurons-and-synapses network. Quoting from the article:

Focusing solely on neuronal electrical activity is shortsighted if the goal is to understand information processing in brains. Brain function integrally entails complex synaptic chemistries. […] New tools fuel fundamentally new conceptualizations. By and large, today’s approaches for sensing neurotransmitters are not able to approach chemical neurotransmission at the scales that will ultimately be important for uncovering emergent properties. Likewise, most methods are not highly multiplexed such that the interplay of multiple chemical transmitters can be unraveled. In addition to measuring electrical activity at hundreds or thousands of neurons simultaneously, we need to advocate for a large-scale multidisciplinary effort aimed at in vivo nanoscale chemical sensing.

Imagine then that we already succeeded in implementing a (logical, artificial, human-made) chemical connectome over the internet, by using CS tools and exploiting the relative simplicity of the ingredients of the internet (connected Von Neumann machines which exchange data through known formats and programming languages). We could explore this chemical connectome and we could play with it (because we defined it), even if, as in the case of the brain connectome, we can’t possibly know the www network in all details. So, we should be in a situation which is extremely complex, like a brain is, but which allows for very powerful experimentation, because it is still human made, according to human rules.

On John Bohannon article in Science

In Science appeared the article Who’s afraid of peer review? by John Bohannon. There were many reactions already to this article; I’ll add mine.

I shall politely pretend that the article is not a piece of propaganda against OA. (Remark, ironically, that in order to enhance its dissemination Science did not hide it behind a paywall.)

Then, it’s like a gun, which can be used by anybody for shooting in whatever direction they like.  Pick your line:

  • Gold OA journals are afraid of peer-review (because Bohannon uses a list made of exclusively by Gold OA journals)
  • Traditional peer review is a joke and should be improved by making it more open, and perpetual (cf. Michael Eisen)
  • DOAJ is a joke (because Bohannon used DOAJ as one of the sources for building his list and some of the DOAJ listed journals accepted the flawed articles)
  • DOAJ is not a joke (not all DOAJ listed journals from Bohannon list accepted the flawed articles)
  • Beall’s list is good (a lot of predatory publishers from Beall’s list accepted the flawed articles)
  • Beall’s list is not entirely good (however, some of the publishers listed by Beall did not accept the flawed articles)
  • Study flawed, pay-back by Science after the arsenic life story (cf Mike Taylor)
  • All OA journals are bad quality (yes, and salami-slicing publishing is ethical, provided it is published by legacy publishers)
  • We should trust only ISI journals (sure, boss, but I like DORA)

What if, when faced with spin, we should instead declare that anybody has the right to form their own opinion, based on the abundant evidence available? Instead of looking at DOAJ through an authority lens, why not acknowledge that DOAJ is a useful tool, not an authority argument. The same goes for Beall’s list. They don’t have to show us perfect lists; they just help us with some information. We have to use our brains, even if the legacy publishers don’t like that. We don’t have to believe what somebody says based only on the authority of that person. For example, look at Peter Suber’s post: you don’t have to believe him. It’s up to you to read Bohannon’s article, then Suber’s reaction, or Eisen’s, or Taylor’s, whatever you want to look at, and then form your own opinion.

It goes the same when it comes to legacy publishers, to ISI lists, to research articles. Read them, use as much information as you can gather and form your own opinion. Don’t rely on authority; there’s something better these days, it’s called free access to information. If you are lazy and you don’t want to make up your own mind, well, it’s not my problem, because, like never before in history, information is not scarce.

Neurons know nothing, however …

… they know surprisingly much, according to the choice of definition of “neural knowledge”. The concrete definition which I adopt is the following: the knowledge of a neuron at a given moment is the collection (multiset) of molecules it contains. The knowledge of a synapse is the collection of molecules present in the respective axon, dendrite and synaptic cleft.

I take the following hypotheses for a wet neural network:

  • the neural network is physically described as a graph with nodes being the neurons and arrows being the axon-dendrite synapses. The network is built from two ingredients: neurons and synapses. Each synapse involves three parts: an axon (associated to a neuron), a synaptic cleft (associated to a local environment) and a dendrite (associated to a neuron).
  • each of the two ingredients of a neural network, i.e. neurons and synapses, as described previously, functions by way of associated chemical reaction networks, involving the knowledge of the respective ingredients.
  • (the most simplifying hypothesis) all molecules from the knowledge of a neuron, or of a synapse, are of two kinds: elements of MOLECULES or enzyme names from the chemical concrete machine.
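As a toy illustration of this definition of knowledge, a neuron or a synapse can be modelled as holding a multiset of molecule names. A minimal Python sketch, where all molecule and enzyme names are invented placeholders, not part of the formalism:

```python
from collections import Counter

class Neuron:
    """A neuron's knowledge = the multiset of molecules it contains."""
    def __init__(self, molecules):
        # Counter acts as a multiset: molecule name -> copy count
        self.knowledge = Counter(molecules)

    def add(self, molecule, n=1):
        self.knowledge[molecule] += n

    def remove(self, molecule, n=1):
        if self.knowledge[molecule] < n:
            raise ValueError("molecule not present in sufficient quantity")
        self.knowledge[molecule] -= n
        if self.knowledge[molecule] == 0:
            del self.knowledge[molecule]

class Synapse:
    """A synapse's knowledge = molecules in its axon, cleft and dendrite."""
    def __init__(self, axon, cleft, dendrite):
        self.knowledge = Counter(axon) + Counter(cleft) + Counter(dendrite)

# Example with placeholder molecule names "M1", "M2", "beta_enzyme":
n = Neuron(["M1", "M1", "beta_enzyme"])
n.add("M2")
assert n.knowledge["M1"] == 2 and n.knowledge["M2"] == 1
```

The multiset (rather than set) structure matters: chemical reactions consume and produce copies of molecules, so multiplicities must be tracked.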

The last hypothesis seems to introduce knowledge with a more semantic flavour through the backdoor. That is because, as explained in arXiv:1309.6914, some molecules (i.e. trivalent graphs from the chemical concrete machine formalism) represent lambda calculus terms. So, terms are programs, and moreover the chemical concrete machine is Turing universal, therefore we end up with a rather big chunk of semantic knowledge in a neuron’s lap. I intend to show you this is not the case; in fact a neuron, or a synapse, does not have (or need) this kind of knowledge.

__________________________

Before giving this explanation, I shall explain in just a bit more detail how a wet neural network which satisfies those hypotheses works. A physical neuron’s behaviour is ruled by the chemistry of its metabolic pathways. By the third hypothesis these metabolic pathways can be seen as graph rewrites of the molecules (more about this later). As an effect of its metabolism, the neuron has an electrical activity which in turn alters the behaviour of the other ingredient, the synapse. In the synapse act other chemical reaction networks, which are amenable, again by the third hypothesis, to computations with the chemical concrete machine. As an effect of the action of these metabolic pathways, a neuron communicates with another neuron. In the process the knowledge of each neuron (i.e. its collection of molecules) is modified, and the same is true of a synapse.

As concerns chemical reactions between molecules, in the chemical concrete machine formalism there is only one type of admissible reaction, namely the reaction between a molecule and an enzyme. Recall that if (some of the) molecules are like lambda calculus terms, then (some of the) enzymes are like names of reduction steps, and the chemical reaction between a molecule and an enzyme is assimilated to the respective reduction step applied to the respective lambda calculus term.
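A minimal sketch of this single admissible reaction type, with molecules as opaque strings and an invented rewrite table (in the formalism molecules are trivalent graphs and enzymes name graph rewrite moves, so this is only an analogy):

```python
# Placeholder table: (molecule, enzyme) -> list of product molecules.
# The entries are illustrative, not the actual moves of the formalism.
REWRITES = {
    ("(lambda x. x) A", "beta"): ["A"],   # a beta-like reduction step
    ("loop", "pruning"): [],              # a loop eliminated as garbage
}

def react(molecule, enzyme):
    """Apply the reduction step named by `enzyme`, if one matches;
    otherwise the molecule is inert for this enzyme."""
    return REWRITES.get((molecule, enzyme), [molecule])

assert react("(lambda x. x) A", "beta") == ["A"]
assert react("loop", "pruning") == []
```

The point of the table-driven shape is that an enzyme selects *which* rewrite fires; it carries no information about what the molecule "means".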

But, in the post  SPLICE and SWITCH for tangle diagrams in the chemical concrete machine    I proved that in the chemical concrete machine formalism there is a local move, called SWITCH

chem_switch

which is the result of 3 chemical reactions with enzymes, as follows:

chem_switch_2

Therefore, the chemical concrete machine formalism with the SWITCH move added is equivalent to the original formalism. So, we can safely add the SWITCH move to the formalism and use it for defining chemical reactions between molecules (maybe also by adding an enzyme, or more, for the SWITCH move; let’s call them \sigma). This mechanism gives chemical reactions between molecules of the form

A + B + \sigma \rightarrow C + D + GARB

where A and B are molecules such that, by taking an arrow from A and another arrow from B, we may apply the \sigma enzyme and produce the SWITCH move for this pair of arrows, which results in new molecules C and D (and possibly some GARBAGE, such as loops).
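The reaction scheme A + B + \sigma \rightarrow C + D + GARB can be sketched as a multiset transition over a "soup" of molecules; the molecule names, products and garbage below are placeholders:

```python
from collections import Counter

def switch_reaction(soup, A, B, products, garbage=("loop",)):
    """Consume one copy each of A, B and the sigma enzyme;
    produce the products plus garbage. If any reactant is
    missing, no reaction fires and the soup is unchanged."""
    needed = Counter([A, B, "sigma"])
    if not all(soup[m] >= k for m, k in needed.items()):
        return soup
    return soup - needed + Counter(products) + Counter(garbage)

soup = Counter({"A": 1, "B": 1, "sigma": 2})
soup = switch_reaction(soup, "A", "B", products=("C", "D"))
assert soup == Counter({"sigma": 1, "C": 1, "D": 1, "loop": 1})
```

Note that the enzyme \sigma is consumed like any other reactant, which is what lets the \sigma concentration control how often SWITCH fires.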

In conclusion, for this part concerning possible chemical reactions between molecules, we have enough raw material for constructing any chemical reaction network we like. Let me pass to the semantic knowledge part.

__________________________

Semantic knowledge of molecules. This is related to evaluation and it is maybe the least understood part of the chemical concrete machine. As background, see the post Example: decorations of S,K,I combinators in simply typed graphic lambda calculus, where the same phenomenon is explained (without any relation to chemical metaphors) for the parent of the chemical concrete machine, the graphic lambda calculus.

Let us consider the following rules of decorations with names and types:

chem_decor_1

If we consider decorations of combinator molecules, then we obtain the right type and identification of the corresponding combinator, like in the following example.

chem_decor_2

For combinator molecules, the “semantic knowledge”, i.e. the identification of the lambda calculus term from the associated molecule, is possible.

In general, though, this is not possible. Consider for example a 2-zipper molecule.

chem_decor_3

We obtain the decoration F as a nested expression of A, D, E, which is enough for performing two beta reductions, without knowing what A, D, E mean (without the need to evaluate A, D, E). This is equivalent to the property of zippers, to allow only a certain sequence of graphic beta moves (in this case two such moves).

Here is the tricky part: if we look at the term F then all that we can write after beta reductions is only formal, i.e. F reduces to (A[y:=D])[x:=E], with all the possible problems related to variable name clashes and the order of substitutions. We can write this reduction but we don’t get anything from it; it still needs further information about the relations between the variables x, y and the terms A, D, E.
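To see concretely why (A[y:=D])[x:=E] is only formal, here is a deliberately naive, capture-unaware substitution in Python, on invented toy terms, exhibiting exactly the kind of variable clash the notation hides:

```python
def subst(term, var, value):
    """Naive textual substitution of `value` for `var`.
    Deliberately NOT capture-avoiding, to show the problem."""
    if term == var:
        return value
    if isinstance(term, tuple):
        if term[0] == "lam" and term[1] == var:
            return term  # var is bound here, stop descending
        return tuple(subst(t, var, value) for t in term)
    return term

# A = lam x. y  ; substituting y := x should NOT capture x, but it does:
A = ("lam", "x", "y")
wrong = subst(A, "y", "x")
assert wrong == ("lam", "x", "x")  # the free x was captured: incorrect
```

A correct implementation would have to rename bound variables first, and the right renaming depends on global information about which names occur where; the graphic beta move sidesteps all of this because it manipulates arrows, not names.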

However, the graphic beta reductions can be done without any further complication, because they don’t involve any names, neither of variables, like x, y, nor of terms, like A, D, E, F.

Remark that the decoration is made such that:

  • the type decorations of arrows are left unchanged after any move
  • the term or variable decorations (called “places” elsewhere) change globally.

We indicate this global change like in the following figure, which is the result of the sequence of the two possible \beta^{+} moves.

chem_decor_4

Therefore, after the first graphic beta reduction, we write A' = A[y:=D] to indicate that A' is the new, globally (i.e. with respect to the whole graph of which the 2-zipper is a part) obtained decoration which replaces the older A, when we replace y by D. After the second graphic beta reduction we use the same notation.

But such indications are even misleading if, for example, there is a path made of arrows outside the 2-zipper which connects the arrow decorated by D with the arrow decorated by y. In order to have a better notation, we should replace D by D[y:=D], which gives rise to a notation for a potentially infinite process of modifying D. So, once we use graphs (molecules) which do not correspond to combinators (or to lambda calculus terms), we are in big trouble if we try to reduce the graphic beta move to term rewriting, or to reductions in lambda calculus.

In conclusion for this part: the decorations considered here, which add a semantic layer to the pure syntax of graph rewriting, cannot be used as a replacement for the graphic molecules, nor should reductions be equated with chemical reactions, except in the cases when we have access to the whole molecule and, moreover, when the whole molecule is one which represents a combinator.

So, in this sense, the syntactic knowledge of the neuron, consisting of the list of its molecules, is not enough for deducing the semantic knowledge, which is global, coming from the simultaneous decoration of the whole chemical reaction network which is associated to the whole neural network.

SPLICE and SWITCH for tangle diagrams in the chemical concrete machine

Knot diagrams can be implemented in the chemical concrete machine. This result has already been touched on in the post Local FAN-IN eliminates GLOBAL FAN-OUT (II). Here I shall give a short and clear exposition.

1. Oriented knot diagrams. I shall describe the larger collection of oriented tangle diagrams. A knot is a proper embedding of an oriented circle in 3D space. A tangle is a proper embedding of a finite collection of oriented 1D segments (arcs) and circles in 3D space. A tangle diagram represents a projection of a tangle in general position on a plane (i.e. so that different pieces of arcs or circles do not project onto tangent curves in the plane), along with supplementary information at the crossings of the projections (i.e. when the projections of two pieces of arcs cross, there is more information about which piece passes over). In this description, a tangle diagram is a 4-valent, locally planar graph, made of two elementary nodes, called “crossings” (usually tangle diagrams are defined as globally planar graphs made of crossings), and of loops.
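As an illustration only, such a locally planar 4-valent graph can be encoded with crossings as nodes and numbered ports around each node; the port numbering convention below is an assumption made for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Crossing:
    sign: int  # +1 or -1, recording which strand passes over (a convention)

@dataclass
class TangleDiagram:
    crossings: list = field(default_factory=list)
    arcs: list = field(default_factory=list)  # ((node, port), (node, port))
    loops: int = 0                            # free loops with no crossings

    def add_crossing(self, sign):
        self.crossings.append(Crossing(sign))
        return len(self.crossings) - 1        # node index

    def connect(self, a, b):
        self.arcs.append((a, b))

# A two-crossing diagram: each crossing is 4-valent (ports 0..3),
# and four arcs use up all eight ports.
d = TangleDiagram()
c0 = d.add_crossing(+1)
c1 = d.add_crossing(+1)
d.connect((c0, 2), (c1, 0))
d.connect((c0, 3), (c1, 1))
d.connect((c1, 2), (c0, 0))
d.connect((c1, 3), (c0, 1))
assert len(d.arcs) == 4 and all(c.sign == +1 for c in d.crossings)
```

Nothing here enforces global planarity, which matches the text: the encoding covers the more general locally planar diagrams.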

chem_cross_0

2. Oriented Reidemeister moves. Two (globally planar) tangle diagrams represent the same 3D tangle up to regular deformations in 3D if and only if we can pass from one tangle diagram to another by a finite sequence of oriented Reidemeister moves (I use here, as previously, the names from Michael Polyak’s “Minimal generating sets of Reidemeister moves”, except that I use the letter “R” instead of the letter “\Omega”).

reidemeister_1

reidemeister_2

reidemeister_3

These are 16 local graph rewrites (local moves) acting on the collection of tangle diagrams.

3. My purpose now is to give an implementation of tangle diagrams (not only globally planar ones, but the more general locally planar ones) in the chemical concrete machine formalism. For this I have to define the elementary nodes of tangle diagrams as molecules in the chemical concrete machine formalism. This is easy: the loops are the same as in the chemical concrete machine, and the crossings are defined as in the following figure:

chem_cross

Each oriented tangle diagram is translated into a molecule from the chemical concrete machine, by using this definition of crossings.  As a consequence, each oriented Reidemeister move becomes a local move in the chemical concrete machine, simply by translating the LHS and RHS tangle diagrams into molecules.

4. I have to prove that these translations of the oriented Reidemeister moves are compatible with the moves from the chemical concrete machine in the following sense: each translation of a Reidemeister move can be obtained as a finite chain of moves from the chemical concrete machine.

It is easier to proceed the other way around, namely to ground the reasoning in the oriented tangle diagram universe. I shall translate back some moves from the chemical concrete machine, seen as acting on oriented tangle diagrams. See this post at mathoverflow for context; it contains basically all we need for the proof, supplemented only by the proof of the SWITCH move, as you shall see.

The graphic beta move in the chemical concrete machine translates as a SPLICE move (it has two variants) over oriented tangle diagrams. The elimination of loops becomes a LOOP move. These moves are described in the next figure.

splice_1

You can find in the mathoverflow post a proof that all oriented Reidemeister moves, with the exception of R2c, R2d, R3a, R3h, are a consequence of a finite number of SPLICE and LOOP moves.

It is left to prove that R2c, R2d, R3a, R3h can be expressed by (translations of) some chains of chemical concrete machine moves. It turns out that we can reduce all these moves to the move R2d by using SPLICE and LOOP, therefore the only thing left is to look at R2d.

chem_r2d

By SPLICE and LOOP, the left hand side of R2d becomes:

chem_r2d_1

So R2d is equivalent to the following SWITCH move:

chem_switch

Finally,  in the post   Local FAN-IN eliminates GLOBAL FAN-OUT (II)   there is a proof that the SWITCH move can be obtained from CO-COMM and FAN-IN moves. The next figure shows this.

chem_switch_2

This concludes the proof that R2d can be expressed by a chain of moves from the chemical concrete machine.

________________________