Neurons know nothing, however …

… they know surprisingly much, depending on the choice of definition of “neural knowledge”. The concrete definition which I adopt is the following: the knowledge of a neuron at a given moment is the collection (multiset) of molecules it contains. The knowledge of a synapse is the collection of molecules present in the respective axon, dendrite and synaptic cleft.

I take the following hypotheses for a wet neural network:

• the neural network is physically described as a graph with nodes being the neurons and arrows being the axon-dendrite synapses. The network is built from two ingredients: neurons and synapses. Each synapse involves three parts: an axon (associated to a neuron), a synaptic cleft (associated to a local environment) and a dendrite (associated to a neuron).
• Each of the two ingredients of a neural network, i.e. neurons and synapses, as described previously, functions by means of an associated chemical reaction network, involving the knowledge of the respective ingredient.
• (the most simplifying hypothesis)  all molecules from the knowledge of a neuron, or of a synapse, are of two kinds: elements of $MOLECULES$ or enzyme  names from the chemical concrete machine.
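These hypotheses can be rendered as a tiny data-structure sketch: neurons as nodes, synapses as axon-cleft-dendrite triples, and knowledge as a multiset of molecule names. All the concrete names below (`mol_zipper`, `enz_switch`, and so on) are hypothetical placeholders, not part of the formalism.

```python
from collections import Counter

class Neuron:
    """A node of the network; its knowledge is a multiset of molecules."""
    def __init__(self, molecules=()):
        self.knowledge = Counter(molecules)

class Synapse:
    """An axon-dendrite arrow; its knowledge pools the molecules of the
    axon, the synaptic cleft and the dendrite."""
    def __init__(self, axon, cleft, dendrite):
        self.axon = Counter(axon)
        self.cleft = Counter(cleft)
        self.dendrite = Counter(dendrite)

    @property
    def knowledge(self):
        return self.axon + self.cleft + self.dendrite

# By the third hypothesis, every molecule name is either a graph-molecule
# or an enzyme name from the chemical concrete machine.
n1 = Neuron(["mol_zipper", "enz_beta"])
s = Synapse(axon=["mol_K"], cleft=["enz_switch"], dendrite=["mol_S"])
print(s.knowledge)  # Counter({'mol_K': 1, 'enz_switch': 1, 'mol_S': 1})
```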

The last hypothesis seems to introduce knowledge with a more semantic flavour through the backdoor. That is because, as explained in arXiv:1309.6914, some molecules (i.e. trivalent graphs from the chemical concrete machine formalism) represent lambda calculus terms. So terms are programs; moreover, the chemical concrete machine is Turing universal, therefore we end up with a rather big chunk of semantic knowledge in a neuron’s lap. I intend to show you that this is not the case; in fact a neuron, or a synapse, does not have (or need) this kind of knowledge.

__________________________

Before giving this explanation, I shall explain in just a bit more detail how the wet neural network, which satisfies those hypotheses, works. A physical neuron’s behaviour is ruled by the chemistry of its metabolic pathways. By the third hypothesis these metabolic pathways can be seen as graph rewrites of the molecules (more about this later). As an effect of its metabolism, the neuron has an electrical activity which in turn alters the behaviour of the other ingredient, the synapse. In the synapse act other chemical reaction networks, which are amenable, again by the third hypothesis, to computations with the chemical concrete machine. As an effect of the action of these metabolic pathways, a neuron communicates with another neuron. In the process the knowledge of each neuron (i.e. its collection of molecules) is modified, and the same is true of a synapse.

As concerns chemical reactions between molecules, in the chemical concrete machine formalism only one type of reaction is admissible, namely the reaction between a molecule and an enzyme. Recall that if (some of the) molecules are like lambda calculus terms, then (some of the) enzymes are like names of reduction steps, and the chemical reaction between a molecule and an enzyme is assimilated to the respective reduction step applied to the respective lambda calculus term.
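A toy rendering of this single admissible reaction type, under the assumption that molecules and enzymes are just opaque labels: the rule table below is a hypothetical stand-in for the actual graph rewrites, with one enzyme naming one reduction step.

```python
from collections import Counter

# Hypothetical reduction steps keyed by enzyme name; molecules are
# opaque labels standing for trivalent graphs.
RULES = {
    "enz_beta": {"mol_(\\x.x)a": "mol_a"},  # a beta step on a redex molecule
}

def react(soup, molecule, enzyme):
    """One reaction: molecule + enzyme -> reduced molecule (both consumed,
    product added). Returns the soup unchanged if the reaction cannot occur."""
    product = RULES.get(enzyme, {}).get(molecule)
    if product is None or soup[molecule] == 0 or soup[enzyme] == 0:
        return soup
    soup = soup.copy()
    soup[molecule] -= 1
    soup[enzyme] -= 1
    soup[product] += 1
    return +soup  # drop zero counts

soup = Counter({"mol_(\\x.x)a": 1, "enz_beta": 1})
soup = react(soup, "mol_(\\x.x)a", "enz_beta")
print(soup)  # Counter({'mol_a': 1})
```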

But, in the post SPLICE and SWITCH for tangle diagrams in the chemical concrete machine, I proved that in the chemical concrete machine formalism there is a local move, called SWITCH,

which is the result of 3 chemical reactions with enzymes, as follows:

Therefore, the chemical concrete machine formalism with the SWITCH move added is equivalent to the original formalism. So, we can safely add the SWITCH move to the formalism and use it for defining chemical reactions between molecules (maybe also by adding an enzyme, or more, for the SWITCH move; let’s call them $\sigma$). This mechanism gives chemical reactions between molecules of the form

$A + B + \sigma \rightarrow C + D + GARB$

where $A$ and $B$ are molecules such that, by taking an arrow from $A$ and another arrow from $B$, we may apply the $\sigma$ enzyme and produce the SWITCH move for this pair of arrows, which results in new molecules $C$ and $D$ (and possibly some GARBAGE, such as loops).
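As a sketch, this reaction scheme can be written as a multiset rewrite. Which pairs $(A, B)$ switch into which pairs $(C, D)$ depends on the chosen arrows, so the rule table below is only a hypothetical stand-in:

```python
from collections import Counter

# Hypothetical SWITCH table: (A, B) -> (C, D, number of garbage loops).
SWITCH = {("A", "B"): ("C", "D", 1)}

def switch_reaction(soup, a, b, sigma="sigma"):
    """A + B + sigma -> C + D + GARB, if the sigma enzyme is present."""
    if soup[a] == 0 or soup[b] == 0 or soup[sigma] == 0:
        return soup  # reaction not applicable
    c, d, loops = SWITCH[(a, b)]
    soup = soup.copy()
    soup.subtract({a: 1, b: 1, sigma: 1})
    soup.update({c: 1, d: 1, "loop": loops})
    return +soup  # drop zero counts

soup = Counter({"A": 1, "B": 1, "sigma": 1})
print(switch_reaction(soup, "A", "B"))
# Counter({'C': 1, 'D': 1, 'loop': 1})
```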

In conclusion, for this part concerning possible chemical reactions between molecules, we have enough raw material for constructing any chemical reaction network we like. Let me pass to the semantic knowledge part.

__________________________

Semantic knowledge of molecules. This is related to evaluation and it is maybe the least understood part of the chemical concrete machine. As background, see the post Example: decorations of S,K,I combinators in simply typed graphic lambda calculus, where the same phenomenon is explained (without any relation to chemical metaphors) for the parent of the chemical concrete machine, the graphic lambda calculus.

Let us consider the following rules of decorations with names and types:

If we consider decorations of combinator molecules, then we obtain the right type and identification of the corresponding combinator, like in the following example.

For combinator molecules, the “semantic knowledge”, i.e. the identification of the lambda calculus term from the associated molecule, is possible.

In general, though, this is not possible. Consider for example a 2-zipper molecule.

We obtain the decoration $F$ as a nested expression of $A, D, E$, which is enough for performing two beta reductions, without knowing what $A, D, E$ mean (without the need to evaluate $A, D, E$). This is equivalent to the property of zippers of allowing only a certain sequence of graphic beta moves (in this case two such moves).

Here is the tricky part: if we look at the term $F$, then all that we can write after beta reductions is only formal, i.e. $F$ reduces to $(A[y:=D])[x:=E]$, with all the possible problems related to variable name clashes and the order of substitutions. We can write this reduction, but we don’t get anything from it; it still needs further information about the relations between the variables $x, y$ and the terms $A, D, E$.
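To see the name-clash problem concretely, here is a deliberately naive textual substitution (a hypothetical minimal term representation) which exhibits variable capture, exactly the complication the graphic move avoids:

```python
# Terms as nested tuples: ('var', x), ('lam', x, body), ('app', f, a).
def subst_naive(term, x, s):
    """Naive [x := s]: wrong on purpose, it substitutes under binders
    without renaming, so free variables of s can be captured."""
    kind = term[0]
    if kind == 'var':
        return s if term[1] == x else term
    if kind == 'lam':
        # BUG on purpose: descends even when the binder is a free var of s
        return ('lam', term[1], subst_naive(term[2], x, s))
    return ('app', subst_naive(term[1], x, s), subst_naive(term[2], x, s))

# (\y. x)[x := y] should keep y free (rename the binder), but naively:
t = ('lam', 'y', ('var', 'x'))
print(subst_naive(t, 'x', ('var', 'y')))
# ('lam', 'y', ('var', 'y'))  -- the free y got captured by the binder
```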

However, the graphic beta reductions can be done without any further complication, because they involve no names, neither of variables, like $x, y$, nor of terms, like $A, D, E, F$.

Remark that the decoration is made such that:

• the type decorations of arrows are left unchanged after any move
• the term or variable decorations (called elsewhere “places”) change globally.

We indicate this global change like in the following figure, which is the result of the sequence of the two possible $\beta^{+}$ moves.

Therefore, after the first graphic beta reduction, we write $A'= A[y:=D]$ to indicate that $A'$ is the new, globally obtained decoration (i.e. with respect to the whole graph of which the 2-zipper is a part) which replaces the older $A$ when we replace $y$ by $D$. After the second graphic beta reduction we use the same notation.

But such indications are even misleading if, for example, there is a path made of arrows outside the 2-zipper which connects the arrow decorated by $D$ with the arrow decorated by $y$. We should, in order to have a better notation, replace $D$ by $D[y:=D]$, which gives rise to a notation for a potentially infinite process of modifying $D$. So, once we use graphs (molecules) which do not correspond to combinators (or to lambda calculus terms), we are in big trouble if we try to reduce the graphic beta move to term rewriting, or to reductions in lambda calculus.
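The potentially infinite process can be illustrated textually: if $y$ occurs in $D$, then iterating the bookkeeping step $D \mapsto D[y:=D]$ grows the decoration without bound. The string rendering below is, of course, only a hypothetical illustration:

```python
def step(d):
    """One bookkeeping step: replace every occurrence of 'y' in D
    by the current D itself."""
    return d.replace('y', '(' + d + ')')

d = 'f(y)'  # a hypothetical D in which y occurs
for _ in range(3):
    d = step(d)
print(len(d))  # 39 -- the decoration keeps growing; the graphic move needs none of this
```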

In conclusion for this part: the decorations considered here, which add a semantic layer to the pure syntax of graph rewriting, cannot be used as a replacement for the graphic molecules, nor should reductions be equated with chemical reactions, except in the cases when we have access to the whole molecule and, moreover, the whole molecule is one which represents a combinator.

So, in this sense, the syntactic knowledge of the neuron, consisting of the list of its molecules, is not enough for deducing the semantic knowledge, which is global, coming from the simultaneous decoration of the whole chemical reaction network associated to the whole neural network.

A space program in graphic lambda calculus

I want to explain what I intend to do further in the program of “computing with space”, seen in the restricted sense of making sense of space in graphic lambda calculus. The explanation itself, if well done, would amount to achieving the program; therefore, at this stage, it can’t be too exact.

________________________

I think that a good definition of space is relational, based on its computational content, and not on “points” or other geometric objects and axioms over those. Obviously, a good definition of space should not be “spatial” itself, otherwise it would be self-referential.

My belief, expressed mathematically, is that space (places) and types are simply just decorations over graphs in graphic lambda calculus, following the rules from the post Types and places .

If $A$ is a place and $\sigma$ is a type, then $A : \sigma$ means “there is a $\sigma$ at the place $A$”; for example, $A : CAR$ means there is a car at $A$. (This interpretation is what I use to make sense of things in my mind, beyond the mathematical formalism, therefore it is subject to change.) The name $A$ is just a name (of a variable or term); it does not have any other meaning than the one which can be grasped in relation with the other names for variables and terms. Recall that in graphic lambda calculus there are no names for variables and terms.

In the post Better than extended beta move I introduced two macros which I think are better than the application and abstraction gates. Here they are again, for an arbitrary “scale parameter” $\varepsilon \in \Gamma$, where $\Gamma$ is a commutative group (think, for example, about the free abelian group over an alphabet, if you don’t like to think about $\Gamma$ as the group of reals):

Graphic lambda calculus can be reformulated by using these “scaled” application and abstraction gates as fundamental gates. This has as effects:

• elimination of the move ext2, replaced by the definition of application and abstraction gates as the “scaled” gates at scale $\varepsilon = 1$
• condensation of the graphic beta move and the move R2 into one scaled version of the graphic beta move, which is better than the extended beta move.

The subtle part is that, if we look at the decorations, then the types decoration is left unchanged by the moves, but the place decoration is globally affected by the moves (i.e. “logical” moves, like the graphic beta move, change the space).

The other subtle part consists in extending the

to these scaled application and abstraction gates. You shall see that rather amazing things happen, when we look at decorations with places.

I am going to use lessons learned from the Chemical concrete machine. If we use the scaled application and abstraction gates (and the FAN-OUT gate, of course) instead of the four gates of the original graphic lambda calculus, this makes 3 gates instead of four. There is then enough room to add a FAN-IN gate, as in the chemical concrete machine (while renouncing the identification of it with an $\varepsilon$ gate, as indicated in the chemical concrete machine formalism). This way we shall have four gates again, along with some natural distributivity moves, namely the ones from the chemical concrete machine and the one called the “mystery move” in the series on the dictionary from emergent algebra to graphic lambda calculus. In this way, as long as we stay in the “scaled version of the combinatory logic sector” (which will contain BOTH the emergent algebras and the combinatory logic, interacting in beautiful ways), we will not need the GLOBAL FAN-OUT move.

After these mildly technical things are done, interpretations with words will follow.

Spiral patterns in the full zipper graph (another post on the Chemical concrete machine)

A step towards making something analogous to the Holliday junction, in a way related to the Chemical concrete machine minimal formalism, is to make a full zipper. Let’s play with it.

I shall use two ingredients, which function only by using the graphic beta move, so I shall NOT use the FAN-IN move.

• zippers, this time written with the conventions of the Chemical concrete machine minimal formalism.

If we put them together then we obtain graphs like this: (ignore for the moment the red and green curved  arrows, suppose they are not there)

Now, if we want the numbering 1-1′ 2-2′ 3-3′  to make sense, we may add arrows, the green and red ones. But mind you, we could also add graphs with one input and one output instead of each green or red arrow.  As usual, the colours are just a visual help, they mean nothing in the formalism.

It is just play, but for the moment, since it’s the holiday season, I like to think about the green and red arrows (in fact about their replacements by graphs with one input and one output) as being the letters A, C, G, T. Of course, it might mean nothing; let’s see, it’s just a vague hypothesis.

Left for play: draw a Holliday junction.

Ancient Turing machine helps Chemical concrete machine

In this post I want to prove that with only arrow molecules and enzyme moves we can generate all we need for the chemical concrete machine, by using the minimal (?) formalism from Chemical concrete machine, short list of gates and moves. This is done with the help of ideas from the old post Ancient Turing machines (I): the three Moirai, which contains a discussion about generating moves for graphic lambda calculus.

In the post Local FAN-IN eliminates GLOBAL FAN-OUT (II), at the end, I proved a switch move from the CO-COMM and FAN-IN moves. In the next figure, I call this move (which is, recall, a “macro move”, made by a succession of three moves) SWITCH. In the same figure appears a move CREA’. The name is justified by its resemblance with the CREA move proposed in Ancient Turing machines (I): the three Moirai.

Before giving the proof of CREA’, let me remark that from these macro moves, together with the ones selected in the post Chemical concrete machine, short list of gates and moves, we can recover all the generating moves of the three Moirai described in the ancient Turing machine post. For example, CREA’, used for a triple of arrows, gives both Clotho’s original CREA move and Atropos’ GARB move.

Here is the proof of CREA’:

From a bag of arrows at our discretion, this move gives us fan-out gates and termination gates.  Moreover, it is pleasant to see that, as the SWITCH move is a consequence of CO-COMM and FAN-IN, the CREA’  move is a consequence of CO-ASSOC and FAN-IN.

We need the other three trivalent gates; let’s see how we can obtain them (generate them from arrows, by using moves; recall that moves are enzymes in the world of the chemical concrete machine).

The fan-in gate is generated like this (we already generated termination gates):

The lambda abstraction and application gates can be generated by using the graphic beta move and a SWITCH move.

Now we have all the gates we need and we can proceed to constructing whatever combinator we want from them, mainly by using SWITCH moves. One elegant way for doing this would be to construct zippers first, by using the graphic beta moves, then construct the S,K,I combinators from zippers, as indicated in  Combinators and zippers.

Probably, the real chemical concrete machine will function with terms (molecules) created by some more natural chemistry tricks, but it is pleasant to see that in principle we can also construct what we need, before passing to the “computation” part, only by using arrows and moves.

Chemical concrete machine, short list of gates and moves

Continuing from Local FAN-IN eliminates GLOBAL FAN-OUT (II) and Local FAN-IN eliminates global FAN-OUT (I), I propose the following list of gates and moves, which are taken from graphic lambda calculus, with the FAN-IN and DIST moves added. (So this could be called the “chemical concrete machine sector”, and it has Turing universality, via the fact that it contains combinatory logic.)

_______________

Principle: each gate is a molecule, each move is an enzyme.

_______________

Dictionary: we need four trivalent gates, which can be distinguished from one another by looking at the number of inputs/outputs and by a choice of colour between red and green. To these add the arrows, loops and the termination gate. The translation from graphic lambda calculus notation to this new coloured notation is the following.

Each of these gates is a molecule. [Speculation: by looking at DNA backbones, could the 5′-3′ phosphate-deoxyribose backbone be used as $\rightarrow$?]

_______________

The moves, translated. Each move is made possible by an enzyme (hence move = enzyme). Here is the list, with the names taken from graphic lambda calculus, by using the dictionary for translation.

• Elimination of loops:

_______________

Implementation is needed, but this is out of my competence. I was thinking it could be possible by using DNA origami techniques, but it might turn out that the scheme here is even closer to the structure of DNA. This is an outrageous speculation, but I can’t help remarking that there are 4 trivalent gates, making two pairs of duality (seen in the graphic beta move and in the FAN-IN move), and this resembles the A-T, G-C pairing (but it might be a coincidence).

Chemical concrete machine not algorithmic self-assembly of DNA, nor Turing machine

This post is partially a request for references, partially a place for possible discussions in the comments, and partially a place for clarifications of the previous post Why build a chemical concrete machine, and how?

I started to read Erik Winfree’s thesis Algorithmic self-assembly of DNA and, to my joy, I see that, at least at the knowledge level of 1998, what I propose is different. Here is a short brief of what I got from Winfree’s thesis (along with my excuses for not representing things correctly and for misattributions):

• Adleman, Lipton: a model of DNA computing which uses exponential space, i.e. all the candidates for a solution of a combinatorial problem, each one represented by a strand of DNA, which are then filtered by a sequence of physical or chemical manipulations of test tubes, of one of the types: separate (a tube into two others, according to the value of the variable at position “i”), merge (two tubes into one), detect. Variants (not due to Adleman, Lipton, Karp) are to add an amplify operation (like a FAN-OUT with variables) which transforms a tube into two copies of it. Or (Boneh), instead of amplify, an append operation which adds a new variable with a given value. All these models have variables and are based on the mechanism of selecting the solution from an exponential sea of candidates.
• Hagiya: single-strand DNA computation, using a mechanism called “whiplash PCR”, which I don’t understand yet, but which has the computational power of a GOTO program. Has variables.
• Winfree: single-strand DNA computation, but in 2D, where he proposes a “materialization” of a block cellular automaton (BCA) which has Turing universality. Has variables; tries to make a Turing machine.
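The test-tube operations from the first item can be sketched as set operations on candidate bitstrings (a common abstraction of the Adleman-Lipton model; the function names are mine, not theirs):

```python
def separate(tube, i):
    """Split a tube in two by the value of the bit at position i."""
    return ({s for s in tube if s[i] == '1'},
            {s for s in tube if s[i] == '0'})

def merge(t1, t2):
    """Pour two tubes into one."""
    return t1 | t2

def detect(tube):
    """Is any strand left in the tube?"""
    return bool(tube)

def amplify(tube):
    """Two copies of the tube (a FAN-OUT on tubes)."""
    return set(tube), set(tube)

# Exponential sea of candidates over 3 variables, then filter x0=1, x2=0.
tube = {f'{n:03b}' for n in range(8)}
ones, _ = separate(tube, 0)   # keep strands with x0 = 1
_, kept = separate(ones, 2)   # keep strands with x2 = 0
print(sorted(kept))  # ['100', '110']
```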

_________________

In the post Why build a chemical concrete machine, and how? I mentioned Turing machines, but this is obviously wrong, as can be seen by looking at the previous post A chemical concrete machine for lambda calculus. I don’t want to have a “syringe with 10^9 nano geometrical Turing machines”; no, that’s misleading. What I call a chemical concrete machine works with lambda calculus terms (among other, more geometrical graphs from graphic lambda calculus), which are reduced by chemical reactions (using for example the graphic beta move enzyme). That’s different.

_________________

At page 29 of Winfree’s thesis, there’s a picture illustrating various reactions, and the results of reactions, between DNA strands. I find the Holliday junction interesting (image taken from the wiki page)

because of its relation to crossings in knot diagrams. Recall that in the $\lambda$-TANGLE sector of graphic lambda calculus, the graphic beta move appears as a SPLICE move.

Compare with these images from 3D crossings in graphic lambda calculus:

_________________

As this is an exploratory post, a kind of note to self, but open to anybody, take a look at this short course

Packing arrows (II) and the strand networks sector

I continue from   Packing and unpacking arrows in graphic lambda calculus  and I want now to describe another sector of the graphic lambda calculus, called the strand networks sector.

Strand networks are a close relative of spin networks (in fact they may be used as primitives, together with some algebraic tricks, in order to define spin networks). See the beautiful article by Roger Penrose, Angular momentum: an approach to combinatorial space-time, in Quantum Theory and Beyond, ed. T. Bastin, Cambridge University Press, Cambridge, 1971, where strand networks appear in and around figure 17.

However, the strand networks which I shall write about here are oriented. Moreover, a more apt name for them would be “states of strand networks”. The main thing to retain is that the algebraic glue can be put over them (for example, as concerns free algebras generated by those: counting states, grouping them into equivalence classes which are the real strand networks, and so on).

In graphic lambda calculus we may consider graphs in $GRAPH$ which are obtained by the following procedure. We start with a finite collection of loops and we clip together, as we please, strands from these loops. How can we do this? It is simple: we choose the strands we want to clip together and then we cross them with an arrow. Here is an example for three strands:

We want to make this clipping permanent, so to say, therefore we apply as many graphic beta moves as are needed, three in this example:

The beautiful thing about this procedure is that in the middle of the graph on the right hand side we now have three arrows with the same orientation. So, let’s pack them into one arrow. The packing procedure for three arrows with the same orientation is the following:

So, we pack the arrows, we connect them and we unpack afterwards.

With the previous trick, we may now define decorated arrows of strand networks, but mind that we have bundles, packs of oriented strands, so the decoration has to be more precise than just indicating the number of strands.

That is about all; we only have to be careful to discern between clipped strands which become decorated arrows and clipped strands which become nodes of the strand network. But this is rather obvious, if you think about it.

Conclusion. Let’s think about the purpose of the algebraic glue which is needed in order to define a spin network and then an evolving spin network. As you can see, it all comes down to another way of deciding the sequence of applications of (graphic) beta moves, which is this time probabilistic. It is another type of computation than the one from the lambda calculus sector, where we use the combinators and zippers (as well as some strategy of evaluation) in order to make a classical computation. So at the level of graphic lambda calculus it becomes transparent that both are computations, but with slightly different strategies. (This is a kind of theorem, but to put it in a precise form one would have to make ornate notations and unnecessary orderings, so it would be just a manifestation of the cartesian disease, which we don’t need.) The last point to mention is that, compared to the lambda calculus strategy (based on combinators plus a choice of evaluation), this computation, which comes from physics, is more cartesian-disease free.