Tag Archives: chemlambda

Ancient Turing machine helps Chemical concrete machine

In this post I want to prove that, with only arrow molecules and enzyme moves, we can generate everything we need for the chemical concrete machine, by using the minimal (?) formalism from Chemical concrete machine, short list of gates and moves. This is done with the help of ideas from the old post Ancient Turing machines (I): the three Moirai, which contains a discussion about generating moves for graphic lambda calculus.

In the post Local FAN-IN eliminates GLOBAL FAN-OUT (II), at the end, I proved a switch move from the CO-COMM and FAN-IN moves. In the next figure I call this move (which is, recall, a “macro move”, made by a succession of three moves) SWITCH. In the same figure appears a move CREA’. The name is justified by its resemblance to the CREA move proposed in Ancient Turing machines (I): the three Moirai.

recrea_2

Before giving the proof of CREA’, let me remark that from these macro moves, together with the ones selected in the post Chemical concrete machine, short list of gates and moves, we can recover all the generating moves of the three Moirai described in the ancient Turing machine post. For example, CREA’, used for a triple of arrows, gives both Clotho’s original CREA move and Atropos’ GARB move.

Here is the proof of CREA’:

recrea_1

From a bag of arrows at our disposal, this move gives us fan-out gates and termination gates. Moreover, it is pleasant to see that, just as the SWITCH move is a consequence of CO-COMM and FAN-IN, the CREA’ move is a consequence of CO-ASSOC and FAN-IN.
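For bookkeeping, here is a minimal Python sketch that only records which enzymes each macro move consumes; the exact order and the sites where the moves are applied are the ones shown in the figures, and the orderings written below are just an assumption for illustration.

```python
# Hypothetical bookkeeping: macro moves recorded as sequences of primitive
# moves (enzymes). The exact order and application sites are in the figures;
# the orderings below are assumptions for illustration only.
MACRO_MOVES = {
    "SWITCH": ["FAN-IN", "CO-COMM", "FAN-IN"],   # a succession of three moves
    "CREA'":  ["FAN-IN", "CO-ASSOC", "FAN-IN"],  # consequence of CO-ASSOC and FAN-IN
}

def enzymes_consumed(macro):
    """Return the primitive enzymes a macro move uses, in the assumed order."""
    return MACRO_MOVES[macro]

for name in MACRO_MOVES:
    print(name, "=", " ; ".join(enzymes_consumed(name)))
```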

We need the other three trivalent gates; let’s see how we can obtain them, that is, generate them from arrows by using moves (recall that, in the world of the chemical concrete machine, moves are enzymes).

The fan-in gate is generated like this (we already generated termination gates):

recrea_3

The lambda abstraction and application gates can be generated by using the graphic beta move and a SWITCH move.

recrea_4

Now we have all the gates we need and we can proceed to construct whatever combinator we want from them, mainly by using SWITCH moves. One elegant way to do this would be to construct zippers first, by using graphic beta moves, then construct the S, K, I combinators from zippers, as indicated in Combinators and zippers.
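As a reminder of what these combinators do (independently of their graphic representation), here is a minimal sketch of S, K, I as curried Python functions; this is only the ordinary lambda calculus behaviour, not the graph encoding.

```python
# The standard S, K, I combinators as curried Python functions.
I = lambda x: x                               # I x -> x
K = lambda x: lambda y: x                     # K x y -> x
S = lambda x: lambda y: lambda z: x(z)(y(z))  # S x y z -> x z (y z)

# Sanity check: S K K behaves like the identity.
assert S(K)(K)(42) == 42
```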

Probably, the real chemical concrete machine will function with terms (molecules) created by some more natural chemistry tricks, but it is pleasant to see that in principle we can also construct what we need, before passing to the “computation” part, only by using arrows and moves.

Chemical concrete machine, short list of gates and moves

Continuing from Local FAN-IN eliminates GLOBAL FAN-OUT (II) and Local FAN-IN eliminates global FAN-OUT (I), I propose the following list of gates and moves, which are taken from graphic lambda calculus, with the FAN-IN and DIST moves added. (So this could be called the “chemical concrete machine sector”; it has Turing universality, via the fact that it contains combinatory logic.)

_______________

Principle: each gate is a molecule, each move is an enzyme.

_______________

Dictionary: we need four trivalent gates, which can be distinguished from one another by the number of inputs/outputs and by a choice of colour, red or green. To these add the arrows, the loops and the termination gate. The translation from graphic lambda calculus notation to this new coloured notation is the following.

convention_1

Each of these gates is a molecule. [Speculation: looking at DNA backbones, could the 5′-3′ phosphate-deoxyribose backbone be used as the arrow \rightarrow?]
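Here is a toy encoding of this dictionary in Python, just to record arities; the colour assignment below is an assumption for illustration, the actual convention being the one drawn in the figure above.

```python
# Toy record of the dictionary: four trivalent gates plus the termination gate.
# Colours below are an assumed assignment; see the figure for the real one.
from dataclasses import dataclass

@dataclass(frozen=True)
class Gate:
    name: str
    inputs: int
    outputs: int
    colour: str

GATES = [
    Gate("lambda abstraction", 1, 2, "red"),
    Gate("application",        2, 1, "green"),
    Gate("fan-out",            1, 2, "green"),
    Gate("fan-in (dilation)",  2, 1, "red"),
    Gate("termination",        1, 0, "none"),
]

# Each trivalent gate has inputs + outputs == 3; arrows and loops carry no gate.
assert all(g.inputs + g.outputs == 3 for g in GATES if g.name != "termination")
```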

_______________

The moves, translated. Each move is made possible by an enzyme (hence move = enzyme). Here is the list, with the names taken from graphic lambda calculus, translated by using the dictionary above. (A schematic list of the move names, for bookkeeping, appears after the figures.)

convention_2

convention_3

convention_6

convention_4

  • Elimination of loops:

convention_5
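For bookkeeping only, here is the list of move names read off from this and the neighbouring posts, as a small Python enumeration; the local rewrite each enzyme performs is the one drawn in the figures above, which remain the official reference.

```python
# The moves (enzymes) of the proposed short list, by name only, as read off
# from these posts; the rewrites themselves are the ones in the figures.
from enum import Enum

class Enzyme(Enum):
    GRAPHIC_BETA     = "graphic beta move"
    FAN_IN           = "FAN-IN"
    DIST_APPLICATION = "DIST (application gate)"
    DIST_LAMBDA      = "DIST (lambda abstraction gate)"
    CO_COMM          = "CO-COMM"
    CO_ASSOC         = "CO-ASSOC"
    LOCAL_PRUNING    = "local pruning"
    LOOP_ELIMINATION = "elimination of loops"

for e in Enzyme:
    print(e.name, "->", e.value)
```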

_______________

Implementation is needed, but this is outside my competence. I was thinking it could be possible by using DNA origami techniques, but it might turn out that the scheme here is even closer to the structure of DNA. This is an outrageous speculation, but I can’t help remarking that there are 4 trivalent gates, forming two dual pairs (seen in the graphic beta move and in the FAN-IN move), and this resembles the A-T, G-C pairing (but it might be a coincidence).

Chemical concrete machine not algorithmic self-assembly of DNA, nor Turing machine

This post is partially a request for references, partially a place for possible discussions in the comments, and partially a place for clarifications of the previous post Why build a chemical concrete machine, and how?

I started to read Erik Winfree’s thesis, Algorithmic self-assembly of DNA, and, to my joy, I see that, at least at the level of knowledge of 1998, what I propose is different. Here is a brief summary of what I got from Winfree’s thesis (along with my apologies for any misrepresentations or misattributions):

  • Adleman and Lipton propose a model of DNA computing which uses exponential space, i.e. all the candidates for a solution of a combinatorial problem, each one represented by a strand of DNA, which are then filtered by a sequence of physical or chemical manipulations of test tubes, of one of the types: separate (a tube into two others, according to the value of the variable at position i), merge (two tubes into one), detect. Variants (not due to Adleman, Lipton, Karp) are to add an amplify operation (like a FAN-OUT with variables) which transforms a tube into two copies of it, or (Boneh), instead of amplify, an append operation which adds a new variable with a given value. All these models have variables and are based on the mechanism of selecting the solution from an exponential sea of candidates. (A toy sketch of these tube operations follows after this list.)
  • Hagiya, single-strand DNA computation, using a mechanism called “whiplash PCR”, which I don’t understand yet, but which has the computational power of a GOTO program. Has variables.
  • Winfree, single-strand DNA computation, but in 2D, where he proposes a “materialization” of a block cellular automaton (BCA) which has Turing universality. Has variables, tries to make a Turing machine.
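To make the flavour of these filtering models concrete, here is a minimal Python sketch of the abstract test-tube operations from the first item above, with a tube modelled as a list of bit-strings; this is only an illustration of the style of these models, not of the actual chemistry.

```python
# Toy model of the test-tube operations: a "tube" is a list of candidate
# bit-strings (one string per DNA strand).

def separate(tube, i):
    """Split a tube into two, according to the value of the bit at position i."""
    return [s for s in tube if s[i] == "1"], [s for s in tube if s[i] == "0"]

def merge(a, b):
    """Pour two tubes into one."""
    return a + b

def detect(tube):
    """Report whether the tube contains at least one strand."""
    return len(tube) > 0

def amplify(tube):
    """FAN-OUT-like operation: produce two copies of the tube."""
    return list(tube), list(tube)

def append(tube, bit):
    """Boneh-style operation: add a new variable with a given value."""
    return [s + bit for s in tube]

# Example: start from all 3-bit candidates and keep those whose first bit is 1.
candidates = [format(n, "03b") for n in range(8)]
kept, rest = separate(candidates, 0)
print(detect(kept), kept)
```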

_________________

In the post Why build a chemical concrete machine, and how? I mentioned Turing machines, but this is obviously wrong, as can be seen by looking at the previous post A chemical concrete machine for lambda calculus. I don’t want to have a “syringe with 10^9 nano geometrical Turing machines”; no, that’s misleading. What I call a chemical concrete machine works with lambda calculus terms (among other, more geometrical graphs from graphic lambda calculus), which are reduced by chemical reactions (using, for example, the graphic beta move enzyme). That’s different.

_________________

On page 29 of Winfree’s thesis there’s a picture illustrating various reactions between DNA strands and their results. I find the Holliday junction interesting (image taken from the wiki page)

472px-Holliday_Junction

because of its relation to crossings in knot diagrams. Recall that in the \lambda-TANGLE sector of graphic lambda calculus, the graphic beta move appears as a SPLICE move.

Compare with these images of 3D crossings in graphic lambda calculus:

3d_crossing_3

3d_crossing_2

_________________

 

As this is an exploratory post, kind of a note to self but open to anybody, take a look at this short course.

[Updates will follow here]

Local FAN-IN eliminates GLOBAL FAN-OUT (II)

As I wrote in Local FAN-IN eliminates global FAN-OUT (I), the introduction of the three moves (FAN-IN and the two DIST moves) eliminates global FAN-OUT from the lambda calculus sector of graphic lambda calculus. In this post we shall see that we can safely eliminate two other moves, namely R1a and R1b, as well as improve the behaviour of the crossings from the \lambda-TANGLE sector.

The equilibrium is thus established: three new moves instead of the three old moves. And there are some unexpected advantages.

______________

Proposition.

Proof.  (a) Done in the following picture.

frob_6

The proof of (b) is here:

frob_7

Finally, here is the proof of (c):

frob_8

______________

The \lambda-TANGLE sector of the graphic lambda calculus is obtained by using the lambda-crossing macros

frob_9

In Theorem 6.3 of arXiv:1305.5786 [cs.LO] I proved that all the oriented Reidemeister moves (with the crossings replaced by the respective macros), with the exception of the moves R2c, R2d, R3a and R3h, can be proved by using the graphic beta move and elimination of loops. We can improve the theorem in the following way.

Theorem.  By using the graphic beta move, elimination of loops, FAN-IN and CO-COMM, we can prove all the 16 oriented Reidemeister moves.

Proof. The missing moves R2c, R2d, R3a and R3h are all equivalent (by using the graphic beta move and elimination of loops, see this question/answer at mathoverflow) to the following switching move, which we can prove with FAN-IN and CO-COMM:

frob_2

The proof is done.
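As pure bookkeeping for this theorem and for Theorem 6.3, here is a small Python sketch listing the 16 oriented Reidemeister moves (named as in arXiv:1305.5786) together with the enzymes that suffice for each; the four exceptional moves are exactly the ones that need the switching move above.

```python
# Bookkeeping: graphic beta + elimination of loops prove all oriented
# Reidemeister moves except R2c, R2d, R3a, R3h, which additionally need
# FAN-IN and CO-COMM (via the switching move).
ALL_ORIENTED = (["R1" + c for c in "abcd"]
                + ["R2" + c for c in "abcd"]
                + ["R3" + c for c in "abcdefgh"])
assert len(ALL_ORIENTED) == 16

NEED_SWITCHING = {"R2c", "R2d", "R3a", "R3h"}

def enzymes_for(move):
    base = ["graphic beta", "elimination of loops"]
    return base + (["FAN-IN", "CO-COMM"] if move in NEED_SWITCHING else [])

for m in ALL_ORIENTED:
    print(m, enzymes_for(m))
```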

Local FAN-IN eliminates global FAN-OUT (I)

To be able to build a chemical concrete machine (see the posts A chemical concrete machine for lambda calculus and Why build a chemical concrete machine, and how?) we have to prove that universal computation can be attained with only local moves in graphic lambda calculus. However, the lambda calculus sector of graphic lambda calculus, which gives universality to graphic lambda calculus, uses the global FAN-OUT move (see Theorem 3.1 (d) of arXiv:1305.5786 [cs.LO]). Similarly, we see in Proposition 3.2 (d), which describes the way combinatory logic appears in graphic lambda calculus, that global FAN-OUT is again used.

I want to describe a way to eliminate the global FAN-OUT move from combinatory logic (as it appears in graphic lambda calculus via the algorithm described here).

________________

There are reasons to dislike global moves in relation to B-type neural networks (see the last post, Pair of synapses, one controlling the other (B-type NN part III)). Similar concerns can be found in the series of posts whose most recent entry is Dictionary from emergent algebra to graphic lambda calculus (III).

In this first post I shall introduce a local FAN-IN move and two distributivity moves, and I shall prove that they eliminate the need for using global FAN-OUT in combinatory logic. In the next post I shall prove that we can eliminate two other moves (so that the total number of moves of graphic lambda calculus stays the same as before) and, moreover, that from the distributivity and local FAN-IN moves we can recover the missing oriented Reidemeister moves from the \lambda-TANGLE sector.

________________

Definition. The local FAN-IN move is described in the next figure and it can be applied for any \varepsilon \not = 1.

frob_1

Comments:

  • as you see, a dilation gate appears in the move; what can this have to do with combinatory logic? As I explained previously, the properties of the gates come from the moves they are involved in, and not from their names. I could have introduced a new gate, with two inputs and one output, called this new gate a “fan-in gate” and used it in the FAN-IN move. However, wait until the next post to see that there are other advantages, besides the economy of available gates, in using a dilation gate as a fan-in.
  • the FAN-IN move resembles the packing-arrows trick which is used extensively in the neural networks posts. This suggests using as a fan-in gate

synapse_1

the green triangle gate, and as a fan-out gate the red triangle gate. This would eliminate the \Upsilon gate from the formalism, but it is not clear to me how this replacement would interfere with the rest of the moves.

  • the FAN-IN move resembles the dual of the graphic beta move, but it is not the same (recall that until now I have not accepted the dual of the graphic beta move into the list of moves of graphic lambda calculus, although there are strong reasons to do so):

dist_2

dist_1

which is needed in the emergent algebra sector in order to make the dictionary work (and which is related as well to the goal of not using global FAN-OUT in that sector). This latter move is in fact a distributivity move (see further), but we are free to choose different moves in different sectors of graphic lambda calculus.

  • I know it is surprising that until now there was nothing about evaluation strategies in graphic lambda calculus, the reason being that, because there are no variables, there is nothing to evaluate. However, the situation is not so simple: at some point, for example in the chemical concrete machine or for neural networks, some criterion for choosing the order of moves will be needed. But it is an important point to notice that replacing global FAN-OUT (which could be seen as a remnant of having variables and evaluating them) by local FAN-IN has nothing to do with evaluation strategies.

________________

Definition: The distributivity moves (related to the application and lambda abstraction gates) are the following:

dist_3

dist_4

Comments:

  • the first distributivity move is straightforward: an application gate is just doubled, and two fan-out gates establish the right connections. We see here why the “mystery move” can be seen as a kind of distributivity move;
  • the second distributivity move is where we need a fan-in gate (and where we use a dilation gate instead): because of the orientation of the arrows, after we double the lambda abstraction gate, we need to collect two arrows into one!

________________

Combinatory logic terms appear in graphic lambda calculus as trees made of application gates, with the combinators S, K, I as leaves (seen as graphs in GRAPH). I want to show the following. [UPDATE: made some corrections.]

Theorem. We can replace the global FAN-OUT move with a sequence of local FAN-IN, DIST, CO-COMM and local pruning moves, every time the global FAN-OUT move is applied to a term made of S, K, I combinators.

Proof. First, remark that a sequence of DIST moves for the application gate allows us to reduce the problem of replacing global FAN-OUT moves for any combinator to the problem of replacing it for S, K, and I. This is because the DIST move for the application gate allows us to do the FAN-OUT of trees of application gates:

dist_5
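The shape of this first step can be mimicked by a small Python sketch: a combinator tree is duplicated top-down, one DIST move per application gate, so that only the leaves S, K, I remain to be duplicated (which is exactly what the figures below handle). This is only a toy model of the bookkeeping, with terms as nested tuples.

```python
# Terms: a leaf is "S", "K" or "I"; an application is ("app", left, right).

def fan_out(term):
    """Duplicate a combinator tree top-down, as the DIST moves do."""
    if isinstance(term, str):
        # Leaf case: here the local FAN-IN / DIST / CO-COMM / pruning moves
        # for each of S, K, I are used (see the figures below).
        return term, term
    _, left, right = term
    l1, l2 = fan_out(left)     # one DIST move per application gate
    r1, r2 = fan_out(right)
    return ("app", l1, r1), ("app", l2, r2)

skk = ("app", ("app", "S", "K"), "K")
copy1, copy2 = fan_out(skk)
assert copy1 == copy2 == skk
```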

Now we have to prove that we can perform global FAN-OUT for the combinators I, K, S. For the combinator I the proof is the following:

frob_4

For the combinator K we shall also use a local pruning move:

frob_5

Finally, the proof for the combinator S is the following:

frob_3

Now we are going to use 3 DIST moves, followed by the switch of arrows explained in Local FAN-IN eliminates GLOBAL FAN-OUT (II), which is applied inside the dashed green ellipse in the next figure:

frob_10

And we are done.

UPDATE: On closer inspection, it turns out that we don’t need to do switch (macro) moves. Instead, if we go back to the point we were at before the last figure, we may first use CO-ASSOC and then perform the three FAN-IN moves.

Why build a chemical concrete machine, and how?

Asked about this, I spent a bit of time thinking and arrived at this very blunt answer:

What I have in mind can be split into two parts.

The first part concerns graphic lambda calculus, seen as being about real molecules interacting in space. For this part I would like to construct graphs which work like a Turing machine (or other computing devices); then the important step is to eliminate everything which is due to our human 1D writing conventions and habits of thought (see details in the rant from the first part of this post) and to simplify such “machines”, in the same way as, for example, I saw it is possible.

A serious tool for doing this would be, for example, a program which allows one to visualize and perform moves (automatically and from the keyboard) on graphs in GRAPH.

The second part, which goes in parallel, would be to try to find in the real world (here DNA origami looks like a possible tool) ways to construct such machines chemically and physically. They are naturally adapted to the real world because they are geometrical (an object or a property is geometrical if it can be described in a way that is invariant under an arbitrary change of parametrization, in our case an arbitrary renaming). For this part I want to understand first how DNA origami works, to the level of being able to absorb information and understand some of the hurdles. This leads to applications, which are still vague in my head, but I was impressed by this video

as well as by the research of Jean-Pierre Banatre and Daniel Le Metayer.

In conclusion, I can’t imagine what a syringe with 10^9 nano geometrical Turing machines (rather, graphs representing lambda terms [see UPDATE]) can’t do.

______________

UPDATE: Corrections to this post are made in Chemical concrete machine not algorithmic self-assembly of DNA, nor Turing machine, where it is stressed that the “chemical concrete machine”, even if it has the Turing universality property, is not intended to be a Turing machine (nor an automaton), as is wrongly suggested in this post.

Pair of synapses, one controlling the other (B-type NN part III)

In Teaser (I) and Teaser (II) I discussed the possibility of constructing neural networks with controlled connections, resembling Turing’s B-type neural networks, in the following sense:

  • they are formed by “neurons”, which in Turing’s description are boolean logic gates, and here are graphs from graphic lambda calculus which correspond to lambda calculus terms. But there’s no real restriction on that: graphic lambda calculus being more general and more powerful than untyped lambda calculus, one may consider “neurons” which are outside the lambda calculus sector. The fact is that a “neuron” here should be interpreted as any graph in GRAPH with at least one input but only one output. Later, when we speak about the order of performing moves in such neural networks, it will be seen that each “neuron” is a bag containing graphs which are modified (or reduced, as people say) independently of one another. Neurons are packing conventions from a formal viewpoint, but this viewpoint may not be the best one; a better viewpoint would be to see neurons, as well as synapses (described further), as real constructs, as in the related project of the chemical concrete machine. In particular, neurons may do other things than computing in the lambda calculus sense (classical computation); they may do more geometrical, physical or robot-like activities, not usually considered computations.
  • the connections between neurons are controlled, in a B-type NN, by TRUE-FALSE control wires or inputs. Turing explains that the controls themselves can be realized as constructions with boolean neurons (i.e. in his language B-type NNs are particular cases of his A-type NNs). In Teaser (II) it is explained how the connections between graphic lambda calculus neurons can be constructed by using a 2-zipper and a switch, such that a connection is controlled by another connection (by the same TRUE-FALSE mechanism, but this time TRUE and FALSE are literally the graphs of the corresponding terms in lambda calculus). A toy sketch of this selection mechanism follows after this list.
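Here is the toy sketch announced above: the controlling input is one of the lambda calculus booleans TRUE = λx.λy.x or FALSE = λx.λy.y, and it selects between keeping a connection and discarding it. This is only a stand-in for the switch and zipper graphs of Teaser (II), written with ordinary Python closures.

```python
# Church booleans as controls: TRUE keeps the connection, FALSE discards it
# (in the graphic setting the discarded branch ends in termination gates).
TRUE  = lambda x: lambda y: x   # λx.λy.x
FALSE = lambda x: lambda y: y   # λx.λy.y

def synapse(control, wire, discarded):
    """Connect `wire` through if control is TRUE, otherwise discard it."""
    return control(wire)(discarded)

print(synapse(TRUE,  "1 -> 3'", "terminated"))   # 1 -> 3'
print(synapse(FALSE, "1 -> 3'", "terminated"))   # terminated
```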

There is an important difference, besides the power of the calculi used (boolean vs. graphic lambda), namely that there is no signal transmitted through the wires  of the network. That is because graphic lambda calculus does not need variable names. I shall come back to this very important point (which is a big advantage for implementing such networks in reality) in a future post, but let me mention two simple facts:

  • an analogy: think about hardware and software. A difference between them is that software may be associated with the pattern of signals which circulate in the hardware. The hardware is the machine inhabited by the software; they are not in the same plane of physical reality, right? Well, in this implementation there is literally no difference between them, exactly because there are no signals (software) transmitted through the hardware.
  • this use of names, software, variables and signals is a pain for those trying to make sense of how to implement computing constructs in reality. But, if you think about it, the need to name this stuff “x” and that other stuff “y”, to make alpha conversions in order to avoid name clashes, or even to describe computations as a linear string of symbols, all of this bears no relation to physical reality; these are only cultural constraints of humans (and their artificial constructs). It all has to do with the fact that humans communicate science through writing, and of course with the enumeration techniques of the cartesian method, which “is designed as a technique for understanding performed by one mind in isolation, severely handicapped by the bad capacity of communication with other isolated minds”. Molecules in a cell or in a solution do not align in a string according to the experimenter’s writing habits, from left to right, right to left, or vertically. They just don’t wait for an external clock to give rhythm to their ballet, nor for the experimenter to draw a familiar coordinate system. These are all limitations of the method, not of nature.

_____________

Further, in this post, I shall use “synapse” instead of “connection”. Now that we know we can implement synapses by a TRUE-FALSE mechanism, let’s see if we can do it more simply, and also whether it is possible to make the “synapses control other synapses” concept more symmetric.

Instead of the little magenta triangles used in the Teaser I, II posts (which are not the same as the magenta triangles used in the chemical concrete machine post), I shall use a clearer notation:

synapse_1

The design of a synapse proposed in Teaser (II) involved two switches and a zipper, justified by the direct translation of the TRUE-FALSE lambda calculus terms into the freedom sector of graphic lambda calculus. Think of it now as containing in fact two synapses, one controlling the other. The zipper between them is only a way of binding them together unambiguously.

A simplified form of this idea of a pair of synapses is the following:

synapse_2

In the upper part of the figure you see the pair of synapses, taking a form like this: ><. There are two synapses there: one is >, which is controlled by the other, <. The connection between the blue 1 and the red 3′ is controlled by the other synapse. In order to see how, let’s perform a graphic beta move and obtain the graph from the lower part of the figure.

The red 3 can connect, in the sense explained by the switch mechanism from Teaser (II), with blue 1′ or blue 2′, all three of them belonging to the controlling synapse. Say red 3 connects with blue 1′, and look what happens:

synapse_3

OK, so we obtain a connection between blue 1 and red 3′. Suppose now that red 3 connects with blue 2′, look:

synapse_4

Yes, blue 1 does not connect with red 3′; they just exchanged a pair of termination gates. That’s how the pair of synapses works.

Finally, let me remark that the use of termination gates is a bit ugly; it breaks the symmetry and is not really needed.