Tag Archives: Turing machine

Asynchronous and decentralized teleportation of Turing Machines

The idea is that it is possible to copy a computer which executes a program, during execution, and to produce working clones. All this without any control (hence decentralized) and without any synchronization.

More than that, the multiple copies are made at the same time as the computers compute.

I think this is crazy; there’s nothing like it on offer. This is a proof of principle. The post is a (hopefully) light introduction to the subject.

If you look for validation, then go follow the instructions from
and use the file bbdupli.mol
This animation starts with one Turing Machine and ends with three Turing Machines.
Let’s take it slower.
1. Everybody knows what a TM is. Computers are the real-world things which most resemble a TM. There is a lot of stuff added, like a screen, a keyboard, a designer box, antennas and wires, but essentially a computer has a memory, which is like a tape full of bits (0 and 1), and a processor, which is like a read/write head with an internal state. When the head reads a bit from the tape, the internal state changes, the head writes a bit to the tape and then moves, according to some internal rules.
2. These rules define the TM. Any rules go, but among them there are rules which make the TM universal. That means there are well chosen rules such that if you first put on the tape a string of bits, which is like the OS of the computer, and if you place the head at the beginning of this string, then the TM (which obeys those clever rules) loads the OS and becomes the TM of your choice.
3. From here it becomes clear that these rules are very important. Everything else is built on them. There are many examples of rules for universal TMs; the more or less precise estimate is that, for a tape written with bits (0 and 1), the known universal TMs need about 20 rules.
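To make points 1-3 concrete, here is a minimal sketch of such a machine in Python. Everything here is my own illustration, not part of the post's formalism: the names `run_tm`, `scan`, `done` and the toy rule table are invented. The tape is a dictionary from positions to bits, and the rule table maps (state, bit read) to (bit to write, move, new state).

```python
def run_tm(rules, tape, state, pos=0, blank=0):
    # rules maps (state, bit read) -> (bit to write, move, new state);
    # the machine halts as soon as no rule applies to the current pair
    while (state, tape.get(pos, blank)) in rules:
        write, move, state = rules[(state, tape.get(pos, blank))]
        tape[pos] = write
        pos += {"L": -1, "R": 1}[move]
    return tape

# a toy machine: scan right over a block of 1s, then append one more 1
rules = {("scan", 1): (1, "R", "scan"),
         ("scan", 0): (1, "R", "done")}
out = run_tm(rules, {0: 1, 1: 1, 2: 1}, "scan")
print(sorted(out.items()))   # four cells now hold a 1
```

The machine halts because the state "done" has no rules; the whole behaviour lives in that small finite table, which is the sense in which "these rules define the TM".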
4. A little earlier than the TM, lambda calculus was invented, by Alonzo Church. Initially lambda calculus was a monumental formalism, but after the invention of the TM a variant of it, called untyped lambda beta calculus, was proved able to compute the same things a TM can compute.
The untyped lambda beta calculus is famous among CS nerds and much less known by others (synthetic biologists, I’m looking at you) who think that the TM is the natural thing to try to emulate with biology and chemistry, because it is obviously something everybody understands, unlike lambda calculus, which goes into higher and higher abstractions.
There are exceptions, the most famous in my opinion is the Algorithmic Chemistry (or Alchemy) of Fontana and Buss. They say that lambda calculus is chemistry and that one operation (called application) of LC is like a chemical reaction and the other operation (called abstraction) is like a reactive site. (Then they change their mind, trying to fit types into the story, speaking about chemicals as functions, then they use pi calculus, all very interesting but outside of the local goal of this post. Will come back later to that.)
Such exceptions aside, the general idea is: TM easy, LC hard. That is false, and I shall prove it. The side bonus is that I can teleport Turing machines.
5. If we want to compare TM with LC then we have to put them on the same footing, right? This same footing is to take the rules of TM and the reductions of LC as rewrites.
But, from the posts about chemlambda, rewrites are chemical reactions!
So let’s put all in the chemlambda frame.


  • LC is actually much simpler than TM, because it uses only one rewrite which is specific to it (the BETA rewrite) instead of the roughly 20 rules of the TM
  • LC and TM are both compatible with the chemical approach a la chemlambda, in the sense that chemlambda can be enhanced by the addition of “bits” (red and green 2-valent nodes), head moves (the Arrow element) and TM internal states (other 2-valent nodes), and by some “propagation” rewrites, such that chemlambda can now do both TM and LC at the same time!

In the animation you see, at the beginning:
– a tree made of green fanout (FO) nodes (with yellow FRIN and magenta FROUT leaves), and
– a small ring of green (2-valent, bit 0) and pale green (a state of a TM) nodes. That small ring is a TM tape which is “abstracted”, i.e. connected to a lambda abstraction node, which itself is “applied” (by an application A node) to the fanout tree.

What happens? Eventually 3 TMs are produced (you see 3 tapes), which function independently.

They are actually in different states, because the algorithm which does all this is random (it does a rewrite only if a coin flip happens to land the right way).

The algorithm is random (simulation of asynchronous behaviour) and all rewrites are local (simulation of decentralized).

This is only a simulation, but it shows that the thing is perfectly possible. The real step would be to turn it into a protocol.


Nothing vague in the “no semantics” point of view

I’m a supporter of “no semantics” and I’ll try to convince you that there is nothing vague in it.

Take any formalism. To any term built in this formalism there is an associated syntactic tree. Now, look at the syntactic tree and forget about the formalism. Because it is a tree, no matter how you choose to decorate its leaves, you can progress from the leaves to the root by decorating each edge. At each node of the tree you follow a decoration rule which says: take the decorations of the input edges and use them to decorate the output edge. If you suppose that the formalism uses operations of bounded arity, then you can say the following: strictly by following decoration rules which are local (you need to know at most N edge decorations in order to decorate another edge) you can decorate all the tree. All the graph! And the meaning of the graph has something to do with this decoration. Actually the formalism turns out to be not about graphs (trees), but about the static decorations which appear at the root of the syntactic tree.
But, you see, these static decorations are global effects of local rules of decoration. Here enters the semantic police. Thou shall accept only trees whose roots accept decorations from a given language. Hard problems ensue, which are heavily loaded with semantics.
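Here is a toy Python sketch of what such a local decoration rule looks like (my own example, not tied to any particular formalism): an arithmetic syntactic tree, decorated from the leaves to the root, one node at a time.

```python
# local decoration rules at binary nodes; a leaf is its own decoration
OPS = {"+": lambda x, y: x + y, "*": lambda x, y: x * y}

def decorate(tree):
    """tree is either a leaf decoration (a number) or (op, left, right)."""
    if not isinstance(tree, tuple):
        return tree                                      # leaves keep their chosen decoration
    op, left, right = tree
    return OPS[op](decorate(left), decorate(right))      # local rule at this node

# the syntactic tree of (2 + 3) * 4; the decoration of the root is its "meaning"
print(decorate(("*", ("+", 2, 3), 4)))                   # -> 20
```

Each call of the local rule sees only the decorations of the two input edges; the number at the root, the static global decoration, is what the semantic police cares about.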
Now, let’s pass from trees to other graphs.
The same phenomenon (a static global decoration emerging from local decoration rules) occurs for any DAG (directed acyclic graph). It is telling that people LOVE DAGs, so much so that they go to the extreme of excluding other graphs from their thinking. These are the ones who put everything in a functional frame.
Nothing wrong with this!
Decorated graphs have a long tradition in mathematics; think for example of knot theory.
In knot theory the knot diagram is a graph (with 4-valent nodes) which surely is not acyclic! However, one of the fundamental objects associated to a knot is the algebraic object called a “quandle”, which is generated from the edges of the graph, with certain relations coming from the nodes. It is of course a very hard, fully semantically loaded problem to try to identify the knot from the associated quandle.
The difference from the syntactic trees is that the graph does not admit a static global decoration, generically. That is why the associated algebraic object, the quandle, is generated by (and not equal to) the set of edges.
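As a hedged aside, to make “quandle” concrete: the dihedral quandle on Z_n, with the operation x * y = 2y − x (mod n), is the algebra behind the classical n-colorings of a knot diagram, where edges get decorated by elements of Z_n and each node imposes one relation of this shape. A quick Python check of the three quandle axioms for n = 3:

```python
# dihedral quandle on Z_n:  x * y = 2y - x  (mod n)
n = 3
op = lambda x, y: (2 * y - x) % n
Z = range(n)

assert all(op(x, x) == x for x in Z)                      # idempotence
assert all(op(op(x, y), y) == x for x in Z for y in Z)    # acting twice by y undoes itself
assert all(op(op(x, y), z) == op(op(x, z), op(y, z))
           for x in Z for y in Z for z in Z)              # self-distributivity
print("the dihedral quandle on Z_3 satisfies the quandle axioms")
```

The point of the surrounding paragraph survives in this small example: each relation is local (one per node), but the object they generate is global.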

There are beautiful problems related to global objects generated by local rules. They are also difficult, because of the global aspect. It is perhaps as difficult to find an algorithm which builds an isomorphism between two graphs which have the same associated family of decorations as it is to find a decentralized algorithm for graph reduction of a distributed syntactic tree.

But these kinds of problems do not cover all the interesting problems.

What if this global semantic point of view makes things harder than they really are?

Just suppose you are a genius who found such an algorithm, by amazing, mind bending mathematical insights.

Your brilliant algorithm, because it is an algorithm, can be executed by a Turing Machine.

But Turing machines are purely local. The head of the machine has only local access to the tape at any given moment (forget about indirection, I’ll come back to it in a moment). The number of states of the machine is finite and the number of rules is finite.

This means that all the brilliant work served only to edit the global out of the problem!

If you are not content with TM, because of indirection, then look no further than chemlambda (if you wish, combined with TM, as in
http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html , if you love TM), which is definitely local and Turing universal. It works by the brilliant algorithm: do all the rewrites which you can do, never mind the global meaning of those.

Oh, wait, what about a living cell: does it have a way to manage the semantics of the correct global chemical reaction networks which ARE the cell?

What about a brain, made of many neural cells, glia cells and whatnot? By the homunculus fallacy, it can’t have static, external, globally selected functions and terms (aka semantics).

On the other side, of course the researcher who studies the cell or the brain, or the mathematician who finds the brilliant algorithm, all use heavy semantic machinery.


Not that the cell or the brain need the story in order for them to live.

In the animated gif there is a chemlambda molecule called the 28 quine, which satisfies the definition of life in the sense that it randomly replenishes its atoms while approximately keeping its global shape (thus it has a metabolism). It does this under the algorithm: do all the rewrites you can, but do a rewrite only if a random coin flip accepts it.

Most of the atoms of the molecule are related to operations (application and abstraction) from lambda calculus.

I modified a script a bit (sorry, this one is not in the repo) so that whenever possible the edges of the graph which MAY be part of a syntactic tree of a lambda term turn GOLD while the others stay dark grey.

They mean nothing, there’s no semantics, because for once the golden graphs are not DAGs, and because the computation consists of graph rewrites which don’t preserve the “correct” decorations from before the rewrite.

There’s no semantics, but there are still some interesting questions to explore, the main one being: how does life work?



Louis Kauffman’s reply to this:

Dear Marius,
There is no such thing as no-semantics. Every system that YOU deal with is described by you and observed by you with some language that you use. At the very least the system is interpreted in terms of its own actions and this is semantics. But your point is well-taken about not using more semantic overlay than is needed for any given situation. And certainly there are systems like our biology that do not use the higher level descriptions that we have managed to observe. In doing mathematics it is often the case that one must find the least semantics and just the right syntax to explore a given problem. Then work freely and see what comes.
Then describe what happened and as a result see more. The description reenters the syntactic space and becomes ‘uninterpreted’ by which I mean  open to other interactions and interpretations. It is very important! One cannot work at just one level. You will notice that I am arguing both for and against your position!
Lou Kauffman
My reply:
Dear Louis,
Thanks! Looks that we agree in some respects: “And certainly there are systems like our biology that do not use the higher level descriptions that we have managed to observe.” Not in others; this is the base of any interesting dialogue.
Then I made another post
Related to the earlier “no semantics” g+ post [*], here is a passage from Rodney Brooks’ “Intelligence without representation”:

“It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors.  Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.  […]
… we are not claiming that chaos is a necessary ingredient of intelligent behavior.  Indeed, we advocate careful engineering of all the interactions within the system.  […]
We do claim however, that there need be no  explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.
I claim there is more than this, however. Even at a local  level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological  connections between processes somehow encode.
However we are not happy with calling such things a representation. They differ from standard  representations in too many ways.  There are no variables (e.g. see [1] for a more  thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon  [14] noted that the complexity of behavior of a  system was not necessarily inherent in the complexity of the creature, but Perhaps in the complexity of the environment. He made this  analysis in his description of an Ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of even human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.”

This also brings to mind a quote from the end of the Vehicle 3 section of V. Braitenberg’s book Vehicles: Experiments in Synthetic Psychology:

“But, you will say, this is ridiculous: knowledge implies a flow of information from the environment into a living being or at least into something like a living being. There was no such transmission of information here. We were just playing with sensors, motors and connections: the properties that happened to emerge may look like knowledge but really are not. We should be careful with such words.”

Louis Kauffman’s reply to this post:
Dear Marius,
It is interesting that some people (yourself it would seem) get comfort from the thought that there is no central pattern.
I think that we might ask Cookie and Parabel about this.
Cookie and Parabel are sentient text strings, always coming in and out of nothing at all.
Well guys, what do you think about the statement of Minsky?

Cookie. Well this is an interesting text string. It asserts that there is no central locus of control. I can assert the same thing! In fact I have just done so in these strings of mine.
The strings themselves are just adjacencies of little possible distinctions, and only “add up” under the work of an observer.
Parabel. But Cookie, who or what is this observer?
Cookie. Oh you taught me all about that Parabel. The observer is imaginary, just a reference for our text strings so that things work out grammatically. The observer is a fill-in.
We make all these otherwise empty references.
Parabel. I am not satisfied with that. Are you saying that all this texture of strings of text is occurring without any observation? No interpreter, no observer?
Cookie. Just us Parabel and we are not observers, we are text strings. We are just concatenations of little distinctions falling into possible patterns that could be interpreted by an observer if there were such an entity as an observer?
Parabel. Are you saying that we observe ourselves without there being an observer? Are you saying that there is observation without observation?
Cookie. Sure. We are just these strings. Any notion that we can actually read or observe is just a literary fantasy.
Parabel. You mean that while there may be an illusion of a ‘reader of this page’ it can be seen that the ‘reader’ is just more text string, more construction from nothing?
Cookie. Exactly. The reader is an illusion and we are illusory as well.
Parabel. I am not!
Cookie. Precisely, you are not!
Parabel. This goes too far. I think that Minsky is saying that observers can observe, yes. But they do not have control.
Cookie. Observers seem to have a little control. They can look here or here or here …
Parabel. Yes, but no ultimate control. An observer is just a kind of reference that points to its own processes. This sentence observes itself.
Cookie. So you say that observation is just self-reference occurring in the text strings?
Parabel. That is all it amounts to. Of course the illusion is generated by a peculiar distinction that occurs where part of the text string is divided away and named the “observer” and “it” seems to be ‘reading’ the other part of the text. The part that reads often has a complex description that makes it ‘look’ like it is not just another text string.
Cookie. Even text strings is just a way of putting it. We are expressions in imaginary distinctions emanated from nothing at all and returning to nothing at all. We are what distinctions would be if there could be distinctions.
Parabel. Well that says very little.
Cookie. Actually there is very little to say.
Parabel. I don’t get this ‘local chaos’ stuff. Minsky is just talking about the inchoate realm before distinctions are drawn.
Cookie. lakfdjl
Parabel. Are you becoming inchoate?
Cookie. &Y*
Parabel. Y


My reply:
Dear Louis, I see that the Minsky reference in the beginning of the quote triggered a reaction. But recall that Minsky appears in a quote by Brooks, which itself appears in a post by Marius, which is a follow up of an older post. That’s where my interest is. This post only gathers evidence that what I call “no semantics” is an idea which is not new, essentially.
So let me go back to the main idea, which is that there are positive advances which can be made under the constraint to never use global notions, semantics being one of them.
As for the story about Cookie and Parabel, why is it framed in a text-strings universe and why does it discuss a “central locus of control”? I can easily imagine Cookie and Parabel having a discussion before writing was invented, say in a cave which much later will be discovered by modern humans at Lascaux.
I don’t believe that there is a central locus of control. I do believe that semantics is a means to tell the story, any story, as if there were a central locus of control. There is no “central” and there is very little “control”.
This is not a negative stance; it is a call for understanding life phenomena from points of view which are not ideologically loaded by “control” and “central”. I am amazed by the variety, beauty and vastness of life, and I feel limited by the semantics point of view. I see in a string of text thousands of years of cultural conventions taken for granted; I can’t forget that a string of text becomes so to me only after a massive processing which “semantics” people take for granted as well, or that during this discussion most of me is doing far less trivial stuff, like collaborating and fighting with billions of other beings in my gut, breathing, seeing, hearing, moving my fingers. I don’t forget that the string of text is recreated by my brain 5 times per second.
And what is an “illusion”?
A third post
In the last post https://plus.google.com/+MariusBuliga/posts/K28auYf69iy I gave two quotes, one from Brooks’ “Intelligence without representation” (where he quotes Minsky en passant, but which contains much more than this brief Minsky quote) and the other from Braitenberg’s “Vehicles: Experiments in Synthetic Psychology”.
Here is another quote, from a reputed cognitive science specialist, who convinced me of the need for a no-semantics point of view with his article “Brain a geometry engine”.
The following quote is from Jan Koenderink’s “Visual awareness”:

“What does it mean to be “visually aware”? One thing, due to Franz Brentano (1838-1917), is that all awareness is awareness of something. […]
The mainstream account of what happens in such a generic case is this: the scene in front of you really exists (as a physical object) even in the absence of awareness. Moreover, it causes your awareness. In this (currently dominant) view the awareness is a visual representation of the scene in front of you. To the degree that this representation happens to be isomorphic with the scene in front of you the awareness is veridical. The goal of visual awareness is to present you with veridical representations. Biological evolution optimizes veridicality, because veridicality implies fitness.  Human visual awareness is generally close to veridical. Animals (perhaps with exception of the higher primates) do not approach this level, as shown by ethological studies.
JUST FOR THE RECORD these silly and incoherent notions are not something I ascribe to!
But it neatly sums up the mainstream view of the matter as I read it.
The mainstream account is incoherent, and may actually be regarded as unscientific. Notice that it implies an externalist and objectivist God’s Eye view (the scene really exists and physics tells how), that it evidently misinterprets evolution (for fitness does not imply veridicality at all), and that it is embarrassing in its anthropocentricity. All this should appear to you as in the worst of taste if you call yourself a scientist.”  [p. 2-3]

[Remark: all these quotes appear in previous posts at chorasimilarity]


A comparison of two models of computation: chemlambda and Turing machines

The purpose is to understand clearly what this story is about. The most simple stuff, OK? So that you can feel it in familiar situations.

Chemlambda is a collection of rules about rewrites done on pieces of files in a certain format. Without an algorithm which tells which rewrite to use, where and when, chemlambda does nothing.

In the sophisticated version of the Distributed GLC proposal, this algorithmic part uses the Actor Model idea. Too complicated! Let’s go simpler.

The simplest algorithm for using the collection of rewrites from chemlambda is the following:
  1. take a file (in the format called “mol”, see later)
  2. look for all patterns in the file which can be used for rewrites
  3. if there are different patterns which overlap, then pick a side (by using an ordering of graph rewrites, like the precedence rules in arithmetic)
  4. apply all the rewrites at once
  5. repeat (either until there is no rewrite possible, or a given number of times, or forever)
 To spice things up just a bit, consider the next simple algorithm, which is like the one described, only that we add at step 2:
  •  for every identified pattern flip a coin to decide to keep it or ignore it further in the algorithm
The reason is that randomness is the simplest way to say: who knows if I can do this rewrite when I want? Maybe I have in my computer only a part of the file, or maybe I have to know that a friend has one half of the pattern and I have the other, so I have to talk with him first, then agree to make the rewrite together. Who knows? Flip a coin then.
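The shape of this algorithm can be sketched on a toy rewrite system. This is my own illustration in Python, not chemlambda: a single rewrite, 10 -> 01, on strings of bits, with overlaps resolved by a left-to-right scan (a simple stand-in for the ordering of rewrites in step 3).

```python
import random

def rewrite_round(s, rng):
    """One round of the random algorithm above: find the patterns left to
    right (which also resolves overlaps), flip a coin for each one, and
    apply all the kept rewrites at once."""
    out, i = [], 0
    while i < len(s):
        if s[i:i + 2] == "10" and rng.random() < 0.5:   # coin flip per pattern
            out.append("01")                            # the only rewrite: 10 -> 01
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

rng = random.Random(0)        # seeded, so the run is reproducible
s = "11010010"
while "10" in s:              # step 5: repeat until no rewrite is possible
    s = rewrite_round(s, rng)
print(s)                      # -> 00001111
```

Whatever the coin flips do, once no pattern is left the string is sorted; the randomness only changes how long it takes, which is the flavor of "asynchronous" intended here.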

Now, proven facts.

Chemlambda with the stupid deterministic algorithm is Turing universal. Which means that implicitly this is a model of computation. Everything is prescribed from top to bottom. It is on a par with a Turing machine, or with a RAM model.

Chemlambda with the stupid random algorithm seems to be Turing universal as well, but I don’t yet have a proof of this. There is a reason why it is as powerful as the stupid deterministic one, but I won’t go there.

So the right image to have is that chemlambda with the described algorithm can do anything any computer can.

The first question is: how? For example, how does chemlambda compare with a Turing machine? If the comparison is at this basic level then it means it is incomprehensible, because we humans can’t make sense of a scroll of bytecode, unless we are highly trained in this very specialised task.

All computers do the same thing: they crunch machine code. No matter which high-level language you use to write a program, it is then compiled and eventually there is machine code which is executed, and that is the level we speak of.

It does not matter which language you use; eventually all is machine code. There is a huge architectural tower and we are at the top of it, but in the basement all looks the same. The tower is here to make the superb machine easy for us to use. It is not otherwise needed; it is only for our comfort.

This is very puzzling when we look at chemlambda, because it is claimed that chemlambda has something to do with lambda calculus, and lambda calculus is the prototype of a functional programming language. So it appears that chemlambda should be associated with higher meaning and clever thinking, and abstraction of the abstraction of the abstraction.

No, from the point of view of the programmer.

Yes, from the point of view of the machine.

In order to compare chemlambda with a TM we have to put them in the same terms. You can easily put a TM in terms of a rewrite system, such that it works with the same stupid deterministic algorithm: http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

It is not yet put there, but the conclusion is obvious: chemlambda can do lambda calculus with one rewrite, while a Universal Turing Machine needs about 20 rewrites to do what a TM does.

Wait, what about distributivity, propagation, the fanin, all the other rewrites?
They are common machinery; they just form a mechanism for signal transduction and duplication!
Chemlambda is much simpler than TM.

So you can use chemlambda directly, at this metal level, to perform lambda calculus. It is explained here

And I highly recommend trying to play with it by following the instructions.

You need a linux system, or any system where you have sh and awk.


2. unzip it and go to the directory “dynamic”
3. open a shell and write:  bash moving_random_metabo.sh
4. you will get a prompt and a list of files with the extension .mol; write the name of one of them, in the form file.mol
5. you get file.html. Open it with a browser with js enabled. For reasons I don’t understand, it works much better in safari, chromium and chrome than in firefox.

When you look at the result of the computation you see an animation, which is the equivalent of seeing a TM head running here and there on a tape. It does not make much sense at first, but you can convince yourself that it works and get a feeling for how it does it.

Once you get this feeling I will be very glad to discuss more!

Recall that all this is related to the most stupid algorithm. But I believe it helps a lot in understanding how to build on it.

Busy beaver’s random church

The demo linked here would surely look impossible.

One computer, WHILE IT WORKS, is multiplied into 3 computers which continue to work, each of them. This feat is done without any higher control, and no conflicts ever appear. All computers work asynchronously (mimicked here by randomness). Moreover, they eventually come to read and write on the same tape.

There are no calls, nothing you have seen before. Everything is local, no matter how you slice it into separate parts.


Turing Machines, chemlambda style (I)

UPDATE: First animation and a more detailed version here. There will be many additions, as well as programs available from the gh-pages of the repo.

Once again: why do chemists try to reproduce silicon computers? Because that’s the movie.

    At first sight they look very different. Therefore, if we want to compare them, we have to reformulate them in similar ways. Rewrite systems are appropriate here.
    The Turing Machine (TM) is a model of computation which is well known as a formalisation of what a computer is. It is a machine which has some internal states (from a set S) and a head which reads/writes symbols (from a set A) on a tape. The tape is seen as an infinite word made of letters from A. The set A has a special letter (call it “b”, for blank) and the infinite words which describe tapes have to be such that all but a finite number of letters of the word are equal to “b”. Imagine a tape, infinite in both directions, which has symbols written on it, such that “b” represents an empty space. Only a finite part of the tape is filled with letters from A other than the blank letter.
    The action of the machine depends on the internal state and on the symbol read by the head. It is therefore a function of the internal state of the machine (an element of S) and the letter read from the tape (an element of A). It outputs a letter from the alphabet A (which is written in the cell where the read letter was), changes the internal state, and moves the head one step along the tape, to the left (L) or to the right (R), or does not move it at all (N).
    The TM can be seen as a rewrite system.
    For example, one could see a TM as follows. (Pedantically this is a multi-headed TM without internal states; the only interest of this distinction is that it raises the question whether there is really any meaning in discerning internal states from tape symbols.) We start from a set (or type) of internal states S. Such states are denoted by A, B, C (thus exhibiting their type by the font used). There are 3 special symbols: < is the symbol of the beginning of a list (i.e. word), > is the symbol of the end of a list (word), and M is the special symbol for the head move (or of the type associated to head moves). There is an alphabet A of external states (i.e. tape symbols), with b (the blank) in A.
    A tape is then a finite word (list) of one of the forms < w A w’ > , < w M A w’ > , < w A M w’ >, where A is an internal state and w and w’ are finite words written in the alphabet A, possibly empty.
    A rewrite replaces a left pattern (LT) by a right pattern (RT); rewrites are denoted LT – – > RT . Here LT and RT are sub-words of the tape word. It is supposed that all rewrites are context independent, i.e. LT is replaced by RT regardless of the place where LT appears in the tape word. The rewrite is called “local” if the lengths (i.e. numbers of letters) of LT and RT are bounded a priori.
    A TM is given as a list of Turing instructions, which have the form (current internal state, tape letter read, new internal state, tape letter written, move of the head). In the terms explained here, all this can be expressed via local rewrites.

  • Rewrites which introduce blanks at the extremities of the written tape:
    • < A   – – >   < b A
    • A >   – – >   A b >
  • Rewrites which describe how the head moves:
    • A M a   – – >   a A
    • a M A   – – >   A a
  • Turing instructions rewrites:
    • a A c   – – >   d B M c   , for the Turing instruction (A, a, B, d, R)
    • a A c   – – >   d M B c   , for the Turing instruction (A, a, B, d, L)
    • a A c   – – >   d B c   , for the Turing instruction (A, a, B, d, N)
    Together with the algorithm “at each step apply all the rewrites which are possible, else stop” we obtain the deterministic TM model of computation. For any initial tape word, the algorithm explains what the TM does to that tape. Other algorithms are of course possible. Before mentioning some very simple variants of the basic algorithm, let’s see when it works.
    If we start from a tape word as defined here, there is never a conflict of rewrites. That is, it never happens that the LTs of two different rewrites overlap. It might happen, though, if we formulate some rewrites a bit differently. For example, suppose that the Turing rewrites are modified to:
  • a A   – – >   d B M   , for the Turing instruction (A, a, B, d, R)
  • a A   – – >   d M B   , for the Turing instruction (A, a, B, d, L)
    Therefore the LT of the Turing rewrites is no longer of the form “a A c”, but of the form “a A”. Then it may come into conflict with the other rewrites, as in the cases:

  • a A M c where two overlapping rewrites are possible
    • Turing rewrite: a A M c   – – >   d M B M c , which will later produce two possibilities for the head moves rewrites, due to the string M B M
    • head moves rewrite: a A M c   – – >   a c A , which then produces a LT for a Turing rewrite for c A, instead of the previous Turing rewrite for a A
  • a A > where one may apply a Turing rewrite on a A, or a blank rewrite on A >
    The list is non-exhaustive. Let’s turn back to the initial formulation of the Turing rewrites and instead change the definition of a tape word. For example, suppose we allow multiple TM heads on the same tape; more precisely, suppose that we accept initial tape words of the form < w1 A w2 B w3 C … wN >. Then we shall surely encounter conflicts between head moves rewrites, for patterns such as a A M B c.
    The simplest solution for these conflicts is to introduce a priority of rewrites. For example we may impose that blank rewrites take precedence over head moves rewrites, which in turn take precedence over Turing rewrites. More such structure can be imposed (for example, some head moves rewrites may have precedence over others). Even new rewrites may be introduced, for example rewrites which allow multiple TMs on the same tape to switch places.
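The conflict described above can be exhibited directly. A small sketch, assuming tape words are Python lists of tokens: on the fragment a A M c, both the shortened Turing rewrite and a head-move rewrite match, at overlapping positions, and a priority order (here, rule list order) is what resolves the ambiguity.

```python
# the fragment "a A M c", with the shortened Turing LHS "a A"
tape = ["a", "A", "M", "c"]
rules = [
    (("a", "A"), ("d", "M", "B")),   # shortened Turing rewrite for (A, a, B, d, L)
    (("A", "M", "c"), ("c", "A")),   # head-move rewrite A M c -> c A
]

# collect every (rule index, position) where a LHS matches
matches = [(j, i)
           for j, (lhs, _) in enumerate(rules)
           for i in range(len(tape) - len(lhs) + 1)
           if tuple(tape[i:i + len(lhs)]) == lhs]
print(matches)  # [(0, 0), (1, 1)] -- two overlapping matches
```

Both rewrites claim the token A, so applying one destroys the LHS of the other; this is exactly why the priority of rewrites is needed once the LHSs may overlap.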
    Let’s see an example: the 2-symbol, 3-state busy beaver machine. Following the conventions from this work, the tape letters (i.e. the alphabet A) are “b” and “1”, and the internal states are A, B, C, HALT. (The state HALT may get special treatment, but this is not mandatory.) The rewrites are:

  • Rewrites which introduce blanks at the extremities of the written tape:
    • < X   – – >   < b X   for every internal state X
    • X >   – – >   X b >   for every internal state X
  • Rewrites which describe how the head moves:
    • X M a   – – >   a X   , for every internal state X and every tape letter a
    • a M X   – – >   X a   , for every internal state X and every tape letter a
  • Turing instructions rewrites:
    • b A c   – – >   1 B M c   , for every tape letter c
    • b B c   – – >   b C M c   , for every tape letter c
    • b C c   – – >   1 M C c   , for every tape letter c
    • 1 A c   – – >   1 HALT M c   , for every tape letter c
    • 1 B c   – – >   1 B M c   , for every tape letter c
    • 1 C c   – – >   1 M A c   , for every tape letter c
    We can enhance this by adding a priority of rewrites; for example, in the previous list, any rewrite has priority over the rewrites written below it. In this way we may relax the definition of the initial tape word and allow for multiple heads on the same tape, or for multiple tapes.
    Suppose we put the machine to work on an infinite tape with all symbols blank. This corresponds to the tape word < A >. Here are the steps of the computation:
  • < A >   – – >   < b A >
  • < b A >   – – >   < b A b >
  • < b A b >   – – >   < 1 B M b >
  • < 1 B M b >   – – >   < 1 b B >
  • < 1 b B >   – – >   < 1 b B b >
  • < 1 b B b >   – – >   < 1 b C M b >
  • < 1 b C M b >   – – >   < 1 b b C >
  • < 1 b b C >   – – >   < 1 b b C b >
  • < 1 b b C b >   – – >   < 1 b 1 M C b >
  • < 1 b 1 M C b >   – – >   < 1 b C 1 b >
  • < 1 b C 1 b >   – – >   < 1 1 M C 1 b >
  • < 1 1 M C 1 b >   – – >   < 1 C 1 1 b >
  • < 1 C 1 1 b >   – – >   < 1 M A 1 1 b >
  • < 1 M A 1 1 b >   – – >   < A 1 1 1 b >
  • < A 1 1 1 b >   – – >   < b A 1 1 1 b >
  • < b A 1 1 1 b >   – – >   < 1 B M 1 1 1 b >
  • < 1 B M 1 1 1 b >   – – >   < 1 1 B 1 1 b >
  • < 1 1 B 1 1 b >   – – >   < 1 1 B M 1 1 b >
  • < 1 1 B M 1 1 b >   – – >   < 1 1 1 B 1 b >
  • < 1 1 1 B 1 b >   – – >   < 1 1 1 B M 1 b >
  • < 1 1 1 B M 1 b >   – – >   < 1 1 1 1 B b >
  • < 1 1 1 1 B b >   – – >   < 1 1 1 1 B M b >
  • < 1 1 1 1 B M b >   – – >   < 1 1 1 1 b B >
  • < 1 1 1 1 b B >   – – >   < 1 1 1 1 b B b >
  • < 1 1 1 1 b B b >   – – >   < 1 1 1 1 b C M b >
  • < 1 1 1 1 b C M b >   – – >   < 1 1 1 1 b b C >
  • < 1 1 1 1 b b C >   – – >   < 1 1 1 1 b b C b >
  • < 1 1 1 1 b b C b >   – – >   < 1 1 1 1 b 1 M C b >
  • < 1 1 1 1 b 1 M C b >   – – >   < 1 1 1 1 b C 1 b >
  • < 1 1 1 1 b C 1 b >   – – >   < 1 1 1 1 1 M C 1 b >
  • < 1 1 1 1 1 M C 1 b >   – – >   < 1 1 1 1 C 1 1 b >
  • < 1 1 1 1 C 1 1 b >   – – >   < 1 1 1 1 M A 1 1 b >
  • < 1 1 1 1 M A 1 1 b >   – – >   < 1 1 1 A 1 1 1 b >
  • < 1 1 1 A 1 1 1 b >   – – >   < 1 1 1 HALT M 1 1 1 b >
  • < 1 1 1 HALT M 1 1 1 b >   – – >   < 1 1 1 1 HALT 1 1 b >
    At this stage there are no possible rewrites; in other words, the computation stops. Remark that the priority of rewrites imposed a path of rewrite applications. Also, at each step there was only one possible rewrite, even though the algorithm does not require this.
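The busy beaver rewrites and their priority can also be checked mechanically. A sketch, assuming a tape word is a Python list of tokens and the rule list order stands for the priority "blanks over head moves over Turing rewrites"; it reproduces the final tape word of the trace above.

```python
STATES = ["A", "B", "C", "HALT"]
LETTERS = ["b", "1"]

rules = []
# blank rewrites: introduce blanks at the extremities of the written tape
for X in STATES:
    rules.append((("<", X), ("<", "b", X)))
    rules.append(((X, ">"), (X, "b", ">")))
# head move rewrites
for X in STATES:
    for a in LETTERS:
        rules.append(((X, "M", a), (a, X)))
        rules.append(((a, "M", X), (X, a)))
# Turing instruction rewrites: (state, read) -> (write, new state, move)
program = {
    ("A", "b"): ("1", "B", "R"), ("B", "b"): ("b", "C", "R"),
    ("C", "b"): ("1", "C", "L"), ("A", "1"): ("1", "HALT", "R"),
    ("B", "1"): ("1", "B", "R"), ("C", "1"): ("1", "A", "L"),
}
for (X, a), (d, Y, move) in program.items():
    for c in LETTERS:
        if move == "R":
            rules.append(((a, X, c), (d, Y, "M", c)))
        else:  # move == "L"
            rules.append(((a, X, c), (d, "M", Y, c)))

def step(tape):
    """Apply the first applicable rewrite (rule list order = priority)."""
    for lhs, rhs in rules:
        n = len(lhs)
        for i in range(len(tape) - n + 1):
            if tuple(tape[i:i + n]) == lhs:
                return tape[:i] + list(rhs) + tape[i + n:]
    return None

tape = ["<", "A", ">"]
for _ in range(10000):
    nxt = step(tape)
    if nxt is None:
        break
    tape = nxt
print(" ".join(tape))  # < 1 1 1 1 HALT 1 1 b >
```

The machine halts with six 1s on the tape, in agreement with the step-by-step trace.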
    More possibilities appear if we see the tape words as graphs. In this case we pass from rewrites to graph rewrites. Here is a proposal for this.
    I shall use the same kind of notation as in chemlambda: the mol format. It goes like this, explained for the busy beaver TM example. We have 9 symbols, which can be seen as nodes in a graph:

  • < which is a node with one “out” port. Use the notation FRIN out
  • > which is a node with one “in” port. Use the notation FROUT in
    • b, 1, A, B, C, HALT, M, which are nodes with one “in” and one “out” port. Use the notation: name of the node, in, out
    The rule is to connect “in” ports with “out” ports, in order to obtain a tape word. Or a tape graph, with many busy beavers on it. (TO BE CONTINUED…)
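A small hypothetical sketch of this mol-style encoding: every node carries a list of port edge-labels, and an "in" port connects to the "out" port bearing the same label. The node and port names here are illustrative, following the conventions just listed.

```python
# the tape word < 1 A b > as a list of (node name, ports)
mol = [
    ("FRIN",  ["e1"]),        # the node "<", one "out" port
    ("1",     ["e1", "e2"]),  # tape letter: ports [in, out]
    ("A",     ["e2", "e3"]),  # internal state
    ("b",     ["e3", "e4"]),  # tape letter
    ("FROUT", ["e4"]),        # the node ">", one "in" port
]

def tape_from_mol(mol):
    """Recover the tape word by following out-edges, starting at FRIN."""
    start, by_in = None, {}
    for name, ports in mol:
        if name == "FRIN":
            start = ports[0]
        else:
            by_in[ports[0]] = (name, ports)
    word, edge = ["<"], start
    while True:
        name, ports = by_in[edge]
        if name == "FROUT":
            return word + [">"]
        word.append(name)
        edge = ports[1]          # follow the "out" port

print(" ".join(tape_from_mol(mol)))  # < 1 A b >
```

With this representation a tape graph is just a larger mol list, and several heads (busy beavers) can sit on the same graph.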

Ouroboros predecessor (IV): how the walker eats

Continues from Ouroboros predecessor (III): the walking machine. This time you have to imagine that the walker machine sits on a long enough train track.

The regularity of the train track is corrupted by a bit of food (appearing as an L node connected to a termination node); see the next (BIG) picture. It is at the right of the walker.

You can see (maybe by clicking on the image to make it bigger) that the walker ingests the food. The ingested part travels through the walker’s organism and is eventually expelled as a pair of L and A nodes.




Perhaps, by clever modifications of the walker (and some experiments with its food) one can make a Turing machine.

This would give a direct proof that chemlambda with the sequential strategy is universal as well. (Well, that’s only of academic interest, and to build trust, before going to the really nice part, i.e. distributed, decentralized, alife computations.)


Chemical concrete machine not algorithmic self-assembly of DNA, nor Turing machine

This post is partially a request for references, partially a place for possible discussions, in the comments, and partially a place for clarifications of the previous post Why build a chemical concrete machine, and how? .

I started to read Erik Winfree’s thesis Algorithmic self-assembly of DNA and, to my joy, I see that at least at the knowledge level of 1998, what I propose is different. Here is a short brief of what I got from Winfree’s thesis  (along with my excuses for not representing things correctly and for misattributions) :

  • Adleman and Lipton propose a model of DNA computing which uses exponential space, i.e. all the candidates for a solution of a combinatorial problem, each one represented by a strand of DNA, which are then filtered by a sequence of physical or chemical manipulations of test tubes, of one of the types: separate (a tube into two others, according to the value of the variable at position “i”), merge (two tubes into one), detect. Variants (not due to Adleman, Lipton, Karp) are to add an amplify operation (like a FAN-OUT with variables) which transforms a tube into two copies of it. Or (Boneh), instead of amplify, an append operation which adds a new variable with a given value. All these models have variables and are based on the mechanism of selecting the solution from an exponential sea of candidates.
  • Hagiya, single-strand DNA computation, using a mechanism called “whiplash PCR”, which I don’t understand yet, but which has the computational power of a GOTO program. Has variables.
  • Winfree, single-strand DNA computation, but in 2D, where he proposes a “materialization” of a block cellular automaton (BCA) which has Turing universality. Has variables, tries to make a Turing machine.
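The Adleman-Lipton-style test-tube operations from the first bullet can be sketched abstractly. A toy sketch, under the assumption that a tube is modeled as a Python list of bit-string candidates (the function names mirror the operations named above, not any real library):

```python
def separate(tube, i):
    """Split a tube in two by the value of the bit at position i."""
    return ([s for s in tube if s[i] == "1"],
            [s for s in tube if s[i] == "0"])

def merge(t1, t2):
    """Pour two tubes into one."""
    return t1 + t2

def detect(tube):
    """Is there at least one strand left in the tube?"""
    return len(tube) > 0

def amplify(tube):
    """FAN-OUT: produce two copies of the tube."""
    return list(tube), list(tube)

# select the candidates with bits 0 and 2 both set
tube = ["000", "101", "110", "011"]
ones, _ = separate(tube, 0)   # keep strands with bit 0 == 1
sel, _ = separate(ones, 2)    # then keep those with bit 2 == 1
print(sel, detect(sel))
```

The exponential-space character of the model is visible here: the initial tube must already contain every candidate, and computation only filters.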


In the post Why build a chemical concrete machine, and how? I mentioned Turing machines, but this is obviously wrong, as can be seen by looking at the previous post A chemical concrete machine for lambda calculus. I don’t want to have a “syringe with 10^9 nano geometrical turing machines”; no, that’s misleading. What I call a chemical concrete machine works with lambda calculus terms (among other, more geometrical graphs from graphic lambda calculus), which are reduced by chemical reactions (using for example the graphic beta move enzyme). That’s different.


At page 29 of Winfree’s thesis there’s a picture illustrating various reactions, and results of reactions, between DNA strands. I find the Holliday junction interesting (image taken from the wiki page)

because of its relation to crossings in knot diagrams. Recall that in the \lambda-TANGLE sector of the graphic lambda calculus, the graphic beta move appears as a SPLICE move.

Compare with these images from 3D crossings in graphic lambda calculus:





As this is an exploratory post (kind of a note to self, but open to anybody), take a look at this short course

[Updates will follow here]