In this post I want to explain the signal transduction process which replaces signal transmission in chemlambda.
Hopefully it will become clear with the help of some visual demos.
[Don’t believe what I write; proceed by the scientific method and check my experiments yourself. You can easily do this by using the links in the demos. They point to a github repo. Switch to the gh-pages branch, where you will find the scripts and the mol files needed to reproduce the experiments. You can of course make new experiments of your own! By looking in the scripts you will see how things work. It is much more rigorous and easier to check than the written-proof version. On the other hand it is of course less authority-friendly and demands a somewhat larger attention span, but with a small attention span nobody can understand anything, right? For more on this “publishing philosophy” see the post The Shortest Open Access and New forms of Publication Question.]
Chemlambda is interesting because it is good at doing two different things at once:
- it can compute as computers do (i.e. it is Turing universal)
- it can also simulate chemical-like, or even biological-like, phenomena.
This is great because there is no other system (except Nature) which can do both with such a small effort, at once (i.e. with the same tools).
Indeed, there are artificial life proposals, like for example swarm chemistry, which can simulate some simple life-like phenomena but which can’t compute something as sophisticated as the Ackermann function (my favorite catch demo for CS people).
There is the amazing Game of Life, which can do both, but: for a Turing-like computation one needs hundreds of thousands of nodes, on a fixed predefined grid, updated synchronously.
What enables chemlambda to do that?
In the development of chemlambda I followed some principles as thought constraints. These principles shaped, and will shape further, this project. They are:
- (locality) every piece of the formalism or implementation has to be local in space, time or in control terms.
- (no meaning) global meaning is just an illusion, which is moreover hard to maintain or enforce; Nature does not work by high-level meaning
- (topology does compute) signal transduction, not signal transmission.
While locality seems a natural and desirable feature to have in many formalisms, it is unsurprisingly difficult to achieve. The reason for this, in my opinion, is cultural: we are the children of the Industrial Revolution, so we are trained, and our minds are shaped, to think in terms of a global, god-like point of view, which gives us total powers over space, time and matter, and total control over the whole universe at once. This is visible in scientific explanations in particular, where, just because we want to explain a simple idea, we then have to build a formal notational frame around it, like external coordinates, names for the variables (coming with their complex bookkeeping formalism), and to appeal to reification. While all these ingredients are useful for the transmission of a scientific explanation, they are not, by themselves, needed for the understanding.

Example: suppose I’m explaining to you the plot of a film. At some point you get lost among the names of the characters and then I react like this: “Look, it’s simple: A wants to kill B and for that he hires the hitman C. But C has a love affair with B and together they trick A into believing that …” Now, “A”, “B” and “C” may be useful to transmit my understanding of the movie plot to you, but they are not needed for my understanding.

In the frame of mind of the Industrial Revolution, the world is a system which can be globally mapped into a hierarchical flow of processes, where everything falls neatly into this or that case of study. You have the system, you control the world, as if it were an industrial process.
The latest installment of this way of thinking (that I’m aware of) is category theory.
The downside of this is the loss of locality and the profusion of higher and higher levels of abstraction, which is the quick fix for the cracks in the globality illusion.
Maybe now the second principle becomes clear: no meaning. Many, if not all, natural things don’t have a global, fixed or indisputable meaning. Still, Nature works beautifully. The logical conclusion is that meaning is something we humans use and seek (sometimes), but not a necessary ingredient in Nature’s workings.
The concrete use of the “no meaning” principle consists in the elimination of any constraints which may introduce globality through the back door: there are no “correct” graphs in chemlambda, nor does there exist any local decoration of the edges of the chemlambda graphs which gives them a global “meaning”.
Elimination of names, elimination of evaluations.
The third principle is called “topology does compute” as an allusion to the NTC vs TC discussion. The idea is very simple: instead of thinking in terms of wires which transmit signals, which are then processed by gates, think about signal transduction as an emergent phenomenon from the locality of the graph rewrites.
Signal transduction is a notion from biology: a molecule binds to a receptor (molecule), which triggers a chain, a cascade of other chemical reactions. Each chemical reaction is of course local, but the emergent phenomenon is the movement of a “signal” (as seen by our minds, but nonexistent as a well-defined entity in reality). We identify the “signal”, but things happen without needing the “signal” entity as a prerequisite.
Chemlambda works like that.
In order to explain this I shall use the “walker” example.
It is instructive to see how the computational and the biological-like features of chemlambda led to the discovery of the walker.
The initial goal was to see how various lambda calculus terms work in chemlambda. I knew that there is an algorithm which associates to any lambda term a chemlambda molecule, so just by picking interesting lambda terms I could then see how they reduce in chemlambda, exclusively by local graph rewrites.
One of these terms is the predecessor, see Arithmetic in lambda calculus. Numbers appear (in chemlambda) as ladders of pairs of nodes, with the beginning and the end ladder connected by abstraction nodes. The whole topology is one of a circular ladder, roughly.
One can also translate the predecessor lambda term and apply it to the number. In lambda calculus, the predecessor applied to the number N gives N-1 (if N>0; the predecessor of 0 in lambda calculus is 0).
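The lambda calculus side of this can be made concrete. Below is a small Python sketch of Church numerals and the standard predecessor term — this is the lambda calculus the translation starts from, not the chemlambda encoding itself, and the helper names `church` and `unchurch` are mine:

```python
# Church numerals: the number N is a function applying f to x exactly N times.
def church(n):
    """Build the Church numeral for the Python int n."""
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def unchurch(c):
    """Read a Church numeral back as a Python int."""
    return c(lambda k: k + 1)(0)

# The classic predecessor term:
#   PRED = \n.\f.\x. n (\g.\h. h (g f)) (\u.x) (\u.u)
pred = lambda n: lambda f: lambda x: \
    n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

print(unchurch(pred(church(5))))  # 4
print(unchurch(pred(church(0))))  # 0 -- the predecessor of 0 is 0
```

Reducing the chemlambda molecule obtained from such a term by local rewrites computes the same result, which is what the first demo shows.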
In chemlambda the predecessor is another molecule, which looks like a small bag. Applying the predecessor to a number translates, in chemlambda, into adding a supplementary A (application) node and connecting some of its ports with the circular ladder (the number) and with the bag (the predecessor).
The first few reductions are trivial and they transform the molecule I described into a circular one, where on top of the ladder there is a molecule and at the end of the ladder there is another, smaller one.
All in all it looks like a circular train track with something on the tracks.
Now it gets interesting: the reduction process (in chemlambda) looks like there is a “creature” which travels along the train tracks.
This is the walker. You can see it in this first demo. Or you may look at this video
(however I do suggest the demo, the live version is far more impressive).
It is a beautiful example of signal transduction.
In the deterministic chemlambda reduction algorithm (all possible graph rewrites are applied) the walker keeps its shape, i.e. periodically we find the same graph (the walker graph) in different places on the train track (the number).
The random reduction algorithm breaks this regularity (and the synchronicity as well): in this algorithm a coin is flipped before the application of each move, and the move is applied or not with 50% probability.
That is what you see in the demo: the walker looks like a kind of wave which propagates along the train tracks, until it hits the end of the track (i.e. the small molecule I mentioned previously) and then it is destroyed progressively. It leaves behind a train track with one pair of nodes fewer than before the reduction.
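The difference between the two algorithms is purely one of scheduling. The snippet below is a toy sketch, not the actual chemlambda scripts: moves are just callables acting on an opaque state, and only the way they are scheduled differs:

```python
import random

def deterministic_step(state, find_moves, apply_move):
    """Deterministic algorithm: every applicable move is applied."""
    for move in find_moves(state):
        state = apply_move(state, move)
    return state

def random_step(state, find_moves, apply_move, rng=random):
    """Random algorithm: a coin is flipped for each applicable move,
    and the move is applied with probability 1/2."""
    for move in find_moves(state):
        if rng.random() < 0.5:
            state = apply_move(state, move)
    return state
```

In the real thing the state is the molecule and `find_moves` is the local pattern matching for the graph rewrites; the point is that the coin flip is the only change, and it alone desynchronizes the walker into the wave-like propagation seen in the demo.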
So, inside the reduction mechanism of a lambda term (pure computation side) there is a self-maintaining, propagating entity, the walker, which travels in a biological fashion through the representation of a number!
This led me to the notion of a chemlambda quine in a rather obvious way:
- let’s eliminate the beginning and the end of the ladder and replace them by a circular ladder with no start or end parts; then the walker would go round and round, endlessly; this is the “ouroboros”, an older explanation,
- remark that in the deterministic reduction algorithm, after each reduction step the “ouroboros” is unchanged as a graph,
- go to the extreme and use an ouroboros on a single train track pair; this is the 28-quine, see it in the second demo.
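To get a feel for the quine property, here is a toy illustration (emphatically not real chemlambda rewrites): a walker marker on a circular track, moved by a purely local swap. After each step the configuration is the same up to rotation, and after a full lap it repeats exactly:

```python
def step(track):
    """One 'reduction step': a local rewrite at the walker's position,
    swapping the walker with the next cell on the ring."""
    n = len(track)
    i = track.index("W")  # locate the walker
    out = list(track)
    out[i], out[(i + 1) % n] = track[(i + 1) % n], track[i]
    return out

ring = ["W", ".", ".", ".", "."]
for _ in range(5):
    ring = step(ring)
# after one full lap (5 steps) the configuration is exactly the initial one
```

The 28-quine is the analogous phenomenon for actual chemlambda rewrites: the graph after a deterministic reduction step is the same graph.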
Let’s study the signal transduction aspect further. What can happen if two walkers interfere?
The experiment for this can be seen in the third demo.
It works like this. Take two predecessor reduction graphs, i.e. two copies of the graph from the first demo (each has 20 pairs of nodes in its ladder).
Now, cross the ladders. How?
By a Holliday junction, in biochemical terms. It looks like this
Mathematically this is a product of two ladder graphs, where two links are cut, crossed and glued back.
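The cut-cross-glue operation can be sketched on edge lists. Everything below (the rail representation and the names `ladder` and `cross`) is an illustrative assumption, not the actual mol-file encoding:

```python
def ladder(name, n):
    """One rail of a ladder, as a path of n links: (name0, name1), ..."""
    return [(f"{name}{i}", f"{name}{i+1}") for i in range(n)]

def cross(edges1, i, edges2, j):
    """Cut link i of the first rail and link j of the second,
    then glue the four loose ends crosswise (the Holliday-junction-like move)."""
    a, b = edges1[i]
    c, d = edges2[j]
    new1 = edges1[:i] + [(a, d)] + edges2[j + 1:]
    new2 = edges2[:j] + [(c, b)] + edges1[i + 1:]
    return new1, new2

r1, r2 = cross(ladder("x", 4), 1, ladder("y", 4), 2)
# r1 now starts on the x track and continues on the y track, and vice versa
```

No links are created or destroyed, only reconnected, so anything travelling along the first track is routed onto the second one past the junction.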
The amazing thing is that the two walkers go their way: they mix and then they continue, each on the other’s track, until the end points.
They behave as if they were waves passing through each other.
UPDATE: This is a screen recording of the experiment