The shuffle trick, or how to eliminate commutativity and associativity, for the benefit of better self-multiplication

On a set with an operation which has a neutral element, there is an algebraic axiom which is equivalent to associativity and commutativity of the operation, taken together. I call it the shuffle (does it have a name?).
It is this one:

 

(ab)(cd) = (ac)(bd)

 

so the middle terms b and c swap their positions.
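
As a quick check, here is how the two usual axioms fall out of the shuffle once a neutral element e is available (a short derivation in the same notation; the converse direction, deducing the shuffle from associativity plus commutativity, is immediate):

    commutativity (take a = e, d = e):  bc = (eb)(ce) = (ec)(be) = cb

    associativity (take b = e):  a(cd) = (ae)(cd) = (ac)(ed) = (ac)d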

Of course, by duality, that means that there is an axiom which is equivalent to both co-associativity and co-commutativity, in a coalgebra with a co-unit.

This is a justification for the existence of TWO fanout nodes in chemlambda, called FO and FOE. The graphical representation of the shuffle, now called the shuffle trick, is simply a combination of two graph rewrites, as shown in the demo on the shuffle trick.
On this blog the shuffle trick is mentioned several times, and the “shuffle move” is proposed as various compositions of moves from graphic lambda calculus.
But the CO-COMM and CO-ASSOC moves from graphic lambda calculus are no longer needed, being replaced by the FO-FOE and FI-FOE (aka fan-in) moves. This is good because the graph rewrites which express associativity and commutativity are too symmetric: they have no natural direction of application, so any choice of a preferred direction would be artificial.
The shuffle trick is essential for self-multiplication, where there has to be a way to multiply trees made of FO nodes in such a way that the leaves of the copies of the tree are not entangled, see this demo.
________________________________________________

What chemlambda can do for you

This post follows after What can I do with chemlambda, which questions to ask about it? and FAQ: chemlambda in real and virtual worlds.

UPDATE:

0. (for chemists) With chemlambda you can build molecular computers. For this you could identify the real chemical reactions which embody the moves of this made-up chemistry. They are simple enough to be possible in the real world. See All chemical reactions needed for a molecular computer.

___________________

1. Is chemlambda another visualization tool for lambda calculus?

NO. Visualization is an independent part; for the moment I use d3.js. Moreover, chemlambda is only tangentially related to lambda calculus.

2. What is new in chemlambda then?

You get an example of a computation model which does not use values, calls, variables, or variable passing.

Have you seen any other model with these properties?

3. Why should I be excited about a computation with no values, no evaluations, no calls, etc?

For several reasons:

  • nobody thinks about that (except perhaps biology-related researchers, who, looking through the microscope, do not see the structures loved by CS people)
  • this is a blind spot, an accident of history: computers appeared after the telephone, during and after WW2, when the problem was to encrypt and decrypt messages sent from A to B, and physically the computers we know how to build are based on electronics
  • think: if you no longer have the problem of how to pass a value, or how to call a function over the internet, then what could be possible which now is not?
  • think about building real or virtual computers which are life-like, in the sense that their work is complex to the point of seeming incomprehensible, but their complexity is based on very simple mechanisms, used again and again, without any director to channel them and without any hierarchy to maintain or enforce.

4. So should I pack my bags and look for other research subjects than what is popular now, or maybe should I just ignore chemlambda and go with the business as usual? After all, you know, cloud computing is great. Process calculi, types, all this.

It’s your choice, but now you know that there is something closer to real-life-like computation, which may hold the promise of a free sky: a cloudless internet.

5. Suppose I want to play a bit with these ideas, but not to the point of getting out of my comfort zone. Which questions would be interesting to ask?

Right, this is a good strategy. Here are some questions related to lambda calculus, say.

  • there is an algorithm which transforms an (untyped lambda beta calculus) term into a chemlambda molecule, but even if one can prove that the beta move translates into the BETA move (as in Wadsworth-Lamping) and that the translations of the SKI or BCKW combinators eventually reduce in chemlambda as they should, there remains the question: how does chemlambda achieve, by local rewrites, the thing which replaces an evaluation strategy? A study by examples may give interesting insights.
  • how to program with chemlambda? Indeed, it can reproduce some behaviours of untyped lambda beta, but even so it does it in ways which are not what is usually expected, as proven by the demos. To take an older example, the Y combinator simplifies to a two-node molecule, a story told in this article. The fact is that because one does not have eta (which looks like a global move in chemlambda, i.e. there is no a priori bound on the number of nodes and links to check for an application of eta), there are no functions. It’s a kind of extremal functional programming where there are no functions.
  • what are the right replacements for lists, currying, booleans, which could take advantage of chemlambda?
  • by a try-and-explore procedure, which are the less stupid reduction algorithms, and in which sense exactly do they work? This has to do with locality, which constrains the design of those algorithms a lot.
  • finally, what can chemlambda do outside the sector of untyped lambda beta, which covers only a small part of the possible chemlambda molecules?

_____________________________________________________________________

The working factorial

Continuing from Experiments with a little lisper tutorial, recall that at that moment I had succeeded in computing with chemlambda a relative of the factorial, but not the factorial itself.

Now I have learned how to do it, and it works every time (compared with a 20% success rate last time).

Last time I took a lambda term for the factorial from the lambda calculus tutorial by Mayer Goldberg from little-lisper.org, par. 57, page 14. Then I modified it and got a molecule which computes the factorial in about 20% of the cases. Now, in this working factorial example, I made two supplementary modifications. The first consists in starting from a lambda term which uses the multiplication in the form L mnf.m(nf) instead of the one used in the tutorial. Secondly, the result of the computation (i.e. the “value” of the factorial) is applied to a SUCC (successor), which is then applied to c0; this generates the correct result.
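
A gloss on why the final application helps (my reading, not spelled out in the original post): if the computation produces something which behaves like a Church numeral, then applying it to SUCC and then to c0 forces it into canonical form, since

    c_n SUCC c0 = SUCC (SUCC ( … (SUCC c0) … ))   (n applications of SUCC)  =  c_n

so whatever non-canonical graph stands for the “value”, the extra application unfolds it into the plain numeral.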

Link to the demo with factorial(4)=24.

Here is the video, recorded as seen in safari, at 2X speed (firefox behaves badly with the d3.js demos I make, I have no precise idea why; that is why I recorded my experience with the demo, then re-recorded the video at 2X speed, all this done with QuickTime).

It works very well with factorial(5)=120 too, but because the visualization of that computation takes some time (which may challenge people with short attention spans), here is a video with the last part of the computation at 8X speed.

____________________________________________________

What can I do with chemlambda, which questions to ask about it?

UPDATE (2021): Obsolete! Go to chemlambda.github.io.

____________________________________________________________

[I continue the habit of sometimes using the drafts I write, extracting from them and processing a bit those parts which may be of interest to any known or unknown fellow thinker, tinker, hacker.

This is the first part, there will be a second one.]

____________________________________________________________

See also: FAQ: chemlambda in real and virtual worlds

____________________________________________________________

1. Can I play with chemlambda like in the demos?

YES. If you go to https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic, then you’ll find everything needed to play with chemlambda.

What you need:

  • scripts like moving_random_metabo.sh, which call
  • the main program, which is check_1_mov2_rand_metabo.awk
  • initial molecules, which are in the files with the extension .mol; read here about the mol format (1), (2): it’s just a list of nodes and their ports (a hypothetical example follows this list)
  • the file firstpart.txt.
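
To give an idea of the format, here is a hypothetical three-line mol file (L, A and T are nodes from the chemlambda node list; the exact port conventions are in the linked posts, so treat this as a sketch rather than authoritative syntax). Each line is a node followed by its port names, and two ports sharing a name are linked:

    L a b c
    A c d e
    T e

Here the port c links the L node to the A node, the port e connects the A node to the termination node T, and the ports a, b and d are free edges.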

Then figure it out:

  • bash moving_random_metabo.sh asks you to type the name of a mol file, say lalala.mol
  • it produces lalala.html which you can see in the browser, as a d3.js animation.

In the awk script you have at the beginning the variables metabo and cycount. The variable metabo decides the color of the new nodes: periodically, with period metabo, the color of the new nodes is turned to #222 or left unchanged, in order to help visualize the flow and replenishment of new nodes in the molecule, i.e. the metabolism of the molecule. The variable cycount is the maximum number of steps of the main cycle: if you expect the computation to stop quickly, or if, on the contrary, you suspect it will never stop, take cycount=100; if you are sure the computation will stop, but perhaps after more than 100 steps, take cycount=1000, for example.

The main cycle starts at line 931.

At about line 900 there is a function nextval() which contains the piece “return 3000 + (5*step“. You can modify 3000 to something else to change the initial delay before the animation starts, and 5 to something like 30 or bigger if you want to look at the result in firefox or if the browser generally has problems with rendering.

For the list of moves and more details see the page of moves. Go down to that page to see a list of articles.

2. Is chemlambda something fixed, or does it change? Can I change it? What can I do if I want to tinker with it, just for fun?

It changes. First there was “graphic lambda calculus”, then there was chemlambda without the FOE node and its moves; now, after many experiments, there is THIS chemlambda.

You can change it any way you want (and be polite: cite my version and notify me somehow, preferably by posting something in a public place).

If you have real programming skills, unlike me (I’m just a mathematician), then you can take the awk script and:

  • transform it into a js script, instead of the current setup
  • make it quicker (presently there is a preset of 5ms which decides the speed of the animation, but this gives problems to many browsers; why? I have no idea, but there may be a trivial or a clever fix for that)
  • add a molecule builder UI
  • make a game out of it
  • make an app
  • make a script which converts lambda terms to chemlambda molecules, according to the algorithm from here.

These are immediately useful things to do. There are others too, see next question.

3. What does the script do with chemlambda?

The script implements a model of computation based on chemlambda.

So, chemlambda is a purely local graph rewrite system: it is a list of patterns of up to 4 nodes which are replaced by other patterns, possibly according to rules which involve a condition that can be verified by checking maybe another N (a priori fixed, small) links and nodes.

It does not compute by itself, you have to add an algorithm of reductions, which says how to use the graph rewrites, aka the moves.

The script uses the following algorithm, called in a previous post the “stupid” one, because it is really the simplest: after the mol file is converted into a graph (given as nodes and links arrays), and after the head of the html file is written, the algorithm enters a cycle. The structure of this cycle is the following:

  • look for patterns to change, in the order of the graph rewrites’ priority. (What’s this priority? It may happen that there are nodes or links which are part of two distinct, but overlapping, patterns for graph rewrites. This results in a conflict: which move to apply? By close examination of the moves, there is an order of looking for the moves which eliminates the conflicts: look first for the FO-FOE move, then for the other DIST moves, then for the BETA (i.e. A-L) move and FAN-IN (i.e. FI-FOE), then for the PRUNING moves; each time we find a pattern we put the respective move in a list of proposed moves and we block its nodes so they are not available as parts of other patterns during the rest of the search.)
  • do the proposed moves (in the awk script this means to perform the moves on the graph data variables of the script and, at the same time, to write into the html file what was just done, in a format intelligible to d3.js)
  • lots of Arrow nodes may appear, and in order to eliminate as many as possible there is an Arrow cycle (see the moves page) which eliminates all Arrow nodes that can be eliminated by COMB moves. (The same applies here: you have to write into the html file what was done to the graph during that internal cycle.)

This is the stupid deterministic algorithm.

There is a random variant of it, which is exactly what the awk script does.

Whenever a pattern is identified, a coin is flipped, and with a probability of about 50% (see a small detail further down) the move either goes to the list of proposed moves, with its nodes blocked, or it does not.

In this way the priority of moves is made much less important.

Small detail: there are two kinds of moves, those which increase the number of nodes (DIST) and the others, which decrease the number of nodes (Arrow nodes not taken into consideration). The small detail is that, after each step of the cycle, the probabilities of the moves are slightly modified: if in the cycle there have been more DIST moves than other moves, then the probability of DIST moves decreases; if there have been fewer DIST moves than other moves, then the probability of DIST moves increases in the next cycle.

Why randomness: it is a cheap substitute, in this stupid reduction version, for asynchrony.
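
For concreteness, here is a minimal sketch of that random cycle in Python (the real implementation is the awk script; find_patterns and do_move are hypothetical stand-ins for the pattern matcher and the rewriter, and the exact size and bounds of the probability adjustment are my assumptions):

    import random

    # Search order from the text: FO-FOE first, then the other DIST moves,
    # then BETA (A-L) and FAN-IN (FI-FOE), then the PRUNING moves.
    PRIORITY = ["FO-FOE", "DIST", "BETA", "FAN-IN", "PRUNING"]
    GROWING = {"FO-FOE", "DIST"}  # the moves which increase the number of nodes

    def one_cycle(graph, find_patterns, do_move, p_dist=0.5):
        """One step of the random 'stupid' algorithm: propose moves by coin
        flips, blocking their nodes against overlaps, then apply them."""
        blocked, proposed = set(), []
        for move in PRIORITY:
            for nodes in find_patterns(graph, move):  # hypothetical matcher
                if blocked & set(nodes):
                    continue  # conflict: nodes already claimed by another pattern
                p = p_dist if move in GROWING else 0.5
                if random.random() < p:  # the coin flip from the text
                    proposed.append((move, nodes))
                    blocked |= set(nodes)
        for move, nodes in proposed:
            do_move(graph, move, nodes)  # also records the step for d3.js
        # (an Arrow/COMB internal cycle would run here, per the third bullet)
        # Rebalance: more DIST moves done => lower DIST probability next cycle.
        grow = sum(1 for m, _ in proposed if m in GROWING)
        if grow > len(proposed) - grow:
            p_dist = max(0.1, p_dist - 0.05)  # step size 0.05 is an assumption
        elif grow < len(proposed) - grow:
            p_dist = min(0.9, p_dist + 0.05)
        return p_dist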

4. How can I seriously change the  chemlambda reduction algorithm?

By changing it from “stupid” to something more interesting; it’s up to you. What about bringing Node.js into the game? Know why? Because if you look at a mol file, you notice that if you split the file into ten pieces, then each of them is again a mol file.

_______________________________________________________________________

Experiments with a little lisper tutorial

I used the excellent lambda calculus tutorial by Mayer Goldberg from little-lisper.org for some experiments in chemlambda.

UPDATE: See also the post The working factorial and the video

_______________

There are five such experiments.

The mentioned tutorial is the source of the lambda term which I translated into chemlambda and then used in the Ackermann function demos. The most outrageously successful one is the computation of Ack(2,2) while self-multiplying. Moreover, the daughter molecules do not finish at the same time.

Here is a video of a screen recording at 2X speed.

Now, there is a very puzzling thing here. In chemlambda with a random, purely local graph rewriting algorithm I can compute the Ackermann function. But what about simpler functions, like the factorial?

I tried the easy way, consisting in the translation of lambda terms for the factorial into chemlambda, and the results are ugly. As a geometer who wants to decipher computation without using variables, values, variable passing or evaluations, I know that chemlambda is different because of this feature: it uses something like signal transduction instead of the gates-and-wires general model from IT. So there have to be consequences of that.

Let’s see what happens in the case of the factorial. In this demo I took the lambda term from the same tutorial, par. 57, page 14. I hoped it would work better than other proposals, based on the experience with the Ackermann function. I noticed in other experiments with terms for the factorial that the reduction in chemlambda is extremely wasteful, producing thousands of nodes and lots of trash. Moreover, there is a problem with the fixed point combinator, explained in the article with Louis Kauffman, Chemlambda, universality and self multiplication, and described in older demos like this one. In chemlambda, the molecule which corresponds to the Y combinator reduces to a very simple one (which does not have an equivalent as a lambda term), made of only two nodes, fanout and application. Then the molecule becomes a gun which shoots pairs of fanout and application nodes. There is no mystery about the Y combinator in chemlambda: its real mechanism consists in self-multiplication by the fanout node. [Note: in the old demo and in the linked article there is no FOE fanout. The FOE and the moves associated to it are a recent addition to the formalism, see the page of moves.]

The problem with using the Y combinator is that it never stops generating pairs of A and FOE nodes. In a computation which implements recursion by the Y combinator, at some point, according to an IFTHENELSE or ISZERO, the cycle of Y is broken. But in chemlambda the Y gun molecule continues to exist, and this leads to a never-ending computation. On one side this is OK, see for example the life-like computations with chemlambda quines. Actually there is a stronger relation between chemlambda quines and the Y combinator molecule. One can design the computation such that when the Y molecule is no longer needed, it is encapsulated into a quine, but explaining this in detail is for another time.

I come back to the factorial example. In the demo I linked you can see that the computation of the factorial is wasteful (and paradoxically leads to a Y molecule), even though it does not use a fixed point combinator.

Why?

First I thought it was because of currying and uncurrying. In chemlambda, because it is a graph rewrite system, there is no particular need to use currying all the time.

Then, to check this, I modified the molecule from the little lisper tutorial in order to compute geometrically the repeated application of the function f(a,b)=(succ(a), succ(a)b). The function is a piece of a graph with two in and two out links, which self-multiplies under the action of a number in the Church encoding.
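
(A quick check that iterating f really produces the factorial in its second component; the starting pair (0,1) is my assumption about how the iteration is seeded:

    (0,1) -> (1, 1*1) = (1,1) -> (2, 2*1) = (2,2) -> (3, 3*2) = (3,6) -> (4, 4*6) = (4,24)

so after n steps the second component is n!.)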

Here is a successful computation with this molecule. But does it work all the time or have I been lucky? The reduction algorithm is random and different runs may give different results. It is the case with the Ackermann function computation as well, but that one was successful all the time.

Oh, it turns out that the computation with that molecule works well in about 20% of the runs. Here is an unsuccessful run.

So there is still a problem, but which one?

Under close examination the computation is still wasteful, because of the term (piece of molecule) c0, for the 0 in the Church encoding. In chemlambda this term corresponds to a small molecule which has a termination node inside.

When we want to trash something useless, like programmers do, in chemlambda the useless piece does not disappear. It is like in nature: the thing continues to exist, and continues to interact with the rest of the environment while in the process of degradation.

The termination node, via the PRUNING moves, slowly destroys the useless part, but other pieces from the useless part continue to interact with the rest of the graph.

Is this the problem?

In order to check this I further modified the molecule which was successful 20% of the time. I just replaced c0 by c1, which is the molecule for the lambda term for 1 in the Church encoding. Now, c1 does not have any termination node inside.

The price is that I no longer compute the factorial, but instead I compute the repeatedly applied function

F(a,b)=(succ(a), tms(a,b))

where tms(a,b)=ab+1. Here is a demo for the computation of tms(3,3) in chemlambda, and further on a video for tms(5,5)=26, where you can see a new creature, more complex than the walker discovered in the predecessor reduction.

I checked to see what the closed expression is, if any, for the function I compute, namely

f(0)=1, f(n+1) = (n+1)f(n) + 1

and the Wolfram Alpha engine has an interesting answer.
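
Here is a small Python check (a sketch; the seed pair (1,1) is my assumption about where the iteration starts) that iterating F(a,b)=(succ(a), tms(a,b)) reproduces the recurrence above:

    def F(a, b):
        return a + 1, a * b + 1  # (succ(a), tms(a,b)) with tms(a,b) = ab + 1

    def f(n):
        return 1 if n == 0 else n * f(n - 1) + 1  # f(0)=1, f(n+1)=(n+1)f(n)+1

    a, b = 1, 1
    for n in range(1, 8):
        a, b = F(a, b)
        assert b == f(n)
    print([f(n) for n in range(6)])  # [1, 2, 5, 16, 65, 326]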

Well, this time I hit the right spot. The computation works, again and again and again.

So we have to learn to program ecologically with chemlambda.

____________________________________________________________________

Life thrives in randomness, creatures die or blow up in the absence of it

Randomness is a manifestation of locality. The world is big and everything works at a local level, asynchronously, and randomness ensues.

I want to advance the following hypothesis about the origin of life.

Life is a manifestation of the computational universality of a collection of chemical reactions.

Indeed, there probably are many small collections of chemical reactions which, coupled with a random chemical reduction algorithm, form a universal computing model.

A proof of principle for this is chemlambda. The real chemical reactions which implement the moves shown on the chemlambda moves page (invisible in the chemlambda formalism, for the moment) are still to be discovered.

But they are so simple that there have to be many, many such chemical reactions.

In a system, in a chemical soup, if these chemical reactions happen to appear, then what follows is a game of computation and self-multiplication.

Because universality means, in this particular case, that with non-negligible probability, anything can be achieved.

The role of randomness is tricky. On one side, randomness selects resilient creatures. That’s a funny thing; for example, in chemlambda good candidates for living creatures are quines.

A quine in chemlambda is a molecule which stays the same in a deterministic world. This gives the quine molecule a certain resilience when faced with randomness, which makes it have a life: it may grow, it may shrink, for a time it may play around the deterministic state, and it may eventually die.

This is illustrated in the first battle of microbes demo, where several quines are put together and “fed” with enzymes, which appear randomly, but such that if at some moment there are more enzymes for the moves which increase the number of nodes, then the next time the probability of appearance of such enzymes decreases in favour of those for the moves which decrease the number of nodes.

So globally it appears as if the quines compete for the moves, and those quines having a greater diversity of possible moves thrive, while the others die.

The 9_quine is the most fragile quine; as you see in the demo, many of them die (i.e. they transform into a small molecule which is inert to any reduction).

There is a lot to add about this; for example, there are other quines which behave as if they adopt the strategy of spores, i.e. they regress to an “egg” state and flourish later, from time to time, when they have to “compete” with bigger quines.

Of course, all this is in the eye of the observer, it is an emergent behaviour, like the intelligence of a Braitenberg vehicle.

But what if quines are a bit too fragile for life? Maybe there are molecules which grow to an approximately stable configuration, in random conditions, for a time, at least until they self-multiply.

[Have you seen the story of the 16 bubble quine, from egg to motherhood?]

Suppose, just suppose that in deterministic conditions such a molecule would grow slowly, instead of being a quine.

This is consistent with the requirement to be resilient in random conditions; there will be a second part of this post when the demos are prepared.

But it has a curious corollary, namely that such a living creature will blow up, like a cancer, in too calm, too deterministic conditions.

The example I have and play with is a molecule made of two 9_quines and a 5-atom molecule which, if left alone, grows in a regular pattern, but which, in the deterministic algorithm, grows very, very slowly when coupled by some bonds with the two quines.

This molecule, under random conditions, displays an amazing variety of shapes. But all the runs show the same thing, more or less: the two 9_quines have the role of taming the growth of the molecule, keeping it controlled, but at some moment the 9_quines die, somewhere in the molecule, in some unrecognizable shape, and after that the molecule slowly reverts to the regular growth pattern (which makes it unsustainable if there are physical limits to the growth).

So not only does randomness select creatures which can survive (and self-multiply) in random conditions; it may select creatures which live in random conditions but die in deterministic conditions.

Maybe that is why life hates when everything is predictable.

I close this post with the comment that there are, however, molecules which arrive at a determined state even in random conditions.

This may be very useful for computer-like computations. The example I have is again the remarkable molecule for the Ackermann function.

See it in this video self-reproducing while it computes.

Apparently some molecules display a huge resilience to randomness. The Ackermann function molecule daughters finish the computation at different times, but they do finish it.

_______________________________________________________________________

 

Two walkers enter into a Holliday junction…

In this post I want to explain the signal transduction process which replaces signal transmission in chemlambda.

Hopefully it will be something clear, with the help of some visual demos.

[Don’t believe what I write; proceed by the scientific method and check my experiments yourself. You can easily do this by using the links in the demos. They point to a github repo. You have to switch to the gh-pages branch, where you’ll find the scripts and the mol files needed to reproduce the experiments. You can of course make new experiments of your own! By looking into the scripts you shall see how things work. It is much more rigorous and easier to check than the written-proof version. On the other side it is of course less authority-friendly and demands a somewhat larger attention span, but with a small attention span nobody can understand anything, right? For more on this “publishing philosophy” see the post The Shortest Open Access and New Forms of Publication Question.]

I start.

Chemlambda is interesting because it is good at doing two different things at once:

  • it can compute as computers do (i.e. it is Turing universal)
  • it can also simulate chemical-like, or even biological-like, phenomena.

This is great because there is no other system (except Nature) which can do this with such a small effort, at once (i.e. with the same tools).

Indeed, there are artificial life proposals, like for example swarm chemistry, which can simulate some simple life-like phenomena but which can’t compute something as sophisticated as the Ackermann function (my favorite catch demo for CS people).

There is the amazing Game of Life, which can do both, but for a Turing-like computation one needs hundreds of thousands of cells, on a fixed predefined grid, updated synchronously.

What enables chemlambda to do that?

In the development of chemlambda I followed some principles as thought constraints. These principles shaped, and will shape further, this project. They are:

  • (locality) every piece of the formalism or implementation has to be local in space, time or in control terms.
  • (no meaning) global meaning is just an illusion, which is moreover hard to maintain or enforce; Nature does not work by high-level meaning
  • (topology does compute) signal transduction, not signal transmission.

While locality seems a natural and desirable feature to have in many formalisms, it is unsurprisingly difficult to achieve. The reason for this, in my opinion, is cultural: we are the children of the Industrial Revolution, so we are trained, and our minds are shaped, to think in terms of a global, god-like point of view, which gives us total powers over space, time and matter, and total control over the universe at once. This is visible in scientific explanations in particular, where, just because we want to explain a simple idea, we then have to build a formal notational frame around it, like external coordinates, names for the variables (coming with their complex bookkeeping formalism), and to appeal to reification. While all these ingredients are useful for the transmission of a scientific explanation, they are not, by themselves, needed for the understanding.

Example: suppose I’m explaining to you the plot of a film. At some point you get lost with the names of the characters and then I react like this: “Look, it’s simple: A wants to kill B and for that he hires the hitman C. But C has a love affair with B and together they trick A into believing that …” Now, “A”, “B” and “C” may be useful to transmit my understanding of the movie plot to you, but they are not needed for my understanding. In the frame of mind of the Industrial Revolution, the world is a system which can be globally mapped into a hierarchical flow of processes, where everything falls neatly into this or that case of study. You have the system, you control the world, as if it were an industrial process.

The latest installment of this way of thinking (I’m aware about) is category theory.

The downside of this is the loss of locality and the profusion of higher and higher levels of abstraction, which is the quick fix for the cracks in the globality illusion.

Maybe now the second principle, no meaning, becomes clear. Many, if not all, natural things don’t have a global, fixed or indisputable meaning. Still, Nature works beautifully. The logical conclusion is that meaning is something we humans use and seek (sometimes), but not a necessary ingredient in Nature’s workings.

The concrete use of the “no meaning” principle consists in the elimination of any constraints which may introduce globality through the back door: there are no “correct” graphs in chemlambda, nor does there exist any local decoration of the edges of the chemlambda graphs which gives a global “meaning” to them.

Elimination of names, elimination of evaluations.

The third principle is called “topology does compute” as an allusion to the NTC vs TC discussion. The idea is very simple: instead of thinking in terms of wires which transmit signals that are then processed by gates, think about signal transduction as a phenomenon emerging from the locality of the graph rewrites.

Signal transduction is a notion from biology: a molecule binds to a receptor (molecule), which triggers a chain, a cascade of other chemical reactions. Each chemical reaction is, of course, local, but the emergent phenomenon is the movement of a “signal” (as seen by our minds, but non-existent as a well-defined entity in reality). We identify the “signal”, but things happen without needing the “signal” entity as a prerequisite.

Chemlambda works like that.

In order to explain this I shall use the “walker” example.

It is instructive to see how the computational and the biological like features of chemlambda led to the discovery of the walker.

The initial goal was to see how various lambda calculus terms work in chemlambda. I knew that there is an algorithm which associates to any lambda term a chemlambda molecule, so just by picking interesting lambda terms I could then see how they reduce in chemlambda, exclusively by local graph rewrites.

One of these terms is the predecessor, see Arithmetic in lambda calculus. Numbers appear (in chemlambda) as ladders of pairs of nodes, with the beginning and the end of the ladder connected by abstraction nodes. The whole topology is, roughly, that of a circular ladder.

One can also translate the predecessor lambda term and apply it to the number. In lambda calculus, the predecessor applied to the number N gives N-1 (if N>0; the predecessor of 0 in lambda calculus is 0).
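
(For reference, the standard predecessor term in the Church encoding, written in the same L notation as used earlier for the multiplication, is

    PRED = L nfx. n (L gh. h (g f)) (L u.x) (L u.u)

; the tutorial’s term, and hence the molecule below, may be a variant, but any such term has the behaviour just described.)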

In chemlambda the predecessor is another molecule, which looks like a small bag. Applying the predecessor to a number translates in chemlambda into adding a supplementary A (application) node and connecting some ports of the A node with the circular ladder (the number) and with the bag (the predecessor).

The first few reductions are trivial and they transform the molecule I described into a circular one, where on top of the ladder there is a molecule and at the end of the ladder there is another, smaller one.

All in all it looks like a circular train track with something on the tracks.

Now it gets interesting: the reduction process (in chemlambda) looks like there is a “creature” which travels along the train tracks.

This is the walker. You can see it in this first demo. Or you may look at this video

(however I do suggest the demo, the live version is far more impressive).

It is a beautiful example of signal transduction.

In the deterministic chemlambda reduction algorithm (all graph rewrites which are possible are applied) the walker keeps its shape, i.e. periodically we find the same graph (the walker graph) in different places on the train track (the number).

The random reduction algorithm breaks this regularity (and synchronicity as well) because in this algorithm a coin is flipped before application of any move, and the move is applied or not with 50% probability.

That is what you see in the demo: the walker looks like a kind of wave which propagates along the train track, until it hits the end of the track (i.e. the small molecule I mentioned previously) and then is destroyed progressively. It leaves behind a train track with one pair of nodes fewer than before the reduction.

So, inside the reduction mechanism of a lambda term (pure computation side) there is a self-maintaining, propagating entity, the walker, which travels in a biological fashion through the representation of a number!

This led me to the notion of a chemlambda quine in a rather obvious way:

  • eliminate the beginning and the end of the ladder and replace them by a circular ladder with no start or end parts; then the walker entity would go and go in circles, endlessly; this is the “ouroboros“, an older explanation,
  • remark that, as a graph, in the deterministic reduction algorithm, after each reduction step the “ouroboros” is unchanged
  • go to the extreme and use an ouroboros on a single train track pair; this is the 28-quine, see it in the second demo.

Let’s study more the signal transduction aspect. What can happen if two walkers interfere?

The experiment for this can be seen in the third demo.

It works like this. Take two predecessor reduction graphs, two copies of the graph from the first demo (each with 20 pairs of nodes in its ladder).

Now, cross the ladders. How?

By a Holliday junction, in biochemical terms. It looks like this:

[image: Holliday junction, coloured] [source]

Mathematically this is a product of two ladder graphs, where two links are cut, crossed and glued back.
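
In mol-file terms (assuming the convention sketched earlier, that ports sharing a name are linked), the cut-cross-glue operation can be expressed as swapping one occurrence of each of the two port names. A hypothetical Python sketch:

    def cross_links(lines, x, y):
        """Cut the links x-x and y-y and glue them back crossed, by swapping
        one occurrence of port name x with one occurrence of port name y."""
        out, seen_x, seen_y = [], False, False
        for line in lines:
            parts = line.split()
            for i, p in enumerate(parts[1:], start=1):
                if p == x and not seen_x:
                    parts[i], seen_x = y, True
                elif p == y and not seen_y:
                    parts[i], seen_y = x, True
            out.append(" ".join(parts))
        return out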

The amazing thing is that the two walkers go their way, they mix, and then they continue, each on the other’s track, until the end points.

They behave as if they are waves passing one through the other.

UPDATE: This is a screen recording of the experiment

__________________________________________________________________________

The shortest Open Access and New Forms of Publication question

If:

then wtf is the article good for?

UPDATE: at figshare, they are thinking about that. Great!

UPDATE 2: for no particular reason, here is an accompanying short video done with the program

UPDATE 3:  See “Publish your computer code: it is good enough” by Nick Barnes, Nature 467, 753 (2010) | doi:10.1038/467753a

“I accept that the necessary and inevitable change I call for cannot be made by scientists alone. Governments, agencies and funding bodies have all called for transparency. To make it happen, they have to be prepared to make the necessary policy changes, and to pay for training, workshops and initiatives. But the most important change must come in the attitude of scientists. If you are still hesitant about releasing your code, then ask yourself this question: does it perform the algorithm you describe in your paper? If it does, your audience will accept it, and maybe feel happier with its own efforts to write programs. If not, well, you should fix that anyway.”

____________________________________________________________

As a handful of slime mold. That’s how smart the whole net is.

I record here, to be sure that it’s read, a message of explanations I wrote today.

Before that, or after, you may want to check the new chemlambda demos, or look at the older ones down the page, in case you have not noticed them. There are also additions on the moves page. Most related to this post is the vision page.

“First, I have lots of examples which are hand-drawn. Some of them appeared for theoretical reasons, but the bad thing used to be that it was too complex to follow on paper what’s happening (or to be sure of it). From the moment I started to use the mol file notation (aka g-patterns), it has been as if I suddenly became able to follow in much more depth. For example that’s how I saw the “walker”, the “ouroboros” and finally the first quine, by reducing the graph associated to the predecessor (as written in lambda calculus). Finally, I am now able to see what happens dynamically.
However, all these examples are far from where I would like to be now; they correspond to only the introduction into the matter. But with every “technical” improvement new things appear. For example, I was able to write a program which generates, one by one, each of about 1000 initial graphs made of 2 nodes and follows their reduction behaviour for a while, to then study the interesting exemplars. That is how I found all kinds of strange guys, like left or right propagators, switchers, back propagators, all kinds of periodic (with period > 1) graphs.
So as input I mainly use mol files of graphs which I draw by hand, or which I identify as interesting inside bigger mol files. This is a limitation.
Another input would be a program which turns a lambda term into a mol file. The algorithm is here.
Open problem: pick a simplified programming language based on lambda calculus and then write a “compiler”, better said a parser, which turns it into a mol file. Then run the reduction with the favourite algorithm, for the moment the “stupid” determinist and the “stupid” random ones.
While this seems feasible, this is a hard project for my programming capabilities (I’d say more because of my bad character: if I don’t succeed fast then I procrastinate in that direction).
It is tricky to see how a program written in that hypothetical simple language reduces as a graph, for two reasons:
  1. it has to be pure lambda beta, but not eta (so the functions are not behaving properly, because of the lack of eta)
  2. the reductions in chemlambda do not parallel any reduction strategy I know about for lambda calculus.
Point (2) makes things interesting to explore as a side project; let me explain. So there is an algorithm for turning a lambda term into a mol file. But from that point on, the reductions of the graph represented by the mol file give other graphs which typically do not correspond to lambda terms. However, magically, if the initial lambda term can be reduced to a lambda term where no further reductions are possible, then the final graph of the chemlambda reduction does correspond to that term. (I don’t have a proof of that, because even if I can prove that, restricted to lambda terms written only as combinators, chemlambda can parallel the beta reduction and the reduction of the basis combinators, and is able to fan-out a term, it does not mean that what I wrote previously is true for any term, exactly because during chemlambda reductions one gets out of the realm of lambda terms.)
In other cases, like for the Y combinator, a different phenomenon happens. Recall that in chemlambda there is no variable or term which is transmitted from here to there; there is nothing corresponding to evaluation proper. In the frame of chemlambda the behaviour of the Y combinator is crystal clear though. The graph obtained from Y as a lambda term, connected to an A node (application node), starts to reduce even if there is nothing else connected to the other leg of the A node. This graph reduces in chemlambda to just a pair of nodes, A and FO, which then starts to shoot pairs of nodes A and FOE (there are two fanout nodes, FO and FOE). The Y is just a gun.
Then, if something else is connected to the other leg of the application node I mentioned, the following phenomenon happens. The other graph starts to self-multiply (due to the FOE node of the pair shot by Y) and, simultaneously, the part of it which has already self-multiplied starts to react with the application node A of the pair.
This makes the use of Y spectacular but also problematic, due to its excessive exuberance. That is why, for example, even if the lambda term for the Ackermann function reduces correctly in chemlambda (or see this 90 min film), I have not yet found a lambda term for the factorial which does the same (because the behaviour of Y has to be tamed: it produces too many opportunities to self-multiply and it does not stop, although there are solutions for that, by using quines).
So it is not straightforward that mol files obtained from lambda terms reduce as expected, because of the mystery of the reduction strategy, because of the fact that intermediary steps go outside lambda calculus, and because the graphs which correspond to terms which are simply trashed in lambda calculus continue to reduce in chemlambda.
On the other side, one can always modify the mol file or pre-reduce it and use it as such, or even change the strategy of the algorithm represented by the lambda term. For example, recall that because of the lack of eta, the strategy based on defining functions and then calling them, or, in general, just currying stuff (for any reason other than future easy self-multiplication), is a limitation (of lambda calculus).
These lambda terms are written by smart humans who streamline everything according to the principle of turning everything into a function, then using the function when needed. By looking at what happens in chemlambda (and presumably in a processor), most of this functional book-keeping is beautiful for the programmer, but a limitation from the point of view of the possible alternative graphs which do the same as the functionally designed one, but more easily.
This is of course connected to chemistry. We may stare at biochemical reactions, well known in detail, without being able to make sense of them, because things happen by shortcuts discovered by evolution.
Finally, the main advantage of a mol file is that any collection of lines from a mol file is also a mol file. The free edges (those which are capped by the script with FRIN (i.e. free in) and FROUT (i.e. free out) nodes) are free as a property of the mol file where they reside, so if you split a mol file into several pieces and send them to several users, then they are still able to communicate, by knowing that this FRIN of user A is connected (by a channel? by an ID?) to that FROUT of user B. But this is for a future step, which would take things closer to real distributed computing.
And to really close: if we put on every computer linked to the internet a molecule, then the whole net would be as complex (counting the number of molecules) as a mouse. But that’s not fair, because the net is not as structured as a mouse. Say, as a handful of slime mold. That’s how smart the whole net is. Now, if we could make it smarter than that, by something like a very small API and a mol file as big as a cookie… “
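
To make the splitting remark concrete, here is a hypothetical Python sketch (again assuming the “ports sharing a name are linked” mol convention; the FRIN/FROUT capping itself is left to the scripts):

    from collections import Counter

    def split_mol(lines, k):
        """Split a mol file (a list of lines) into k pieces; by the remark
        above, each piece is again a mol file."""
        return [lines[i::k] for i in range(k)]

    def free_ports(piece):
        """Ports appearing exactly once in a piece are its free edges, the
        ones the scripts would cap with FRIN/FROUT nodes."""
        counts = Counter(p for line in piece for p in line.split()[1:])
        return [p for p, c in counts.items() if c == 1]

A port name which is free in user A’s piece and also free in user B’s piece is exactly the FRIN/FROUT pair mentioned above.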
_________________________________________________________________________