The shuffle trick, or how to eliminate commutativity and associativity, for the benefit of better self-multiplication

On a set with an operation that has a neutral element, there is a single algebraic axiom which is equivalent to associativity and commutativity taken together. I call it the shuffle (does it have a standard name?).
It is this one:

(ab)(cd) = (ac)(bd)

so the middle terms b and c switch their position.
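
To see why one axiom does the job of both, here is the short check (multiplicative notation, with e the neutral element):

```latex
% shuffle axiom:  (ab)(cd) = (ac)(bd)  for all a, b, c, d
\begin{align*}
\text{put } a=d=e:\quad & (eb)(ce) = (ec)(be) \;\Longrightarrow\; bc = cb
  && \text{(commutativity)} \\
\text{put } b=e:\quad & (ae)(cd) = (ac)(ed) \;\Longrightarrow\; a(cd) = (ac)d
  && \text{(associativity)} \\
\text{conversely:}\quad & (ab)(cd) = a(b(cd)) = a((bc)d) = a((cb)d) = a(c(bd)) = (ac)(bd)
\end{align*}
```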

Of course, by duality, that means that there is an axiom which is equivalent to co-associativity and co-commutativity taken together, in a coalgebra with a counit.

This is a justification for the existence of TWO fanout nodes in chemlambda, called FO and FOE. The graphical representation of the shuffle, now called the shuffle trick, is simply a combination of two graph rewrites, as shown in the demo on the shuffle trick.
On this blog the shuffle trick is mentioned several times, and the “shuffle move” is proposed as various compositions of moves from graphic lambda calculus.
But the CO-COMM and CO-ASSOC moves from graphic lambda calculus are no longer needed, being replaced by the FO-FOE and FI-FOE (aka fan-in) moves. This is good, because the graph rewrites which express associativity and commutativity are too symmetric: they don’t have a natural direction of application, so any choice of a preferred direction would be artificial.
The shuffle trick is essential for self-multiplication, where there has to be a way to multiply trees made of FO nodes in such a way that the leaves of the copies of the tree are not entangled; see this demo.
________________________________________________

What chemlambda can do for you

This post follows What can I do with chemlambda, which questions to ask about it? and FAQ: chemlambda in real and virtual worlds.

UPDATE:

0. (for chemists) With chemlambda you can build molecular computers. For this you could identify the real chemical reactions which embody the moves of this made-up chemistry. They are simple enough to be possible in the real world. See All chemical reactions needed for a molecular computer.

___________________

1. Is chemlambda another visualization tool for lambda calculus?

NO. Visualization is an independent part; for the moment I use d3.js. Moreover, chemlambda is only tangentially related to lambda calculus.

2. What is new in chemlambda then?

You have an example of a computation model which does not use values, calls, variables, or variable passing.

Have you seen any other model with these properties?

3. Why should I be excited about a computation with no values, no evaluations, no calls, etc?

For several reasons:

  • nobody thinks about that (except perhaps biology-related researchers, who don’t see, by looking through the microscope, the structures loved by CS people)
  • this is a blind spot, an accident of history: computers appeared after the telephone, during and after WW2, when the problem was to encrypt and decrypt a message sent from A to B, and physically the computers we know how to build are based on electronics
  • think: if you don’t have the problem of how to pass a value, or how to call a function over the internet, then what could be possible which now is not?
  • think about building real or virtual computers which are life-like, in the sense that their work is complex to the point of seeming incomprehensible, but their complexity is based on very simple mechanisms, used again and again, without any director to channel them and without any hierarchy to maintain or enforce.

4. So should I pack my bags and look for other research subjects than what is popular now, or should I just ignore chemlambda and go on with business as usual? After all, you know, cloud computing is great. Process calculi, types, all this.

It’s your choice, but now you know that there is something closer to the way real life computes, which may hold the promise of a free sky: a cloudless internet.

5. Suppose I want to play a bit with these ideas, but not to the point of getting out of my comfort zone. Which questions would be interesting to ask?

Right, this is a good strategy. Here are some questions which are related to lambda calculus, say.

  • there is an algorithm which transforms an (untyped lambda beta calculus) term into a chemlambda molecule, but even if one can prove that the beta move translates into the BETA move (like Wadsworth-Lamping) and that eventually the translations of the SKI or BCKW combinators reduce in chemlambda like they should, there is the question: how does chemlambda do, by local rewrites, the thing which replaces an evaluation strategy? A study by examples may give interesting insights.
  • how to program with chemlambda? Indeed, it can reproduce some behaviours of untyped lambda beta, but even so it does it in ways which are not what one usually expects, as proven by the demos. To take an older example, the Y combinator simplifies to a 2-node molecule, a story told in this article http://mitpress.mit.edu/sites/default/files/titles/content/alife14/ch079.html . The fact is that because one does not have eta (which looks like a global move in chemlambda, i.e. there is no a priori bound on the number of nodes and links to check for an application of eta), there are no functions. It’s a kind of extremal functional programming where there are no functions.
  • what are the right replacements for lists, currying, booleans, which could take advantage of chemlambda?
  • by a try-and-explore procedure, which are the less stupid reduction algorithms, and in which sense do they work exactly? This has to do with locality, which strongly constrains the design of those algorithms.
  • finally, what can chemlambda do outside the sector of untyped lambda beta, which is only a small part of the possible chemlambda molecules?

_____________________________________________________________________

The working factorial

Continuing from Experiments with a little lisper tutorial: recall that at that moment I succeeded in computing with chemlambda a relative of the factorial, but not the factorial itself.

Now I have learned how to do it, and it works every time (compared with a 20% success rate last time).

Last time I took a lambda term for the factorial from the lambda calculus tutorial by Mayer Goldberg from little-lisper.org, par. 57, page 14. Then I modified it and got a molecule which computes the factorial in about 20% of the cases. Now, in this working factorial example, I made two supplementary modifications. The first consists in starting from a lambda term which uses the multiplication in the form L mnf.m(nf) instead of the one used in the tutorial. Secondly, the result of the computation (i.e. the “value” of the factorial) is applied to a SUCC (successor), which is then applied to c0; this results in the generation of the correct result.
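
For readers who want to check the arithmetic behind that choice, here is a minimal Python sketch of Church numerals (plain lambda calculus, nothing chemlambda-specific). It shows the multiplication in the form L mnf.m(nf) and the trick of reading off a result by feeding it a successor and a zero; all the names are mine, for illustration only.

```python
# Church numerals as Python lambdas: n = \f.\x. f applied n times to x
c0 = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def church(k):
    """Build the Church numeral for a non-negative integer k."""
    n = c0
    for _ in range(k):
        n = succ(n)
    return n

# multiplication in the form L mnf.m(nf): compose f n times, then do that m times
mult = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Read off a Church numeral by applying it to an arithmetic successor and to 0."""
    return n(lambda x: x + 1)(0)

print(to_int(mult(church(4))(church(6))))   # prints 24, i.e. 4*6
```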

Link to the demo with factorial(4)=24.

Here is the video, recorded as seen in Safari, at 2X speed (Firefox behaves badly with the d3.js demos I make, and I have no precise idea why; that is why I recorded my experience with the demo, then re-recorded the video at 2X speed, all of this done with QuickTime).

It also works very well with factorial(5)=120, but because the visualization of that computation takes some time (which may challenge people with a short attention span), here is a video with the last part of the computation at 8X speed.

____________________________________________________

What can I do with chemlambda, which questions to ask about it?

[I continue the habit of sometimes using the drafts I write, extracting from them and processing a bit those parts which may be of interest to any known or unknown fellow thinker, tinker, hacker.

This is the first part; there will be a second one.]

____________________________________________________________

See also: FAQ: chemlambda in real and virtual worlds

____________________________________________________________

1. Can I play with chemlambda like in the demos?

YES. If you go to https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic , then you’ll find everything needed to play with chemlambda.

What you need:

  • scripts like moving_random_metabo.sh, which calls
  • the main program, which is check_1_mov2_rand_metabo.awk
  • initial molecules, which are in the files with the extension .mol. Read here about the mol format (1), (2). It’s just a list of nodes and their ports (see the illustrative sketch right after this list).
  • the file firstpart.txt.
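
The mol format itself is only lines of the form: node type followed by the names of its ports, one node per line; if I read the format right, a shared port name between two lines means a link between those nodes. Here is a rough Python illustration of reading such a file into nodes and links arrays; the example molecule and its port labels are made up, so check the mol format posts (1), (2) for the exact node types and conventions.

```python
# A made-up molecule in the spirit of the mol format: one node per line,
# node type first, then its port labels (illustration only; see the mol
# format posts for the exact node types and port conventions).
EXAMPLE_MOL = """
L a b c
A c d e
FO e f g
T d
"""

def parse_mol(text):
    """Collect (node_type, ports) pairs, then pair up ports that appear
    on two different lines into links."""
    nodes = []
    for line in text.strip().splitlines():
        parts = line.split()
        if parts:
            nodes.append((parts[0], parts[1:]))
    seen, links = {}, []
    for i, (_, ports) in enumerate(nodes):
        for p in ports:
            if p in seen:
                links.append((seen[p], i, p))   # (node index, node index, shared port)
            else:
                seen[p] = i
    return nodes, links

nodes, links = parse_mol(EXAMPLE_MOL)
print(nodes)
print(links)
```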

Then figure it out:

  • bash moving_random_metabo.sh asks you to type the name of a mol file, say lalala.mol
  • it produces lalala.html which you can see in the browser, as a d3.js animation.

In the awk script, at the beginning, you have two variables: metabo and cycount. metabo decides the color of the new nodes: periodically, with period metabo, the color of the new nodes is turned to #222, otherwise it is left unchanged; this helps visualize the flow and replenishment of new nodes in the molecule, i.e. the metabolism of the molecule. cycount is the maximum number of steps of the main cycle: if you expect the computation to stop quickly, or if on the contrary you suspect it will never stop, take cycount=100; if you are sure the computation will stop, but perhaps after more than 100 steps, take cycount=1000, for example.

The main cycle starts at line 931.

At about line 900 there is a function nextval()… where there is a piece “return 3000 + (5*step)”. You can modify 3000 to something else to decide the initial delay before the animation starts, and 5 to something like 30 or bigger, if you want to look at the result in Firefox or if the browser generally has problems with rendering.

For the list of moves and more details see the page of moves. Scroll down that page to see a list of articles.

2. Is chemlambda something fixed, or does it change? Can I change it? What can I do if I want to tinker with it, just for fun?

It changes. First there was “graphic lambda calculus”, then there was chemlambda without the FOE node and its moves; now, after many experiments, there is THIS chemlambda.

You can change it any way you want (and be polite: cite my version and notify me somehow, preferably by posting something in a public place).

If you have real programming skills, not like me (I’m just a mathematician), then you can take the awk script and:

  • transform it into a js script, instead of the current system
  • make it quicker (presently there is a preset of 5ms which decides the speed of the animation, but this may give problems to many browsers; why? I have no idea, but there may be a trivial or a clever fix for that)
  • add a molecule builder UI
  • make a game out of it
  • make an app
  • make a script which converts lambda terms to chemlambda molecules, according to the algorithm from here.

These are immediately useful things to do. There are others too; see the next question.

3. What does the script do with chemlambda?

The script implements a model of computation based on chemlambda.

So, chemlambda is a purely local graph rewrite system: a list of patterns of at most 4 nodes which are replaced by other patterns, possibly according to rules which involve a condition that can be verified by checking maybe yet another N (a priori fixed, small) links and nodes.

It does not compute by itself; you have to add a reduction algorithm, which says how to use the graph rewrites, aka the moves.

The script uses the following algorithm, called in a previous post the “stupid” one because it is really the simplest: after the mol file is converted into a graph (given as nodes and links arrays), and after the head of the html file is written, the algorithm enters a cycle. The structure of this cycle is the following:

  • look for patterns to change, in the order of the graph rewrite priority. (What’s this priority? It may happen that there are nodes or links which are part of two distinct, but overlapping, patterns for graph rewrites. This results in a conflict: which move to apply? By close examination of the moves, there is an order of looking for the moves which eliminates the conflicts: look first for the FO-FOE move, then for the other DIST moves, then for the BETA (i.e. A-L) move and FAN-IN (i.e. FI-FOE), then for the PRUNING moves; each time we find a pattern we put the respective move in a list of proposed moves and we block its nodes, so they are not available for being part of other patterns during the rest of the search.)
  • do the proposed moves (in the awk script this means to do the moves on the graph data variables of the script and, at the same time, to write in the html file what you just did, in a format intelligible to d3.js)
  • there may be lots of Arrow nodes which appear, and in order to eliminate as many as possible there is an Arrow cycle (see the moves page) which eliminates all Arrow nodes that can be eliminated by COMB moves. (The same thing applies here: you have to write in the html file what you did with the graph during that internal cycle.)

This is the stupid deterministic algorithm.
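
Here is a minimal Python sketch of the shape of that deterministic cycle. It is not a port of the awk script: the pattern matcher and the rewrite rules are left abstract, and the names are mine; only the structure described above (priority order, proposing moves, blocking nodes, then applying) is shown.

```python
# Priority order from the description above: FO-FOE first, then the other
# DIST moves, then BETA (A-L) and FAN-IN (FI-FOE), then the PRUNING moves.
MOVE_PRIORITY = ["FO-FOE", "DIST", "BETA", "FAN-IN", "PRUNING"]

def deterministic_step(graph, find_patterns, apply_move):
    """One pass of the 'stupid' deterministic cycle.
    find_patterns(graph, move) yields (move_name, nodes) candidates and
    apply_move(graph, move_name, nodes) performs the rewrite; both are
    abstract placeholders here."""
    proposed, blocked = [], set()
    for move in MOVE_PRIORITY:
        for move_name, nodes in find_patterns(graph, move):
            # skip patterns touching nodes already claimed by an earlier pattern
            if blocked.isdisjoint(nodes):
                proposed.append((move_name, nodes))
                blocked.update(nodes)
    for move_name, nodes in proposed:
        apply_move(graph, move_name, nodes)   # and log it for the d3.js animation
    return len(proposed)   # 0 means no pattern was found, nothing left to do
```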

There is a random variant of it, which is exactly what the awk script does.

Whenever a pattern is identified, a coin is flipped, and with a probability of about 50% (see a small detail further on) the move either goes to the list of proposed moves and the nodes are blocked, or it is skipped.

In this way the priority of moves is made much less important.

Small detail: there are two kinds of moves, those which increase the number of nodes (DIST) and the others, which decrease the number of nodes (Arrow not taken into consideration). The small detail is that, after each step of the cycle, the probabilities of the moves are slightly modified: if in the cycle there have been more DIST moves than the others, then the probability of DIST moves decreases, and if there have been fewer DIST moves than the others, then the probability of DIST moves increases in the next cycle.

Why randomness? It is a cheap substitute, in this stupid reduction version, for asynchrony.
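
And a sketch of the random variant, in the same spirit (the concrete probabilities, the size of the adjustment and all the names are mine, only to show where the coin flip and the DIST rebalancing enter):

```python
import random

MOVES = ["FO-FOE", "DIST", "BETA", "FAN-IN", "PRUNING"]

def random_step(graph, find_patterns, apply_move, p_dist=0.5, p_other=0.5):
    """Like the deterministic step, but each found pattern is accepted only
    with some probability; the DIST probability is then nudged so that
    node-increasing and node-decreasing moves stay roughly balanced."""
    proposed, blocked = [], set()
    n_dist = n_other = 0
    for move in MOVES:
        for move_name, nodes in find_patterns(graph, move):
            is_dist = move_name in ("FO-FOE", "DIST")      # node-increasing moves
            p = p_dist if is_dist else p_other
            if blocked.isdisjoint(nodes) and random.random() < p:
                proposed.append((move_name, nodes))
                blocked.update(nodes)
                n_dist += int(is_dist)
                n_other += int(not is_dist)
    for move_name, nodes in proposed:
        apply_move(graph, move_name, nodes)
    # the small detail: rebalance the DIST probability for the next cycle
    if n_dist > n_other:
        p_dist = max(0.1, p_dist - 0.05)
    elif n_dist < n_other:
        p_dist = min(0.9, p_dist + 0.05)
    return p_dist
```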

4. How can I seriously change the chemlambda reduction algorithm?

By changing it from “stupid” to something more interesting; it’s up to you. What about bringing Node.js into the game? Know why? Because if you look at a mol file, you notice that if you split the file into ten pieces, then each of them is again a mol file.
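
To make the splitting remark concrete, here is a tiny Python illustration (the file names are made up): since a mol file is just a list of node lines, any subset of its lines is again a mol file, which is what makes a distributed, Node.js-style reduction thinkable.

```python
def split_mol(path, pieces=10):
    """Split a mol file into `pieces` smaller files; each piece is again a
    valid mol file, because the format is just one node per line."""
    with open(path) as f:
        lines = [ln for ln in f if ln.strip()]
    chunk = max(1, len(lines) // pieces)
    for i in range(0, len(lines), chunk):
        with open(f"{path}.part{i // chunk}.mol", "w") as out:
            out.writelines(lines[i:i + chunk])

# split_mol("lalala.mol")   # hypothetical file name, as in the example above
```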

_______________________________________________________________________

 

Experiments with a little lisper tutorial

I used the excellent lambda calculus tutorial by Mayer Goldberg from little-lisper.org for some experiments in chemlambda.

UPDATE: See also the post The working factorial and the video

_______________

There are five of them.

The mentioned tutorial is the source of the lambda term which I translated into chemlambda and then used in the Ackermann function demos. The most outrageously successful one is the computation of Ack(2,2) while self-multiplying. Moreover, the daughter molecules do not finish at the same time.

Here is a video of a screen recording at 2X speed.

Now, there is a very puzzling thing here. In chemlambda with a random, purely local graph rewriting algorithm I can compute the Ackermann function. But what about simpler functions, like the factorial?

I tried the easy way, consisting in translating lambda terms for the factorial into chemlambda, and the results are ugly. As a geometer who wants to decipher computation without using variables, values, variable passing or evaluations, I know that chemlambda is different because of this feature: it uses something like signal transduction instead of the general gates-and-wires model from IT. So there have to be consequences of that.

Let’s see what happens in the case of the factorial. In this demo I took the lambda term from the same tutorial, par. 57, page 14. I hoped it would work better than other proposals, based on the experience with the Ackermann function. I noticed in other experiments with terms for the factorial that the reduction in chemlambda is extremely wasteful, producing thousands of nodes and lots of trash. Moreover, there is a problem with the fixed point combinator, explained in the article with Louis Kauffman, Chemlambda, universality and self multiplication, and described in older demos like this one. In chemlambda, the molecule which corresponds to the Y combinator reduces to a very simple one (which does not have an equivalent as a lambda term), made of only two nodes, fanout and application. Then the molecule becomes a gun which shoots pairs of fanout and application nodes. There is no mystery about the Y combinator in chemlambda; its real mechanism consists in the self-multiplication by the fanout node. [Note: in the old demo and in the article linked there is no FOE fanout. The FOE and the moves associated with it are a recent addition to the formalism; see the page of moves.]

The problem with using the Y combinator is that it never stops generating pairs of A and FOE nodes. In a computation which implements recursion by the Y combinator, at some point, according to an IFTHENELSE or ISZERO, the cycle of Y is broken. But in chemlambda the Y gun molecule continues to exist, and this leads to a never-ending computation. On one side this is OK; see for example the life-like computations with chemlambda quines. Actually there is a stronger relation between chemlambda quines and the Y combinator molecule. One can design the computation so that when the Y molecule is no longer needed it is encapsulated into a quine, but that is for another time to explain in detail.

I come back to the factorial example. In the demo I linked you can see that the computation of the factorial is wasteful (and paradoxically leads to a Y molecule), even though it does not use a fixed point combinator.

Why?

First I thought it was because of currying and uncurrying. In chemlambda, because it is a graph rewrite system, there is no particular need to use currying all the time.

Then, to check this, I modified the molecule from the little lisper tutorial in order to geometrically compute the repeated application of the function f(a,b)=(succ(a), succ(a)·b). The function is a piece of a graph with two in and two out links, which is self-multiplying under the action of a number in the Church encoding.
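
As a plain arithmetic check (ignoring the graph side entirely), here is what repeated application of this function does; the starting pair (0,1) is my assumption, only to show how the factorial appears in the second slot.

```python
def f(a, b):
    """f(a,b) = (succ(a), succ(a)*b), the function mentioned above."""
    return a + 1, (a + 1) * b

a, b = 0, 1           # assumed starting pair, for illustration only
for n in range(1, 5):
    a, b = f(a, b)
    print(n, (a, b))  # (1, 1), (2, 2), (3, 6), (4, 24): the second slot is n!
```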

Here is a successful computation with this molecule. But does it work all the time or have I been lucky? The reduction algorithm is random and different runs may give different results. It is the case with the Ackermann function computation as well, but that one was successful all the time.

Oh, it turns out that the computation with that molecule works well in about 20% of the runs. Here is an unsuccessful run.

So there is still a problem, but which one?

Under close examination the computation is still wasteful, because of the term (piece of molecule) c0, for the 0 in the Church encoding. In chemlambda this term corresponds to a small molecule which has a termination node inside.

When we want to trash something useless, like programmers do, in chemlambda the useless piece does not disappear. It is like in nature: the thing continues to exist and continues to interact, while in the process of degradation, with the rest of the environment.

The termination node, via the PRUNING moves, slowly destroys the useless part, but other pieces from the useless part continue to interact with the rest of the graph.

Is this the problem?

In order to check this I further modified the molecule which was successful 20% of the time. I just replaced c0 by c1, which is the molecule for the lambda term for 1 in the Church encoding. Now, c1 does not have any termination node inside.

The price is that I no longer compute the factorial, but instead I compute the repeatedly applied function

F(a,b)=(succ(a), tms(a,b))

where tms(a,b)=ab+1. Here is a demo for the computation of tms(3,3) in chemlambda, and further on a video for tms(5,5)=26, where you can see a new creature, more complex than the walker discovered in the predecessor.

I checked to see what the closed expression is, if any, for the function I compute, namely

f(0)=1, f(n+1) = (n+1)f(n) + 1

and the Wolfram Alpha engine has an interesting answer.
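
Without spoiling the answer, the first values of the recurrence, and tms itself, are easy to check with a few lines of Python:

```python
def tms(a, b):
    return a * b + 1              # tms(3,3) = 10, tms(5,5) = 26, as in the demos

def f(n):
    """f(0) = 1, f(n+1) = (n+1)*f(n) + 1."""
    value = 1
    for k in range(1, n + 1):
        value = k * value + 1
    return value

print(tms(5, 5))                   # 26
print([f(n) for n in range(6)])    # [1, 2, 5, 16, 65, 326]
```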

Well, this time I hit the right spot. The computation works, again and again and again.

So we have to learn to program ecologically with chemlambda.

____________________________________________________________________

Life thrives in randomness, creatures die or blow out in the absence of it

Randomness is a manifestation of locality. The world is big and everything works at a local level, asynchronously, and randomness ensues.

I want to advance the following hypothesis about the origin of life.

Life is a manifestation of the computational universality of a collection of chemical reactions.

Indeed, there probably are many small collections of chemical reactions which, coupled with a random chemical reduction algorithm, form a universal computing model.

A proof of principle for this is chemlambda. The real chemical reactions which implement the moves shown at the chemlambda moves page (invisible in the chemlambda formalism, for the moment) are still to be discovered.

But they are so simple that there have to be many, many such chemical reactions.

In a system, in a chemical soup, if these chemical reactions happen to appear, what follows is a game of computation and self-multiplication.

Because universality means, in this particular case, that with non-negligible probability, anything can be achieved.

The role of randomness is tricky. On one side, randomness selects resilient creatures. That’s a funny thing: for example, in chemlambda good candidates for living creatures are quines.

A quine in chemlambda is a molecule which stays the same in a deterministic world. This gives the quine molecule a certain resilience when faced with randomness, which makes it have a life: it may grow, it may shrink, for a time it may play around the deterministic state, and it may eventually die.

This is illustrated in the first battle of microbes demo, where several quines are put together and “fed” with enzymes, which appear randomly, but such that if at some moment there are more enzymes for the moves which increase the number of nodes, then the next time the probability of appearance of such enzymes decreases in favour of those which decrease the number of nodes.

So globally it appears as if the quines compete for the moves, and the quines having a greater diversity of possible moves thrive, while the others die.

The 9_quine is the most fragile quine; as you see in the demo, many of them die (i.e. they transform into a small molecule which is inert to any reduction).

There is a lot to add about this; for example, there are other quines which behave as if they adopt the strategy of spores, i.e. they regress to an “egg” state and flourish later, from time to time, when they have to “compete” with bigger quines.

Of course, all this is in the eye of the observer; it is an emergent behaviour, like the intelligence of a Braitenberg vehicle.

But what if quines are a bit too fragile for life? Maybe there are molecules which grow to an approximately stable configuration, in random conditions, for a time, at least until they self-multiply.

[Have you seen the story of the 16 bubble quine, from egg to motherhood?]

Suppose, just suppose that in deterministic conditions such a molecule would grow slowly, instead of being a quine.

This is consistent with the requirement to be resilient in random conditions. There will be a second part of this post when the demos are prepared.

But it has a curious corollary, namely that such a living creature will blow out, like a cancer, in too calm, too deterministic conditions.

The example I have and play with is a molecule made of two 9_quines and a 5-atom molecule which, if left alone, grows in a regular pattern, but which, under the deterministic algorithm, when coupled by some bonds with the two quines, grows very, very slowly.

This molecule, under random conditions, displays an amazing variety of shapes. But all the runs show more or less the same thing: the two 9_quines play the role of taming the growth of the molecule, keeping it controlled, but at some moment the 9_quines die, somewhere in the molecule, in some unrecognizable shape, and after that the molecule reverts slowly to the regular growth pattern (which makes it unsustainable if there are physical limits to the growth).

So not only does randomness select creatures which can survive (and self-multiply) in random conditions; it may also select creatures which live in random conditions but die in deterministic conditions.

Maybe that is why life hates it when everything is predictable.

I close this post with the comment that there are, however, molecules which arrive at a determined state even in random conditions.

This may be very useful for computer-like computations. The example I have is again the remarkable molecule for the Ackermann function.

See it in this video self-reproducing while it computes.

Apparently some molecules display a huge resilience to randomness. The Ackermann function molecule daughters finish the computation at different times, but they do finish it.

_______________________________________________________________________