Dynamic rendering of the Ackermann function computation, deterministic and random

Source code here.

This video contains two examples of the computation of the Ackermann function using artificial chemistry. The (graph rewriting) rules are those of chemlambda, and there are two models of computation which use them.


In the first one the rules are applied deterministically, in the order of their importance. The rules which increase the number of nodes are considered more important than those which decrease the number of nodes. The rules are applied in parallel, as long as there is no conflict (i.e. as long as they don't apply to the same node). When there is a conflict, the rule with higher importance takes priority.
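To make the conflict resolution concrete, here is a minimal sketch of one deterministic parallel step. It is not the actual implementation: it assumes hypothetical helpers find_matches(graph, rule), returning the node sets a rule would rewrite, and apply_rule(graph, rule, match), performing the rewrite, with the rule list sorted by importance (node-increasing rules first).

```python
def deterministic_step(graph, rules, find_matches, apply_rule):
    """One parallel step: schedule non-conflicting matches, higher priority first."""
    used_nodes = set()      # nodes already claimed by a scheduled rewrite
    scheduled = []          # (rule, match) pairs to fire in this step
    for rule in rules:      # rules are sorted: node-increasing rules come first
        for match in find_matches(graph, rule):
            if used_nodes.isdisjoint(match):   # conflict check: no shared nodes
                used_nodes.update(match)
                scheduled.append((rule, match))
    for rule, match in scheduled:              # update only after the full scan
        graph = apply_rule(graph, rule, match)
    return graph, bool(scheduled)              # second value: did anything fire?
```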
In the first part of the video you see what happens if this model is applied to a graph which represents (according to the rules of chemlambda) the Ackermann function applied to (2,2). The expected result is 7 (as it appears in the Church encoding, which is then transformed into the chemlambda convention). There are no tricks, like pre-computing the expression of the function; everything goes on at a very basic level. The application of the rules does not parallel the application of lambda calculus reduction rules to the lambda term which represents Ack(2,2), under any reduction strategy. However, the result is obtained correctly, even though many of the intermediary steps are not graphs which represent a lambda term.
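As a plain sanity check of that value (nothing to do with the chemlambda reduction itself), the usual Ackermann–Péter recursion indeed gives 7 for (2,2):

```python
def ack(m, n):
    """Ackermann-Peter function, used only to check the expected result."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m - 1, n - 1))

print(ack(2, 2))   # prints 7
```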

 

Moreover, the model does not use any variable passing, nor any evaluation strategy!

The second example uses a different model of computation: the rules are applied randomly. That means the following: for any configuration in the graph which may be subject to a rule, a coin is flipped and the rule is applied with probability 50%. All rules are equally important. The rules are still applied in parallel, in the sense that the graph is updated only after all edges have been visited.
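In the same sketchy style as above (and with the same hypothetical helpers), one random parallel step could look like this; the only changes are the coin flip and the absence of a priority order:

```python
import random

def random_step(graph, rules, find_matches, apply_rule, p=0.5):
    """One random parallel step: every candidate match fires with probability p."""
    used_nodes = set()
    scheduled = []
    for rule in rules:                          # no priority: all rules are equal
        for match in find_matches(graph, rule):
            if used_nodes.isdisjoint(match) and random.random() < p:
                used_nodes.update(match)        # the coin flip decides firing
                scheduled.append((rule, match))
    for rule, match in scheduled:               # the graph is updated only after
        graph = apply_rule(graph, rule, match)  # all candidates have been visited
    return graph
```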

As you see in the second part of the video, the process takes longer, essentially because at each step fewer rules are applied to the whole graph. The comparison is not very accurate, because the reduction process may depend on the particular run of the program. Even if lambda beta calculus (with some model of reduction) is confluent, chemlambda is surely not. It is an open problem whether, starting from a chemlambda graph which represents a lambda term that has a normal form in lambda calculus, the random model of computation always eventually arrives at the graph which represents that normal form.
At least for the term I use here for Ack(2,2), it looks like it does. This is, of course, not a proof.

 

UPDATE: A quine is reduced in chemlambda, first using a deterministic model of computation, then using a model which has a random ingredient.


These two models are the same as the ones used for the Ackermann function video.
The quine is called the 9_quine; it was introduced in https://chorasimilarity.wordpress.com/…

In the deterministic reduction you see that at each step the graph reduces and reconstructs itself. It goes on like that forever.

In the random reduction the process is different. In fact, if you look at the list of reductions undergone by the 9_quine, you see that after each cycle of reduction (in the deterministic version) the graph is isomorphic to the one before, because there is an equilibrium between the rules which add nodes and the rules which destroy nodes.
In the random version this equilibrium is broken, so you see the graph grow, either with more and more red nodes or with more and more green nodes.
However, because each rule is applied with equal probability, in the long run the graph veers from dominantly green states to dominantly red states and back again, endlessly.
This is proof that the reductions in chemlambda vary according to the order in which the moves are applied. On the other hand, it is evidence (but not proof) that there is a sort of fair effort towards eventual confluence. I use “confluence” in a vague manner, not related to lambda calculus (because the 9_quine does not come from a lambda term), but rather to the graph rewriting world.
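A crude way to see why equal probabilities still produce this endless drift: treat the difference between node-adding and node-removing firings as a symmetric random walk (a toy caricature, not a simulation of chemlambda). Such a walk wanders arbitrarily far to either side and keeps crossing back, which matches the alternation between green-dominated and red-dominated states.

```python
import random

def imbalance_walk(steps=100000, seed=1):
    """Toy model: +1 for a node-adding firing, -1 for a node-removing firing."""
    random.seed(seed)
    imbalance = 0
    lowest, highest = 0, 0
    for _ in range(steps):
        imbalance += 1 if random.random() < 0.5 else -1
        lowest = min(lowest, imbalance)
        highest = max(highest, imbalance)
    return lowest, highest

print(imbalance_walk())   # a clearly negative low and a clearly positive high
```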
