What you see in these links: I take a lambda term, transform it into an artificial molecule, then let it reduce randomly, with no evaluation strategy. That is what I call the most stupid algorithm. The amazing thing is that it works.

You don’t have to believe me, because you can check it independently, by using the programs available in the GitHub repo.

Here is the list of demos where lambda terms are used.

- experiments with lambda terms from a little lisper tutorial http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_4.html
- factorial of 4 = 24 http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_2_mod.html
- computation of the 4th and 5th Fibonacci numbers http://chorasimilarity.github.io/chemlambda-gui/dynamic/fibo.html
- Ackermann(2,2)=7 http://chorasimilarity.github.io/chemlambda-gui/dynamic/random_ackermann_2_2.html
- Ackermann(3,2)=29 http://chorasimilarity.github.io/chemlambda-gui/dynamic/ackermann_3_2.html (for the heck of it; you need more than 1h to watch the visualisation)
- Ackermann(2,2)=7 and, simultaneously, a self-multiplication of the graph (i.e. the root of the syntactic tree of Ackermann(2,2) is connected to a fanout node (FO), so the reduction happens at the same time as the whole shebang is duplicated) http://chorasimilarity.github.io/chemlambda-gui/dynamic/fo_ackermann_2_2.html
- the predecessor function http://chorasimilarity.github.io/chemlambda-gui/dynamic/pred_2.html
- related to the predecessor, but outside lambda calculus: two entwined predecessors http://chorasimilarity.github.io/chemlambda-gui/dynamic/toorings.html
- the Y combinator, written with the S, K, I combinators as Y = S (K (S I I)) (S (S (K S) K) (K (S I I))), applied to I. The out port of this graph (i.e. the root of the syntactic tree) is connected to a fanout node (FO), so the reduction of S (K (S I I)) (S (S (K S) K) (K (S I I))) I happens at the same time as the self-multiplication. http://chorasimilarity.github.io/chemlambda-gui/dynamic/yfrcombtreefo.html
- the Y-v combinator and a reduction related to it http://chorasimilarity.github.io/chemlambda-gui/dynamic/yv_combinator.html

Now, details of the process:

– I take a lambda term and I draw the syntactic tree

– this tree has the variables, bound and free, as leaves. These are eliminated by two tricks, one for the bound variables, the other for the free ones. The bound variables are eliminated by replacing them with new arrows in the graph, going from one leg of a lambda abstraction node to the leaf where the variable appears. If the same bound variable appears in several places, then insert some fanout nodes (FO). Do the same for the free variables, by adding a tree of FO nodes for each free variable. If a bound variable does not appear anywhere in the body, then add a termination (T) node.

– in this way the graph which is obtained is no longer a tree. It is a mostly trivalent graph, with some free ends, and it is oriented. There is one free end corresponding to an “in” arrow for each free variable, and there is only one free end corresponding to an “out” arrow, coming from the root of the syntactic tree.

– I give a unique name to each arrow in the graph

– then I write the “mol file” which represents the graph, as a list of nodes together with the names of the arrows connected to each node (thus an application node A which has its left leg connected to the arrow “a”, its right leg connected to the arrow “b” and its out leg connected to “c” is described by the line “A a b c”, for example).
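As a concrete sketch, here is what a mol file for the small term λx.(x x) could look like, parsed with a few lines of Python. The term, the port order inside each line, and the arrow names are my own choices for illustration; the real scripts in the repo are written in awk and fix their own conventions.

```python
# Hypothetical mol file for the term λx.(x x), assuming each line is
# "NODE arrow arrow arrow" with these (made-up) port orders:
#   L  var body out    (lambda abstraction)
#   FO in  out1 out2   (fanout, duplicates the bound variable)
#   A  left right out  (application)
mol_text = """\
L a d r
FO a b c
A b c d
"""

def parse_mol(text):
    """Parse a mol file into a list of (node_type, [arrow names])."""
    nodes = []
    for line in text.splitlines():
        parts = line.split()
        if parts:
            nodes.append((parts[0], parts[1:]))
    return nodes

nodes = parse_mol(mol_text)
print(nodes)
```

Note how the bound variable x has become the arrow “a” from the lambda node to the fanout, the two copies “b” and “c” feed the application, and “r” is the single free out end coming from the root.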

OK, now I have the mol file, I run the scripts on it and then I look at the output.

What is the output?

The scripts take the mol file and transform it into a collection of associative arrays (that’s why I’m using awk) which describe the graph.
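In Python, those awk associative arrays might be mirrored by dictionaries: one mapping each node index to its mol line, and one mapping each arrow name to the places where it occurs. This is only a sketch of the idea, with a toy two-node graph; the actual scripts keep their own bookkeeping.

```python
from collections import defaultdict

# a toy parsed mol file: an application A and a fanout FO sharing arrow "c"
mol_lines = [("A", ["a", "b", "c"]), ("FO", ["c", "d", "e"])]

# node index -> (type, arrows), mimicking awk's associative arrays
node = {i: line for i, line in enumerate(mol_lines)}

# arrow name -> list of (node index, leg position) where it occurs
occurs = defaultdict(list)
for i, (ntype, arrows) in node.items():
    for pos, arrow in enumerate(arrows):
        occurs[arrow].append((i, pos))

# internal arrows occur exactly twice; free ends occur only once
print(sorted(a for a, occ in occurs.items() if len(occ) == 2))
```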

Then they apply the algorithm which I call “stupid”, because it really is stupidly simplistic: do a predetermined number of cycles, and in each cycle do the following

– identify the places (called patterns) where a chemlambda rewrite is possible (these are pairs of lines in the mol file, so pairs of nodes in the graph)

– then, as you identify a pattern, flip a coin; if the coin gives “0” then block the pattern and propose a change in the graph

– when you finish all this, update the graph

– some rewrites introduce some 2-valent nodes, called “Arrow”. Eliminate them in an inner cycle called the “COMB cycle”, i.e. comb the arrows

– repeat
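The cycle above can be sketched as follows. The single pattern shown (the out arrow of a lambda node feeding the left leg of an application node, rewritten beta-style into two Arrow nodes) and all port orders are my own illustrative assumptions, not the actual chemlambda moves from the repo; the point is only the shape of the loop: find patterns, flip coins, update, comb.

```python
import random

random.seed(0)  # deterministic run for the example; other seeds reduce differently

# graph = list of (node_type, [arrow names]); hypothetical port orders:
#   L var body out,  A left right out,  Arrow in out

def comb(graph):
    """COMB cycle: eliminate 2-valent Arrow nodes by gluing their two arrows."""
    changed = True
    while changed:
        changed = False
        for i, (t, arrows) in enumerate(graph):
            if t != "Arrow":
                continue
            p, q = arrows
            rest = graph[:i] + graph[i + 1:]
            if any(q in a for _, a in rest):
                graph = [(t2, [p if x == q else x for x in a]) for t2, a in rest]
                changed = True
                break
            if any(p in a for _, a in rest):
                graph = [(t2, [q if x == p else x for x in a]) for t2, a in rest]
                changed = True
                break
    return graph

def cycle(graph):
    """One cycle: find beta-like patterns, flip coins, then update the graph."""
    proposals, blocked = [], set()
    for i, (ti, ai) in enumerate(graph):
        for j, (tj, aj) in enumerate(graph):
            # pattern: the out arrow of an L node is the left arrow of an A node
            if ti == "L" and tj == "A" and ai[2] == aj[0] and not ({i, j} & blocked):
                if random.random() < 0.5:  # flip a coin
                    blocked |= {i, j}      # block the pattern
                    x, m, _ = ai           # L: var=x, body=m
                    _, n, e = aj           # A: right=n, out=e
                    proposals.append(({i, j}, [("Arrow", [m, e]), ("Arrow", [n, x])]))
    removed = set().union(*(s for s, _ in proposals)) if proposals else set()
    new = [nd for k, nd in enumerate(graph) if k not in removed]
    for _, nds in proposals:
        new.extend(nds)
    return comb(new)

# (λx.x) applied to a free end n: the identity should wire n straight to the root e
graph = [("L", ["a", "a", "c"]), ("A", ["c", "n", "e"])]
for _ in range(10):  # a predetermined number of cycles
    graph = cycle(graph)
print(graph)  # a single wire from n to e remains
```

The coin may refuse the pattern for a few cycles (it does here, with this seed), which is exactly the randomness that makes different runs of the real scripts produce different animations.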

As you see, there is absolutely no care about the correctness of the intermediary graphs. Do they represent lambda terms? Generically no!

Are there any variables which are passed around, or evaluations of terms done in some clever order (eager, lazy, etc.)? Not at all. There are no variables other than the names of the arrows of the graph, and these have the property that each name appears exactly twice in the mol file (once in an “in” port, once in an “out” port). When a pattern is replaced these names disappear, and the names of the arrows of the new pattern are generated on the fly, for example by a counter of arrows.
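Generating the new arrow names on the fly by a counter, as described, can be as simple as this (the class and name scheme are mine, for illustration):

```python
class ArrowNames:
    """Fresh arrow names generated on the fly by a counter."""
    def __init__(self):
        self.count = 0

    def fresh(self):
        self.count += 1
        return "arrow%d" % self.count

names = ArrowNames()
print(names.fresh(), names.fresh())  # arrow1 arrow2
```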

The scripts do the computation and then they stop. A choice has to be made about how to present the computation and the results.

One obvious choice would be to see the computation as a sequence of mol files, corresponding to the sequence of graphs. Then one could use another script to transform each mol file into a graph (via, say, a json file) and use some graph visualiser to see it. This was the choice made in the first scripts.

Another choice is to make an animation of the computation, by using d3.js. Nodes which are eliminated are first freed of links and then they vanish, while new nodes appear, are linked with their ports, then linked with the rest of the graph.

This is what you see in the demos. The scripts produce an html file which contains a js script that uses d3.js. So the output of the scripts is the visualisation of the computation.

Recall that the algorithm of computation is random, therefore it is highly likely that different runs of the algorithm give different animations. In the demos you see one such animation, but you can take the scripts from the repo and make your own.

What is amazing is that they give the right results!

It is perhaps bizarre to look at the computation and not make any sense of it. What happens? Where is the evaluation of this term? Who calls whom?

Nothing of this happens. The algorithm just does what I explained. And since there are no calls, no evaluations, no variables passed from here to there, that means that you won’t see them.

That is because the computation does not work by the IT paradigm of signals sent by wires, through gates, but it works by what chemists call signal transduction. This is a pervasive phenomenon: a molecule enters in chemical reactions with others and there is a cascade of other chemical reactions which propagate and produce the result.

About what you see in the visualisations.

Because the graph is oriented, and because the trivalent nodes have differentiated legs (for example there might be a left.in leg, a right.in leg and an out leg, which for symmetry is described as a middle.out leg), I want to turn it into an unoriented graph.

This is done by replacing each trivalent node by 4 nodes, and each free end or termination node by 2 nodes.

For a trivalent node there will be one main node and 3 other nodes which represent the legs. These are called “ports”. There is a colour code: the main node of A (application) and FO is coloured green, the main node of L (lambda) and FI (fanin, which exists in chemlambda only) is coloured red (actually in the demos this is a purple), and so on. The port nodes are coloured yellow for the “in” ports and blue for the “out” ports. The “left”, “right”, “middle” types are encoded by the radius of the ports.
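A sketch of that expansion in Python, producing d3-style node and link records for one application node. The colours follow the description above; the concrete radii, the record fields, and the mapping of legs to (position, direction) pairs are assumptions made for the sketch.

```python
# colour code from the text: main node green for A and FO, red for L and FI;
# port nodes yellow for "in", blue for "out"; the left/right/middle type of a
# port is encoded in its radius (the numbers below are an arbitrary choice)
MAIN_COLOR = {"A": "green", "FO": "green", "L": "red", "FI": "red"}
PORT_RADIUS = {"left": 3, "right": 5, "middle": 7}

def expand(node_id, node_type, legs):
    """Replace one trivalent node by one main node plus 3 port nodes.

    `legs` is a list of (position, direction) pairs; for an A node we assume
    [("left", "in"), ("right", "in"), ("middle", "out")].
    """
    nodes = [{"id": node_id, "color": MAIN_COLOR[node_type]}]
    links = []
    for pos, direction in legs:
        pid = "%s.%s" % (node_id, pos)
        nodes.append({"id": pid,
                      "color": "yellow" if direction == "in" else "blue",
                      "r": PORT_RADIUS[pos]})
        links.append({"source": node_id, "target": pid})
    return nodes, links

nodes, links = expand("n1", "A", [("left", "in"), ("right", "in"), ("middle", "out")])
print(len(nodes), len(links))  # 4 nodes (1 main + 3 ports), 3 links
```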
