Tag Archives: lambda calculus

Deterministic vs random, an example of grandiose shows vs quieter, functioning anarchy

In the following video you can see, at the left, the deterministic evolution and, at the right, the random evolution of the same molecule, duplex.mol from the chemlambda repository. They take about the same time.

The deterministic one is like a ballet: it has a comprehensible development, it has rhythm and drama, clear steps and synchronization.

The random one is more fluid, less symmetric, more mysterious.

What do you prefer, a grand synchronized show or a functioning, quieter anarchy?

Which one do you think is more resilient?

What is happening here?

The molecule is inspired by lambda calculus. The computation which is encoded is the following. Consider the lambda term for the identity function, i.e. I=Lx.x. It has the property that IA reduces to A for any term A. In the molecule it appears as a red trivalent node with two ports connected, so it looks like a dangling red globe. Now, use a tree of fanouts to multiply (replicate) this identity 8 times, then build the term

(((II)(II))((II)(II)))(((II)(II))((II)(II)))

Then use one more fanout to replicate this term into two copies and reduce all. You’ll get two I terms, eventually.
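As a quick sanity check of the lambda calculus side (outside chemlambda), here is a minimal Python sketch with ordinary closures standing in for the terms; the names I, quad and term are mine, just for illustration:

I = lambda x: x            # the identity, I = Lx.x
II = I(I)                  # I applied to I reduces to I
quad = (II(II))(II(II))    # ((II)(II))((II)(II))
term = quad(quad)          # (((II)(II))((II)(II)))(((II)(II))((II)(II)))
print(term is I)           # True: the whole tower collapses back to the identity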
In the deterministic version the following happens.

– the I term (seen as a red dangling node) is replicated (by a sequence of two rewrites) and gradually the tree of fanouts is destroyed

– simultaneously, the tree of applications (i.e. the syntactic tree of the term, but with the I's as leaves) is replicated by the fanout from the end

– because the reduction is deterministic, we get 16 copies of I exactly when we get two copies of the application tree, so in the next step there will be a further replication of the 16 I's into 32, and then there will be two disconnected copies of the molecule which represents ((II)(II))((II)(II))

– after that, this term-molecule reduces to (II)(II), then to II, then to I, but recall that there are two copies, therefore you see this twice.

In the random version everything mixes. Anarchy. Some replications of the I's reach the tree of applications before it has finished replicating itself, then reductions of the kind II –> I happen at the same time as replications of other pieces. And so on.
There is no separation of stages of this computation.
And it still works!

I used quiner_experia, with the mol file duplex.mol. For the first run I set all the weights to 0 (to get deterministic application of the rewrites) and also set the rise parameter to 0 (this is specific to quiner_experia, not present in quiner), because the rise parameter lowers the probabilities of new rewrites, exponentially, during the same step, in order to give fair chances to any subset of all the possible rewrites.
Then I made a screencast of the result, without speeding it up, using Safari to run it.
For the random version I set all the weights to 1 and the rise parameter to 8 (empirically, this gives the smoothest evolution for a generic molecule from the list of examples). I ran the result with Safari and screencasted it.
Then I put the two movies side by side (deterministic at left, random at right) and made a screencast of them running in parallel. (Almost in parallel: there is about a 1/2 second difference because I started the deterministic one first, by hand.)
That’s it, enjoy!
For chemlambda, look at the trailer from the collection of videos I have on Vimeo.


Crossings as pairs of molecular bonds; boolean computations in chemlambda

I collect here slightly edited versions of a stream of posts on the subject from the chemlambda microblogging collection. Hopefully this post will give a clearer image of this thread.

Here at the chorasimilarity open notebook the subject has been touched on previously, but perhaps the handmade drawings made it harder to understand (than with the modern 🙂 technology from the chemlambda repository):

Teaser: B-type neural networks in graphic lambda calculus (II)

(especially the last picture!)

All this comes with means of validation. This is a very powerful idea: in the future validation will replace peer review, because it is more scientific than hearsay from anonymous authority figures (aka old peer review) and because it is simpler to implement than a network of hearsay comments (aka open peer review).

All animations presented here are obtained by using the script quiner.sh and various mol files. See instructions about how you can validate (or create your own animations) here:
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

Here starts the story.

(Source FALSE is the hybrid of TRUE, boole.mol file used)

[animation: boole]

Church encoding gives a way to define boolean values as terms in the lambda calculus. It is easy:

TRUE= Lx.(Ly.x)

FALSE= Lx.(Ly.y)

So what? When we apply one of these terms to two other arbitrary terms X and Y, look what happens (arrows are beta reductions (Lx.A)B – – > A[x:=B]):

(TRUE X) Y – – > (Ly.X) Y – – > X (meaning Y goes to the garbage)

(FALSE X) Y – – > (Ly.y) Y – – > Y (meaning X goes to the garbage)

It means that TRUE and FALSE select a way for X and Y: one of them survives, the other disappears.
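Here is the same selection written as a minimal Python sketch, with Church booleans as closures (my own illustration, not part of the chemlambda scripts):

TRUE  = lambda x: lambda y: x    # Lx.(Ly.x)
FALSE = lambda x: lambda y: y    # Lx.(Ly.y)
print(TRUE("X")("Y"))            # X  (Y goes to the garbage)
print(FALSE("X")("Y"))           # Y  (X goes to the garbage)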

Good: this selection is an essential ingredient for computation

Bad: too wasteful! why send a whole term to the garbage?

Then we can see it otherwise: there are two outcomes, S(urvival) and G(arbage), and there is a pair of terms X and Y.

– TRUE makes X connect with S and Y connect with G

– FALSE makes X connect with G and Y connect with S

The terms TRUE and FALSE appear as molecules in chemlambda, each one made of two red nodes (lambdas) and a T (termination) node. But we may dispense with the T nodes, because they lead to waste, and keep only the lambda nodes. So in chemlambda the TRUE and FALSE molecules are, each, made of two red (lambda) nodes and they have one FROUT (free out).

They look almost alike, only they are wired differently. We want to see how it looks to apply one term to X and then to Y, where X and Y are arbitrary. In chemlambda this means we have to add two green (A) application nodes, so TRUE or FALSE applied to some arbitrary X and Y appears, each, as a 4-node molecule, made of two red (lambda) and two green (A) nodes, with two FRIN (free in) nodes corresponding to X and Y and two FROUT (free out) nodes: one corresponding to the deleted termination node, thus the G(arbage) outcome, and the other to the "output" of the lambda term, thus the S(urvival) outcome.

But the configuration made of two green A nodes and two red L nodes is the familiar zipper which you can look at in this post

[animation: betazipper]

In the animation you see TRUE (at left) and FALSE (at right), with the magenta FROUT nodes and the yellow FRIN nodes.

The zipper configurations are visible as the two vertical strings made of two green, two red nodes.

What’s more? Zippers, they do only one thing: they unzip.

The wiring of TRUE and FALSE is different. You can see the TRUE and FALSE in the lower part of the animation. I added four Arrow (white) nodes in order to make the wiring more visible. Arrow nodes are eliminated in the COMB cycle; they have only a fleeting existence.

This shows what is really happening: look at each configuration (TRUE at left, FALSE at right). In the upper side you have 4 nodes, two magenta, two yellow, which are wired together at the end of the computation. In the case of TRUE they end up wired in an X pattern, in the case of FALSE they end up wired in a = pattern.

At the same time, in the lower side, before the start of the computation, you see the 4 white nodes which, in the case of TRUE, are wired in an X pattern and, in the case of FALSE, are wired in a = pattern. So what is happening is that the pattern (X or =) is teleported from the 4 white nodes to the 4 magenta-yellow nodes!

The only difference between the two molecules is in this wiring pattern, X vs =. But one is the hybrid of the other; hybridisation is the operation (akin to the product of knots) which was used and explained in the post about senescence, and used again in more recent posts. You just take a pair of bonds and switch the ends. Therefore TRUE and FALSE are hybrids, one of the other.
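To fix the idea, here is a toy Python sketch of hybridisation on a pair of bonds (my own notation, with bonds written as pairs of port names; not taken from the scripts):

def hybrid(bond1, bond2):
    # take a pair of bonds and switch their ends: (a,b),(c,d) -> (a,d),(c,b)
    (a, b), (c, d) = bond1, bond2
    return (a, d), (c, b)

# the X wiring of TRUE becomes the = wiring of FALSE
print(hybrid(("X_in", "S"), ("Y_in", "G")))   # (('X_in', 'G'), ('Y_in', 'S'))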

(Source Boolean wire, boolewire.mol file used )

If you repeat the pattern which is common to the TRUE and FALSE molecules then you get a boolean wire, which is a more impressive "crossings teleporter". This time the boxed crosses have been flattened, but the image is clear:

[animation: boolewire]

Therefore, TRUE and FALSE represent choices of pairs of chemical bonds! Boolean computation (as seen in chemlambda) can be seen as management of promises of crossings.

(Source Promises of crossings, ifthenelsecross.mol file used )

You see 4 configurations of 4 nodes each, two magenta and two yellow.

In the upper left side corner is the “output” configuration. Below it and slightly to the right is the “control” configuration. In the right side of the animation there are the two other configurations, stacked one over the other, call them “B” (lower one) and “C” (upper one).

Connecting all this there are nodes A (application, green) and L (lambda, red).

[animation: ifthenelsecross]

You see a string of 4 green nodes, approximately vertical, in the middle of the picture, and a “bag” of nodes, red and green, in the lower side of the picture. This is the molecule for the lambda term

IFTHENELSE = L pqr. pqr

applied to the "control", then to "B", then to "C", then to two unspecified "X" and "Y" which appear only as the two yellow dots in the "output" configuration.

After reductions we see what we get.

Imagine that you put in each 4 nodes configuration “control”, “B” and “C”, either a pair of bonds (from the yellow to the magenta nodes) in the form of an “X” (in the picture), or in the form of a “=”.

"X" is for TRUE and "=" is for FALSE.

Depending on the configuration from "control", one of the "B" or "C" configurations will form, together with its remaining pair of red nodes, a zipper with the remaining pair of green nodes.

This will have as an effect the “teleportation” of the configuration from “B” or from “C” into the “output”, depending on the crossing from “control”.

You can see this as: based on what “control” senses, the molecule makes a choice between “B” and “C” promises of crossings and teleports the choice to “output”.
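On the lambda calculus side this choice can be checked with a small Python sketch (my own illustration): IFTHENELSE applied to a Church boolean really does select one of the two branches.

TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
IFTHENELSE = lambda p: lambda q: lambda r: p(q)(r)   # L pqr. pqr
print(IFTHENELSE(TRUE)("B")("C"))    # B
print(IFTHENELSE(FALSE)("B")("C"))   # C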

(Source: Is there something in the box?, boxempty.mol used)

I start from the lambda calculus term ISZERO and then I transform it into a box-sensing device.

In lambda calculus the term ISZERO has the expression

ISZERO = L a. ((a (Lx.FALSE)) TRUE)

and it has the property that ISZERO N reduces either to TRUE (if N is the number 0) or FALSE (if N is a number other than 0, expressed in the Church encoding).

The number 0 is
0 = FALSE = Lx.Ly.y

For the purpose of this post I take also the number 2, which is in lambda calculus

2=Lx.Ly. x(xy)

(which means that x is applied twice to y)

Then, look: (all reductions are BETA: (Lx.A)B – – > A[x:=B] )

ISZERO 0 =
(L a. ((a (Lx.FALSE)) TRUE) ) (Lx.Ly.y) – – >
((Lx.Ly.y) (Lx.FALSE)) TRUE – – >
(Ly.y)TRUE – – > (remark that Lx.FALSE is sent to the garbage)
TRUE (and the number itself is destroyed in the process)

and

ISZERO 2 =
(L a. ((a (Lx.FALSE)) TRUE) ) (Lx.Ly. x(xy)) – – >
((Lx.Ly. x(xy)) (Lx.FALSE)) TRUE – – > (fanout of Lx.FALSE performed secretly)
(Lx.FALSE) ((Lx.FALSE) TRUE) – – >
FALSE ( and (Lx.FALSE) TRUE sent to the garbage)

Remark that in the two cases there was the same number of beta reductions.

Also, the use of TRUE and FALSE in the ISZERO term is… none! The same reductions would have been performed with an unspecified “X” as TRUE and an unspecified “Y” as FALSE.

(If I replace TRUE by X and FALSE by Y then I get a term which reduces to X if applied to 0 and reduces to Y if applied to a non zero Church number.)
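Here is that remark checked with Python closures (a sketch of mine, with X and Y as arbitrary markers in place of TRUE and FALSE):

ZERO = lambda f: lambda x: x                 # 0 = Lx.Ly.y
TWO  = lambda f: lambda x: f(f(x))           # 2 = Lx.Ly.x(xy)
ISZERO = lambda a: a(lambda x: "Y")("X")     # L a. ((a (Lx.Y)) X)
print(ISZERO(ZERO))   # X  (the place of TRUE)
print(ISZERO(TWO))    # Y  (the place of FALSE)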

Of course we can turn all this into chemlambda reductions, but in chemlambda there is no garbage and moreover I want to make the crossings visible. But where are the crossings, if they don't come from TRUE and FALSE (because it should work with X instead of TRUE and Y instead of FALSE)?

Alternatively, let's see (a slight modification of) the ISZERO molecule as a device which senses whether there is a number equal to or different from 0, and then transforms, according to the two cases, into an X crossing or a = crossing.

Several slight touches are needed for that.

1. Numbers in chemlambda appear as stairs of pairs of nodes FO (fanout, green) and A (application, green), as many stairs as the number which is represented. The stairs are wrapped into two L (lambda, red) nodes and their bonds.
We can slightly modify this representation so that it appears like a box of stairs with two inputs and two outputs, by adding a dangling green (A, application) node with its output connected to one of its inputs (this makes no sense in lambda calculus, but it behaves well in the beta reductions as performed in chemlambda).

In the animation you can see, in the lower part of the figure:
– at left the number 0 with an empty box (there are two Arrow (white) nodes added for clarity)
– at right the number 2 with a box with 2 stairs
… and in each case there is this dangling A node (code in the mol file of the form A z u u)
[animation: boxempty]

2. The ISZERO is modified by making it have two FRIN (free in, yellow) and two FROUT (free out, magenta) nodes which will be involved in the final crossing(s). This is done by a (hopefully) clever change of the translation of the ISZERO molecule into chemlambda: first, the two yellow FRIN nodes represent the "X" and the "Y" (which replace the FALSE and the TRUE, recall), and then a FOE (the other fanout node, yellow) and a FI (fanin node, red) are added in strategic places.

________________________________

let x=A in B in chemlambda

The beta reduction from lambda calculus is easy to understand as the command
let x=B in A
It corresponds to the term (Lx.A)B which reduces to A[x:=B], i.e. to the term A in which all instances of x have been replaced by B
(by an algorithm which is left to be specified; there are many choices among them!)

In Christopher P. Wadsworth, Semantics and Pragmatics of the Lambda Calculus, DPhil thesis, Oxford, 1971, and later in John Lamping, An algorithm for optimal lambda calculus reduction, POPL '90 Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, p. 16-30, there are proposals to replace the beta rule by a graph rewrite on the syntactic tree of the term.

These proposals opened many research paths, related to call-by-name evaluation strategies, and of course leading to Interaction Nets.

The beta rule, as seen on the syntactic tree of a term, is a simple graph rewrite, but the context of its application is complex if one tries to stay in the realm of syntactic trees (or of some graphs associated to lambda terms).

This is one example of blindness caused by semantics. Of course it is very difficult to conceive strategies of application of this local rule (it involves only two nodes and 5 edges) so that the graph after the rewrite has a global property (i.e. represents a lambda term).

But a whole world is missed this way!

In chemlambda the rewrite is called BETA or A-L.
See the list of rewrites:
http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html

In mol notation this rewrite is

L 1 2 c , A c 3 4  – – > Arrow 1 4 , Arrow 3 2

(and then is the turn of COMB rewrite to eliminate the arrow elements, if possible)

As 1, 2, 3, 4, c are only port variables in chemlambda, let's use others:

L A x in , A in B let  – – >  Arrow A let , Arrow B x

So if we interpret Arrow (via the COMB rewrite) as meaning “=”, we get to the conclusion that

let x=B in A

is in chemlambda

L A x in

A in B let

Nice. It is almost verbatim the same thing. Remark that the command "let" appears as a port variable too.
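As an illustration only (the real scripts are written in awk; this is a Python sketch of mine), the BETA rewrite on mol lines can be coded as a straightforward pattern replacement:

def beta(l_node, a_node):
    # L 1 2 c , A c 3 4  – – >  Arrow 1 4 , Arrow 3 2
    _, p1, p2, c1 = l_node
    _, c2, p3, p4 = a_node
    assert c1 == c2, "the out port of L must be wired to the left port of A"
    return [("Arrow", p1, p4), ("Arrow", p3, p2)]

# let x=B in A, with ports:  L A x in , A in B let
print(beta(("L", "A", "x", "in"), ("A", "in", "B", "let")))
# [('Arrow', 'A', 'let'), ('Arrow', 'B', 'x')]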

Visually the rewrite/command is this:

[image: beta]
As you see this is a Wadsworth-Lamping kind of graph rewrite, with the distinctions that:
(a) x, A, B are only port variables, not terms
(b) there is no constraint for x to be linked to A

The price is that even if we start with a graph which is related to a lambda term, after performing such careless rewrites we get out of the realm of lambda terms.

But there is a more subtle difference: the nodes of the graphs are not gates and the edges are not wires which carry signals.

The reduction works well for many fundamental examples,
http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html
by a simple combination of the beta rewrite and those rewrites from the DIST family I wrote about in the post about duplicating trees.
https://chorasimilarity.wordpress.com/2015/05/04/the-illustrated-shuffle-trick-used-for-tree-duplication/

So, we get out of lambda calculus; what's wrong with that? Nothing, actually. It turns out that the world outside lambda, but inside chemlambda, has very interesting features. Nobody has explored them, which is why it is so hard to discuss them without falling into one's preconceptions (it has to be functional programming, it has to be lambda calculus, it has to be a language, it has to have a global semantics).
___________________________________________

All successful computation experiments with lambda calculus in one list

What you see in these links: I take a lambda term, transform it into an artificial molecule, then let it reduce randomly, with no evaluation strategy. That is what I call the most stupid algorithm. What is amazing is that it works.
You don’t have to believe me, because you can check it independently, by using the programs available in the github repo.

Here is the list of demos where lambda terms are used.

Now, details of the process:
– I take a lambda term and I draw its syntactic tree
– this tree has as leaves the variables, bound and free. These are eliminated by two tricks, one for the bound variables, the other for the free ones. The bound variables are eliminated by replacing them with new arrows in the graph, going from one leg of the lambda abstraction node to the leaf where the variable appears. If there are several places where the same bound variable appears, then insert some fanout (FO) nodes. For the free variables do the same, adding for each free variable a tree of FO nodes. If the bound variable does not appear anywhere else then add a termination (T) node.
– in this way the graph obtained is no longer a tree. It is a mostly trivalent, oriented graph, with some free ends. There is one free end corresponding to an "in" arrow for each free variable. There is only one free end which corresponds to an "out" arrow, coming from the root of the syntactic tree.
– I give a unique name to each arrow in the graph
– then I write the "mol file" which represents the graph, as a list of nodes and the names of the arrows connected to the nodes (thus an application node A which has the left leg connected to the arrow "a", the right leg connected to the arrow "b" and the out leg connected to "c" is described by the line "A a b c", for example).
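For instance, with these conventions (and assuming the port order used in the BETA rewrite above, body/variable/out for L and left/right/out for A; the precise free-end conventions are those of the repo, so take this only as a hand-made illustration), the term (Lx.x)(Ly.y) could look like the mol file

L a a b
L c c d
A b d e

where e is the single free "out" arrow coming from the root.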

OK, now I have the mol file, I run the scripts on it and then I look at the output.

What is the output?

The scripts take the mol file and transform it into a collection of associative arrays (that’s why I’m using awk) which describe the graph.

Then they apply the algorithm which I call "stupid" because it really is stupidly simplistic: do a predetermined number of cycles, and in each cycle do the following
– identify the places (called patterns) where a chemlambda rewrite is possible (these are pairs of lines in the mol file, so pairs of nodes in the graph)
– then, as you identify a pattern, flip a coin; if the coin gives "0" then block the pattern and propose a change in the graph
– when you finish all this, update the graph
– some rewrites involve the introduction of some 2-valent nodes, called "Arrow"; eliminate them in an inner cycle called the "COMB cycle", i.e. comb the arrows
– repeat

As you see, there is absolutely no care about the correctness of the intermediary graphs. Do they represent lambda terms? Generically, no!
Are there any variables which are passed, or evaluations of terms which are done in some clever order (eager, lazy, etc.)? Not at all. There are no other variables than the names of the arrows of the graph, and these have the property that they appear exactly twice in the mol file (once at an "in" port, once at an "out" port). When a pattern is replaced these names disappear and the names of the arrows of the new pattern are generated on the fly, for example by a counter of arrows.
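Here is a compressed Python sketch of one such cycle (the real programs are awk scripts handling the full list of rewrites; only the BETA pattern is shown here, and all the helper names are mine):

import random

def find_beta_patterns(mol):
    # pairs (i, j) of lines where the out port of an L node feeds the left port of an A node
    return [(i, j) for i, l in enumerate(mol) if l[0] == "L"
                   for j, a in enumerate(mol) if a[0] == "A" and l[3] == a[1]]

def comb(mol):
    # COMB cycle: merge the two arrow names of every Arrow element, then drop the Arrow lines
    alias = {b: c for tag, b, c in (l for l in mol if l[0] == "Arrow")}
    def follow(p):
        while p in alias:
            p = alias[p]
        return p
    return [(l[0],) + tuple(follow(p) for p in l[1:]) for l in mol if l[0] != "Arrow"]

def stupid_cycle(mol):
    blocked, new_lines = set(), []
    for i, j in find_beta_patterns(mol):
        if i in blocked or j in blocked:      # each node takes part in at most one rewrite per cycle
            continue
        if random.random() < 0.5:             # flip a coin
            blocked |= {i, j}
            _, p1, p2, _ = mol[i]             # L p1 p2 c
            _, _, p3, p4 = mol[j]             # A c p3 p4
            new_lines += [("Arrow", p1, p4), ("Arrow", p3, p2)]
    return comb([l for k, l in enumerate(mol) if k not in blocked] + new_lines)

mol = [("L", "a", "a", "b"), ("L", "c", "c", "d"), ("A", "b", "d", "e")]   # (Lx.x)(Ly.y)
for _ in range(10):
    mol = stupid_cycle(mol)
print(mol)   # eventually [('L', 'c', 'c', 'e')], i.e. the identity with out arrow e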

The scripts do the computation and then they stop. There is a choice to be made about the way of seeing the computation and the results.
One obvious choice would be to see the computation as a sequence of mol files, corresponding to the sequence of graphs. Then one could use another script to transform each mol file into a graph (via, say, a json file) and use some graph visualiser to see the graph. This was the choice made in the first scripts.
Another choice is to make an animation of the computation, by using d3.js. Nodes which are eliminated are first freed of links and then they vanish, while new nodes appear, are linked with their ports, then linked with the rest of the graph.

This is what you see in the demos. The scripts produce a html file, which has inside a js script which uses d3.js. So the output of the scripts is the visualisation of the computation.

Recall that the algorithm of computation is random, therefore it is highly likely that different runs of the algorithm give different animations. In the demos you see one such animation, but you can take the scripts from the repo and make your own.

What is amazing is that they give the right results!

It is perhaps bizarre to look at the computation and not make any sense of it. What happens? Where is the evaluation of this term? Who calls whom?

Nothing of this happens. The algorithm just does what I explained. And since there are no calls, no evaluations, no variables passed from here to there, that means that you won’t see them.

That is because the computation does not work by the IT paradigm of signals sent by wires, through gates, but it works by what chemists call signal transduction. This is a pervasive phenomenon: a molecule enters in chemical reactions with others and there is a cascade of other chemical reactions which propagate and produce the result.

About what you see in the visualisations.
Because the graph is oriented, and because the trivalent nodes have differentiated legs (for example there might be a left.in leg, a right.in leg and an out leg, which for symmetry is described as a middle.out leg), I want to turn it into an unoriented graph.
This is done by replacing each trivalent node by 4 nodes, and each free end or termination node by 2 nodes.
For each trivalent node there will be one main node and 3 other nodes which represent the legs. These are called "ports". There is a colour-coded notation, and the choice made is to represent the nodes A (application) and FO with the main node coloured green, the L (lambda) and FI (fanin, which exists in chemlambda only) with red (actually in the demos this is a purple),
and so on. The port nodes are coloured yellow for the "in" ports and blue for the "out" ports. The "left", "right", "middle" types are encoded by the radius of the ports.
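A rough Python sketch of this node-splitting (only to convey the data-structure idea; the actual output is a d3.js force layout, and the colour and radius values below merely mimic the description above):

COLOR = {"A": "green", "FO": "green", "L": "red", "FI": "red"}

def split_node(node_id, line, port_kinds):
    # turn one mol line, e.g. ("A", "a", "b", "c"), into a main node, three port nodes and three links;
    # port_kinds gives the kind of each leg, e.g. ["left.in", "right.in", "middle.out"]
    tag, arrows = line[0], line[1:]
    nodes = [{"id": node_id, "color": COLOR.get(tag, "gray"), "r": 10}]
    links = []
    for arrow, kind in zip(arrows, port_kinds):
        side, direction = kind.split(".")
        pid = node_id + ":" + kind
        nodes.append({"id": pid,
                      "color": "yellow" if direction == "in" else "blue",
                      "r": {"left": 4, "right": 5, "middle": 6}[side],
                      "arrow": arrow})   # port nodes sharing an arrow name get linked afterwards
        links.append({"source": node_id, "target": pid})
    return nodes, links

print(split_node("n0", ("A", "a", "b", "c"), ["left.in", "right.in", "middle.out"]))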

__________________________________________________

Another chemlambda reduction and a weird Google bug

 

This is the end of the computation of the monus function monus(a,b)=a-b if a>=b, else 0, for a=3 and b=20.

It is the last part of the computation, as recorded at 8X speed from my screen.

If you want to see all of it then go to https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic , clone it and then type: bash moving_alt.sh and choose monus.mol from the list of mol files.

This time the deterministic algorithm is used, which makes all possible moves every time.

What is interesting about this computation is the quantity of trash it produces.

Indeed, I took a lambda term for the monus, applied it to 3 and 20 (in Church encoding), then applied the result to the successor and then to 0.
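The post does not reproduce the term, but a standard candidate is MONUS = La.Lb.((b PRED) a), with PRED the usual Church predecessor; here is a Python check of mine that it indeed gives 0 for a=3, b=20 (and 17 the other way around), with the final application to a successor and to 0 reading the Church numeral off as an ordinary number:

ZERO  = lambda f: lambda x: x
SUCC  = lambda n: lambda f: lambda x: f(n(f)(x))
PRED  = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)
MONUS = lambda a: lambda b: b(PRED)(a)          # apply PRED b times to a

def church(k):                                  # build the Church numeral for k
    n = ZERO
    for _ in range(k):
        n = SUCC(n)
    return n

to_int = lambda n: n(lambda k: k + 1)(0)        # "apply to the successor, then to 0"
print(to_int(MONUS(church(3))(church(20))))     # 0
print(to_int(MONUS(church(20))(church(3))))     # 17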

In the video you see that after the result (the small molecule connected to the blue dot, which is the chemlambda version of 0 in the Church encoding), there is still a lot of activity destroying the rest of the molecule. It looks nice though.

Something strange happened when I tried to publish the video (there are so many strange things related to the dissemination of chemlambda that they become a family joke, some of the readers of this blog are aware of some of those).

After I completed the description I got the warning shown in the following video:  “Brackets aren’t allowed in your description”

 

For example this previous video has brackets in the description and in the title as well

and all worked well.

Anyway, I eliminated the brackets but the warning remained, so eventually I logged out of my Google account and did something else.

Some time later I tried again, an experience described in this video (which took a LOT of time to be processed)

At the beginning the fields for title and description were empty and no error was announced.

After I filled them in, what you see in the video happened.

I understand that Google uses a distributed system (which probably needs lots of syncs, because you know, that is how intelligent people design programs today), but:

  • the "brackets are not allowed" is bogus, because a previous video worked perfectly with brackets in the description,
  • the "unknown error" simply means that some trace was left on my computer saying there was an imaginary error in the previous try, so instead of checking whether there is an error this time, I suppose the browser was instructed to read from the ton of extremely useful cookies Google puts on the user's machine.

_________________________________________________________________

What chemlambda can do for you

This post follows after What can I do with chemlambda, which questions to ask about it? and FAQ: chemlambda in real and virtual worlds.

UPDATE:

0. (for chemists) With chemlambda you can build molecular computers. For this you (in case you're a chemist) could identify the real chemical reactions which embody the moves of this made-up chemistry. They are simple enough to be possible in the real world. See All chemical reactions needed for a molecular computer.

___________________

1. Is chemlambda another visualization tool for lambda calculus?

NO. Visualization is an independent part; for the moment I use d3.js. Moreover, chemlambda is only tangentially related to lambda calculus.

2. What is new in chemlambda then?

You have an example of a computation model which does not use values, calls, variables, variable passing.

Have you seen any other model with these properties?

3. Why should I be excited about a computation with no values, no evaluations, no calls, etc?

For several reasons:

  • nobody thinks about that (excepting perhaps biology-related researchers, who don't see, when looking through the microscope, the structures loved by CS people)
  • this is a blind spot, an accident of history: computers appeared after the telephone, during and after WW2, when the problem was to encrypt and decrypt messages sent from A to B, and the computers we know how to build are physically based on electronics
  • think, if you don’t have the problem of how to pass a value, or how to call a function in the internet, then what could be possible which now is not?
  • think about building real or virtual computers which are life like, in the sense that their work is complex to the point of seeming incomprehensible, but their complexity is based on very simple mechanisms, used again and again, without any director to channel them and without any hierarchy to maintain or enforce.

4. So should I pack my bags and look for other research subjects than what is popular now, or maybe should I just ignore chemlambda and go with the business as usual? After all, you know, cloud computing is great. Process calculi, types, all this.

It's your choice, but now you know that there is something which is closer to real-life-like computation, which may hold the promise of a free sky, a cloudless internet.

5. Suppose I want to play a bit with these ideas, but not to the point of getting out of my comfort zone. Which questions would be interesting to ask?

Right, this is a good strategy. Here are some questions which are related to lambda calculus, say.

  • there is an algorithm which transforms an (untyped lambda beta calculus) term into a chemlambda molecule, but even if one can prove that the beta move translates into the BETA move (like Wadsworth-Lamping) and that eventually the translations of the SKI or BCKW combinators reduce in chemlambda like they should, there is the question: how does chemlambda do, by local rewrites, the thing which replaces an evaluation strategy? A study by examples may give interesting insights.
  • how to program with chemlambda? Indeed, it can reproduce some behaviours of untyped lambda beta, but even so it does this in ways which are not like what is usually expected, as proven by the demos. To take an older example, the Y combinator simplifies to a 2-node molecule, a story told in this article http://mitpress.mit.edu/sites/default/files/titles/content/alife14/ch079.html . The fact is that because one does not have eta (which looks like a global move in chemlambda, i.e. there is no a priori bound on the number of nodes and links to check for the application of eta), there are no functions. It's a kind of extremal functional programming where there are no functions.
  • what are the right replacements for lists, currying, booleans, which could take advantage of chemlambda?
  • by a try-and-explore procedure, which are the least stupid reduction algorithms and in what sense exactly do they work? This has to do with locality, which constrains the design of those algorithms a lot.
  • finally, what can chemlambda do outside the sector of untyped lambda beta, which covers only a small part of the possible chemlambda molecules?

_____________________________________________________________________

Experiments with a little lisper tutorial

I used the excellent lambda calculus tutorial by Mayer Goldberg from the little-lisper.org for some experiments in chemlambda.

UPDATE: See also the post The working factorial and the video

_______________

There are five of them.

The mentioned tutorial is the source of the lambda term which I translated into chemlambda and then used in the Ackermann function demos. The most outrageously successful one is the computation of Ack(2,2) while self-multiplicating. Moreover, the daughter molecules do not finish at the same time.

Here is a video of a screen recording at 2X speed.

Now, there is a very puzzling thing here. In chemlambda with a random, purely local graph rewriting algorithm I can compute the Ackermann function. But what about simpler functions, like the factorial?

I tried the easy way, consisting in translating lambda terms for the factorial into chemlambda, and the results are ugly. As a geometer who wants to decipher computation without using variables, values, variable passing or evaluations, I know that chemlambda is different because of this feature it has: it uses something like signal transduction instead of the gates-and-wires general model from IT. So there have to be consequences of that.

Let's see what happens in the case of the factorial. In this demo I took the lambda term from the same tutorial, par. 57, page 14. I hoped it would work better than other proposals, based on the experience with the Ackermann function. I noticed in other experiments with terms for the factorial that the reduction in chemlambda is extremely wasteful, producing thousands of nodes and lots of trash. Moreover, there is a problem with the fixed point combinator, explained in the article with Louis Kauffman, Chemlambda, universality and self multiplication, and described in older demos like this one. In chemlambda, the molecule which corresponds to the Y combinator reduces to a very simple one (which does not have an equivalent as a lambda term), made of only two nodes, a fanout and an application. Then the molecule becomes a gun which shoots pairs of fanout and application nodes. There is no mystery about the Y combinator in chemlambda; its real mechanism consists in the self-multiplication by the fanout node. [Note: in the old demo and in the article linked there is no FOE fanout. The FOE and the moves associated to it are a recent addition to the formalism, see the page of moves.]

The problem with using the Y combinator is that it never stops generating pairs of A and FOE nodes. In a computation which implements recursion via the Y combinator, at some point, according to an IFTHENELSE or an ISZERO, the cycle of the Y is broken. But in chemlambda the Y gun molecule continues to exist, and this leads to a never-ending computation. From one side this is OK, see for example the life-like computations with chemlambda quines. Actually there is a stronger relation between chemlambda quines and the Y combinator molecule. One can design the computation such that when the Y molecule is no longer needed it is encapsulated into a quine, but that is to be explained in detail another time.

I come back to the factorial example. In the demo I linked you can see that the computation of the factorial is wasteful (and paradoxically leads to a Y molecule), even though it does not use a fixed point combinator.

Why?

First I thought it was because of currying and uncurrying. In chemlambda, because it is a graph rewrite system, there is no particular need to use currying all the time.

Then, to check this, I modified the molecule from the little lisper tutorial in order to geometrically compute the repeated application of a function f(a,b)=(succ(a), succ(a)b). The function is a piece of a graph with two in and two out links which is self-multiplying under the action of a number in the Church encoding.

Here is a successful computation with this molecule. But does it work all the time or have I been lucky? The reduction algorithm is random and different runs may give different results. It is the case with the Ackermann function computation as well, but that one was successful all the time.

Oh, it turns out that the computation with that molecule works well in about 20% of the runs. Here is an unsuccessful run.

So there is still a problem, but which one?

Under close examination the computation is still wasteful, because of the term (piece of molecule) c0, for the 0 in the Church encoding. In chemlambda this term corresponds to a small molecule which has a termination node inside.

When we want to trash something useless, like programmers do, in chemlambda the useless piece does not disappear. It is like in nature: the thing continues to exist and continues to interact, while in the process of degradation, with the rest of the environment.

The termination node, via the PRUNING moves, slowly destroys the useless part, but other pieces from the useless part continue to interact with the rest of the graph.

Is this the problem?

In order to check this I further modified the molecule which was successful 20% of the time. I just replaced c0 by c1, which is (the molecule for) the lambda term for 1 in the Church encoding. Now, c1 does not have any termination node inside.

The price is that I no longer compute the factorial, but instead I compute the repeatedly applied function

F(a,b)=(succ(a), tms(a,b))

where tms(a,b)=ab+1. Here is a demo for the computation of tms(3,3) in chemlambda, and further on a video for tms(5,5)=26, where you can see a new creature, more complex than the walker discovered in the predecessor.

I checked to see what is the closed expression, if any, for the function I compute, namely

f(0)=1, f(n+1) = (n+1)f(n) + 1

and the Wolfram Alpha engine has an interesting answer.
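For reference, here is a small Python check of that recurrence against the iterated F (assuming the iteration starts from the pair (1,1); the first values are 1, 2, 5, 16, 65, 326, …):

def tms(a, b):
    return a * b + 1                  # tms(5,5) == 26

def F(a, b):                          # F(a,b) = (succ(a), tms(a,b))
    return a + 1, tms(a, b)

a, b, vals = 1, 1, [1]
for _ in range(5):
    a, b = F(a, b)
    vals.append(b)

f = [1]                               # f(0)=1, f(n+1) = (n+1) f(n) + 1
for n in range(5):
    f.append((n + 1) * f[n] + 1)

print(vals == f, vals)                # True [1, 2, 5, 16, 65, 326]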

Well, this time I hit the right spot. The computation works, again and again and again.

So we have to learn to program ecologically with chemlambda.

____________________________________________________________________