Tag Archives: lambda calculus

Nature vs nurture in lambda calculus terms

This is one of the experiments among those accessible from arXiv:2003.14332, or from figshare.

Btw, if you want to know how you can contribute, read especially section 2.

Imagine that the lambda term

(\a.a a)(\x.((\b.b b)(\y.y x)))

is the gene of a future organism. How will it grow? Does only nature (the term, the gene) matter, or does nurture (random evolution) matter too?

Maybe you are familiar with this term; I first saw it in arXiv:1701.04691. It is one which messes up many purely local graph rewrite algorithms for lambda calculus. (By purely local I also mean that the parsing of the lambda term into a graph does not introduce non-local boxes, croissants, or other devices.)

As a term, it should reduce to the omega combinator. This is a quine in lambda calculus and it is also a chemlambda quine. You can play with it here.
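For the record, here is one reduction sequence which takes it there, reducing first under the lambda (arrows are beta reductions):

(\a.a a)(\x.((\b.b b)(\y.y x)))
– – > (\a.a a)(\x.((\y.y x)(\y.y x)))
– – > (\a.a a)(\x.((\y.y x) x))
– – > (\a.a a)(\x.(x x))
– – > (\x.x x)(\x.x x)

and the last term is the omega combinator.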

I was curious whether chemlambda can reduce this term correctly. At first sight it does not. In this page use the “change” button until you have the message “random choices” and put the rewrites weights slider in the middle between “grow” and “slim”. These are the default settings, so at first you don’t have to do anything.

Click the “start” button. You’ll see that after a while the graph (generated by the lambda to chemlambda parser) reduces to only one yellow node (a FOE) and the FROUT (free out) node of the root (of the lambda term). So chemlambda seems not capable of reducing this term correctly.

You can play with the (graph of the) term, reducing it by hand, i.e. by hovering with the mouse over the nodes to trigger rewrites. If you do this carefully then you’ll manage to reduce the graph to one which is identical with the graph of the omega combinator, except that it has FOE nodes instead of FO nodes. By using the mol>lambda button you can see that such a graph does represent the omega combinator.

Now, either you succeeded in this task, or not. If not, just reload (by using the lambda>mol button). Then click “change” to get “older first” and move the rewrites weights slider to “GROW”. This is the standard setting to check if the graph is a quine. Push start. What happens?

The reduction continues forever. This is a quine!

You’ll see that mol>lambda gives nothing. This means that the decoration with lambda terms does not reach the root node. We are outside lambda calculus with this quine, which is different from the quine given by the omega combinator.

The two quines have the same gene. One of the quines (the graph of omega) never dies. The other quine is mortal: it dies by reducing eventually (in the “random choices” case) to a FOE node.

 

The decorator: a glimpse into the way chemlambda does lambda calculus

The lambda2chemlambda parser now has a decorator. Previously there was a button (function) which translates (parses) a lambda term into a mol. The reverse button (function) is the decorator, which takes a mol and turns it into a lambda term and some relations.

Let’s explain.

The lambda calculus to mol function and the mol to lambda calculus function are not inverses of each other. This means that if we start from a mol file, translate it into a lambda term, then translate the lambda term back into mol, then we don’t always get the same mol. (See for example this.)

Example 1: In this page we first have the lambda term PRED ((POW 3) 4), which is turned into a mol. If we use the mol>lambda button then we get the same term, up to renaming of variables. You can check this by copy-pasting the lambda term from the second textarea into the first and then using the lambda>mol button. You can see down the page (below the animation window) the mol file. It is the same.

Now push “start” and let chemlambda reduce the mol file. It will stop eventually. Move the gravity slider to MIN and rescale with the mouse wheel to see the structure of the graph. It is like a sort of surface.

Push the mol>lambda button. You get a very nice term which is a Church number, as expected.

Copy-paste this term into the first textarea and convert it into a mol by using lambda>mol. Look at the graph, this time it is a long, closed string, like a Church number should look in chemlambda. But it is not the same graph, right?

The mol>lambda translation produces a decoration of the half-edges of the graph. The propagation of this decoration into the graph has two parts:

  • propagation of the decoration along the edges. Whenever an edge already has both the source and the target decorated, we get a relation between these decorations (they are set “=”)
  • propagation of the decoration through the nodes. Here the nodes A, L, and FO are decorated according to the operations application, lambda and fanout. The FOE nodes are decorated like fanouts (so a FO and a FOE node are decorated the same! something is lost in translation). There is no clear way to decorate a FI (fanin) node. I chose to decorate the output of a FI with a new variable and to introduce two relations: first port decoration = output decoration and second port decoration = output decoration.

The initial decoration is with variables, for the ports of FRIN nodes, for the 2nd ports of the L nodes and for the output ports of the FI nodes.

We read the decoration of the graph as the decoration of the FROUT node(s). Mols which come from lambda terms have only one FROUT node.

The graph is well decorated if the FROUT node is decorated and there are no relations.
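For those who prefer code to prose, here is a minimal sketch of the decorator in Python (the page itself uses js; the mol representation, the function names and the fixed number of propagation passes are illustrative assumptions, not the real code):

def put(deco, relations, port, term):
    # propagation along an edge: if the port already carries a different
    # decoration, record a relation "old = new" instead of overwriting it
    if port in deco and deco[port] != term:
        relations.append((deco[port], term))
    else:
        deco[port] = term

def decorate(mol):
    # mol is a list like [("L", ["a", "a", "r"]), ("FROUT", ["r"])], with the
    # L node ports taken in the order (body in, variable out, out)
    deco, relations = {}, []
    fresh = iter("xyzuvw")
    for typ, p in mol:
        # initial decoration: variables on the FRIN ports, on the 2nd ports
        # of the L nodes and on the output ports of the FI nodes
        if typ == "FRIN": put(deco, relations, p[0], next(fresh))
        if typ == "L": put(deco, relations, p[1], next(fresh))
        if typ == "FI": put(deco, relations, p[2], next(fresh))
    for _ in range(len(mol)):       # enough passes to propagate everywhere
        for typ, p in mol:
            if typ == "A" and p[0] in deco and p[1] in deco:
                put(deco, relations, p[2], "(" + deco[p[0]] + " " + deco[p[1]] + ")")
            if typ == "L" and p[0] in deco:
                put(deco, relations, p[2], "\\" + deco[p[1]] + "." + deco[p[0]])
            if typ in ("FO", "FOE") and p[0] in deco:   # FO and FOE decorated
                put(deco, relations, p[1], deco[p[0]])  # the same: as fanouts
                put(deco, relations, p[2], deco[p[0]])
            if typ == "FI" and p[2] in deco:            # relate both inputs
                put(deco, relations, p[0], deco[p[2]])  # to the output
                put(deco, relations, p[1], deco[p[2]])
    frouts = [deco.get(p[0]) for typ, p in mol if typ == "FROUT"]
    return frouts, relations

print(decorate([("L", ["a", "a", "r"]), ("FROUT", ["r"])]))
# prints (['\\x.x'], []) : the identity, well decorated, with no relations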

Example 2: Let’s see how the omega combinator reduces. First click the mol>lambda button to see we have the same term back, the omega.

Now use the “step” button, which does one reduction step, randomly. In the second textarea you see the decoration of the FROUT node and the relations.

What happens? Some steps make sense in lambda calculus, but some others don’t. In these relations you can see how chemlambda “thinks” about lambda calculus.

If you use the “step” button to reduce the mol, then you’ll see that you obtain back the omega combinator, after several steps.

Enjoy!

 

Parser gives fun arrow names

These modifications are not yet released, but the lambda2chemlambda parser can be made to give fun arrow names. For example, the term

((\g.((\x.(g (x x))) (\x.(g (x x))))) (\x.x))

which is the Y combinator applied to id, becomes the mol

FROUT [^L [((\g. [((\g [((\^L [((\g.((\x. [((\g.((\x [((\g.((\^FO [((\g [((\g.((\x.(g [((\g.((\x.(g*^A [((\g.((\x.(g [((\g.((\x.(g@( [((\g.((\x.(g@^FO [((\g.((\x [((\g.((\x.(g@(x [((\g.((\x.(g@(x*^A [((\g.((\x.(g@(x [((\g.((\x.(g@(x@x [((\g.((\x.(g@(x@^Arrow [((\g.((\x.(g@(x* [((\g.((\x.(g@(x@x^Arrow [((\g.((\x.(g@(x@ [((\g.((\x.(g@(^Arrow [((\g.((\x.(g@ [((\g.((\x.(^Arrow [((\g.((\x.( [((\g.((\x.^Arrow [((\g.((\ [((\g.((^A [((\g.(( [((\g.((\x.(g@(x@x)))@( [((\g.((\x.(g@(x@x)))@^L [((\g.((\x.(g@(x@x)))@(\x. [((\g.((\x.(g@(x@x)))@(\x [((\g.((\x.(g@(x@x)))@(\^Arrow [((\g.((\x.(g* [((\g.((\x.(g@(x@x)))@(\x.(g^A [((\g.((\x.(g@(x@x)))@(\x.(g [((\g.((\x.(g@(x@x)))@(\x.(g@( [((\g.((\x.(g@(x@x)))@(\x.(g@^FO [((\g.((\x.(g@(x@x)))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x [((\g.((\x.(g@(x@x)))@(\x.(g@(x*^A [((\g.((\x.(g@(x@x)))@(\x.(g@(x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x* [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@ [((\g.((\x.(g@(x@x)))@(\x.(g@(^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@ [((\g.((\x.(g@(x@x)))@(\x.(^Arrow [((\g.((\x.(g@(x@x)))@(\x.( [((\g.((\x.(g@(x@x)))@(\x.^Arrow [((\g.((\x.(g@(x@x)))@(\ [((\g.((\x.(g@(x@x)))@(^Arrow [((\g.((\x.(g@(x@x)))@ [((\g.(^Arrow [((\g.( [((\g.^Arrow [((\ [((^A [(( [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@( [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@^L [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x. [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.x^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\ [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@ [(^Arrow [( [

Recall that the parser is part of the landing page for all chemlambda projects.

So, if you write in the parser:

(\x.a) b

which is the pattern for a beta rewrite, then the relevant part of the mol (with funny arrow names) and the rewrite will have the form:

betastring

 

Use the lambda to chemlambda parser to see when the translation doesn’t work

I mainly use the parser page; other pages will be mentioned in the text.

So chemlambda does not always solve the problem of finding a purely local conversion of lambda terms to graphs which can then be reduced by a purely local random algorithm. This is one of the reasons I insist both on going outside lambda calculus and on looking at possible applications in real chemistry, where some molecules (programs) do reduce predictably and the span of the cascade of reactions (reductions) is much larger than one can achieve via a massive, brutal, try-everything strategy on a supercomputer.

Let’s see: choose

(\a.a a)(\x.((\b.b b)(\y.y x)))

it should reduce to the omega combinator, but read the comment too. I saw this lambda term, with a similar behaviour, in [arXiv:1701.04691], section 4.

Another example took me by surprise. Now you can choose “omega from S,I combinators”, i.e. the term

(\S.\I.S I I (S I I)) (\x.\y.\z.(x z) (y z)) \x.x

It works well, but I previously used a related term, actually a mol file, which corresponds to the term where I replace I by S K K in S I I (S I I), i.e. the term

S (S K K) (S K K) (S (S K K) (S K K))

To see the reduction of this term (mol file) go to this page and choose “omega from S,K combinators”. You can also see how indeed S K K reduces to I.
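As a worked check (arrows are beta reductions), for any term a:

S K K a = ((\x.\y.\z.(x z) (y z)) K K) a
– – > ((\y.\z.(K z) (y z)) K) a
– – > (\z.(K z) (K z)) a
– – > (K a) (K a) = ((\x.\y.x) a) (K a)
– – > (\y.a) (K a)
– – > a

so S K K behaves on any argument like the identity I.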

But initially in the parser page menu I had  the term

(\S.\K.S (S K K) (S K K) (S (S K K) (S K K))) (\x.\y.\z.(x z) (y z)) (\x.(\y.x))

It should reduce well but it does not. The reason is close to the reason why the first lambda term does not reduce well.

Now, some bright side of it. Look at this page to see that the ouroboros quine is mortal. I believed it was obviously immortal until recently. Now I have started to believe that immortal quines in chemlambda are rare. Yes, there are candidates like (the graph obtained from) omega, or why not (try with the parser) 4 omega

(\f.(\x.(f(f (f (f x)))))) ((\x.x x) (\x.x x))

and there are quines like the “spark_243501” (shown in the menu of this page) with a small range of behaviours. On the contrary, all quines in IC are immortal.

Lambda calculus to chemlambda parser (2) and more slides

This post has two goals: (1) to explain more about the lambda to chemlambda parser and (2) to talk about slides of presentations which are connected one with the other across different fields of research.

(1) There are several incremental improvements to the pages from the quine graphs repository. All pages, including the parser one, have two sliders, each giving you control over some parameters.

The “gravity” slider is kind of obvious. Recall that you can use your mouse (or pinching gestures) to zoom the graph you see in or out. With the gravity slider you control gravity. This allows you to see better the edges of the graph, for example, by moving the gravity slider to the minimum and then zooming out. Or, on the contrary, if you have a graph which is too spread out, you can increase gravity, which will have as effect a more compact looking graph.

The “rewrites weights slider” has as extrema the mysterious words “grow” and “slim”. It works like this. The rewrites (excepting COMB, which are done preferentially anyway) are grouped into those which increase the number of nodes (“grow”) and the other ones, which decrease the number of nodes (“slim”).

At each step, the algorithm tries to pick a rewrite at random. If there is a COMB rewrite to pick, then it is done. Else, the algorithm will try to pick at random one “grow” and one “slim” rewrite. If only one of these is available, i.e. if there is a “grow” but no “slim” rewrite, then this rewrite is done. Else, if there is a choice between two randomly chosen “grow” and “slim” rewrites, we flip a coin to choose among them. The coin is biased towards “grow” or “slim” with the rewrites weights slider.
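In code, the choice looks roughly like this minimal Python sketch (the pages do it in js; the lists of available rewrites and the bias parameter are illustrative assumptions):

import random

def pick_rewrite(comb, grow, slim, bias=0.5):
    # comb, grow, slim: lists of currently available rewrites;
    # bias is the rewrites weights slider, 0 = always "grow", 1 = always "slim"
    if comb:                        # COMB rewrites are done preferentially
        return random.choice(comb)
    g = random.choice(grow) if grow else None
    s = random.choice(slim) if slim else None
    if g is None:
        return s                    # only a "slim" candidate (or nothing at all)
    if s is None:
        return g                    # only a "grow" candidate
    return s if random.random() < bias else g   # the biased coin flip

For example pick_rewrite([], ["A-FOE dist"], ["beta"], bias=0.8) returns “beta” most of the time: a DIST rewrite grows the graph, a beta rewrite slims it.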

This is interesting to use, for example with the graphs which come from lambda terms. Many times, but not always, we are interested in reducing the number of nodes as fast as possible. A strategy would be to move the slider to “slim”.

In the case of quines, or quine fights, it is interesting to see how they behave under “grow” or “slim” regime.

Now let’s pass to the parser. It works well now: you can write lambda terms in a human way, but mind that “xy” will be seen as a variable, not as the application of “x” to “y”. Application is “x y”. Otherwise, the parser correctly understands terms like

(\x.\y.\z.z y x) (\x.x x)(\x. x x)\x.x

Then I followed the suggestion of my son Matei to immediately do the COMB rewrites, thus eliminating the Arrow nodes given by the parser.

About the parser itself. It is not especially short, for several reasons. One reason is that it is made as a machine with 3 legs, moving along the string given by the lexer, just like the typical 3-valent node. That is why it will be interesting to see it in action, visually. Another reason is that the parser first builds the graph without the fanout FO and termination T nodes, then adds the FO and T nodes. Finally, the lambda term is not prepared in advance by any global means (excepting the check for balanced parentheses). For example, no de Bruijn indices.

Another reason is that it makes clear what the edges of the (mol) graph are, or more precisely what the port variables (edge variables) correspond to. The observation is that the edges are in correspondence with the positions of the items (lparen, rparen, operation, variable) in the string. We need at most N edge names at this stage, where N is the length of the string. Finally, the second stage, which adds the FO and T nodes, needs at most N new edge names, in practice much less: the number of duplicates of variables.

This answers the question: how can we efficiently choose edge names? We can use as edge name the piece of the string up to the item, and we can double this set of names by using an extra special character. Or, if we want to be secretive, now that we know how to constructively choose names, we can try to use and hide this procedure.
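A minimal Python sketch of this naming scheme (illustrative only; compare with the funny arrow names produced by the parser above):

def edge_name(term, pos, second_stage=False):
    # the edge name is the prefix of the input string up to the item at pos;
    # an extra special character doubles the namespace for the second stage,
    # the one which adds the FO and T nodes
    name = term[:pos + 1]
    return name + "*" if second_stage else name

term = "(\\x.a) b"
print(edge_name(term, 3))         # prints: (\x.
print(edge_name(term, 3, True))   # prints: (\x.*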

Up to now there is no “decorator”, i.e. the inverse procedure, to obtain a lambda term from a graph, when that is possible. This is almost trivial; it will be done.

I close this subject here by mentioning that my motivation was not to write a parser from lambda calculus to chemlambda, but to learn how to make a parser for a programming language in the making. You’ll see and hopefully you’ll enjoy 🙂

(2) Slides, slides, slides. I have not considered slides very interesting as a means of communication before. But hey, slides are somewhere on the route to an interactive book, article, etc.

So I added to my page links to 3 related presentations which, together with a 4th one available and popular (?!) on this blog, give a more rounded image of what I try to achieve.

These are:

  • popular slides of a presentation about hamiltonian systems with dissipation, in the form baptized “symplectic Brezis-Ekeland-Nayroles”. Read them in conjunction with arXiv:1902.04598, see further why
  • (Artificial physics for artificial chemistry)   is a presentation which, first, explains what chemlambda is in the context of artificial chemistries, then proceeds with using a stochastic formulation of hamiltonian systems with dissipation as an artificial physics for this artificial chemistry. An example about billiard ball computers is given. Sure, there is an article to be written about the details, but it is nevertheless interesting to infer how this is done.
  • (A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science) are the most technically evolved slides, presenting the geometrical roots of chemlambda and related efforts. There are many things to pick from there, like: what is the geometrical problem, how is it related to emergent algebras, what is computation, knots, why standard frames in categorical logic can’t help (but perhaps they can if they start thinking about it), who was the first programmer in chemlambda, live pages where you can play with the parser, closing with an announcement that indeed anharmonic lambda (in the imperfect form of kali, or kaleidoscope) solves the initial problem after 10 years of work. Another article will be most satisfactory, but you see, people rarely really read articles on subjects they are not familiar with. These slides may help.
  • and for a general audience my old (Chemlambda for the people)  slides, which you may appreciate more and you may think about applications of chemlambda in the real world. But again, what is the real world, else than a hamiltonian system with dissipation? And who does the computation?

 

 

Lambda calculus to chemlambda parser

To play with at this page. There are many things to say, but I will come back later with details about my first parser and why it is like this.

UPDATE: After I put the parser page online, it messed with the other pages, but now everything is all right.

UPDATE: I’ll demo this at a conference on Dec 4th, at IMAR, Bucharest.

Here are the slides.

The title is “A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science“.

So if you are in Bucharest on Dec 4th, at 13h, come to the talk. How to arrive there.

I already dream about a version which is purely “chemical”, with 3-legged parser spiders reading from the DNA text string and creating the molecules.

Will do, but long todo list.

Quine graphs (3), ouroboros, hapax and going public

Several news:

I decided that progressively I’m going to go public, with a combination of arXiv, Github and Zenodo (or Figshare), and publication. But there is a lot of stuff I have to publish and that is why this will happen progressively. Which means it will be nice to watch, because it is interesting, for me at least, to answer the question:

What the … does a researcher do when publishing? What is this for? Why?

Seriously, the questions are not at all directed against classical publication, nor are they biased against OA. When you publish serially, like a researcher, you often tell again and again a story which evolves in time. To make a comparison, it is like a sequence of frames in a movie.

Only that it is not as simple. It is not quite like a sequence of frames; it is like a sequence of pictures, each one with its repeating tags, again and again.

Not at all compressed. And not at all like an evolving repository of programs which get better with time.

Lambda calculus inspires experiments with chemlambda

In the salvaged collection of google+ animations (see how the collection was deleted) I told several stories about how lambda calculus can bring inspiration for experiments with chemlambda. I select for this post a sequence of such experiments. For previous related posts here see this tag and this post.

Let’s go directly to the visuals. (UPDATE: or see the story of the ouroboros here)

Already in chemlambda v1 I remarked the interesting behaviour of the graph (or molecule) which is obtained from the lambda term of the predecessor applied to a Church number.  With the deterministic greedy algorithm of reductions, after the first stages of reduction there is a repeating pattern of  reduction, (almost) up to the end. The predecessor applied to the Church number molecule looks almost like a closed loop made of pairs A-FO (because that’s how a Church number appears in chemlambda), except a small region which contains the graph of the predecessor, or what it becomes after few rewrites.

In chemlambda v2 we have two kinds of fanouts: FO and FOE. The end result of the reduction of the same molecule, under the same algorithm, is different: where in chemlambda v1 we had FO nodes (at the end of the reduction), now we have FOE nodes. Otherwise it is the same phenomenon.

Here it is, with black and white visuals

pred

Made by recording this live (js) demo.

1. What happens if we start not from the initial graph, but from the graph after a small number of rewrites? We just have to cut the “out” root of the initial graph and some nodes from its neighbourhood, and glue back, so that we obtain a repeating pattern walking on a circular train track.

Here it is, this time with the random reduction algorithm:

bigpred_train-opt

I previously called this graph an “ouroboros”. Or a walker.

2. That is interesting: it looks like a creature (it can keep its “presence”) which walks in a single direction in a 1-dimensional world. What could be the mechanism?

Penrose comes to mind, so in the next animation I also use a short demonstration from a movie by Penrose.

bigpred_penrose-opt

 

3. Let’s go back to the lambda calculus side and recall that the algorithm for the translation of a lambda term to a chemlambda molecule is the same as the one from GLC, i.e. the one from Section 3 here. There is a freedom in this algorithm, namely that trees of FO nodes can be rewired as we wish. On one side this is normal for GLC and chemlambda v1, which have the CO-COMM and CO-ASSOC rewrites

convention_3

In chemlambda v2 we don’t have these rewrites at all, which means that in principle two different molecules, obtained from the same lambda term, which differ only by the rewiring of the FO nodes, may reduce differently.

In our case it would be interesting to see if the same is true for the FOE nodes as well. For example, remark that the closed loop, excepting the walker, is made by a tree of FOE nodes, a very simple one. What happens if we perturb this tree, say by permuting some of the leaves of the tree, i.e. by rewiring the connections between FOE and A nodes?

bigpred_train_perm-opt

The “creature” survives and now it walks in a world which is no longer 1 dimensional.

Let’s play more: two permutations, this time let’s not glue the ends of the loop:

bigpred_train_egg

It looks like a signal transduction from the first glob to the second. Can we make it more visible, say by making invisible the old nodes and visible the new ones? Also let’s fade the links by making them very large and almost transparent.

bigpred_train_egg_mist_blue

Signal transduction! (Recall that we don’t have a proof that two molecules obtained from the same lambda term, but with rewired FO trees, reduce to the same molecule; actually this is false! and true only for a class of lambda terms. The math of this is both fascinating and somehow useless, unless we either use chemlambda in practice or build chemlambda-like molecular computers.)

4.  Another way to rewire the tree of FOE nodes is to transform it into another tree with the same leaves.

bigpred_tree-opt

 

5. Wait, if we understand how exactly this works, then we realize that we don’t really need this topology; it should also work for topologies like generalized Petersen graphs, for example for a dodecahedron.

dodecahedron_walker

 

This is a walker creature which walks in a dodecahedral “world”.

6. Can the creature eat? If we put something on its track, let’s see if it eats it and if it modifies the track, while keeping its shape.

walker_bit-opt

So the creature seems to have a metabolism.

We can use this for remodeling the world of the creature. Look what happens after many passes of the creature:

walker_bit_new

 

7. What if we combine the “worlds” of two creatures, otherwise identical? Will they survive the encounter, will they interact, or will they pass one through the other like solitons?

bigpred_bif

 

Well, they survive. Why?

8. What happens if we shorten the track of the walker as much as possible? We obtain a graph with the following property: after one step (or a given finite number of steps) of the greedy deterministic algorithm we obtain an isomorphic graph. A quine! A chemlambda quine.

At first, it looks like we obtained a 28-node quine. After some analysis we see that we can reduce this quine to a 20-node quine. A 20-quine.

Here is the first observation of the 20-quine under the random algorithm

20_quine_50steps

According to this train of thought, a chemlambda quine is a graph which has a periodic evolution under the greedy deterministic algorithm, with the list of priority of rewrites set so that DIST rewrites (which add nodes) have greater priority than beta and FI-FOE rewrites (which subtract nodes), and which does not have termination nodes (because those lead to some trivial quines).

These quines are interesting under the random reduction algorithm, which transforms them into mortal living creatures with a metabolism.

____________

So this is an example of how lambda calculus can inspire chemlambda experiments, as well as interesting mathematical questions.

Graphic lambda calculus and chemlambda (III)

This post introduces chemlambda v2. I continue from the last post, which describes the fact that chemlambda v1, even though it has only local rewrites, does not work well when used with the dumbest possible reduction algorithms.

Nature has to work with the dumbest algorithms, or else we live in a fairy tale.

Chemlambda v2 is an artificial chemistry, in the following sense:

  • it is a graph rewrite system over oriented fatgraphs made of a finite number of nodes, from the list: 5 types of 3-valent nodes, A (application), L (lambda abstraction), FO (fanout), FI (fanin), FOE (external fanout), 1 type of 2-valent node, Arrow, and 3 types of 1-valent nodes, FRIN (free in), FROUT (free out), T (termination). Compared to chemlambda v1, there is a new node, the FOE. The nodes, not the rewrites, are described in this early explanation called Welcome to the soup. (Mind that the gallery of examples which is available at the end of that explanation mixes chemlambda v1 and chemlambda v2 examples. I updated the links so they no longer point to this very early gallery of examples. However, if you like it, here it is.)
  • the rewrites CO-COMM and CO-ASSOC of chemlambda v1 are not available; instead there are several new DIST rewrites: FO-FOE, L-FOE, A-FOE, FI-FO, and a new beta-like rewrite FI-FOE. As in chemlambda v1, the patterns of the rewrites fit with the interaction combinator rewrites if we forget the orientation of the edges, but the 3-valent nodes don’t have a principal port, so they don’t form interaction nets. Moreover, there are conflicts among the rewrites, i.e. there are configurations of 3 nodes such that one node belongs to two pairs of nodes which may each admit a rewrite. The order of application of rewrites may matter for such conflicts.
  • there is an algorithm of application of rewrites, which is either the deterministic greedy algorithm with a list of priority of rewrites (for example beta rewrites have priority over DIST rewrites, whenever there is a conflict), or the random application algorithm.

 

Sources for chemlambda v2:

 

The goal of chemlambda v2: to explore the possibility of molecular computers in this artificial chemistry.

This needs explanations. Indeed, does the system work with the simplest random algorithm? We are not interested in semantics, because it is, or relies on, a global notion. We are not (very) interested in reduction strategies for lambda terms, because they are not as simple as the dumbest algorithms we use here. Likewise for readback, etc.

So, does chemlambda v2 work enough for making molecular computers?  Pure untyped lambda calculus reduction problems are an inspiration. If the system works for the particular case of graphs related to lambda terms then this is a bonus for this project.

As you see, instead of searching for an algorithm which could implement, decentralized say, a lambda calculus reduction strategy, we ask if a particular system reduces (graphs related to) terms with one algorithm from the fixed class of dumbest ones.

That is why universality in the sense of Lafont is fascinating. In this post I argued that the Lafont universality property of interaction combinators means, in this pseudo-chemical sense, that the equivalent molecular computer based on interaction combinators reactions (though not the translations) works for implementing a big enough class of reactions, which are Turing universal in particular (Lafont shows concretely that he can implement Turing machines).

(continues with the part IV)

Graphic lambda calculus and chemlambda (II)

Chemlambda v2 is an entirely different project from GLC and chemlambda v1. This post continues from the first part. It explains the passage towards chemlambda v2.

A problem of GLC and chemlambda v1 is that the research articles are opinion pieces, not validated by programs and experiments. The attempt to use GLC with the Actor Model in order to build a decentralized computing proposal, aka distributed GLC, failed because of this. Does all of this work?

The CO-COMM and CO-ASSOC rewrites lead to the situation that,  in order to be useful, either:

  • they have to be applied by a human or by a(n unknown) very clever algorithm
  • or they are applied in both directions randomly, which implies that no GLC or chemlambda v1 reduction ever terminates.

Here is an early visual tutorial which introduces the nodes of chemlambda v2. At the end of it you are pointed to a gallery of examples which mixes chemlambda v1 with chemlambda v2, like these:

 

Or, another example, the Y combinator. In chemlambda v1, without using CO-COMM and CO-ASSOC, the Y combinator applied to an unspecified term behaves like this. In chemlambda v2, where there is a supplementary node and other rewrites, the Y combinator behaves almost identically, but some nodes (the yellow FOE here instead of the green FO before) are different:

 

ycombi

 

 

See this page for a list of works, tagged with the version of chemlambda used.

(continues with part III)

 

 

Graphic lambda calculus and chemlambda (I)

UPDATE: The article Graph rewrites, from emergent algebras to chemlambda explains the history of the subject, with all the needed information.

________

Looks like there is a need to make a series of posts dedicated to the people who try to use this blog as a source in order to understand graphic lambda calculus (aka GLC) and chemlambda. This is the first one.

Sources for GLC:

  • the best source is the article M. Buliga, Graphic lambda calculus. Complex Systems 22, 4 (2013), 311-360   (link to article in journal) (link to article in arXiv)
  • you can see the GLC page (link) here, which has been updated many times after chemlambda appeared, but it is unmodified starting from the section “What is graphic lambda calculus?” and there are links to many other posts here which explain GLC as it is, that is before chemlambda.

GLC is a graph rewriting system for fatgraphs made of trivalent or 1-valent nodes. The trivalent nodes used are A (application), L (lambda), FO (fanout), epsilon (for dilations). There is one 1-valent node, T (termination). Loops with no nodes, and arrows, i.e. oriented edges with no nodes, are accepted as well. There is no algorithm proposed for the reduction of these graphs.

The graph rewrites are:

– local ones (i.e. involving only a finite number, a priori given, of nodes and edges)

  •   graphic beta move, which is like the Lamping graph rewrite, only purely local, i.e. there is no limitation on the global shape of the graph, thus it is somehow more general than the beta rewrite from untyped lambda beta calculus; a possible interpretation is: C = let x=B in A rewrites to x=B and C=A

 

  • CO-COMM and CO-ASSOC rewrites for the FO (fanout) node, which are, due to the orientation of the edges, really like graphical, AST forms of co-commutativity and co-associativity
  • local pruning group of rewrites, which describe the interaction of the trivalent nodes with the T node; incidentally T and FO interact as if T is a co-unit for FO
  • a group of rewrites for the dilation nodes, which are those of emergent algebras, which involve the fanout FO and the dilation nodes

 

– global rewrites:

  • global fan-out
  • global pruning

 

In section 3 of the article on GLC an algorithm is given for the conversion of untyped lambda terms into graphs, which is a little more than a modification of the AST of the lambda term, done in such a way that the orientation of the edges is respected. That is because the lambda node L has one incoming edge and two outgoing edges!

I prove then that GLC can be used for untyped lambda beta calculus, but also for emergent algebras and also for a variant of knot theoretic graphs where there is no need for them to be planar. Finally, I show that there might be other “sectors” of graphs which may be interesting, regardless of their meaning (or lack of it) with respect to lambda calculus.

The problem of GLC is that it has these global rewrites. Can these be replaced by local rewrites?

That’s how chemlambda appeared.

I was aware that some applications of the global fanout can be done with local rewrites, but not all. Also, the interest in GLC did not extend to an interest in emergent algebras, to my dismay.

On the other side I became obsessed with the idea that if the global fanout can be replaced by local rewrites entirely then it should be possible in principle to see the rewrites as chemical reactions between individual molecules. A bigger problem would be then: would these reactions reduce the graph-molecules under the dumbest algorithm among all, the random one? Nature functions like this, so any algorithm which would be global in some sense is completely excluded.

This led me to write the article: M. Buliga, Chemical concrete machine (link to DOI) (link to arXiv version).  This is chemlambda v1, which is actually a transition from GLC to something else, or better said to another research subject in the making.

Other sources for this mix glc-chemlambda v1:

 

Chemlambda v1, or the “chemical concrete machine”, is a graph rewriting algorithm which uses the trivalent nodes A, L, FI (fan-in), FO, and the 1-valent node T and only local rewrites:

  • the graphic beta rewrite (between the nodes L and A) and the fan-in rewrite (between the nodes FI and FO)

 

convention_2

 

  • the CO-COMM and CO-ASSOC rewrites

 

 

convention_3

 

  • two DIST (from distributivity) rewrites, for the pairs A-FO and L-FO

 

 

convention_6

  • local pruning rewrites (involving the 1-valent node T)

 

convention_4

 

  •  and elimination of loops

 

convention_5

 

As you see, all rewrites except CO-COMM and CO-ASSOC are like the ones from Lafont’s article Interaction combinators, except that the graphs are directed and the nodes don’t have a principal port for interaction. Here are the interaction combinators rewrites

 

 

lafont-2

In the chemical concrete machine article the Berry and Boudol chemical abstract machine and the Fontana and Buss alchemy are mentioned, but not Lafont. My fault. From what I knew then, the beta rewrite came from Lamping, or better Wadsworth, the fan-in came from Turaev (knotted trivalent graphs), and the DIST rewrites came from the graphical version of linear emergent algebras, or even better from the Reidemeister 3 rewrite in knot theory.

 

 

In this article there is no algorithm for application of the rewrites. (You shall see that in chemlambda v2, which is an artificial chemistry,  there are actually several, among them the deterministic greedy one, with a list of priority of the rewrites, because there are collisions otherwise, and the random one, which interested me the most.)

However I suggest that the rewrites can be seen as done in a truly chemical sense, mediated by (invisible) enzymes, each rewrite type with its own enzyme.

I proved that we can translate from GLC to chemlambda v1 and that global fan-out and global pruning can be replaced by sequences of rewrites from chemlambda v1. There was no proof that this replacement of the global rewrites with cascades of local rewrites can be done by the algorithms considered. I proved the Turing universality by using the BCKW system of combinators.

The chemical concrete machine article ends however with:

“With a little bit of imagination, if we look closer to what TRUE, FALSE and IFTHENELSE are doing, we see that it is possible to adapt the IFTHENELSE to a molecule which releases, under the detection of one molecule (like TRUE), the ”medicine” A, and under the detection of another molecule (like FALSE) the ”medicine” B.”

The chemlambda page (link) here is reliable for chemlambda v1, but it also contains links to newer versions.

[UPDATE: I retrieved this view from 2014  of the story.]

(continues with part II)

 

Deterministic vs random, an example of grandiose shows vs quieter, functioning anarchy

In the following video you can see, at the left, the deterministic and, at the right, the random evolution of the same molecule, duplex.mol from the chemlambda repository. They take about the same time.

The deterministic one is like a ballet, it has a comprehensible development, it has rhythm and drama. Clear steps and synchronization.

The random one is more fluid, less symmetric, more mysterious.

What do you prefer, a grand synchronized show or a functioning, quieter anarchy?

Which one do you think is more resilient?

What is happening here?

The molecule is inspired from lambda calculus. The computation which is encoded is the following. Consider the lambda term for the identity function, i.e. I=Lx.x. It has the property that IA reduces to A for any term A. In the molecule it appears as a red trivalent node with two ports connected, so it looks like a dangling red globe. Now, use a tree of fanouts to multiply (replicate) this identity 8 times, then build the term

(((II)(II))((II)(II)))(((II)(II))((II)(II)))

Then use one more fanout to replicate this term into two copies and reduce all. You’ll get two I terms, eventually.
In the deterministic version the following happens.

– the I term (seen as a red dangling node) is replicated (by a sequence of two rewrites, detail) and gradually the tree of fanouts is destroyed

– simultaneously, the tree of applications (i.e. the syntactic tree of the term, but seen with the I’s as leaves) replicates by the fanout from the end

– because the reduction is deterministic, we’ll get 16 copies of I’s exactly when we’ll get two copies of the application tree, so in the next step there will be a further replication of the 16 I’s into 32 and then there will be two, disconnected, copies of the molecule which represents ((II)(II))((II)(II))

– after that, this term-molecule reduces to (II)(II), then to II, then to I, but recall that there are two copies, therefore you see this twice.

In the random version everything mixes. Anarchy. Some replications of the I’s reach the tree of applications before it has finished replicating itself, then reductions of the kind II –> I happen at the same time as replications of other pieces. And so on.
There is no separation of stages of this computation.
And it still works!

I used quiner_experia, with the mol file duplex.mol. The first time I set all the weights to 0 (to have deterministic application) and took the rise parameter = 0 too (this is specific to quiner_experia, not present in quiner), because the rise parameter lowers the probabilities of new rewrites, exponentially, during the same step, in order to give fair chances to any subset of all the possible rewrites.
Then I made a screencast of the result, without speeding it up, using Safari to run it.
For the random version I took all the weights equal to 1 and the rise parameter equal to 8 (empirically, this gives the smoothest evolution for a generic molecule from the list of examples). I ran the result with Safari and screencasted it.
Then I put the two movies one near the other (deterministic at left, random at right) and made a screencast of them running in parallel. (Almost, there is about 1/2 second difference because I started the deterministic one first, by hand).
That’s it, enjoy!
For chemlambda look at the trailer from the collections of videos I have here on vimeo.

Crossings as pairs of molecular bonds; boolean computations in chemlambda

I collect here slightly edited versions of a stream of posts on the subject from the microblogging chemlambda collection. Hopefully this post will give a clearer image of this thread.

Here at the chorasimilarity open notebook the subject has been touched previously, but perhaps the handmade drawings made it harder to understand (than with the modern 🙂 technology from the chemlambda repository):

Teaser: B-type neural networks in graphic lambda calculus (II)

(especially the last picture!)

All this comes with validation means. This is a very powerful idea: in the future validation will replace peer review, because it is more scientific than hearsay from anonymous authority figures (aka old peer review) and because it is simpler to implement than a network of hearsay comments (aka open peer review).

All animations presented here are obtained by using the script quiner.sh and various mol files. See instructions about how you can validate (or create your own animations) here:
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

Here starts the story.

(Source FALSE is the hybrid of TRUE, boole.mol file used)

boole

Church encoding gives a way to define boolean values as terms in the lambda calculus. It is easy:

TRUE= Lx.(Ly.x)

FALSE= Lx.(Ly.y)

So what? When we apply one of these terms to another two, arbitrary terms X and Y, look what happens: (arrows are beta reductions (Lx.A)B – – > A[x:=B] )

(TRUE X) Y – – > (Ly.X) Y – – > X (meaning Y goes to the garbage)

(FALSE X) Y – – > (Ly.y) Y – – > Y (meaning X goes to the garbage)

It means that TRUE and FALSE select a way for X and Y: one of them survives, the other disappears.

Good: this selection is an essential ingredient for computation

Bad: too wasteful! why send a whole term to the garbage?

Then, we can see it otherwise: there are two outcomes: S(urvival) and G(arbage), there is a pair of terms X and Y.

– TRUE makes X to connect with S and Y to connect with G

– FALSE makes X to connect with G and Y to connect with S

The terms TRUE and FALSE appear as molecules in chemlambda, each one made of two red nodes (lambdas) and a T (termination) node. But we may dispense with the T nodes, because they lead to waste, and keep only the lambda nodes. So in chemlambda the TRUE and FALSE molecules are, each, made of two red (lambda) nodes, and they have one FROUT (free out).
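For example, keeping the T nodes for a moment and using the port convention for the L node which appears further down this page (body in, variable out, out), a plausible mol encoding, with arbitrary edge names, is:

TRUE = Lx.(Ly.x) :

L b x r
L x y b
T y
FROUT r

FALSE = Lx.(Ly.y) :

L b x r
L y y b
T x
FROUT r

Dropping the T lines gives the two-lambda-node molecules discussed here.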

They look almost alike, only they are wired differently. We want to see how it looks to apply one term to X and then to Y, where X and Y are arbitrary. In chemlambda, this means we have to add two green (A) application nodes, so TRUE or FALSE applied to some arbitrary X and Y appear, each, as a 4 node molecule, made of two red (lambda) two green (A), with two FRIN (free in) nodes corresponding to X and Y and two FROUT (free out) nodes, corresponding: one to the deleted termination node, thus this is the G(arbage) outcome, and the other to the “output” of the lambda terms, thus this is the S(urvival) outcome.

But the configuration made of two green A nodes and two red L nodes is the familiar zipper which you can look at in this post

betazipper

In the animation you see TRUE (at left) and FALSE (at right), with the magenta FROUT nodes and the yellow FRIN nodes.

The zipper configurations are visible as the two vertical strings made of two green, two red nodes.

What’s more? Zippers, they do only one thing: they unzip.

The wiring of TRUE and FALSE is different. You can see the TRUE and FALSE in the lower part of the animation. I added four Arrow (white) nodes in order to make the wiring more visible. Arrow nodes are eliminated in the COMB cycle, they have only a fleeting existence.

This shows what is really happening: look at each (TRUE-left, FALSE-right) configuration. In the upper side you have 4 nodes, two magenta, two yellow, which are wired together at the end of the computation. In the case of TRUE they end up wired in a X pattern, in the case of FALSE they end up wired in a = pattern.

At the same time, in the lower side, before the start of the computation, you see the 4 white nodes which: in the case of TRUE are wired in a X pattern, in the case of FALSE are wired in a = pattern. So what is happening is that the pattern ( X or = ) is teleported from the 4 white nodes to the 4 magenta-yellow nodes!

The only difference between the two molecules is in this wire pattern, X vs =. But one is the hybrid of the other, hybridisation is the operation (akin to the product of knots) which has been used and explained in the post about senescence and used again in more recent posts. You just take a pair of bonds and switch the ends. Therefore TRUE and FALSE are hybrids, one of the other.

(Source Boolean wire, boolewire.mol file used )

If you repeat the pattern which is common to the TRUE and FALSE molecules then you get a boolean wire, which is a more impressive “crossings teleporter”. This time the boxed crosses have been flattened, but the image is clear:

boolewire

Therefore, TRUE and FALSE represent choices of pairs of chemical bonds! Boolean computation (as seen in chemlambda) can be seen as management of promises of crossings.

(Source Promises of crossings, ifthenelsecross.mol file used )

You see 4 configurations of 4 nodes each, two magenta and two yellow.

In the upper left side corner is the “output” configuration. Below it and slightly to the right is the “control” configuration. In the right side of the animation there are the two other configurations, stacked one over the other, call them “B” (lower one) and “C” (upper one).

Connecting all this there are nodes A (application, green) and L (lambda, red).

ifthenelsecross

You see a string of 4 green nodes, approximately vertical, in the middle of the picture, and a “bag” of nodes, red and green, in the lower side of the picture. This is the molecule for the lambda term

IFTHENELSE = L pqr. pqr

applied to the “control” then to the “B” then to the “C”, then to two unspecified “X” and “Y” which appear only as the two yellow dots in the “output” configuration.

After reductions we see what we get.

Imagine that you put in each 4 nodes configuration “control”, “B” and “C”, either a pair of bonds (from the yellow to the magenta nodes) in the form of an “X” (in the picture), or in the form of a “=”.

“X” is for TRUE and “=” is for FALSE.

Depending on the configuration from “control”, one of the “B” or “C” configurations will form, together with its remaining pair of red nodes, a zipper with the remaining pair of green nodes.

This will have as an effect the “teleportation” of the configuration from “B” or from “C” into the “output”, depending on the crossing from “control”.

You can see this as: based on what “control” senses, the molecule makes a choice between “B” and “C” promises of crossings and teleports the choice to “output”.

(Source: Is there something in the box?, boxempty.mol used)

I start from the lambda calculus term ISZERO and then I transform it into a box-sensing device.

In lambda calculus the term ISZERO has the expression

ISZERO = L a. ((a (Lx.FALSE)) TRUE)

and it has the property that ISZERO N reduces either to TRUE (if N is the number 0) or FALSE (if N is a number other than 0, expressed in the Church encoding).

The number 0 is
0 = FALSE = Lx.Ly.y

For the purpose of this post I also take the number 2, which in lambda calculus is

2=Lx.Ly. x(xy)

(which means that x is applied twice to y)

Then, look: (all reductions are BETA: (Lx.A)B – – > A[x:=B] )

ISZERO 0 =
(L a. ((a (Lx.FALSE)) TRUE) ) (Lx.Ly.y) – – >
((Lx.Ly.y) (Lx.FALSE)) TRUE – – >
(Ly.y)TRUE – – > (remark that Lx.FALSE is sent to the garbage)
TRUE (and the number itself is destroyed in the process)

and

ISZERO 2 =
(L a. ((a (Lx.FALSE)) TRUE) ) (Lx.Ly. x(xy)) – – >
((Lx.Ly. x(xy)) (Lx.FALSE)) TRUE – – > (fanout of Lx.FALSE performed secretly)
(Lx.FALSE) ((Lx.FALSE) TRUE) – – >
FALSE ( and (Lx.FALSE) TRUE sent to the garbage)

Remark that in the two cases there was the same number of beta reductions.

Also, the use of TRUE and FALSE in the ISZERO term is… none! The same reductions would have been performed with an unspecified “X” as TRUE and an unspecified “Y” as FALSE.

(If I replace TRUE by X and FALSE by Y then I get a term which reduces to X if applied to 0 and reduces to Y if applied to a non zero Church number.)

Of course we can turn all this into chemlambda reductions, but in chemlambda there is no garbage, and moreover I want to make the crossings visible. So where are the crossings, if they don’t come from TRUE and FALSE (because it should work with X instead of TRUE and Y instead of FALSE)?

Alternatively, let’s see (a slight modification of) the ISZERO molecule as a device which senses if there is a number equal to or different from 0, then transforms, according to the two cases, into an X crossing or a = crossing.

Several slight touches are needed for that.

1. Numbers in chemlambda appear as stairs of pairs of nodes FO (fanout, green) and A (application, green), as many stairs as the number which is represented. The stairs are wrapped into two L (lambda, red) nodes and their bonds.
We can slightly modify this representation so that it appears like a box of stairs with two inputs and two outputs, and by adding a dangling green (A, application) node with its output connected to one of its inputs (this makes no sense in lambda calculus, but behaves well in the beta reductions as performed in chemlambda).

In the animation you can see, in the lower part of the figure:
– at left the number 0 with an empty box (there are two Arrow (white) nodes added for clarity)
– at right the number 2 with a box with 2 stairs
… and in each case there is this dangling A node (code in the mol file of the form A z u u)
boxempty

2. The ISZERO is modified to have two FRIN (free in, yellow) and two FROUT (free out, magenta) nodes which will be involved in the final crossing(s). This is done by a clever (I hope) change of the translation of the ISZERO molecule into chemlambda: first, the two yellow FRIN nodes represent the “X” and the “Y” (which replace the FALSE and the TRUE, recall), and then a FOE (the other fanout node, yellow) and a FI (fanin node, red) are added in strategic places.

________________________________

let x=A in B in chemlambda

The beta reduction from lambda calculus is easy to understand as the command
let x=B in A
It corresponds to the term (Lx.A)B which reduces to A[x:=B], i.e. to the term A where all instances of x have been replaced by B.
(by an algorithm which is left to be added, there are many choices among them!)

In Christopher P. Wadsworth, Semantics and Pragmatics of the Lambda Calculus , DPhil thesis, Oxford, 1971, and later John Lamping, An algorithm for optimal lambda calculus reduction, POPL ’90 Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, p. 16-30, there are proposals to replace the beta rule by a graph rewrite on the syntactic tree of the term.

These proposals opened many research paths, related to call by name strategies of evaluations, and of course going to the Interaction Nets.

The beta rule, as seen on the syntactic tree of a term, is a graph rewrite which is simple, but the context of its application is complex, if one tries to stay in the realm of syntactic trees (or that of some graphs which are associated to lambda terms).

This is one example of blindness caused by semantics. Of course it is very difficult to conceive strategies of application of this local rule (it involves only two nodes and 5 edges) so that the graph after the rewrite has a global property (i.e. it represents a lambda term).

But a whole world is missed this way!

In chemlambda the rewrite is called BETA or A-L.
see the list of rewrites
http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html

In mol notation this rewrite is

L 1 2 c , A c 3 4  – – > Arrow 1 4 , Arrow 3 2

(and then it is the turn of the COMB rewrite to eliminate the arrow elements, if possible)

As 1, 2, 3, 4, c are only port variables in chemlambda, let’s use others:

L A x in , A in B let  – – >  Arrow A let , Arrow B x

So if we interpret Arrow (via the COMB rewrite) as meaning “=”, we get to the conclusion that

let x=B in A

is in chemlambda

L A x in

A in B let

Nice. It is almost verbatim the same thing. Remark that the command “let” appears as a port variable too.
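Mechanically, the rewrite is just a pattern replacement on the mol file. Here is a minimal Python sketch of it (the real scripts are in awk; the representation of the mol as a list of lists is an illustrative assumption):

def beta_step(mol):
    # mol is a list of node lines, e.g. [["L","1","2","c"], ["A","c","3","4"]]
    for i, l in enumerate(mol):
        if l[0] != "L":
            continue
        for j, a in enumerate(mol):
            if a[0] == "A" and a[1] == l[3]:    # the L out port is the A left in port
                rest = [m for k, m in enumerate(mol) if k not in (i, j)]
                rest.append(["Arrow", l[1], a[3]])   # Arrow 1 4
                rest.append(["Arrow", a[2], l[2]])   # Arrow 3 2
                return rest
    return mol                                   # no L-A pattern found

print(beta_step([["L", "1", "2", "c"], ["A", "c", "3", "4"]]))
# prints [['Arrow', '1', '4'], ['Arrow', '3', '2']]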

Visually the rewrite/command is this:

beta
As you see this is a Wadsworth-Lamping kind of graph rewrite, with the distinctions that:
(a) x, A, B are only port variables, not terms
(b) there is no constraint for x to be linked to A

The price is that even if we start with a graph which is related to a lambda term, after performing such careless rewrites we get out of the realm of lambda terms.

But there is a more subtle difference: the nodes of the graphs are not gates and the edges are not wires which carry signals.

The reduction works well for many fundamental examples,
http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html
by a simple combination of the beta rewrite and those rewrites from the DIST family I wrote about in the post about duplicating trees.
https://chorasimilarity.wordpress.com/2015/05/04/the-illustrated-shuffle-trick-used-for-tree-duplication/

So, we get out of lambda calculus; what’s wrong with that? Nothing, actually: it turns out that the world outside lambda, but inside chemlambda, has very interesting features. Nobody explored them, and that is why it is so hard to discuss them without falling into one’s preconceptions (it has to be functional programming, it has to be lambda calculus, it has to be a language, it has to have a global semantics).
___________________________________________

All successful computation experiments with lambda calculus in one list

What you see in these links: I take a lambda term, transform it into an artificial molecule, then let it reduce randomly, with no evaluation strategy. That is what I call the most stupid algorithm. The amazing thing is that it works.
You don’t have to believe me, because you can check it independently, by using the programs available in the github repo.

Here is the list of demos where lambda terms are used.

Now, details of the process:
– I take a lambda term and I draw the syntactic tree
– this tree has as leaves the variables, bound and free. These are eliminated by two tricks, one for the bound variables, the other for the free ones. The bound variables are eliminated by replacing them with new arrows in the graph, going from one leg of a lambda abstraction node to the leaf where the variable appears. If there are more places where the same bound variable appears, then insert some fanout (FO) nodes. For the free variables do the same, by adding for each free variable a tree of FO nodes. If the bound variable does not appear anywhere else then add a termination (T) node.
– in this way the graph which is obtained is no longer a tree. It is mostly a trivalent graph, with some free ends. It is an oriented graph. The free ends which correspond to an “in” arrow are there, one for each free variable. There is only one end which corresponds to an “out” arrow, coming from the root of the syntactic tree.
– I give a unique name to each arrow in the graph
– then I write the “mol file” which represents the graph, as a list of nodes and the names of the arrows connected to the nodes (thus an application node A which has the left leg connected to the arrow “a”, the right leg connected to the arrow “b” and the out leg connected to “c” is described by one line “A a b c”, for example; a complete small example is given below this list).
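For example (assuming the port convention for the L node used in other posts here: body in, variable out, term out, with arbitrary arrow names), the identity term I = Lx.x gives the two-line mol file

L a a r
FROUT r

where the arrow “a” goes from the variable leg of the lambda node back to its body leg, and “r” is the single “out” free end coming from the root (written here as a FROUT node).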

OK, now I have the mol file, I run the scripts on it and then I look at the output.

What is the output?

The scripts take the mol file and transform it into a collection of associative arrays (that’s why I’m using awk) which describe the graph.

Then they apply the algorithm which I call “stupid” because it really is stupidly simplistic (a sketch in code is given after the list): do a predetermined number of cycles, where in each cycle do the following
– identify the places (called patterns) where a chemlambda rewrite is possible (these are pairs of lines in the mol file, so pairs of nodes in the graph)
– then, as you identify a pattern, flip a coin, if the coin gives “0” then block the pattern and propose a change in the graph
– when you finish all this, update the graph
– some rewrites involve the introduction of some 2-valent nodes, called “Arrow”. Eliminate them in a inner cycle called “COMB cycle”, i.e. comb the arrows
-repeat

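The cycle, in the same Python sketch style: here `find_patterns` and `apply_rewrite` stand in for the chemlambda-specific pattern matching and rewriting, which I don't spell out; only the coin flipping, the blocking, the fresh-name counter and the Arrow combing are concrete:

    import random

    def comb(nodes):
        # COMB cycle: remove 2-valent Arrow nodes by gluing their two arrows
        while True:
            i = next((k for k, (t, _) in enumerate(nodes) if t == "Arrow"), None)
            if i is None:
                return
            a, b = nodes[i][1]
            del nodes[i]
            nodes[:] = [(t, [a if p == b else p for p in ps]) for t, ps in nodes]

    def reduce_mol(nodes, find_patterns, apply_rewrite, cycles=100):
        counter = [0]
        def fresh():                        # new arrow names, made by a counter
            counter[0] += 1
            return "n%d" % counter[0]
        for _ in range(cycles):
            blocked, proposed = set(), []
            for pattern in find_patterns(nodes):   # pattern: a pair of node indices
                if blocked & set(pattern):
                    continue                       # overlaps an already blocked pattern
                if random.randint(0, 1) == 0:      # flip a coin
                    blocked |= set(pattern)
                    proposed.append(pattern)
            for pattern in proposed:               # update the graph at the end
                apply_rewrite(nodes, pattern, fresh)
            comb(nodes)
        return nodes

    ns = [("FRIN", ["1"]), ("Arrow", ["1", "2"]), ("FROUT", ["2"])]
    comb(ns)
    assert ns == [("FRIN", ["1"]), ("FROUT", ["1"])]
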
As you see, there is absolutely no care about the correctness of the intermediary graphs. Do they represent lambda terms? Generically no!
Are there any variables which are passed, or evaluations of terms which are done in some clever order (eager, lazy, etc.)? Not at all. There are no variables other than the names of the arrows of the graph, and these have the property that each name appears exactly twice in the mol file (once in an "in" port, once in an "out" port). When a pattern is replaced these names disappear and the names of the arrows from the new pattern are generated on the fly, for example by a counter of arrows.

The scripts do the computation and they stop. There is a choice made over the way of seeing the computation and the results.
One obvious choice would be to see the computation as a sequence of mol files, corresponding to the sequence of graphs. Then one could use another script to transform each mol file into a graph (via, say, a json file) and use some graph visualiser to see the graph. This was the choice made in the first scripts.
Another choice is to make an animation of the computation, by using d3.js. Nodes which are eliminated are first freed of links and then they vanish, while new nodes appear, are linked with their ports, then linked with the rest of the graph.

This is what you see in the demos. The scripts produce a html file, which has inside a js script which uses d3.js. So the output of the scripts is the visualisation of the computation.

Recall that the algorithm of computation is random, therefore it is highly likely that different runs of the algorithm give different animations. In the demos you see one such animation, but you can take the scripts from the repo and make your own.

What is amazing is that they give the right results!

It is perhaps bizarre to look at the computation and not make any sense of it. What happens? Where is the evaluation of this term? Who calls whom?

Nothing of this happens. The algorithm just does what I explained. And since there are no calls, no evaluations, no variables passed from here to there, that means that you won’t see them.

That is because the computation does not work by the IT paradigm of signals sent by wires, through gates, but it works by what chemists call signal transduction. This is a pervasive phenomenon: a molecule enters in chemical reactions with others and there is a cascade of other chemical reactions which propagate and produce the result.

About what you see in the visualisations.
Because the graph is oriented, and because the trivalent nodes have their legs differentiated (i.e. for example there might be a left.in leg, a right.in leg and an out leg, which for symmetry is described as a middle.out leg), I want to turn it into an unoriented graph.
This is done by replacing each trivalent node by 4 nodes, and each free end or termination node by 2 nodes.
For a trivalent node there will be one main node and 3 other nodes which represent the legs. These are called "ports". There is a color-coded notation; the choice made is to represent the nodes A (application) and FO with the main node colored green, the L (lambda) and FI (fanin, which exists in chemlambda only) with red (actually in the demos this is a purple),
and so on. The port nodes are coloured yellow for the "in" ports and blue for the "out" ports. The "left", "right", "middle" types are encoded by the radius of the ports.

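As a sketch, the expansion of one mol line into visualisation nodes might look like this in Python; the particular color names and radius values are illustrative, not the demos' exact palette:

    GREEN, RED = {"A", "FO"}, {"L", "FI"}

    def expand_node(node_type, ports):
        # ports: list of (arrow name, direction, position) with
        # direction in {"in", "out"} and position in {"left", "right", "middle"}
        main = {"kind": node_type,
                "color": "green" if node_type in GREEN
                         else "red" if node_type in RED else "gray"}
        port_nodes = [{"arrow": arrow,
                       "color": "yellow" if direction == "in" else "blue",
                       "radius": {"left": 3, "middle": 5, "right": 7}[position]}
                      for arrow, direction, position in ports]
        return main, port_nodes

    main, ports = expand_node("A", [("a", "in", "left"), ("b", "in", "right"),
                                    ("c", "out", "middle")])
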
__________________________________________________

Another chemlambda reduction and a weird Google bug

This is the end of the computation of the monus function monus(a,b)=a-b if a>=b, else 0, for a=3 and b=20.

It is the last part of the computation, as recorded at 8X speed from my screen.

If you want to see all of it then go to https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic , clone it and then type: bash moving_alt.sh and choose monus.mol from the list of mol files.

This time the deterministic algorithm is used, which makes all possible moves every time.

What is interesting about this computation is the quantity of trash it produces.

Indeed, I took a lambda term for the monus, applied it to 3 and 20 (in Church encoding), then applied the result to the successor and then to 0.

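As a plain lambda calculus sanity check of the expected result, here is the same computation in Python with the standard Church encodings (this is not the chemlambda reduction, and the textbook PRED and MONUS terms below are not necessarily the ones in monus.mol):

    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
    PAIR = lambda a: lambda b: lambda z: z(a)(b)
    FST = lambda p: p(lambda a: lambda b: a)
    SND = lambda p: p(lambda a: lambda b: b)
    # Kleene predecessor, via pairs
    PRED = lambda n: FST(n(lambda p: PAIR(SND(p))(SUCC(SND(p))))(PAIR(ZERO)(ZERO)))
    MONUS = lambda m: lambda n: n(PRED)(m)     # m - n if m >= n, else 0

    def church(k):
        n = ZERO
        for _ in range(k):
            n = SUCC(n)
        return n

    # "applied to successor then applied to 0":
    assert MONUS(church(3))(church(20))(lambda x: x + 1)(0) == 0
    assert MONUS(church(20))(church(3))(lambda x: x + 1)(0) == 17
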
In the video you see that after the result appears (the small molecule connected to the blue dot, which is the chemlambda version of 0 in the Church encoding), there is still a lot of activity destroying the rest of the molecule. It looks nice though.

Something strange happened when I tried to publish the video (there are so many strange things related to the dissemination of chemlambda that they become a family joke, some of the readers of this blog are aware of some of those).

After I completed the description I got the warning shown in the following video:  “Brackets aren’t allowed in your description”

For example this previous video has brackets in the description and in the title as well

and all worked well.

Anyway, I eliminated the brackets but the warning remained, so eventually I logged out of my Google account and did something else.

Some time later I tried again, an experience described in this video (which took a LOT of time to be processed).

At the beginning the fields for title and description were empty and no error was announced.

After I filled them in, what you see in the video happened.

I understand that Google uses a distributed system (which probably needs lots of syncs, because you know, that is how intelligent people design programs today), but:

  • the "brackets are not allowed" warning is bogus, because a previous video worked perfectly well with brackets in the description,
  • the "unknown error" simply means that some trace was left on my computer that there was an imaginary error in the previous try, so instead of checking whether there is an error this time, I suppose the browser was instructed to read from the ton of extremely useful cookies Google puts on the user's machine.

_________________________________________________________________

What chemlambda can do for you

This post follows after What can I do with chemlambda, which questions to ask about it? and FAQ: chemlambda in real and virtual worlds.

UPDATE:

0. (for chemists) With chemlambda you can build molecular computers. For this you (in case you're a chemist) could identify the real chemical reactions which embody the moves of this made-up chemistry. They are simple enough to be possible in the real world. See All chemical reactions needed for a molecular computer.

___________________

1. Is chemlambda another visualization tool for lambda calculus?

NO. Visualization is an independent part, for the moment I use d3.js. Moreover chemlambda is only tangentially related to lambda calculus.

2. What is new in chemlambda then?

You have an example of a computation model which does not use values, calls, variables, variable passing.

Have you seen any other model with these properties?

3. Why should I be excited about a computation with no values, no evaluations, no calls, etc?

For several reasons:

  • nobody thinks about that (excepting perhaps biology related researchers, who don’t see, by looking through the microscope, the structure loved by CS people)
  • this is a blind spot, an accident of history, because computers appeared after the telephone, during and after WW2, when the problem was to encrypt and decrypt messages sent from A to B, and because physically the computers we know how to build are based on electronics
  • think, if you don’t have the problem of how to pass a value, or how to call a function in the internet, then what could be possible which now is not?
  • think about building real or virtual computers which are life like, in the sense that their work is complex to the point of seeming incomprehensible, but their complexity is based on very simple mechanisms, used again and again, without any director to channel them and without any hierarchy to maintain or enforce.

4. So should I pack my bags and look for other research subjects than what is popular now, or maybe should I just ignore chemlambda and go with the business as usual? After all, you know, cloud computing is great. Process calculi, types, all this.

It’s your choice, but now you know that there is something which is closer to real life like computation, which may hold the promise for a free sky, cloudless internet.

5. Suppose I want to play a bit with these ideas, but not to the point of getting out of my comfort zone. Which questions would be interesting to ask?

Right, this is a good strategy. Here are some questions which are related to lambda calculus, say.

  • there is an algorithm which transforms an (untyped lambda beta calculus) term into a chemlambda molecule, but even if one can prove that the beta move translates into the BETA move (like Wadsworth-Lamping) and that eventually the translations of the SKI or BCKW combinators reduce in chemlambda like they should, there is the question: how does chemlambda do, by local rewrites, the thing which replaces an evaluation strategy? A study by examples may give interesting insights.
  • how to program with chemlambda? Indeed, it can reproduce some behaviours of untyped lambda beta, but even so it does it in ways which are not like those usually expected, as proven by the demos. To take an older example, the Y combinator simplifies to a 2-node molecule, a story told in this article. The fact is that because one does not have eta (which looks like a global move in chemlambda, i.e. there is no a priori bound on the number of nodes and links to check for the application of eta) there are no functions. It's a kind of extremal functional programming where there are no functions.
  • what are the right replacements for lists, currying, booleans, which could take advantage of chemlambda?
  • by a try and explore procedure, which are the less stupid reduction algorithms and in which sense do they work exactly? This has to do with locality, which constrains a lot the design of those algorithms.
  • finally, what can chemlambda do outside the sector of untyped lambda beta, which is only a small part of the possible chemlambda molecules?

_____________________________________________________________________

Experiments with a little lisper tutorial

I used the excellent lambda calculus tutorial by Mayer Goldberg from the little-lisper.org for some experiments in chemlambda.

UPDATE: See also the post The working factorial and the video

_______________

There are five of them.

The mentioned tutorial is the source of the lambda term which I translated into chemlambda and then used in the Ackermann function demos. The most outrageously successful is the computation of Ack(2,2) while self-multiplying. Moreover, the daughter molecules do not finish at the same time.

Here is a video of a screen recording at 2X speed.

Now, there is a very puzzling thing here. In chemlambda with a random, purely local graph rewriting algorithm I can compute the Ackermann function. But what about simpler functions, like the factorial?

I tried the easy way, consisting in the translation of lambda terms for the factorial into chemlambda, and the results are ugly. As a geometer who wants to decipher computation without using variables, values, variable passing or evaluations, I know that chemlambda is different because of this feature it has: it uses something like signal transduction instead of the gates-and-wires general model from IT. So there have to be consequences of that.

Let’s see what happens in the case of the factorial. In this demo I took the lambda term from the same tutorial, par. 57, page 14. I hoped will work better than other proposals, based on the experience with the Ackermann function. I noticed in other experiments with terms for the factorial that the reduction in chemlambda is extremely wasteful, producing thousands of nodes and lots of trash. Moreover, there is a problem with the fix point combinator, explained in the article with Louis Kauffman Chemlambda, universality and self multiplication, and described in the older demos like this one. In chemlambda, the molecule which corresponds to the Y combinator reduces to a very simple one (which does not has an equivalent as a lambda term), made by only two nodes, fanout and application. Then the molecule becomes a gun which shoots pairs of fanout and application node. There is no mystery about the Y combinator in chemlambda, the real mechanism of it consists in the self-multiplication by the fanout node. [Note: in the old demo and in the article linked there is no FOE fanout. The FOE and the moves associated to it are a recent addition to the formalism, see the page of moves.]

The problem with using the Y combinator is that it never stops generating pairs of A and FOE nodes. In a computation which implements recursion by the Y combinator, at some point, according to an IFTHENELSE or ISZERO, the cycle of Y is broken. But in chemlambda the Y gun molecule continues to exist, and this leads to a never ending computation. On one side this is OK, see for example the life-like computations with chemlambda quines. Actually there is a stronger relation between chemlambda quines and the Y combinator molecule. One can design the computation such that when the Y molecule is no longer needed, it is encapsulated into a quine, but explaining this in detail is for another time.

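To see in miniature what "the cycle of Y is broken by an ISZERO" means, here is the usual strict-language rendering in Python (the Z fixed point combinator, the call-by-value variant of Y; this illustrates the lambda calculus mechanism, not the chemlambda gun):

    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
    FACT = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    assert FACT(5) == 120   # the n == 0 test is what breaks the recursion cycle
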
I come back to the factorial example. In the demo I linked you can see that the computation of the factorial is wasteful (and paradoxically leads to a Y molecule), even if it does not use a fix point combinator.

Why?

First I thought it was because of currying and uncurrying. In chemlambda, because it is a graph rewrite system, there is no particular need to use currying all the time.

Then, to check this, I modified the molecule from the little lisper tutorial in order to geometrically compute the repeated application of the function f(a,b) = (succ(a), succ(a)*b). The function is a piece of a graph with two in and two out links, which self-multiplies under the action of a number in the Church encoding.

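In ordinary arithmetic this is just the pair trick for the factorial; a quick check in Python (the starting pair (0, 1) is my assumption about the initial condition):

    def f(a, b):
        # one application: (a, b) -> (succ(a), succ(a) * b)
        return a + 1, (a + 1) * b

    a, b = 0, 1
    for _ in range(5):
        a, b = f(a, b)
    assert (a, b) == (5, 120)   # after n steps, b = n!
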
Here is a successful computation with this molecule. But does it work all the time or have I been lucky? The reduction algorithm is random and different runs may give different results. It is the case with the Ackermann function computation as well, but that one was successful all the time.

Oh, it turns out that the computation with that molecule works well in about 20% of the runs. Here is an unsuccessful run.

So there is still a problem, but which one?

Under close examination the computation is still wasteful, because of the term (piece of molecule) c0, for the 0 in the Church encoding. In chemlambda this term corresponds to a small molecule which has a termination node inside.

When we want to trash something useless, like programmers do, in chemlambda the useless piece does not disappear. It is like in nature: the thing continues to exist and continues to interact with the rest of the environment while in the process of degradation.

The termination node, via the PRUNING moves, slowly destroys the useless part, but other pieces from the useless part continue to interact with the rest of the graph.

Is this the problem?

In order to check this I further modified the molecule which was successful 20% of the time. I just replaced c0 by c1, the molecule for the lambda term for 1 in the Church encoding. Now, c1 = λf.λx.f x uses its bound variable f, so it does not have any termination node inside, while c0 = λf.λx.x discards f, which is where its termination node comes from.

The price is that I no longer compute the factorial, but instead I compute the repeatedly applied function

F(a,b)=(succ(a), tms(a,b))

where tms(a,b) = ab+1. Here is a demo for the computation of tms(3,3) in chemlambda, and further there is a video for tms(5,5) = 26, where you can see a new creature, more complex than the walker discovered in the predecessor reduction.

I checked to see what the closed expression is, if any, for the function I compute, namely

f(0)=1, f(n+1) = (n+1)f(n) + 1

and the Wolfram Alpha engine has an interesting answer.

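A quick numerical check of this recurrence against the repeated application of F, in Python (the starting pair (1, 1), i.e. f(0) = 1, is my reading of the initial condition):

    def F(a, b):
        return a + 1, a * b + 1      # tms(a, b) = a*b + 1

    def f(n):
        a, b = 1, 1
        for _ in range(n):
            a, b = F(a, b)
        return b

    assert [f(n) for n in range(5)] == [1, 2, 5, 16, 65]
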
Well, this time I hit the right spot. The computation works, again and again and again.

So we have to learn to program ecologically with chemlambda.

____________________________________________________________________

Two walkers enter into a Holliday junction…

In this post I want to explain the signal transduction process which replaces signal transmission in chemlambda.

Hopefully it will be clear with the help of some visual demos.

[Don’t believe what I write and proceed by the scientific method by checking my experiments yourself. You can easily do this by using the links in the demo. They point to a github repo. You have to switch to the gh-pages branch and there you find the scripts and the mol files which are needed to reproduce the experiments. You can of course make new experiments of your own! By looking in the scripts you shall see how things work. It is much more rigorous and easier to check than the written proof version. On the other side it is of course less authority-friendly and demands a bit larger attention span, but with a small attention span nobody can understand anything, right? For more on this “publishing philosophy” see the post The Shortest Open Access and New forms of Publication Question.]

I start.

Chemlambda is interesting because it is good at doing two different things at once:

  • it can compute as computers do it (i.e. is Turing universal)
  • it can also simulate chemical like, or even biological like phenomena.

This is great because there is no other system (excepting Nature) which can do this with such a small effort, at once (i.e. with the same tools).

Indeed, you have artificial life proposals, like for example swarm chemistry, which can simulate some simple life like phenomena but which can’t compute something as sophisticated as the Ackermann function (my favorite catch demo for CS people).

There is the amazing Game of Life, which can do both, but: for a Turing-like computation one needs hundreds of thousands of cells, on a fixed predefined grid, updated synchronously.

What enables chemlambda to do that?

In the development of chemlambda I followed some principles as thought constraints. These principles shaped, and will shape further, this project. They are:

  • (locality) every piece of the formalism or implementation has to be local in space, time or in control terms.
  • (no meaning) global meaning is just an illusion, which is moreover hard to maintain or enforce, Nature does not work by high level meaning
  • (topology does compute) signal transduction, not signal transmission.

While locality seems a natural and desirable feature to have in many formalisms, it is unsurprisingly difficult to achieve. The reason for this, in my opinion, is cultural: we are the children of the Industrial Revolution, and so we are trained, and our minds are shaped, to think in terms of a global, god-like point of view, which gives us total powers over space, time, matter, and total control over all the universe at once. This is visible in scientific explanations in particular, where, just because we want to explain a simple idea, we then have to build a formal notational frame around it, like external coordinates, names for the variables (coming with their complex bookkeeping formalism), and to appeal to reification. While all these ingredients are useful for the transmission of a scientific explanation, they are not, by themselves, needed for the understanding. Example: suppose I'm explaining to you the plot of a film. At some point you get lost with the names of the characters and then I react like this: "Look, it's simple: A wants to kill B and for that he hires the hitman C. But C has a love affair with B and together they trick A into believing that …" Now, "A", "B" and "C" may be useful to transmit my understanding of the movie plot to you, but they are not needed for my understanding. In the frame of mind of the Industrial Revolution, the world is a system which can be globally mapped into a hierarchical flow of processes, where everything falls well into this or that case of study. You have the system, you control the world, as if it is an industrial process.

The latest installment of this way of thinking (I’m aware about) is category theory.

The downside of this is the loss of locality and the profusion of higher and higher levels of abstraction, which is the quick fix for the cracks in the globality illusion.

Maybe now the second principle becomes clear: no meaning. Many, if not all natural things don't have a global, fixed or indisputable meaning. Still, Nature works beautifully. The logical conclusion is that meaning is something us humans use and seek (sometimes), but not a necessary ingredient in Nature's workings.

The concrete use of the "no meaning" principle consists in the elimination of any constraints which may introduce globality by the back door: there are no "correct" graphs in chemlambda, nor does there exist any local decoration of the edges of the chemlambda graphs which gives a global "meaning" to them.

Elimination of names, elimination of evaluations.

The third principle is called “topology does compute” as an allusion to the NTC vs TC discussion. The idea is very simple: instead of thinking in terms of wires which transmit signals, which are then processed by gates, think about signal transduction as an emergent phenomenon from the locality of the graph rewrites.

Signal transduction is a notion from biology: a molecule binds to a receptor (molecule), which triggers a chain, a cascade of other chemical reactions. Each chemical reaction is, of course, local, but the emergent phenomenon is the movement of a "signal" (as seen by our minds, but non-existent as a well defined entity in reality). We identify the "signal", but things happen without needing the "signal" entity as a prerequisite.

Chemlambda works like that.

In order to explain this I shall use the “walker” example.

It is instructive to see how the computational and the biological like features of chemlambda led to the discovery of the walker.

The initial goal was to see how various lambda calculus terms work in chemlambda. I knew that there is an algorithm which associates to any lambda term a chemlambda molecule, so just by picking interesting lambda terms I could then see how they reduce in chemlambda, exclusively by local graph rewrites.

One of these terms is the predecessor, see Arithmetic in lambda calculus. Numbers appear (in chemlambda) as ladders of pairs of nodes, with the beginning and the end of the ladder connected by abstraction nodes. Roughly, the whole topology is that of a circular ladder.

One can also translate the predecessor lambda term and apply it to the number. In lambda calculus the predecessor applied to the number N gives N-1 (if N>0; the predecessor of 0 in lambda calculus is 0).

In chemlambda the predecessor is another molecule, which looks like a small bag. To apply the predecessor to a number translates in chemlambda into putting a supplementary A (application) node, and to connect some ports of the A node with the circular ladder (the number) and with the bag (the predecessor).

The first few reductions are trivial and they transform the molecule I described into a circular one, where on top of the ladder there is a molecule and at the end of the ladder there is another, smaller one.

All in all it looks like a circular train track with something on the tracks.

Now it gets interesting: the reduction process (in chemlambda) looks like there is a “creature” which travels along the train tracks.

This is the walker. You can see it in this first demo. Or you may look at this video

(however I do suggest the demo, the live version is far more impressive).

It is a beautiful example of signal transduction.

In the deterministic chemlambda reduction algorithm (all graph rewrites which are possible are applied) the walker keeps its shape, i.e. periodically we find the same graph (the walker graph) at different places on the train track (the number).

The random reduction algorithm breaks this regularity (and synchronicity as well) because in this algorithm a coin is flipped before application of any move, and the move is applied or not with 50% probability.

That is what you see in the demo: the walker looks like a kind of a wave which propagates along the train tracks, until it hits the end of the track (i.e. the small molecule I mentioned previously) and then it is destroyed progressively. It leaves behind a train track with a pair of nodes less than before the reduction.

So, inside the reduction mechanism of a lambda term (pure computation side) there is a self-maintaining, propagating entity, the walker, which travels in a biological fashion through the representation of a number!

This led me to the notion of a chemlambda quine in a rather obvious way:

  • let's eliminate the beginning and the end of the ladder and replace them by a circular ladder with no start or end parts; then the walker entity would go and go in circles, endlessly; this is the "ouroboros", an older explanation,
  • remark that as a graph, in the deterministic reduction algorithm, after each reduction step the “ouroboros” is unchanged as a graph
  • go to the extreme and use an ouroboros on a single train track pair, this is the 28-quine, see it in the second demo.

Let’s study more the signal transduction aspect. What can happen if two walkers interfere?

The experiment for this can be seen in the third demo.

It works like this. Take two predecessor reduction graphs, two copies of the graph from the first demo (each has 20 pairs of nodes in its ladder).

Now, cross the ladders. How?

By a Holliday junction, in biochemical terms. It looks like this

[image: Holliday junction, coloured]

[source]

Mathematically this is a product of two ladder graphs, where two links are cut, crossed and glued back.

The amazing thing is that the two walkers go their way, they mix and then they continue, each on the other's track, until the end points.

They behave as if they are waves passing one through the other.

UPDATE: This is a screen recording of the experiment

__________________________________________________________________________

As a handful of slime mold. That’s how smart is the whole net.

I record here, to be sure that it’s read, a message of explanations I wrote today.

Before that, or after that you may want to go check the new chemlambda demos, or to look at the older ones down the page, in case you have not noticed them. There are also additions in the moves page. Most related to this post is the vision page.

“First, I have lots of examples which are hand drawn. Some of them appeared for theoretical reasons, but the bad thing used to be that it was too complex to follow on paper what's happening (or to be sure of it). From the moment I started to use the mol files notation (aka g-patterns), it has been like I suddenly became able to follow in much more depth. For example, that's how I saw the "walker", the "ouroboros" and finally the first quine, by reducing the graph associated to the predecessor (as written in lambda calculus). Finally, I am now able to see what happens dynamically.
However, all these examples are far from where I would like to be now; they correspond only to the introduction into the matter. But with every "technical" improvement there are new things which appear. For example, I was able to write a program which generates, one by one, each of the about 1000 initial graphs made of 2 nodes, and follows their reduction behaviour for a while, so that I can then study the interesting specimens. That is how I found all kinds of strange guys: left or right propagators, switchers, back propagators, all kinds of periodic (with period > 1) graphs.
So I use mainly as input mol files of graphs which I draw by hand or which I identify in bigger mol files as interesting. This is a limitation.
Another input would be a program which turns a lambda term into a mol file. The algorithm is here.
Open problem: pick a simplified programming language based on lambda calculus and then write a "compiler", better said a parser, which turns it into a mol file. Then run the reduction with the favourite algorithm, for the moment the "stupid" deterministic and the "stupid" random ones.
While this seems feasible, this is a hard project for my programming capabilities (I’d say more because of my bad character, if I don’t succeed fast then I procrastinate in that direction).
It is tricky to see how a program written in that hypothetical simple language reduces as a graph, for two reasons:
  1. it has to be pure lambda beta, but not eta (so the functions are not behaving properly, because of the lack of eta)
  2. the reductions in chemlambda do not parallel any reduction strategy I know about for lambda calculus.
The (2) makes things interesting to explore as a side project, let me explain. So there is an algorithm for turning a lambda term into a mol file. But from that point, the reductions of the graph represented by the mol file give other graphs which typically do not correspond to lambda terms. However, magically, if the initial lambda term can be reduced to a lambda term where no further reductions are possible, then the final graph from the chemlambda reduction does correspond to that term. (I don't have a proof of that, because even if I can prove that, restricted to lambda terms written only as combinators, chemlambda can parallel the beta reduction, the reduction of the basis combinators, and is able to fan-out a term, it does not mean that what I wrote previously is true for any term, exactly because during chemlambda reductions one gets out of the realm of lambda terms.)
In other cases, like for the Y combinator, there is a different phenomenon happening. Recall that in chemlambda there is no variable or term which is transmitted from here to there, there is nothing corresponding to evaluation properly. In the frame of chemlambda the behaviour of the Y combinator is crystal clear though. The graph obtained from Y as lambda term, connected to an A node (application node) starts to reduce even if there is nothing else connected to the other leg of the A node. This graph reduces in chemlambda to just a pair of nodes, A and FO, which then start to shoot pairs of nodes A and FOE (there are two fanout nodes, FO and FOE). The Y is just a gun.
Then, if something else is connected to the other leg of the application node I mentioned, the following phenomenon happens. The other graph starts to self-multiply (due to the FOE node of the pair shot by Y) and, simultaneously, the part of it which is self-multiplied already starts to enter in reaction with the application node A of the pair.
This makes the use of Y spectacular but also problematic, due to its exuberance. That is why, for example, even if the lambda term for the Ackermann function reduces correctly in chemlambda (or see this 90 min film), I have not yet found a lambda term for the factorial which does the same (because the behaviour of Y has to be tamed: it produces too many opportunities to self-multiply and it does not stop, albeit there are solutions for that, by using quines).
So it is not straightforward that mol files obtained from lambda terms reduce as expected, because of the mystery of the reduction strategy, because of the fact that intermediary steps go outside lambda calculus, and because the graphs which correspond to terms which are simply trashed in lambda calculus continue to reduce in chemlambda.
On the other side, one can always modify the mol file or pre-reduce it and use it as such, or even change the strategy of the algorithm represented by the lambda term. For example, recall that because of lack of eta, the strategy based on defining functions and then calling them, or in general, just currying stuff (for any other reasons than for future easy self-multiplication) is a limitation (of lambda calculus).
These lambda terms are written by smart humans who stream everything according to the principle of turning everything into a function, then using the function when needed. By looking at what happens in chemlambda (and presumably in a processor), most of this functional book-keeping is beautiful for the programmer, but a limitation from the point of view of the possible alternative graphs which do the same as the functionally designed one, but easier.
This is of course connected to chemistry. We may stare at biochemical reactions, well known in detail, without being able to make sense of them, because things do happen by shortcuts discovered by evolution.
Finally, the main advantage of a mol file is that any collection of lines from the mol file is also a mol file. The free edges (those which are capped by the script with FRIN (i.e. free in) and FROUT (i.e. free out) nodes) are free as a property of the mol file where they reside, so if you split a mol file into several pieces and send them to several users then they are still able to communicate by knowing that this FRIN of user A is connected (by a channel? by an ID?) to that FROUT of user B. But this is for a future step which would take things closer to real distributed computing.
And to really close it, if we would put on every computer linked to internet a molecule then the whole net would be as complex (counting the number of molecules) as a mouse. But that’s not fair, because the net is not as structured as a mouse. Say as a handful of slime mold. That’s how smart is the whole net. Now, if we could make it smarter than that, by something like a very small API and a mol file as big as a cookie… “
_________________________________________________________________________

Living computations

What’s better than a movie? A live performance.

I just started new pages where you can see the last living computations with chemlambda:

  • a 20-node creature which I previously qualified as a quine, but is not, struggles to survive in a random environment (random reduction method) here
  • the reduction of the predecessor function from lambda calculus turned into a chemlambda reduction (random too) here
  • the self-multiplication of the S combinator in random conditions here
  • the reduction of Ackermann(2,2) in the  random model here (this is the one used for the video from the last post).
  • a complex reduction in chemlambda. Here is the recipe:
    – you can write the Y combinator as an expression in the S, K, I combinators: Y = S (K (S I I)) (S (S (K S) K) (K (S I I)))
    – so take this expression and apply it to the identity I. In combinatory logic this should reduce to something equivalent to YI, which then reduces forever, because it does not have a normal form
    – but we do something funnier, namely all this long string of combinators is transformed into a chemlambda molecule, and we add on top of it a FO node which makes this whole big thing self-reproduce.
    So, we have a bunch of reductions (from the long expression to YI) in parallel with the self-reproduction of the whole thing.
    Now, what you see is this, in a model of computation which uses a random reduction strategy!
    See it live here.

The sources are in this github repository.

_______________________________________________________________________

Dynamic rendering of the Ackermann function computation, deterministic and random

Source code here.

This video contains two examples of the computation of the Ackermann function by using artificial chemistry. The (graph rewriting) rules are those of chemlambda and there are two models of computation using them.


In the first one the rules are applied deterministically, in the order of their importance. The rules which increase the number of nodes are considered more important than those which decrease the number of nodes. The rules are applied in parallel, as long as there is no conflict (i.e. as long as they don't apply to the same node). When there is conflict, the rule with higher importance takes priority.
In the first part of the video you see what happens if this model is applied to a graph which represents (according to the rules of chemlambda) the Ackermann function applied to (2,2). The expected result is 7 (as it appears in the Church encoding, which is then transformed in the chemlambda convention). There are no tricks, like pre-computing the expression of the function, everything goes at a very basic level. The application of the rules does not parallel the application of lambda calculus reduction rules to the lambda term which represents the Ack(2,2), with any reduction strategy. However, the result is obtained correctly, even if many of the intermediary steps are not graphs which represent a lambda term.

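In sketch form, the conflict resolution just described is a greedy selection by importance; this is hypothetical Python, with `proposals` standing for the rewrites matched in one step:

    def select(proposals):
        # proposals: (importance, set of node ids touched, rewrite name);
        # node-increasing rules come with the higher importance
        chosen, used = [], set()
        for importance, touched, rewrite in sorted(proposals, key=lambda p: -p[0]):
            if not (touched & used):      # skip rules in conflict with chosen ones
                chosen.append(rewrite)
                used |= touched
        return chosen

    # e.g. a node-increasing DIST wins over a conflicting, less important PRUNING:
    assert select([(1, {3, 4}, "PRUNING"), (2, {4, 5}, "DIST")]) == ["DIST"]
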
The model does not use any variable passing, nor any evaluation strategy, moreover!

In the second example a different model of computation is used. The rules are applied randomly. That means the following. For any configuration of the graph which may be subject to a rule, a coin is flipped and the rule is applied with probability 50%. The rules are equally important. The rules are still applied in parallel, in the sense that an update of the graph is done after all edges are visited.

As you see in the second part of the video, the process takes longer, because essentially at each step fewer rules are applied to the whole graph. The comparison is not very accurate, because the reduction process may depend on the particular run of the program. Even if lambda beta calculus (with some model of reduction) is confluent, chemlambda is surely not. It is an open problem whether, starting in chemlambda from a graph which represents a lambda term which has a normal form in lambda calculus, the random model of computation always eventually arrives at the graph which represents that normal form.
At least for the term I use here for Ack(2,2), it looks like it does. This is of course not a proof.

UPDATE: A quine is reduced in chemlambda, first using a deterministic model of computation then using a model which has a random ingredient.


These two models are the same as the ones used for the Ackermann function video.
The quine is called the 9_quine and has been introduced in https://chorasimilarity.wordpress.com/…

In the deterministic reduction you see that at each step the graph reduces and reconstructs itself. It goes on forever like that.

In the random reduction the process is different. In fact, if you look at the list of reductions suffered by the 9_quine, then you see that after each cycle of reduction (in the deterministic version) the graph is isomorphic with the one before because there is an equilibrium between the rules which add nodes and the rules which destroy nodes.
In the random version this equilibrium is broken, therefore you see how the graph grows either by having more and more red nodes, or by having more and more green nodes.
However, because each rule is applied with equal probability, in the long term the graph veers towards a dominant green or towards dominant red states, from one to the other, endlessly.
This is proof that the reductions in chemlambda vary according to the order of application of moves. On the other side, this is evidence (but not proof) that there is a sort of fair effort towards eventual confluence. I use “confluence” in a vague manner, not related to lambda calculus (because the 9_quine does not come from a lambda term), but more related to the graph rewriting world.

_________________________________________________________________________

How to simulate an enzyme with a quine in chemlambda (I)

I have discussed many times here the view chemlambda takes of graph rewrites as invisible enzymes. Here is, I believe, the first picture describing this idea for the beta move (taken from this post)

[image: enzyme_2]

Since that first proposal the formalism changed, see this page which collects the updates, explanations and some useful links. However, I shall soon lay down a detailed tutorial on the actual state of chemlambda.

Further I want to explain how we can make "move enzymes" visible in chemlambda. For this I shall use chemlambda quines and a kind of bootstrapping, along with a new primitive construct, which appeared already several times before, denoted by ":". Wait a minute and let me take the ingredients one by one.

1. I start by stating again a very simple fact about what chemlambda is (and what it is not). Chemlambda is a graph rewrite system which uses an analogy with chemistry. The graphs are called "molecules" and the graph rewrites are seen as chemical reactions with invisible "move enzymes". That's about how far the analogy goes. The use of other concepts, as for example chemical reaction networks, Petri nets, probabilities, etc., is not part of the analogy, because these are only external ingredients (or modules, or layers) which can be added over chemlambda in order to obtain a model of computation. Moreover, they are not, by far, the most interesting modules to add.

So, chemlambda is just that: a graph rewrite system. It is comparable with the Alchemy of Fontana and Buss, in the following precise sense: lambda calculus terms and reductions can be simulated by chemical molecules and reactions. But it is very different from Alchemy (and also from the CHAM of Berry and Boudol) because in chemlambda the abstraction and the application appear as nodes in the graph-molecules, while in Alchemy the application and abstraction have different statuses.

2. If you want to get a model of computation from chemlambda then you have to add new ingredients. My ultimate goal is an asynchronous, distributed, decentralized model of computation, sketched already in the article M. Buliga, L.H. Kauffman, GLC actors, artificial chemical connectomes, topological issues and knots, arXiv:1312.4333, as well as in many posts here, in the category distributed GLC.

But, say, there is a lot of programming work to do, therefore it may be a good idea to take it slower and explore as many as possible of the simpler models of computation based on chemlambda.

The first such model is one which is sequential, using the mol files convention, quickly described by the following:

  • if no LEFT patterns of the moves BETA, DIST, FAN-IN and LOC-PR are present in the g-pattern, then stop, else
  • identify the LEFT patterns of the moves BETA, DIST, FAN-IN and LOC-PR. If there is a graphical element which is part of two distinct patterns then apply a PRIORITY  CHOICE
  • apply the moves from LEFT to RIGHT
  • apply the COMB moves until no Arrow element can be eliminated
  • repeat.

This is the model implemented in the scripts from the chemlambda gui repo; see this crude intro for the mol files convention. A skeleton of this loop is sketched below.

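The skeleton of that loop, hedged: every chemlambda-specific part is left as a named placeholder function, only the control flow is shown (Python sketch):

    def sequential_reduction(g_pattern, find_left_patterns, priority_choice,
                             rewrite, comb):
        # the four function arguments stand in for the chemlambda machinery
        # (LEFT patterns of BETA, DIST, FAN-IN, LOC-PR, etc.)
        while True:
            patterns = find_left_patterns(g_pattern)   # any LEFT patterns present?
            if not patterns:
                return g_pattern                       # stop
            for p in priority_choice(patterns):        # resolve shared elements
                rewrite(g_pattern, p)                  # apply LEFT -> RIGHT
            comb(g_pattern)                            # eliminate Arrow elements
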
This makes a model of computation which is already very interesting. I just mention as examples the Ackermann function computation, and a no-nonsense clarification of the mechanism behind the Y combinator: the Y combinator is just a gun shooting pairs of application and fanout nodes; the recursion is in fact done by self-multiplication of the molecules. The significant fact is that (some) molecules can self-multiply without any external management, only by the local graph rewrites, in this model of computation.

On the open questions side, concerning this model of computation, the most interesting seems to be: what kind of computation is this? Is it functional programming (just because it can do, but is not limited to, lambda calculus)? [It appears not, despite superficial resemblances, because it can do untyped lambda beta — and not eta — calculus, therefore there is no notion of a function in chemlambda.] If not, what is it? Maybe the reduction strategy helps to classify it in one of the big categories. But the reduction strategy does not use any evaluation step. More concretely, for some particular examples of reductions of molecules which represent lambda terms (I choose these molecules in order to have a common ground for making comparisons, not because of a limitation of chemlambda to lambda calculus only), the reduction process sometimes resembles one based on call-by-need evaluation, other times not, see this post for some details.

All this — self-multiplication instead of recursion, reduction by something akin to signal transduction — already makes this simplest model of computation curiously alive. Alife.

3. Still in this simple model of computation there are some molecules, called chemlambda quines, which have the property that even if the reduction process goes on forever, at each step the molecule is globally the same. (Some of) these quines are derived from molecules called walkers. A walker is a molecule which stays the same inside a bigger one (imagine a walker on a road; even if the walker advances, by translation symmetry the road-and-walker ensemble stays the same). You just take a walker on a closed road and you get a quine, for example.

What is nice about quines is that they respect the analogy with chemical molecules-chemical reactions in the following sense. During the reduction process (chemical reaction) the number of atoms (nodes) is conserved.

__________________________________________________________________

500: a year review at chorasimilarity, first half

Personal post triggered by the coincidence of the year’s end and a round number of posts here: 500.

I started this year with high hopes about the project described in the article GLC actors, artificial chemical connectomes, topological issues and knots. Louis Kauffman wrote in the introduction some unbelievably nice words about graphic lambda calculus:

Even more generally, the movement between graphs and algebra is part of the larger picture of the relationship of logical and mathematical formalisms with networks and systems that was begun by Claude Shannon in his ground-breaking discovery of the relationship of Boolean algebra and switching networks.

We believe that our graphical formulation of lambda calculus is on a par with these discoveries of Shannon. We hope that the broad impact of this proposal will be a world-wide change in the actual practice of distributed computing. Implemented successfully, this proposal has a potential impact on a par with the internet itself.

But in June we got news that the project "Secure Distributed Computing with Graphic Lambda Calculus" will not be funded by NSF, see my comments in this post. We got good reactions from the reviewers on the theoretical side (i.e. what is described in the GLC actors article), but fair criticism of the IT part of the project. Another mistake was the branding of the project as oriented towards security.

I should have followed my initial plan, namely to start by writing simple tools, programs, which ideally should also have some fun side, but which at least would allow the exploration of the formalism in much more detail than pen and paper allow. Instead of that, as concerns this project, there has been a waste of time from Jan to Jun, waiting for one puny source of funding before doing any programming work.

In parallel, a better trend appeared, one which I had not dreamed about two years ago: artificial life. During the summer of 2013 I thought it worthy to try to get rid of a weakness of graphic lambda calculus, namely that it has one global graph rewrite, called GLOBAL FAN-OUT. That's why I wrote the Chemical concrete machine article, describing a purely local graph rewrite formalism which was later renamed chemlambda. That was great, I even made an icon for it:

[image: chemlambda icon]

which is made of two lambda symbols, one upside down, to suggest that linear writing conventions are obsolete. The lambdas are arranged into a DNA-like double spiral, to suggest connections with life. (Of course that means I entered the alife field, but everything about it was so fresh for me. Later I learned about Alchemy and other approaches, mixing either lambda calculus with alife, or rewriting systems with alife, but not both, and surely not the way it is done in chemlambda, as abundantly documented on this blog.)

Several (personal as well) novelties related to the article. One was that the article appeared on figshare too, not only on arXiv. This relates to another subject which I follow, namely what means of dissemination of research to use, seeing that publishing academic articles is no longer enough, unless you want your work to be used only as a kind of fertilizer for future data mining projects.

More about this later.

The initial purpose of chemlambda was to be aimed at biologists, chemists, somewhere there, but apparently, almost certainly, if a guy does computing then he will not do chemistry, and if he does chemistry then he has no idea about lambda calculus. I mean, there are exceptions, like being part of an already existing team which does both, which I'm not. Anyway, so initially I hoped for interactions with biochemists (I still do, looking for that rare bird who would dare to lower himself to talk with a geometer from a non-dominant country, motivated purely by the research content, and who would have the trivial idea to use his chemistry notions for something which is not in the scope of already existing projects).

With Louis Kauffman, we set out to write a kind of continuation of the GLC actors article, this time concentrated on chemlambda, as seen from our personal interests. The idea was to participate at the biggest event in the field, the ALIFE 14 conference. Which we did: Chemlambda, universality and self-multiplication.

Putting together the failure of the NSF project and the turn towards alife, it is only natural that I set out to write more explanations about the formalism, like the series of 7 expository posts on chemlambda described with the help of g-patterns.

This was a bridge towards starting to write programs, that’s for later.

In parallel with other stuff, which I’ll explain in the second half.

_________________________________________________

Chemlambda task got bigger, explanation

I learned how to reproduce the type inference system of the Calculus of Constructions in a (slight modification of) chemlambda. This comes with the previous caveats concerning untyped lambda beta calculus and chemlambda:

  • (a) for any term (or type) there is a chemlambda molecule, but not for any chemlambda molecule there is a correspondent in CoC,
  • (b) the inference rules are implemented as (compositions of) graph rewrites, but not all graph rewrites have a correspondent in the inference rules,
  • (c) the reduction strategy (I think corresponding to the “tactics” from COQ) is an ingredient which has to be added in order to obtain a full model of computation,
  • (d) all graph rewrites are local (i.e. there is an a priori bound on the number of nodes and arrows of the left and right patterns of graph rewrites, as well as on the number of nodes and arrows needed to be taken into account in a condition, in case the move has the form: if condition then replace left pattern by right pattern),
  • (e) there is no correspondence between the reduction in chemlambda and reduction in CoC (in the sense that the reduction in chemlambda may pass by molecules which do not correspond to terms in CoC).

Some time is needed to eliminate possible bugs, but the task has become even bigger: a purely local version of COQ, based exclusively on graph rewrites, with “tactics” that never use values or names passing.
I have kept a distance, previously, from types because they seemed to me an added ingredient, not really needed, to the purity of a geometrical version of untyped lambda beta calculus. But slowly I understood that the initial try to mix emergent algebras (i.e. space as program) with lambda calculus, from http://arxiv.org/abs/1205.0139 , called "lambda-scale", is a timid and a bit skew attempt at a type system.
Added over this is the fact that it's hard, at least for me, to discern in the big literature on the subject what holds for lambda beta but not for any other "superior" version, because almost everywhere, if you don't read carefully, extensionality is mixed into the pot by default, like in all famously well known category theory treatments of the matter.
Now, I have been forced from the beginning to exclude extensionality, because it appears as a non-local graph rewrite. That's it, I have not done it on purpose, it just turns out to be so. Without extensionality, no functions, so no functional style. Without terms and variable passing there is no evaluation, no call, lazy, by need, or otherwise.

There is something else happening here. I looked and found a (surprising) kind of historical precedent for this situation. At the beginning of chemistry, or even before its rigorous establishment, there was a long and well regarded functional period, when chemicals were identified with their function. (In popular thinking even now we speak or hear about the soothing quality of this herbal tea, say.) This was later called "vitalism", and has been rejected on the grounds that it lacked enough explanatory power.

So I asked: regardless of the fashion of the moment, if you think that untyped lambda beta is Turing universal, then why use extensionality (or types, btw)? What does a geometrized (and enlarged, in the sense of (a, b, c, e) from the beginning of this post) version of untyped lambda beta give?

On the surface, if one takes a lazy look at my drawings from a year ago, it looks like known facts mixed with crazy statements. But if you care to look better and read it, then you see that this geometrical view is quite different, because it basically has no names, for the previously mentioned reasons.

Now the task is even bigger, so I have to settle for picking some proof of principle bits, which can be attained in finite time by myself.

I tried to go a bit into programming; of course it works, the limit being only my procrastination in front of so many possible choices.

Not quite: even with the small attempts contained in the chemlambda gui, a patchwork of awk, sh and js scripts, I am drowned in data.

The field is huge, and it will probably get bigger.

______________________________________________________________

Rant about Jeff Hawkins “the sensory-motor model of the world is a learned representation of the thing itself”

I enjoyed very much the presentation given by Jeff Hawkins “Computing like the brain: the path to machine intelligence”

Around 8:40

[image: slide from the talk]

This is something which could be read in parallel with the passage I commented in the post   The front end visual system performs like a distributed GLC computation.

I reproduce some parts

In the article by Kappers, A.M.L.; Koenderink, J.J.; Doorn, A.J. van, Basic Research Series (1992), pp. 1 – 23,

Local Operations: The Embodiment of Geometry

the authors introduce the notion of  the  “Front End Visual System” .

Let’s pass to the main part of interest: what does the front end?  Quotes from the section 1, indexed by me with (a), … (e):

  • (a) the front end is a “machine” in the sense of a syntactical transformer (or “signal processor”)
  • (b) there is no semantics (reference to the environment of the agent). The front end merely processes structure
  • (c) the front end is precategorical,  thus – in a way – the front end does not compute anything
  • (d) the front end operates in a bottom up fashion. Top down commands based upon semantical interpretations are not considered to be part of the front end proper
  • (e) the front end is a deterministic machine […]  all output depends causally on the (total) input from the immediate past.

Of course, today I would say “performs like a distributed chemlambda computation”, according to one of the strategies described here.

Around 17:00 (Sparse distributed representation): "You have to think about a neuron being a bit (active: a one, non active: a zero). You have to have many thousands before you have anything interesting [what about C. elegans?]

Each bit has semantic meaning. It has to be learned, this is not something that you assign to it, …

… so the representation of the thing itself is its semantic representation. It tells you what the thing is. ”

That is exactly what I call “no semantics”! But is much better formulated as a positive thing.

Why is this a form of "no semantics"? Because, as you can see, the representation of the thing itself edits out the semantics, in the sense that "semantics" is redundant: it appears only at the level of the explanation of how the brain works, not in the brain's workings.

But what is the representation  of the thing itself? A chemical blizzard in the brain.

Let’s put together the two ingredients into one sentence:

  • the sensory-motor model of the world is a learned representation of the thing itself.

Remark that, as previously, there is too much here: “model” and “representation” sort of cancel one another, being just superfluous additions of the discourse. Not needed.

What is left: a local, decentralized, asynchronous, chemically based, never ending “computation”, which is as concrete as the thing (i.e. the world, the brain) itself.

I put “computation” in quotation marks because this is one of the sensible points: there should be a rigorous definition of what that “computation” means. Of course, the first step would be a fully rigorous mathematical proof of principle that such a “computation”, which satisfies the requirements listed in the previous paragraph, exists.

Then, it could be refined.

I claim that chemlambda is such a proof of principle. It satisfies the requirements.

I don’t claim that brains work based on a real chemistry instance of chemlambda.

Just a proof of principle.

But how much I would like to test this with people from the frontier mentioned by Jeff Hawkins at the beginning of the talk!

In the following some short thoughts from my point of view.

While playing with chemlambda with the strategy of reduction called “stupid” (i.e. the simplest one), I tested how it works on the very small part (of chemlambda) which simulates lambda calculus.

Lambda calculus, recall, is one of the two pillars of computation, along with the Turing machine.

In chemlambda, the lambda calculus appears as a sector, a small class of molecules and their reactions. Contrary to the Alchemy of Fontana and Buss, abstraction and application (operations from lambda calculus) are both concrete (atoms of molecules). The chemlambda artificial chemistry defines some very general, but very concrete local chemical interactions (local graph rewrites on the molecules) and some (but not all) can be interpreted as lambda calculus reductions.

Contrary to Alchemy, the part which models lambda calculus is concerned only with untyped lambda calculus without extensionality, therefore chemical molecules are not identified with their function, nor do they have definite functions.

Moreover, the “no semantics” means concretely that most of the chemlambda molecules can’t be associated to a global meaning.

Finally, there are no “correct” molecules, everything resulted from the chemlambda reactions goes, there is no semantics police.

So from this point of view, this is very nature like!

Amazingly, the chemical reductions of molecules which represent lambda terms reproduce lambda calculus computations! It is amazing because, with no semantic control, with no variable passing or evaluation strategies, even if the intermediary molecules don’t represent lambda calculus terms, the computation goes well.

For example, the famous Y combinator reduces first to a small molecule (two nodes and six ports), which does not have any meaning in lambda calculus, and then becomes just a gun shooting “application” and “fanout” atoms (a pair which I call a “bit”). The functioning of the Y combinator is not at all sophisticated and mysterious; it is instead fueled by the self-multiplication of the molecules (realized by unsupervised local chemical reactions), which then react with the bits and have as effect exactly what the Y combinator does.
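(As a reference point for what the Y combinator does operationally, here it is written directly in Python; since Python is strict, one has to use the eta-expanded variant, the Z combinator. This tiny sketch is only a reminder of the fixed-point behaviour, it has nothing to do with the chemlambda gun itself.)

# Z combinator: the Y combinator adapted to strict evaluation
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # prints 120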

The best example I have is the illustration of the computation of the Ackermann function (recall: a recursive but not primitive recursive function!)

What is nice in this example is that it works without the Y combinator, even if it’s a game of recursion.

But this is a choice, because actually, for many computations which try to reproduce lambda calculus reductions, the “stupid” strategy used with chemlambda is a bit too exuberant if the Y combinator is used as in lambda calculus (or functional programming).

The main reason is the lack of extensionality: there are no functions, so the usual functional programming techniques and designs are not the best idea. There are shorter ways in chemlambda, which employ the “representation of the thing itself is its own semantic interpretation” better than FP does.

One of those techniques is to renounce long, linear, sequential lambda terms (designed as a composition of functions) and to use instead another architecture, one of neurons.

For me, when I think about a neural net and neural computation, I tend to see the neurons and synapses as loci of chemical activity. Then I just forget about these bags of chemicals and I see a chemical connectome sort of thing; actually I see a huge molecule undergoing chemical reactions with itself, but in such a way that its spatial extension (in the neural net), physically embodied by neurons and synapses and perhaps glial cells and whatnot, is manifested in the reductions themselves.

In this sense, the neural architecture way of using the Y combinator efficiently in chemlambda is to embed it into a neuron (bag of chemical reactions), like sketched in the following simple experiment

Now, instead of a sequential call of duplication and application (which is the way the Y combinator is used in lambda calculus), imagine a well designed network of neurons which in very few steps builds a (huge, distributed) molecule (instead of a perhaps very big number of sequential steps), which in its turn reduces itself in very few steps as well, and then this chemical connectome ends in a quine state, i.e. in a sort of dynamic equilibrium (reactions are happening all the time, but they combine in such a way that the reductions compensate one another into a static image).

Notice that the end of the short movie about the neuron is a quine.

For chemlambda quines see this post.

In conclusion, there are chances that with this massively parallel (actually a bad name for decentralized and local) architecture of a neural net, seen as its chemical working, chemlambda really can do not only any computer science computation, but also anything a neural net can do.

_________________________________________________________________

The Ackermann function in the chemlambda gui

UPDATE: in the collection of animations there are posts about the Ackermann function, like this one.

UPDATE: The Ackermann function, the video:

_______________________

I put this stuff on G+ some days ago and now I can find it only if I look for it on purpose. Older and newer posts, those I can see. I can see colored lobsters, funny nerd jokes, propaganda pro and con legacy publishing, propaganda hidden behind grandpa’s way of communicating science, half-baked philosophical ideas, but not my post, which I made only two days ago. On my page, I mean, not elsewhere.

Thank you G+ for this, once again. (Note not for humans: this was ironic.)  Don’t forget to draw another box next time when you think about a new algorithm.

A non-ironic thanks, though, for the very rare occasions when I did meet very interesting people and very interesting ideas there.

OK, to the matter now. But really, G+, what kind of algorithm do you use, which keeps a lobster on my page but deletes a post about the Ackermann function?

UPDATE: The post is back in sight now. Whew!
The post follows, slightly edited (by adding stuff).
The Ackermann function is an example of a total computable function which is not primitive recursive. It is therefore amusing to try to compute it.
The matter is not what the value of Ack(m,n) is, because it grows so fast that very soon the problem of computing it is shadowed by the more trivial problem of storing its values. Instead, it is more interesting to see how your computing device handles the computation itself, things like stacks of calls, etc., because here the fact that Ack is not primitive recursive becomes manifest.
To simplify: the funny thing is to see how you can compute Ack(m,n) without any cheating.
I tried to do this with #chemlambda . I have known for a long time that it can be done, as explained (very summarily, true) in this old post
https://chorasimilarity.wordpress.com/2013/10/19/a-machine-for-computing-the-ackermann-function-in-graphic-lambda-calculus/
for GLC, not chemlambda (but since chemlambda does with only local moves what GLC does, it’s the same).
I want to show you some pictures about it.
It is about computing Ack(3,2). Everybody will point out that Ack(3,2) = 29 and moreover that Ack(3,n) has an explicit expression, but using that would be cheating, because I don’t want to use precomputed stuff.
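(For reference, here is the “cheating” baseline, a direct recursive computation in Python; everything that follows, in contrast, uses only graph rewrites.)

def ack(m, n):
    # Ackermann-Peter function: total computable, but not primitive recursive
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(3, 2))   # prints 29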
No, what I want to use is a lambda calculus term for the Ackermann function (and one which is not eta reduced, because chemlambda does not have eta reduction!), and I want to apply it to the arguments 3 and 2 written as lambda terms as well (using the Church encoding). Then I want to see if after the reductions performed by the algorithm I have I get 29 as a Church number as well.
During all the algorithm I use only graph reductions!
After all, there are no calls, no functions, and during the computation the molecules which appear do not even represent lambda terms.
Indeed, lambda calculus does not have operations or concepts like fanin nodes, or FOE nodes, nor reductions like FAN-IN or DIST. That’s the amazing point (or at least one of them): even if it veers outside lambda calculus, it ends where it should (or better, but that’s for another time).
I used the programs which are available at the site of the chemlambda gui http://imar.ro/~mbuliga/gallery.html
(which is btw online again, after 2 days of server corruption.)
Here are some pictures. The first one is a screenshot of the Ack(3,2) “molecule”, i.e. the graph which is to be reduced according to the chemlambda rules and according to the reduction strategy which I call “viral”.
ack_3_2_init
After almost 200 reductions I get 29, see the second figure, where it appears as the molecule which represents 29 as a Church numeral.
ack_3_2_final
Wow, it really worked!
You can try it for yourself, I’ll give you the mol file to play with, but first some details about it.
I modified a bit the awk script which does the reductions, in the following place: when it introduces new nodes (after a DIST move) it has to invent new names for the new edges. In the script which is available for download, the algorithm takes the max over all port names and concatenates it with a word which describes where the edge comes from. This is good for being able to track back where the nodes and edges come from, but it results in a growth of the port names which is exponential in the number of iterations of the reduction algorithm. This leads to very big names of some ports after 200 iterations.
So I modified this part by choosing a naming procedure which is less helpful for tracking but better in the sense that now the growth of names is linear in the number of iterations. It is a quick fix; after all, it is just as easy to invent naming procedures whose name length grows logarithmically, or stays almost constant, with respect to the number of iterations.
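(I do not reproduce the awk here; the following Python fragment is only my hypothetical reconstruction of the difference between the two kinds of naming procedures, with function names and details that are mine, not the script’s.)

def fresh_names_tracking(parent_ports, move_tag, k):
    # tracking-friendly scheme: new names embed the parent port names,
    # so the name length roughly doubles at each generation of DIST moves
    base = "".join(parent_ports) + move_tag
    return [base + str(i) for i in range(k)]

def make_fresh_names_counter():
    # counter scheme: name length grows like the number of digits of the
    # counter, i.e. logarithmically in the number of rewrites
    count = [0]
    def fresh(move_tag, k):
        names = ["e" + str(count[0] + i) for i in range(k)]
        count[0] += k
        return names
    return fresh

ports = ["a", "b"]
for generation in range(8):
    ports = fresh_names_tracking(ports, "dist", 2)
print(len(ports[0]))   # over 1500 characters: the names blow up

fresh = make_fresh_names_counter()
for generation in range(8):
    ports = fresh("dist", 2)
print(ports)   # ['e14', 'e15']: short names, whatever the iteration count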
For the Ackermann function the script which is available is just as good, it works well, only that it has this unpleasant feature of long names, which enlarges the json files unnecessarily.
Details over, next now.
In the third picture you see the mol file for the Ack(3,2), i.e. the list of nodes and ports of the Ack(3,2) molecule, in the format used by the reduction program.
ack_3_2_mol
Btw, do you see in this screenshot the name of the updated script? Right, it is foe_bubbles_09_10.awk, instead of foe_bubbles_06_10.awk, which is available for download.
I don’t cheat at all, see?
I made some annotations which help you to see which part corresponds to the Ackermann function (as a lambda term translated into chemlambda), which parts are the arguments “3” and “2”, and finally which part represents the Ackermann function applied to (3,2).
ackermann_3_2_mol
Soon enough, when I’ll be able to show you animated reductions (instead of only the steps of the reduction), I think such an example will be very funny to examine, as it grows and then shrinks back to the result.
________________________________________

 

 

 

When priority does not matter

Perhaps the post When priority matters gives a false impression. I want to dispel that right away.

In that post we see two possible reductions, depending on the PRIORITY CHOICE, either BETA>DIST or DIST>BETA.

In the case BETA>DIST the reduction stops quickly.

On the contrary, in the case DIST>BETA the reduction does not stop, because it enters a cyclic process which produces an unbounded number of bubbles (i.e. loop graphical elements).

Moreover, we start from the g-pattern form of the combinator (Lx.xx)(Lx.xx).

Now, this may lead to the false impression that somehow this has something to do with the choice between normal order reduction and applicative order reduction from lambda calculus.

One might think so, because a standard example of the difference between these reduction strategies is the following one.

Let’s denote by Omega  the combinator (Lx.xx)(Lx.xx).  Consider then the term

(Lx.z) Omega

Under the normal order reduction this term suffers one beta reduction

(Lx.z) Omega –BETA–> z

and that’s all, the reduction stops.

 

On the contrary, under the applicative order reduction strategy, the reduction never stops, because we first try to reduce Omega, leading to a cycle

(Lx.z) Omega –BETA–> (Lx.z) Omega
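(A minimal sketch of the two strategies on this exact term, in Python, with terms as nested tuples; the substitution below is naive, which is enough here since no variable capture can occur in this example.)

# terms as nested tuples: ("var", name) | ("lam", var, body) | ("app", fun, arg)

def subst(t, x, v):
    # naive substitution t[x := v]; fine here, no capture possible
    if t[0] == "var":
        return v if t[1] == x else t
    if t[0] == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, v))
    return ("app", subst(t[1], x, v), subst(t[2], x, v))

def step(t, order):
    # one beta reduction step, or None if none applies
    if t[0] == "app":
        f, a = t[1], t[2]
        if order == "applicative":               # reduce the argument first
            s = step(a, order)
            if s is not None:
                return ("app", f, s)
        if f[0] == "lam":                        # contract the head redex
            return subst(f[2], f[1], a)
        s = step(f, order)
        if s is not None:
            return ("app", s, a)
        if order == "normal":                    # argument last, if at all
            s = step(a, order)
            if s is not None:
                return ("app", f, s)
    if t[0] == "lam":
        s = step(t[2], order)
        if s is not None:
            return ("lam", t[1], s)
    return None

w = ("lam", "x", ("app", ("var", "x"), ("var", "x")))
omega = ("app", w, w)
term = ("app", ("lam", "x", ("var", "z")), omega)    # (Lx.z) Omega

print(step(term, "normal"))       # ('var', 'z'): stops after one beta step
t = term
for _ in range(5):
    t = step(t, "applicative")
print(t == term)                  # True: the applicative reduction cycles forever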

 

The question is: is there any connection between these two phenomena?

  • is the sequential chemlambda reduction with BETA>DIST like normal order reduction in lambda calculus?
  • is the sequential chemlambda reduction with DIST>BETA like applicative order reduction?

No, not the slightest.

In order to prove this I shall reduce in chemlambda, with the sequential strategy, the g-pattern which represents the term (Lx.z) Omega. Let’s see what happens, but first let me recall what we do.

See the 1st part and 2nd part of the description of the conversion of lambda terms into g-patterns.

The  sequential strategy is described by the following algorithm. I write it again because the g-pattern of Lx.z brings a termination node “T”, therefore we have to consider also the local pruning moves LOC-PR.

See the post about definition of moves with g-patterns.

The algorithm of the sequential reduction strategy is this. At each reduction step do the following:

  • if no LEFT patterns of the moves BETA, DIST, FAN-IN and LOC-PR are present in the g-pattern, then stop, else
  • identify the LEFT patterns of the moves BETA, DIST, FAN-IN and LOC-PR. If there is a graphical element which is part of two distinct patterns then apply a PRIORITY  CHOICE
  • apply the moves from LEFT to RIGHT
  • apply the COMB moves until no Arrow element can be eliminated
  • repeat.

The PRIORITY CHOICE means a predefined choice between doing one of the two moves in conflict. The conflict may be between BETA and DIST, between  BETA and LOC-PR or between DIST and LOC-PR.

In the following we shall talk about the PRIORITY CHOICE only if needed.
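To make the loop above concrete, here is a minimal sketch of the sequential strategy in Python, with only the BETA and COMB moves implemented; DIST, FAN-IN and LOC-PR would be extra matchers of the same shape, and the PRIORITY CHOICE would arbitrate between overlapping matches. The port conventions are those of the beta move (L[a1,a2,x] A[x,a4,a3] –beta–> Arrow[a1,a3] Arrow[a4,a2]); the rest is my illustrative reconstruction, not the actual scripts.

# a g-pattern as a list of (node kind, tuple of port names)

def beta_matches(g):
    # LEFT pattern of BETA: L[a1,a2,x] A[x,a4,a3]
    return [(i, j)
            for i, (k1, p1) in enumerate(g) if k1 == "L"
            for j, (k2, p2) in enumerate(g) if k2 == "A" and p2[0] == p1[2]]

def apply_beta(g, match):
    i, j = match
    a1, a2, x = g[i][1]
    _, a4, a3 = g[j][1]
    rest = [e for k, e in enumerate(g) if k not in (i, j)]
    return rest + [("Arrow", (a1, a3)), ("Arrow", (a4, a2))]

def comb(g):
    # eliminate Arrow elements by splicing them into adjacent elements
    g = list(g)
    changed = True
    while changed:
        changed = False
        for i, (kind, ports) in enumerate(g):
            if kind != "Arrow":
                continue
            u, v = ports
            if u == v:
                g[i] = ("loop", ())      # Arrow[b,b] becomes a loop
                changed = True
            else:
                for j, (k2, p2) in enumerate(g):
                    if j != i and (u in p2 or v in p2):
                        old, new = (v, u) if v in p2 else (u, v)
                        g[j] = (k2, tuple(new if p == old else p for p in p2))
                        del g[i]
                        changed = True
                        break
            if changed:
                break
    return g

def reduce_sequential(g, max_steps=100):
    for _ in range(max_steps):
        matches = beta_matches(g)    # plus DIST, FAN-IN, LOC-PR matchers
        if not matches:              # plus the conflict check / PRIORITY CHOICE
            return g
        g = comb(apply_beta(g, matches[0]))
    return g

# (Lx.x) applied to something at port d: the identity molecule is a lambda
# node whose left.out port is wired back into its middle.in port
identity_applied = [("L", ("a", "b", "c")), ("Arrow", ("b", "a")), ("A", ("c", "d", "e"))]
print(reduce_sequential(identity_applied))   # [('Arrow', ('d', 'e'))]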

In the first picture we see, in the upper side, the g-pattern which represents the term (Lx.z) Omega, then the first reduction step.

I kept the same names for the ports from the last post and I added new names for the ports of the new graphical elements.

 

n_metaloop_1

First, remark that the g-pattern which represents (Lx.z) is

L[z,n2,n1] T[n2]

I named “z” one of the ports of the lambda node L, the one which corresponds to the variable z of the term Lx.z. But recall that chemlambda does not use variable names, so the name “z” is there only by my choice of names for the port variables; it could be anything instead (anything not used before in one of the g-pattern’s ports).

Then, A[n1,1,0] corresponds to the application of something linked to the port n1 (namely the g-pattern of (Lx.z), i.e. L[z,n2,n1] T[n2]) to something linked to the port 1 (i.e. the g-pattern of Omega, which was discussed in the post “When priority matters”).

Nice! What happened?

  • there is an Arrow element which does not disappear, Arrow[z,0]. This is something which corresponds to the g-pattern of z, seen as a lambda calculus term.
  • and there is another g-pattern, disconnected from that Arrow[z,0] graphical element.

So, in a sense, this looks like the result of the normal order reduction, but no priority choice was involved!

However, the chemlambda sequential reduction continues, like explained in the picture of the 2nd reduction step.

 

n_metaloop_2

 

 

OK, the Arrow[z,0] still exists after the reduction step, and a LOC-PR move appears.

Let’s see what happens in the 3rd reduction step.

 

n_metaloop_3

The reduction stops here. There is nothing more to do, according to the sequential reduction strategy.

Differently from the reduction of Omega alone, explained in the previous post, this time there is NO PRIORITY CHOICE NEEDED.

Ergo, the priority choice plays no role here. The sequential chemlambda reduction of the g-pattern corresponding to (Lx.z) Omega stops after 3 steps, no matter which PRIORITY CHOICE was made before the start of the computation.

_________________________________________________________

 

When priority matters

I continue with the analysis of chemlambda with the very simple sequential strategy. In this post I want to show you that priority really matters, by using an old example from the Metabolism of loops.

The goal is to see how the g-pattern of the combinator

(Lx.xx) (Lx.xx)

reduces in chemlambda with the sequential strategy.

See the 1st part and 2nd part of the description of the conversion of lambda terms into g-patterns.

The simple sequential strategy is the following: at each reduction step do the following:

  • if no LEFT patterns of the moves BETA, DIST, FAN-IN are present in the g-pattern, then stop, else
  • identify the LEFT patterns of the moves BETA, DIST, FAN-IN. If there is a graphical element which is part of two distinct patterns then apply a PRIORITY  CHOICE
  • apply the moves from LEFT to RIGHT
  • apply the COMB moves until no Arrow element can be eliminated
  • repeat.

The PRIORITY CHOICE means a predefined choice between doing one of the two moves in conflict.

In this post it will be about the priority between BETA and DIST moves.

Mind that the PRIORITY CHOICE is fixed before the start of the computation.

However, in the following I shall mention the choice when it will be needed.

OK, so let’s start with the g-pattern which represents the well known combinator (Lx.xx) (Lx.xx).  It is clear that as a lambda term it has no normal form, because it transforms into itself by a beta reduction (so it is a sort of quine, if quines had an interesting definition in lambda calculus).

As previously, you shall see that we depart quickly from the lambda calculus realm, and nevertheless go in some straightforward directions.

The first figure describes the first reduction step.

 

metaloop_1

The g-pattern obtained after this first step is the one which appears as the starting point of the Metabolism of loops post.

The 2nd step is described in the next picture:

 

metaloop_2

Technically we are already outside lambda calculus, because of the fanin node FI[15,12,6].  (We don’t split the computation into pure reduction and pure self-multiplication.)

Let’s see the 3rd step.

 

metaloop_3

Look well at the g-pattern which we get after the 3rd step, you’ll see it again, maybe!

The 4th step is the one which will prepare the path to conflict.

 

metaloop_4

In the 5th step we have conflict:

 

metaloop_5

The 5th step finishes in a different manner, depending on the PRIORITY CHOICE (which is fixed from the beginning of the computation).

Let’s suppose that we choose DIST over BETA. Then the 5th step looks like this:

 

metaloop_6_d

Wow, the g-pattern after the 5th step is the same as the g-pattern after the 3rd step, with a loop graphical element added.

This means that from here on the computation will look like the 4th step, then the 5th step again (with the same priority choice, which is fixed!). A new loop will be generated and the computation will never stop, producing an endless string of loops.

Bubbles!

Now, let’s see what happens if the PRIORITY CHOICE is BETA over DIST.

Then the 5th step looks like this:

 

metaloop_6_b

The 5th step produced 2 loops and the shortest ouroboros: a fanout node with one out port connected to its in port, namely FO[13,1,13].

The computation then stops!

______________________________________________________

So, depending on the priority choice, we have either a computation which produces bubbles without end, or a computation which stops.

It is logical. Indeed, the priority choice DIST over BETA is the choice which increases the number of nodes of the g-pattern. From here it may happen, as is the case in this example, that a cyclic behaviour is induced.

On the other hand, the priority choice BETA over DIST decreases the number of nodes, thus increasing the chances of a computation which stops eventually.

Both choices are good, it depends on what we want to do with them. If we want to compute with graphs resembling chemlambda quines, because they look like living organisms with a metabolism, then the choice DIST over BETA is a good one.

If we want to have a computation which stops (dies, would say a biologist) then BETA over DIST seems better.

_____________________________________________________

Answer to “what reduction is this?”

In lambda calculus, the predecessor function is surprisingly difficult, of course when compared with the successor, addition or multiplication.

In the post What reduction is this? I used chemlambda with the stupid sequential reduction strategy stated in Y again: conflict!, namely:

  •  we start with a g-pattern and we reduce it sequentially
  • at each step we identify first all the LEFT patterns from the following moves: beta, DIST, FAN-IN, LOC PR
  • we do the moves only from LEFT to RIGHT
  • repeat until no move available OR until conflict.

… And there is no conflict in the predecessor reduction.

In the post “What reduction is this?” I asked some questions, let me answer:

  • what kind of evaluation strategy is this?  NO EVALUATION STRATEGY, BECAUSE THERE ARE NO VALUES
  • are there reduction steps and self-multiplication steps? NO, THEY MIX  WITH NO EXTERNAL CONTROL
  • is this in lambda calculus? NO, IS INSPIRED FROM, BUT BETTER
  • what can we learn from this particular example? THAT IT WORKS WITHOUT EVALUATION STRATEGY, WITHOUT EXTERNAL CONTROL AND WITHOUT SIGNALS-THROUGH-WIRES-AND-GATES.

This is a streamlined version of the reduction hidden in

PRED(3) –> 2

where numbers appear as stacks of paired FO and A nodes. They are “bare” numbers, in the sense that all the currying has been eliminated.
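(For comparison, the lambda calculus side of the story: the standard Kleene predecessor on Church numerals, written directly in Python. This is the usual, famously roundabout construction, not the chemlambda molecule.)

# Church numerals: n applies f n times
zero  = lambda f: lambda x: x
succ  = lambda n: lambda f: lambda x: f(n(f)(x))
three = succ(succ(succ(zero)))

# Kleene's predecessor: the reason PRED is "surprisingly difficult"
pred = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

to_int = lambda n: n(lambda k: k + 1)(0)
print(to_int(pred(three)))   # prints 2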

Admire the mechanical, or should I say chemical, precision of the process of reduction (in chemlambda, with the stupid sequential strategy). In the following figure I eliminated all the unnecessary nodes and arrows and we are left now with the pure phenomenon.

 

pred_2

I find it amazing that it works even with this stupidest strategy. It shows that chemlambda is much better than anything on the market.

Let me say it again: this is outside the fundamental IT assumption that everything reduces to signals sent through wires, then processed by gates.

It is how nature works.

____________________________________________

What reduction is this?

It ends in the Church encoding of 2.

Can you guess what this is? (click on the big image to see it better)

 

pred_1

 

As you see, you may ask:

  • what kind of evaluation strategy is this?
  • are there reduction steps and self-multiplication steps?
  • is this in lambda calculus?
  • what can we learn from this particular example?

_______________________________________________

 

Y again: conclusion!

In the post Y again: conflict! I took the most obvious reduction strategy (sequential) and applied it to the reduction of the “Y combinator applied to something”.  The reduction ended in conflict between two moves which compete for the same g-pattern.

Then, in the post Y again: compete!  I took in parallel the two possible outcomes of the conflict. The contenders have been branded as fast-shooting cowboys, offering a show.

Surprisingly, both possible paths of reduction ended in a very simple version of the Y combinator.

Only  that the very simple version is not one coming from lambda calculus!

Indeed, let’s recall what the Y combinator is, seen as a g-pattern in chemlambda.

In lambda calculus the Y combinator is

Lx.( (Ly.(x(yy))) (Ly.(x(yy))) )

As a molecule, it looks like this.

 

yagain_2

As g-pattern, it looks like this (see this post  and this post for the conversion of lambda terms into g-patterns):

L[a,x,o] A[b,c,a] FO[x,y,z]

L[e,d,b] FO[d,f,g] A[f,g,h] A[y,h,e]

L[j,i,c] FO[i,l,m] A[l,m,k] A[z,k,j]

 

Applied to something means we add to this g-pattern  the following:

A[o,p,u]

with the meaning that Y applies to whatever links to the port “p”. (But mind that in chemlambda there is no variable or term passing, nor evaluation! So this is only a way to speak in the lambda calculus realm.)

The two mentioned posts about Y again led to the conclusion that the g-pattern “Y applied to something” behaves (eventually, after several reductions) as the far more simple g-pattern:

A[o,p,u]                (i.e. applied to something at port “p”)

L[b,a,o]

FO[a,c,d] A[c,d,b]

Now, this means that the Y combinator g-pattern may be safely replaced in computations by

L[b,a,o]

FO[a,c,d] A[c,d,b]

or, in graphical version, by

 

yagain_5

But this is outside lambda calculus.

So what?

It is far simpler than the Y combinator from lambda calculus.

The same happens with other lambda terms and reductions (see the post Actors for the Ackermann machine for another example; incidentally, the analysis of the Ackermann machine, i.e. the graph which behaves like the Ackermann function, gave me the idea of using the actor model with GLC, which evolved into arXiv:1312.4333).

This shows that chemlambda, even with the dumbest sequential reduction strategy (OK, enhanced in obvious ways so that it solves conflicts), can do more with less fuss than lambda calculus.

By looking on the net (recall that I’m a mathematician, therefore excuse my ignorance of well known CS people, I’m working on it), I can’t help but wonder what chemlambda would give in relation with, for example:

Of course, the dream is to go much, much further. Why? Because of  the List of Ayes/Noes of the artificial chemistry chemlambda.

__________________________________________________________

 

Y again: compete!

In the post Y again: conflict!  the chemlambda computation based on a very crude model ended in conflict.

Conflict means that the same graphical element appears in two LEFT g-patterns (see, in the series of expository posts  the part II for the g-patterns and the part III for the moves) .

In the next figure we see this conflict, in the upper part (that’s where we were left in the previous post), followed by a fork: in the lower left part of the figure we see what we get if we apply the beta move and in the lower right part we see what happens if we apply the DIST move.

 

yagain_3

 

Recall that (or look again at the upper side of the picture) the conflict was between LEFT patterns of a beta move and of a DIST move.

I rearranged the drawing of the g-patterns a bit (mind that this does not affect the graphs in any way, because the drawings on paper or screen are one thing and the graphs another; excuse me for being trivially obvious). In this pretty picture we see well that the Y gun has already shot a second pair of A and FO nodes.

The differences now:

  • for the graph from the lower left part of the picture, obtained after the beta move, we see that A[i,i] produces a loop: A[i,i] –COMB–> loop. We can disregard the loop further (because we allow only moves from LEFT to RIGHT, therefore the loop will not interact with anything in the future of the computation). We are left with a very simple graph, made of the two pairs of A and FO nodes which the gun already shot and a very interesting pair: A[w,a1,o] FO[o,q,a1].
  • for the graph from the lower right part of the picture, obtained after the DIST move, we see the two pairs of A and FO nodes which the Y gun already shot and a graph made of 6 nodes.

Which way is the best? What to do?

Let’s make them compete!  Who shoots faster?

 

Imagine the scene: in the Lambda Saloon, somewhere in the Wide Wild West, enter two new cowboys.

“Who called these guys?” asks the sheriff of the Functional City, where the reputed saloon resides.

“They are strangers! They both look like Lucky Luke, though…”

The cowboys and cowgirls from the saloon nod in approval: they all remember what happened when Lucky Luke — the one, the single cowboy who shoots faster than his shadow — was put between the two parallel mirrors from the saloon stage. What a show! That day, the Functional City got a reputation, and everybody knows that reputation is something as good as gold. Let the merchants from Imperative County sell window panes for the whole US; nobody messes with a cowboy, or cowgirl, from the Functional City. Small, but elegant. Yes sir, style is the right word…

Let’s go back to the two strangers.

“I’m faster than Master Y” says the stranger from the right.

“I’m MUCH faster than Master Y” says the one from the left, releasing from his cigar a loop of smoke.

“Who the … is Master Y?” asks the sheriff.

“Why, it’s Lucky Luke. He trained us both, then sent us to Functional City. He says hello to you and tells you that he got new tricks to show” says one of the strangers.

“… things not learned from church…” says the other.

“I need to see how fast you are, or else I call you both liars” shouts one of the bearded, long-haired cowboys.

The stranger from the right started first. What a show!

 

yagain_right

He first makes a clever DIST move (not that there was anything else to do). Then he is presented with 4 simultaneous moves to do (FAN-IN, 2 betas and a DIST). He shoots and freezes. Nothing moves, excepting two loops of smoke, rising from his gun.

“I could continue like this forever, but I stopped, to let my colleague show you what he’s good at.” said the stranger from the right.

“Anyway, I am a bit slow  with the first shot, but after that I am faster.” he continued.

“Wait, said the sheriff, you sure you really shot?”

“Yep, sure, look better how I stand”, said the stranger from the right, only slightly modifying his posture, so that everybody could clearly see the shot:

 

yagain_right_2

“Wow, true!” said the cowboys.

“I’m fast from the first shot!” said the stranger from the left. “Look!”

 

yagain_left

“I only did a DIST move.” said the stranger from the left, freezing in his posture.

“Nice show, guys! Hey, look, I can’t tell now which is which, they look the same. I got it: the guy from the right is a bit slower at the first shot (however, he is dazzlingly fast), but then he is as good as his fellow from the left.”

“Hm, said the sheriff, true. Only one thing: I have never seen in the Lambda Saloon  anything like this. It’s fast, but it does not seem to belong here.”

___________________________________________________________

Y again: conflict!

We proved in    arXiv:1403.8046 [cs.AI]  ( link to the published  article) that in chemlambda the molecule which represents the Y combinator is a gun which shoots pairs of FO and A nodes. See there the sequence of reductions which was considered.

This time  let’s not care about staying in lambda calculus and let’s take the simplest reduction strategy, to see what happens.

We place ourselves in the frame of g-patterns from the expository posts:  part I  and part II (definition of g-patterns) and part III (definition of moves) and part IV (g-patterns for lambda terms, 1st part) and part V (g-patterns for lambda terms, 2nd part) and part VI (about self-multiplication of g-patterns)  and part VII (an example of reduction) .

We take the following reduction strategy:

  • we start with a g-pattern and we reduce it sequentially
  • at each step we identify first all the LEFT patterns from the following moves: beta, DIST, FAN-IN,
  • we do the moves from LEFT to RIGHT
  • we identify and do all possible COMB moves from LEFT to RIGHT
  • repeat until no move available OR until conflict.

What’s conflict? We shall see one.

Mind that this is a very stupid and simplistic strategy, which does not guarantee that if we start with a g-pattern which represents a lambda term then we stop by having a g-pattern of a lambda term.

It does have its advantages, though.

OK, so let us start with the g-pattern of Y applied to something. 

In general, with g-patterns we can say many things. As any combinator molecule, when represented by a g-pattern, the Y combinator has only one free port, let’s call it “b”. Thus Y appears as a g-pattern which we denote by Y[b].

Suppose we want to start the reduction from Y applied to something. This means that we shall start with the g-pattern

A[b,a,out] Y[b]

OK!

Look what happens when we apply the mentioned strategy.

(Advice: big picture, click on it to see it clearly and to travel along it.)

 

yagain_1

Here is a conflict: at one step we have two LEFT patterns, in this case

L[o,p,i] A[i,p,v]   , which is good for a beta move

and

A[i,p,v] FO[v,q,a1]  , which is good for a DIST move.

The patterns contain a common graphical element, in this case A[i,p,v], which will be deleted during the respective moves.
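Detecting such a conflict is itself a purely local check: two LEFT patterns are in conflict exactly when they share a graphical element. A small sketch in Python, with the matches written down by hand for this example:

def conflicts(matches):
    # matches: list of (move name, set of the graphical elements it consumes)
    return [(m1, m2, e1 & e2)
            for i, (m1, e1) in enumerate(matches)
            for (m2, e2) in matches[i + 1:]
            if e1 & e2]

matches = [("beta", {"L[o,p,i]", "A[i,p,v]"}),
           ("DIST", {"A[i,p,v]", "FO[v,q,a1]"})]
print(conflicts(matches))
# [('beta', 'DIST', {'A[i,p,v]'})] -- the shared element which wrecks the gun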

CONCLUSION: with this strategy we have a gun which shoots one pair of FO and A nodes, but then it got wrecked.

What to do then?

The human way is to apply

When in trouble or in doubt

Run in circles, scream and shout

for a moment, then acknowledge that this is a stupid reduction strategy, then find some qualities of this strategy, then propose another which has those qualities but works better, then reformulate the whole problem and give it an unexpected turn.

The AI way is to wait for somebody to change the reduction strategy.

__________________________________________________________

Lambda calculus and the fixed point combinator in chemlambda (VIII)

This is the 8th  (continuing from part I  and part II  and part III and part IV and part V and part VI  and part VII) in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  presented  by Louis Kauffman in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York.  Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

Tags. I shall use the name “tag” instead of “actor” or “type”, because it is more generic (and because in future developments we shall talk more about actors and types, continuing from the post Actors as types in the beta move, tentative).

Every port of a graphical element (see part II)  and the graphical element itself can have tags, denoted by :tagname.

There is a null tag “null” which can be omitted in the g-patterns.

As an example, we may see, in the most ornate way, graphical elements like this one:

L[x:a,y:b,z:c]:d

where of course

L[x:null,y:null,z:null]:null    means L[x,y,z]

The port names are tags; in particular “in”, “out”, “middle”, “left” and “right” are tags.

Any concatenation of tags is a tag.  Concatenation of tags is denoted by a dot, for example “left.right.null.left.in”.  By the use of “null” we have

a.null –concat–> a

null.a –concat–> a

I shall not regard concat as a move in itself (maybe I should, but that is for later).

Further in this post I shall not use tags for nodes.

Moves with tags. We can use tags in the moves, according to a predefined convention. I shall take several  examples.

1. The FAN-IN move with tags. If the tags a and b are different then

FI[x:a, y:b, z:c] FO[z:c,u:b, v:a]

–FAN-IN–>

Arrow[x:a,v:a] Arrow[y:b,u:b]

Remark that the move is not reversible.

It means that you can do FAN-IN only if the right tags are there.
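As a sketch of how such a guard could be checked mechanically (ports written as “name:tag” strings; this is only an illustration of the convention above, not an implementation of the formalism):

def split_port(port):
    # "x:a" -> ("x", "a"); a missing tag means the null tag
    name, _, tag = port.partition(":")
    return name, tag or "null"

def concat(s, t):
    # tag concatenation, with "null" as the neutral element
    if s == "null":
        return t
    if t == "null":
        return s
    return s + "." + t

def fan_in_applies(fi_ports, fo_ports):
    # FI[x:a, y:b, z:c] FO[z:c, u:b, v:a] --FAN-IN--> ...  only when a != b
    (x, a), (y, b), (z, c) = [split_port(p) for p in fi_ports]
    (z2, c2), (u, b2), (v, a2) = [split_port(p) for p in fo_ports]
    return (z, c) == (z2, c2) and a == a2 and b == b2 and a != b

print(fan_in_applies(["x:a", "y:b", "z:c"], ["z:c", "u:b", "v:a"]))   # True
print(fan_in_applies(["x:a", "y:a", "z:c"], ["z:c", "u:a", "v:a"]))   # False: equal tags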

2. COMB with tags.

L[x:a, y:b, z:c] Arrow[y:b, u:d]

–COMB–>

L[x:a, u:d,z:c]

and so on for all the comb moves which involve two graphical elements.

3. DIST with tags.  There are two DIST moves, here with tags.

A[x:a,y:b,z:c] FO[z:c,u:d,v:e]

–DIST–>

FO[x:a, w:left.d, p:right.e]   FO[y:b, s:left.d, t:right.e]

A[w:left.d, s:left.d, u:d]   A[p:right.e, t:right.e, v:e]

In graphical version

 

dist_with_tags

 

and the DIST move for the L node:

L[y:b, x:a, z:c] FO[z:c, u:d, v:e]

–DIST–>

FI[p:right, w:left, x:a] FO[y:b, s:left, t:right]

L[s:left, w:left,u:d]  L[t:right, p:right, v:e]

In graphical version:

 

dist_tags_lambda

 4. SHUFFLE. This move replaces CO-ASSOC, CO-COMM. (It can be done as a sequence of CO-COMM and CO-ASSOC; conversely, CO-COMM and CO-ASSOC can be done by SHUFFLE and LOC PRUNING, explanations another time.)

FO[x:a, y:b, z:c]  FO[y:b, w:left, p:right] FO[z:c, s:left, t:right]

–SHUFFLE–>

FO[x:a, y:left, z:right]  FO[y:left, w, s] FO[z:right, p, t]

In graphical version:

 

shuffle_with.tags

 

____________________________________________________________

 

 

 

Lambda calculus and the fixed point combinator in chemlambda (VII)

This is the 7th  (continuing from part I  and part II  and part III and part IV and part V and part VI)  in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which is accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York, (go see the presentation of Louis Kauffman if you are near the event.) Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

In this post I take a simple example which contains beta reduction and self-multiplication.

Maybe self-multiplication is a too long word. A short one would be “dup”; any tacit programming language has it. However, chemlambda only superficially resembles tacit programming (and it’s not a language, arguably, but a GRS, never mind).

Or “self-dup” because chemlambda has no “dup”, but a mechanism of self-multiplication, as explained in part VI.

Enough with the problem of the right denomination, because

“A rose by any other name would smell as sweet”

as somebody wrote, clearly not  believing that the limit of his world is the limit of his language.

Let’s consider the lambda term (Lx.xx)(Ly.yz). In lambda calculus there is the following string of reductions:

(Lx.xx)(Ly.yz) -beta-> (Ly.yz) (Lu.uz) -beta-> (Lu.uz) z -beta-> zz

What we see? Let’s take it slower. Denote by C=xx and by  B= Ly.yz. Then:

(Lx.C)B -beta-> C[x:=B]  = (xx)[x:=B]  =  (x)[x:=B]  (x)[x:=B] = BB = (Ly.yz) B -beta-> (yz)[y:=B] = (y)[y:=B] (z)[y:=B] =  Bz = (Lu.uz)z -beta-> (uz)[u:=z] = (u)[u:=z] (z)[u:=z]  = zz

Now, with chemlambda and its moves performed  only from LEFT to RIGHT.

The g-pattern which represents (Lx.xx)(Ly.yz) is

L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b]  L[w,y,c] A[y,z,w]

We can only do a beta move:

L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b]  L[w,y,c] A[y,z,w]

<–beta–>

Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]

We can do two COMB moves

Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]

2 <–COMB–>

FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]

Now look: that is not a representation of a lambda term, because FO[c,u,v] is “in the middle”, i.e. the middle.in port of FO[c,u,v] is the out port of B, i.e. the right.out port of the lambda node L[w,y,c]. At the same time, the out ports of FO[c,u,v] are the in ports of A[u,v,b].

The only move which can be performed is DIST, which starts the self-dup or self-multiplication of B = L[w,y,c] A[y,z,w] :

FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]

<–DIST–>

FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]

This is still not a representation of a lambda term. Notice also that the g-pattern which represents B has not yet self-multiplied. However, we can already perform a beta move  for L[g,f,u] A[u,v,b] and we get (after 2 COMB moves as well)

FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]

<–beta–>

FI[e,f,y] FO[w,g,h] L[h,e,v] Arrow[g,b] Arrow[v,f] A[y,z,w]

2 <–COMB–>

FI[e,f,y] FO[w,b,h] L[h,e,f] A[y,z,w]

This looks like a weird g-pattern. Clearly it is not a g-pattern coming from a lambda term, because it contains the fanin node FI[e,f,y].  Let’s write the g-pattern again as

L[h,e,f]  FI[e,f,y]  A[y,z,w] FO[w,b,h]

(for our own pleasure, the order of the elements in the g-pattern does not matter)  and remark that A[y,z,w] is “conjugated” by the FI[e,f,y] and FO[w,b,h].

We can apply another DIST move

L[h,e,f]  FI[e,f,y]  A[y,z,w] FO[w,b,h]

<–DIST–>

A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]

and now there is only one move which can be done, namely a FAN-IN:

A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]

<–FAN-IN–>

Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]

which gives after 2 COMB moves:

Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]

2 <–COMB–>

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

The g-pattern

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

is a representation of a lambda term, finally: the representation of (Le.ez)z. Great!

From here, though, we can apply only a beta move at the g-pattern A[f,k,b]  L[h,e,f]

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

<–beta–>

Arrow[h,b] Arrow[k,e] A[e,l,h] FO[z,k,l]

2 <–COMB–>

FO[z,k,l] A[k,l,b]

which represents zz.
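Each intermediate g-pattern above can be checked mechanically for well-formedness: every port variable has to occur at most twice (twice for an internal bond, once for a free port). A small Python check over the steps of this reduction, using just the textual convention from above:

import re
from collections import Counter

def ports(gpattern):
    # "L[a1,x,a] FO[x,u,v] ..." -> the list of all port variables
    return [p for grp in re.findall(r"\[(.*?)\]", gpattern)
              for p in grp.split(",")]

def well_formed(gpattern):
    return all(n <= 2 for n in Counter(ports(gpattern)).values())

steps = [
    "L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b] L[w,y,c] A[y,z,w]",
    "FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]",
    "FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]",
    "FI[e,f,y] FO[w,b,h] L[h,e,f] A[y,z,w]",
    "A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]",
    "A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]",
    "FO[z,k,l] A[k,l,b]",
]
print(all(well_formed(s) for s in steps))   # True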

_____________________________________________________

 

Lambda calculus and the fixed point combinator in chemlambda (VI)

This is the 6th  (continuing from part I  and part II  and part III and part IV and part V)   in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which is accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York, (go see the presentation of Louis Kauffman if you are near the event.) Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

In this post I want to concentrate on the mechanism of self-multiplication for g-patterns coming from lambda terms (see  part IV   where the algorithm of translation from lambda terms to g-patterns is explained).

Before that, please notice that there is a lot to talk about regarding an important problem, which shall be described later in detail. But here it is, to keep an eye on it.

Chemlambda in itself is only a graph rewriting system. In part V it is explained that the beta reduction from lambda calculus needs an evaluation strategy in order to be used. We noticed that in chemlambda self-multiplication is needed in order to prove that one can do beta reduction as the beta move.

We go towards the obvious conclusion that in chemlambda, reduction (i.e. the beta move) and self-multiplication are just names used for parts of the computation. Indeed, the clear conclusion is that there is a computation which can be done with chemlambda, which has some parts where we use the beta move (and possibly some COMB, CO-ASSOC, CO-COMM, LOC PRUNING) and some other parts where we use DIST and FAN-IN (and possibly some of the moves COMB, CO-ASSOC, CO-COMM, LOC PRUNING). These two parts have as names reduction and self-multiplication respectively, but in the big computation they mix into a whole. There are only moves, graph rewrites applied to a molecule.

Which brings the problem: chemlambda in itself is not sufficient for having a model of computation. We need to specify how, where, when the reductions apply to molecules.

There may be many variants, roughly described as: sequential, parallel, concurrent, decentralized, random, based on chemical reaction network models, etc

Each model of computation (which can be made compatible with chemlambda) gives a different whole when used with chemlambda. Until now, in this series there has been no mention of a model of computation.

There is another aspect of this. It is obvious that chemlambda graphs  form a larger class than lambda terms, and also that the graph rewrites apply to more general situations  than beta reduction (and eventually an evaluation strategy).  It means that the important problem of defining a model of computation over chemlambda will have influences over the way chemlambda molecules “compute” in general.

The model of computation which I prefer  is not based on chemical reaction networks, nor on process calculi, but on a new model, inspired from the Actor Model, called  the distributed GLC. I shall explain why I believe that the Actor Model of Hewitt is superior to those mentioned previously (with respect to decentralized, asynchronous computation in the real Internet, and also in the real world), I shall explain what is my understanding of that model and eventually the distributed GLC proposal by me and Louis Kauffman will be exposed in all details.

4.  Self-multiplication of a g-pattern coming from a lambda term.

For the moment we concentrate on the self-multiplication phenomenon for g-patterns which represent lambda terms. In the following there is a departure from the ALIFE 14 article. I shall not use the path which consists in going to combinator patterns, nor shall I discuss in this post why the self-multiplication phenomenon is not confined to the world of g-patterns coming from lambda terms. This is for a future post.

In this post I want to give an image of how these g-patterns self-multiply, in the sense that most of the self-multiplication process can be explained independently of the computing model. Later on we shall come back to this; we shall look outside lambda calculus as well, and we shall also explore the combinator molecules.

OK, let’s start. In part V it has been noticed that after an application of the beta rule to the g-pattern

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

we obtain (via COMB moves)

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

and the problem is that we have a g-pattern which is not coming from a lambda term, because it has a FOTREE in the middle of it. It looks like this (recall that FOTREEs are figured in yellow and the syntactic trees are figured in light blue)

structure_12

The question is: what can happen next?  Let’s simplify the setting by taking the FOTREE in the middle as a single fanout node, then we ask what moves can be applied further to the g-pattern

C[x] FO[x,a,b]

Clearly we can apply DIST moves. There are two DIST moves, one for the application node, the other for the lambda node.

There is a chain of propagation of DIST moves through the syntactic tree of C, which is independent of the model of computation chosen (i.e. of the rules about which, when and where moves are applied), because the syntactic tree is a tree.

Look what happens. We have the propagation of DIST moves (for the application nodes say) first, which produce two copies of a part of the syntactic tree which contains the root.

structure_7

At some point we arrive at a pattern which allows the application of a DIST move for a lambda node. We apply the rule:

structure_8

We see that fanins appear! … and then the propagation of DIST moves through the syntactic tree continues until eventually we get this:

structure_9

So the syntactic tree self-multiplied, but the two copies are still connected by FOTREEs, which connect to the left.out ports of the lambda nodes which are part of the syntactic tree (only one is figured in the previous image).

Notice that now (or even earlier; it does not matter actually, and why will be explained rigorously when we shall talk about the computing model; for the moment we only want to see that it is possible) we are in position to apply the FAN-IN move. Also, it is clear that by using CO-COMM and CO-ASSOC moves we can shuffle the arrows of the FOTREE, which is “conjugated” with a fanin at the root and with fanouts at the leaves, so that eventually we get this.

structure_10

The self-multiplication is achieved! It looks strikingly like the anaphase [source]

800px-Anaphase.svg

followed by telophase [source]

Telophase.svg

____________________________________________________

 

 

Lambda calculus and the fixed point combinator in chemlambda (V)

This is the 5th  (continuing from part I  and part II  and part III and part IV)   in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which is accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York, (go see the presentation of Louis Kauffman if you are near the event.) Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

2.  Lambda calculus terms as seen in  chemlambda  continued.

Let’s look at the structure of a molecule coming from the process of translation of a lambda term described in part IV.

Then I shall make some comments which should be obvious after the fact, but useful later, when we shall discuss the relation between the graphic beta move (i.e. the beta rule for g-patterns) and the beta reduction and evaluation strategies.

That will be a central point in the exposition, it is very important to understand it!

So, a molecule (i.e. a pattern with the free ports names erased, see part II for the denominations) which represents a lambda term looks like this:

 

structure_1

In light blue is the part of the molecule which is essentially the syntactic tree of the lambda term.  The only peculiarity is in the orientation of the arrows of lambda nodes.

Practically this part of the molecule is a tree, which has as nodes the lambda and application ones, but not fanouts, nor fanins.

The arrows are directed towards the up side of the figure.  There is no need to draw it like this, i.e. there is no global rule for the edge orientations, contrary to the ZX calculus, where the edge orientations are deduced from the global down-to-up orientation.

We see a lambda node figured, which is part of the syntactic tree. It has the right.out  port connecting to the rest of the syntactic tree and the left.out port connecting to the yellow part of the figure.

The yellow part of the figure is a FOTREE (fanout tree). There might be many FOTREEs; in the figure only one appears. By looking at the algorithm of conversion of a lambda term into a g-pattern, we notice that in the g-patterns which represent lambda terms the FOTREEs may appear in two places:

  • with the root connected to the left.out port of a lambda node, as in the g-pattern which correspond to Lx.(xx)
  • with the root connected to the middle.out port of an Arrow which has the middle.in port free (i.e. the port variable of the middle.in of that Arrow appears only once in that g-pattern), for example for the term  (xx)(Ly.(yx))

As a consequence of this observation, here are two configurations of nodes which NEVER appear in a molecule which represents a lambda term:

structure_2

Notice that these two patterns are EXACTLY those which appear as the LEFT side of the moves DIST! More about this later.

Remark also the position of the insertion points of the FOTREE which comes out of the left.out port of the figured lambda node: the out ports of the FOTREE connect with the syntactic tree somewhere lower than where the lambda node is. This is typical for molecules which represent lambda terms. For example the following molecule, which can be described as the g-pattern L[a,b,c] A[c,b,d]

structure_3

(but with the port variables deleted) cannot appear in a molecule which corresponds to a lambda term.

 

Let’s go back to the first image and continue with “TERMINATION NODE (1)”. Recall that termination nodes are used to cap the left.out port of a lambda node which corresponds to a term Lx.A with x not occurring in A.

Finally, “FREE IN PORTS (2)” represents free in ports which correspond to the free variables of the lambda term. As observed earlier, but not figured in the picture, we MAY have free in ports which are the in (root) ports of FOTREEs.

I collect here some obvious, in retrospect, facts:

  • there are no other variables in a g-pattern of a lambda term than the port variables. However, every port variable appears at most twice. In the graphic version (i.e. as graphs) the port variables which appear twice are replaced by edges of the graph, therefore the bound lambda calculus variables disappear in chemlambda.
  • moreover, the free in port variables, which correspond to the free variables of the lambda term, appear only once. Their multiple occurrences in the lambda term are replaced by FOTREEs.  All in all, this means that there are no values at all in chemlambda.
  • … As a consequence, the nodes application and lambda abstraction are not gates. That means that even if the arrows of the pattern graphs appear in the grammar version as (twice repeating) port variables, the chemlambda formalism has no equational side: there are no equations needed between the port variables. Indeed, no such equation appears in the definition of g-patterns, nor in the definition of the graph rewrites.
  • In the particular case of g-patterns coming from lambda terms it is possible to attach equations to the nodes application, lambda abstraction and fanout, exactly because of the particular form that such g-patterns have. But this is not possible, in a coherent way (i.e. such that the equations attached to the nodes have a global solution) for all molecules!
  • after we pass from lambda terms to chemlambda molecules, we are going to use the graph rewrites, which don’t use any equations attached to the nodes, nor give any meaning to the port variables other than that they serve to connect nodes. Thus the chemlambda molecules are not flowcharts. There is nothing going through arrows and nodes. Hence the “molecule” image proposed for chemlambda graphs, with nodes as atoms and arrows as bonds.
  • the FOTREEs appear only in certain positions in a molecule which represents a lambda term, not in any position possible. That is more evidence that not any molecule represents a lambda term.
  • in particular the patterns which appear at LEFT in the DIST graph rewrites don’t occur in molecules which represent lambda terms. Therefore one can’t apply DIST moves in the + direction to a molecule which represents a lambda term.
  • Or, otherwise said, any molecule which contains the LEFT patterns of a DIST move is not one which represents a lambda term. (A mechanical check of this is sketched right after this list.)
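To make the last two items concrete: here is a minimal Python sketch (my illustration, not part of the formalism) which scans a g-pattern, written as a list of (node name, port list) pairs, for the LEFT patterns of the DIST moves:

def has_dist_left(g_pattern):
    # the LEFT patterns of the DIST moves: an A node (middle.out port)
    # or an L node (right.out port) wired into the middle.in port of a FO node
    fo_ins = {ports[0] for name, ports in g_pattern if name == "FO"}
    return any(name in ("A", "L") and ports[2] in fo_ins
               for name, ports in g_pattern)

# the molecule of Lx.(xx): the FOTREE hangs from the left.out port
# of the lambda node, so the test passes
print(has_dist_left([("L", ["a1", "x", "a"]), ("A", ["a2", "a3", "a1"]),
                     ("FO", ["x", "a2", "a3"])]))    # False

Of course, a False answer is only a necessary condition for representing a lambda term, not a sufficient one.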

_______________________________________________________

3. The beta move. Reduction and evaluation. 

I explain now in what sense the graphic beta move, or beta rule from chemlambda, corresponds to the beta reduction in the case of molecules which correspond to lambda terms.

Recall from part III the definition of the beta move

L[a1,a2,x] A[x,a4,a3]   <–beta–> Arrow[a1,a3] Arrow[a4,a2]

or graphically

beta_move_exp

If we use the visual trick from the pedantic rant, we may depict the beta move as:

beta_move_exp_2

i.e. we use as free port variables the relative positions  of the ports in the doodle.  Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graphical level. The chemlambda graphs are not planar graphs.

The beta reduction in lambda calculus looks like this:

(Lx.B) C –beta reduction–> B[x:=C]

Here B and C are lambda terms and B[x:=C] denotes the term which is obtained from B after we replace all the occurrences of x in B by the term C.

I want to make clear what is the relation between the beta move and the beta reduction.  Several things deserve to be mentioned.

It is of course expected that if we translate (Lx.B)C and B[x:=C] into g-patterns, then the beta move transforms the g-pattern of (Lx.B)C into the g-pattern of B[x:=C]. This is not exactly true; in fact it is true in a more detailed and interesting sense.

Before that it is worth mentioning that the beta move applies even for patterns which don’t correspond to lambda terms.  Hence the beta move has a range of application greater than the beta reduction!

Indeed, look at the third figure from this post, which can’t be a pattern coming from a lambda term. Written as a g-pattern this is L[a,b,c] A[c,b,d]. We can apply the beta move and it gives:

L[a,b,c] A[c,b,d]  <-beta-> Arrow[a,d] Arrow[b,b]

which can be followed by a COMB move

Arrow[a,d] Arrow[b,b] <-comb-> Arrow[a,d] loop

Graphically it looks like this:

structure_4

In particular this explains the need to have the loop and Arrow graphical elements.
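As a toy illustration (mine, with g-patterns represented as lists of (node name, port list) pairs), the beta and COMB steps just performed can be replayed mechanically:

def beta(g):
    # one beta move, + direction: L[a1,a2,x] A[x,a4,a3] is replaced
    # by Arrow[a1,a3] Arrow[a4,a2]
    for lam in g:
        for app in g:
            if lam[0] == "L" and app[0] == "A" and lam[1][2] == app[1][0]:
                a1, a2, x = lam[1]
                _, a4, a3 = app[1]
                rest = [e for e in g if e is not lam and e is not app]
                return rest + [("Arrow", [a1, a3]), ("Arrow", [a4, a2])]
    return g

def comb_loops(g):
    # COMB move: an Arrow with its in and out ports connected is a loop
    return [("loop", []) if n == "Arrow" and p[0] == p[1] else (n, p)
            for n, p in g]

g = [("L", ["a", "b", "c"]), ("A", ["c", "b", "d"])]
print(comb_loops(beta(g)))    # [('Arrow', ['a', 'd']), ('loop', [])]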

In chemlambda we make no effort to stay inside the collection of graphs which represent lambda terms. This is very important!

Another reason for this is related to the fact that we can’t check if a pattern comes from a lambda term in a local way, in the sense that there is no local (i.e. involving an a priori bound on the number of graphical elements used) criterion which describes the patterns coming from lambda terms. This is obvious from the previous observation that FOTREEs connect  to the syntactic tree lower than their roots.

Now, chemlambda is a purely local graph rewrite system, in the sense that there is a bound on the number of graphical elements involved in any move.

This has a consequence: there is no notion of a “correct” graph in chemlambda, hence there is no correctness enforcement in the formalism. In this respect chemlambda differs from any other graph rewriting system which is used in relation to lambda calculus or, more generally, to functional programming.

Let’s go back to the beta reduction

(Lx.B) C –beta reduction–> B[x:=C]

Translated into g-patterns the term from the LEFT looks like this:

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

where

  • C[c] is the translation as a g-pattern of the term C, with the out port “c”
  • FOTREE[x,a1,…,aN] is the FOTREE which connects to the left.out port of the node L[a,x,b] and to the ports a1, …, aN which represent the places where the lambda term variable “x” occurs in B
  • B[a1,…,aN,a] is a notation for the g-pattern of B, with the ports a1,…,aN (where the FOTREE connects) mentioned, and with the out port “a”

The beta move does not need all this context, but we need it in order to explain in what sense the beta move does what the beta reduction does.

The beta move needs only the piece L[a,x,b] A[b,c,d]. It is a local move!

Look how the beta move acts:

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

<-beta->

Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,…,aN] B[a1,…,aN, a]

and then 2 comb moves:

Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,…,aN] B[a1,…,aN, a]

<-2 comb->

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

Graphically this is:

structure

The graphic beta move, as it looks on syntactic trees of lambda terms, has been discovered  in

Wadsworth, Christopher P. (1971). Semantics and Pragmatics of the Lambda Calculus. PhD thesis, Oxford University

This work is the origin of the lazy, or call-by-need evaluation in lambda calculus!

Indeed, the result of the beta move is not B[x:=C], because no substitution x:=C is performed in the reduction step.

In the lambda calculus world, as is well known, one has to supplement lambda calculus with an evaluation strategy. The call-by-need evaluation explains how to perform the substitution x:=C in B in an optimized way.

From the chemlambda point of view on lambda calculus, a very interesting thing happens. The g-pattern obtained after the beta move (and obvious comb moves) is

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

or graphically

structure_5

As you can see this is not a g-pattern which corresponds to a lambda term.  That is because it has a FOTREE in the middle of it!

Thus the beta move applied to a g-pattern which represents a lambda term gives a g-pattern which can’t represent a lambda term.

The g-pattern which represents the lambda term B[x:=C] is

C[a1] …. C[aN]  B[a1,…,aN,d]

or graphically

structure_6

In graphic lambda calculus, or GLC, which is the parent of chemlambda, we pass from the graph which corresponds to the g-pattern

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

to the g-pattern of B[x:=C]

C[a1] …. C[aN]  B[a1,…,aN,d]

by a GLOBAL FAN-OUT move, i.e. a graph rewrite which looks like this:

if C[x] is a g-pattern with no other free ports than “x” then

C[x] FOTREE[x, a1, …, aN]

<-GLOBAL FAN-OUT->

C[a1] …. C[aN]

As you can see this is not a local move, because there is no a priori bound on the number of graphical elements involved in the move.
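A sketch of GLOBAL FAN-OUT in code (my illustration; the helper names are mine) makes the non-locality visible: the whole g-pattern of C has to be copied, with fresh port variables, once for every out port of the FOTREE, so there is no bound on the number of graphical elements touched:

import itertools

_fresh = (f"q{i}" for i in itertools.count(1))

def global_fan_out(c_pattern, x, outs):
    # replace C[x] FOTREE[x, a1, ..., aN] by N fresh copies of C,
    # the copy for the out port a having its own out port renamed to a
    result = []
    for a in outs:
        ren = {x: a}
        result += [(name, [ren.setdefault(v, next(_fresh)) for v in ports])
                   for name, ports in c_pattern]
    return result

# C = the identity Lz.z, which after combing is the g-pattern L[z,z,x]
print(global_fan_out([("L", ["z", "z", "x"])], "x", ["a1", "a2"]))
# [('L', ['q1', 'q1', 'a1']), ('L', ['q2', 'q2', 'a2'])]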

That is why I invented chemlambda, which has only local moves!

The evaluation strategy needed in lambda calculus to know when and how to do the substitution x:=C in B is replaced in chemlambda by SELF-MULTIPLICATION.

Indeed, this is because the g-pattern

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

surely has places where we can apply DIST moves (and perhaps later FAN-IN moves).

That is for the next post.

___________________________________________________

 

Lambda calculus and the fixed point combinator in chemlambda (IV)

This is the 4th  (continuing from part I  and part II  and part III)   in a series of expository posts where we put together in one place the pieces from various places about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which was accepted at the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

2.  Lambda calculus terms as seen in  chemlambda .

This post explains how to associate a g-pattern to any untyped lambda calculus term.

Important. Not any g-pattern, i.e. not any pattern, and not any molecule from chemlambda is associated to a lambda term!

Recall first what is a(n untyped) lambda term.

<lambda term> ::= <variable> |  ( <lambda term> <lambda term> ) | ( L <variable> . <lambda term>)

The operation which associates to a pair of lambda terms A and B the term AB is called application.

The operation which associates to a variable x and a term A the term Lx.A  is called (lambda) abstraction.

Every variable which appears in a term A is either bound or free. The variable x is bound if it appears under the scope of an abstraction, i.e. there is a part of A with the form Lx. B .

It is allowed to rename the bound variables of a term. This is called alpha renaming or alpha conversion. Two terms which differ only by alpha renaming are considered to be the same one.

It is then possible to rename the bound variables of a term such that if x is a bound variable then it appears under the scope of only one abstraction and moreover it does not appear as a free variable.

What follows is an algorithm which transforms a lambda term, written in this form which eliminates the ambiguities of the names of bound variables, into a g-pattern. See the post Conversion of lambda calculus terms into graphs for an algorithm which transforms a general lambda term into a GLC graph.

In this algorithm, a variable is said to be “fresh” if it does not appear before the step of the algorithm in question.

We start by declaring that we shall use (lambda term) variables as port variables.

 

Let  Trans[a,A]  be the translation operator, which has as input a variable  and a lambda term and as output a mess (see part II for the definition of a mess:  “A mess is any finite multiset of graphical elements in grammar version.”)

The algorithm defines Trans.

We start from an initial pair a0,  A0 , such that a0 does not occur in A0.

Then we define Trans recursively by

  • Trans[a, AB] = A[a’, a”, a]  Trans[a’,A] Trans[a”,B]  with  a’ and a” fresh variables,
  • Trans[a, Lx.A] = L[a’,x,a] Trans[a’,A]  with a’ a fresh variable
  • Trans[a,x] =  Arrow[x,a]  .

Practically, Trans gives a version of the syntactic tree of the term, with some peculiarities related to the use of the grammar version of the graphical elements instead of the usual gates notation for the two operations, and also the strange orientation of the arrow of the lambda node which is decorated by the respective bound variable.
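Before looking at what remains to be repaired, here is a minimal Python sketch of Trans (an illustration of mine; the encoding of lambda terms as nested tuples, with ("app", A, B) for the application AB and ("lam", "x", A) for Lx.A, is an assumption of the sketch):

import itertools

_fresh = (f"a{i}" for i in itertools.count(1))    # endless supply of fresh port variables

def trans(a, term):
    # Trans[a, term]: the output is a mess, i.e. a list of graphical elements
    if isinstance(term, str):                     # Trans[a,x] = Arrow[x,a]
        return [("Arrow", [term, a])]
    if term[0] == "app":                          # Trans[a, AB]
        _, A, B = term
        a1, a2 = next(_fresh), next(_fresh)
        return [("A", [a1, a2, a])] + trans(a1, A) + trans(a2, B)
    if term[0] == "lam":                          # Trans[a, Lx.A]
        _, x, A = term
        a1 = next(_fresh)
        return [("L", [a1, x, a])] + trans(a1, A)
    raise ValueError("not a lambda term")

# example (a) below, Lx.(xx):
print(trans("a", ("lam", "x", ("app", "x", "x"))))
# [('L', ['a1', 'x', 'a']), ('A', ['a2', 'a3', 'a1']),
#  ('Arrow', ['x', 'a2']), ('Arrow', ['x', 'a3'])]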

Trans[a0,A0] is a mess and not a g-pattern because there may be (port) variables which occur more than twice. There are two  possible cases for this:

  • (a) either a port variable occurs once as an  out port variable and more than once as an  in port variable,
  • (b)  or it does not occur as an out port variable but it occurs more than once as an in port variable,

Let’s see examples:

  • (a)  start with a0 = a and A0 = Lx.(xx).  Then Trans[a,Lx.(xx)] = L[a1,x,a] Trans[a1, xx] = L[a1,x,a] A[a2,a3,a1] Trans[a2,x] Trans[a3,x] = L[a1,x,a] A[a2,a3,a1] Arrow[x,a2] Arrow[x,a3]

 

first_ex

As you see the port variable x appears 3 times, once as an out port variable, in L[a1,x,a] , and twice as an in port variable, in Arrow[x,a2] Arrow[x,a3] .

  • (b)  start with a0 = a and A0 = (xz)(yz) . Then Trans[a,(xz)(yz)] = A[a1,a2,a] Trans[a1,xz] Trans[a2,yz] = A[a1,a2,a] A[a3,a4,a1] Trans[a3,x] Trans[a4,z] A[a5,a6,a2] Trans[a5,y] Trans[a6,z] = A[a1,a2,a] A[a3,a4,a1]  Arrow[x,a3]  Arrow[z,a4]  A[a5,a6,a2]  Arrow[y,a5] Arrow[z,a6]

 

second_ex

In this case the port variable z does not occur as an out port variable, but it appears twice as an in port variable, in Arrow[z,a4] Arrow[z,a6].

 

To pass from a mess to a g-pattern is easy now: we shall introduce fanout nodes.

Indeed, an FO  tree with free in port a and free out ports a1, a2, …, aN  is, by definition, ANY g-pattern formed by the rules:

  •  FO[a,a1,a2]  is an FO tree
  • if FOTREE[a,a1,…,aN] is an FO tree which has a as the only free in port variable and a1, …, aN as the only free out port variables, then for every j=1,…,N and for any fresh port variables u,v, the multiset FOTREE[a,a1,…,aN] FO[aj,u,v] is an FO tree with a as the only free in port variable and with free out port variables a1, …, aN, u, v, with aj removed from the list.

Remark that by a sequence of CO-COMM and CO-ASSOC moves we can pass from any FO tree with free in port variable a and free out port variables a1, …, aN to any other FO tree with the same free in and out port variables.

For this reason we shall not choose a canonical FO tree associated to a pair formed by one free in port variable and a finite set of free out port variables. (However, in any algorithm where FO trees have to be constructed, such a choice will be embedded in the respective algorithm.)

In order to transform the mess which is outputted by the Trans operator, we have to solve the cases (a), (b) explained previously.

(a) Suppose that there is a port variable x which satisfies the description for (a), namely that x occurs once as an  out port variable and more than once as an  in port variable. Remark that, because of the definition of the Trans operator, the port variable x will appear at least twice in a list Arrow[x,a1] … Arrow[x,aN] and only once somewhere in a node L[b,x,c].

Pick then an FO tree FOTREE[x,a1,…,aN]  with the only free in port variable x and the only free out port variables a1, …, aN. Erase then from the mess outputted by Trans the collection Arrow[x,a1] … Arrow[x,aN]  and replace it by FOTREE[x,a1,…,aN].

In this way the port variable x will occur only once in an out port, namely in L[b,x,c], and only once in an in port, namely in the first FO[x,…] element of the FO tree FOTREE[x,a1,…,aN].

Let’s see this for our example; we have

Trans[a, Lx.xx] = L[a1,x,a] A[a2,a3,a1] Arrow[x,a2] Arrow[x,a3]

so the variable x appears at an out port in the node L[a1,x,a] and at in ports in the list Arrow[x,a2] Arrow[x,a3] .

There is only one FO tree with the free in port x and the free out ports a2, a3, namely FO[x,a2,a3].  Delete the list Arrow[x,a2] Arrow[x,a3] and replace it by FO[x,a2,a3]. This gives

L[a1,x,a] A[a2,a3,a1] FO[x,a2,a3]

which is a g-pattern! Here is what we do, graphically:

fo_first_ex

(b) Suppose that there is a port variable x which satisfies the description for (b), namely that x  does not occur as an out port variable but it occurs more than once as an in port variable. Because of the definition of the Trans operator, it must be that x will appear at least twice in a list Arrow[x,a1] … Arrow[x,aN] and nowhere else.

Pick then a FO tree FOTREE[x,a1,…,aN] with the only free in port variable x and the only free out port variables a1, …, aN.

Delete Arrow[x,a1] … Arrow[x,aN]  and replace it by FOTREE[x,a1,…,aN] .

In this way the variable x will appear only once, as a free in port variable.

For our example, we have

Trans[a,(xz)(yz)] =  A[a1,a2,a] A[a3,a4,a1]  Arrow[x,a3]  Arrow[z,a4]  A[a5,a6,a2]  Arrow[y,a5] Arrow[z,a6]

and the problem is with the port variable z which does not occur in any out port, but it does appear twice as an in port variable, namely in Arrow[z,a4]  Arrow[z,a6] .

We delete Arrow[z,a4]   Arrow[z,a6] and replace it by FO[z,a4,a6] and we get the g-pattern

A[a1,a2,a] A[a3,a4,a1]  Arrow[x,a3]  FO[z,a4,a6]  A[a5,a6,a2]  Arrow[y,a5]

In graphical version, here is what has been done:

fo_second_ex

 

OK, we are almost done.

It may happen that there are out port variables which come from a subterm Lx.A with x not occurring in A (i.e. the left.out port of the corresponding lambda node stays unconnected).  For example let’s start with a0=a and  A0 = Lx.(Ly.x) . Then Trans[a,Lx.(Ly.x)] = L[a1,x,a] Trans[a1, Ly.x] = L[a1,x,a] L[a2,y,a1] Trans[a2,x] = L[a1,x,a] Arrow[x,a2] L[a2,y,a1].

There is the port variable y which appears only as an out port variable in an L node, here L[a2,y,a1], and not elsewhere.

For those port variables x which appear only in an L[a,x,b] we add a termination node T[x].

In our example L[a1,x,a] Arrow[x,a2] L[a2,y,a1] becomes L[a1,x,a] Arrow[x,a2] L[a2,y,a1] T[y]. Graphically this is

third_ex

We may still have Arrow elements which can be absorbed into the nodes’ ports, therefore we close the conversion algorithm by:

Apply the COMB moves (see part III)  in the + direction and repeat until there is no place to apply them any more.
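Putting the steps together, here is a hedged Python sketch of the whole mess-to-g-pattern conversion (my illustration; it embeds one particular choice of FO trees, as warned above, and its closing COMB pass may silently shift the names of some free port variables, which is harmless up to renaming):

from collections import defaultdict
import itertools

_fresh = (f"u{i}" for i in itertools.count(1))

def fo_tree(x, outs):
    # one embedded choice of FO tree: in port x, out ports outs (len >= 2)
    tree = []
    while len(outs) > 2:
        u = next(_fresh)
        tree.append(("FO", [x, outs[0], u]))
        x, outs = u, outs[1:]
    return tree + [("FO", [x, outs[0], outs[1]])]

def mess_to_g_pattern(mess):
    mess = [(n, list(p)) for n, p in mess]
    # cases (a) and (b): a repeated Arrow in port variable x gets an FO tree
    arrows = defaultdict(list)
    for el in mess:
        if el[0] == "Arrow":
            arrows[el[1][0]].append(el)
    for x, arrs in arrows.items():
        if len(arrs) > 1:
            for el in arrs:
                mess.remove(el)
            mess += fo_tree(x, [el[1][1] for el in arrs])
    # termination nodes for the unused bound variables
    counts = defaultdict(int)
    for n, p in mess:
        for v in p:
            counts[v] += 1
    mess += [("T", [p[1]]) for n, p in mess if n == "L" and counts[p[1]] == 1]
    # closing COMB moves, + direction: absorb the Arrows glued to something
    changed = True
    while changed:
        changed = False
        for el in mess:
            if el[0] != "Arrow":
                continue
            p, q = el[1]
            rest = [e for e in mess if e is not el]
            if any(p in e[1] or q in e[1] for e in rest):
                mess.remove(el)
                for e in rest:
                    e[1][:] = [p if v == q else v for v in e[1]]
                changed = True
                break
    return mess

# the mess of Lx.(xx), from the example of case (a):
print(mess_to_g_pattern([("L", ["a1", "x", "a"]), ("A", ["a2", "a3", "a1"]),
                         ("Arrow", ["x", "a2"]), ("Arrow", ["x", "a3"])]))
# [('L', ['a1', 'x', 'a']), ('A', ['a2', 'a3', 'a1']), ('FO', ['x', 'a2', 'a3'])]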

 

Exercise: Consider the Y combinator

Y = Lf.(  (Lx. f(xx))    (Ly. f(yy))   )

Find its conversion as a g-pattern.

________________________________________________________________

Lambda calculus and the fixed point combinator in chemlambda (III)

This is the 3rd  (continuing from part I  and part II)  in a series of expository posts where we put together in one place the pieces from various places about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which was accepted at the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

1.  The chemlambda formalism continued: graph rewrites.

Now we have all we need to talk about graph rewrites.

For clarity, see part II for the notion of “pattern”. Its meaning depends on which version we use: the graphical version or the grammar version. In the graphical version a pattern is a chemlambda graph with the free ports and invisible nodes decorated with port variables. In the grammar version we have the equivalent notion of a g-pattern, which is a way to write a pattern as a multiset of graphical elements.

It is allowed to rename the port variables in a g-pattern, such that after the renaming we still get a g-pattern. That means that if M is a g-pattern  and f is  any one-to-one function from V(M) to another set of port variables A, then we may replace any port variable x from V(M) by f(x).  We shall not think about this (sort of alpha renaming for g-patterns) as being a graph rewrite.

 

I shall use the following equivalent names further:

  • “graph rewrite”
  • “move”
  • “reduction”

In simple words, a graph rewrite is a rule which says: “replace the pattern   graph_{1} by the pattern graph_{2}“.

 

Let’s see more precisely then what a graph rewrite is. (Technically this is a simple form of graph rewrite, which does not depend on context; later we may speak about more involved forms. First let’s understand exactly this simple form!)

In order to define a graph rewrite, or move, we need two g-patterns, call them graph_{1} and graph_{2}, such that (perhaps after a renaming of port variables):

  • they have the same set of FV_in port variables
  • they have the same set of FV_out variables

A move is a pair of such g-patterns. The graph_{1} is called the LEFT pattern of the move, the graph_{2} is called the RIGHT pattern of the move.

The move can be performed from LEFT to RIGHT, called sometimes the “+” direction: replace the LEFT pattern by the RIGHT pattern.

Likewise, the move can be performed from RIGHT to LEFT, called sometimes the “-” direction: replace the RIGHT pattern with the LEFT pattern.

Technically, what I describe here can be made fully explicit as a DPO graph rewriting.

Even if the moves are reversible (they can be performed in the + or – direction), there is a preference to use only the “+” direction (and to embed, if needed, a move performed in the “-” direction into a sequence of moves, called “macro”, more about this later).

The “+” direction is not arbitrarily defined.
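In code, a move can be stored literally as such a pair of g-patterns and performed by sub-multiset matching. Here is a naive sketch (an illustration of mine; it performs only the “+” direction, and its backtracking is adequate for the 2-4 element patterns used in this post):

import itertools

_fresh = (f"v{i}" for i in itertools.count(1))

def match(left, g, env=None, used=()):
    # find a variable assignment env making LEFT a sub-multiset of g
    env = dict(env or {})
    if not left:
        return env, used
    (name, ports), rest = left[0], left[1:]
    for i, (n, p) in enumerate(g):
        if n != name or i in used:
            continue
        e = dict(env)
        if all(e.setdefault(v, w) == w for v, w in zip(ports, p)):
            found = match(rest, g, e, used + (i,))
            if found:
                return found
    return None

def apply_move(g, left, right):
    found = match(left, g)
    if not found:
        return g
    env, used = found
    kept = [el for i, el in enumerate(g) if i not in used]
    # RIGHT variables not bound by the match are internal: make them fresh
    return kept + [(n, [env.setdefault(v, next(_fresh)) for v in p])
                   for n, p in right]

# the beta move (defined below), as a pair of g-patterns sharing a1,...,a4:
BETA = ([("L", ["a1", "a2", "x"]), ("A", ["x", "a4", "a3"])],
        [("Arrow", ["a1", "a3"]), ("Arrow", ["a4", "a2"])])

print(apply_move([("L", ["a", "b", "c"]), ("A", ["c", "b", "d"])], *BETA))
# [('Arrow', ['a', 'd']), ('Arrow', ['b', 'b'])]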

_________________________________________________________

OK, enough with these preparations, let’s see the moves.

We shall write the moves in two ways, which are equivalent.

When expressed with g-patterns, they are written as

LEFT pattern  <–name of move–>  RIGHT pattern

When expressed with patterns (i.e. graphically), they appear as

move_generic

The port names appear in blue. The name of the move appears in blue, the LEFT is on the left, the RIGHT is on the right, the move is figured by a blue arrow.

Pedantic, but perhaps useful rant.     For some reason, there are people who confuse graphs (which are clearly defined mathematical objects) with their particular representations (i.e. doodles), taking them “literally”.  Graphs are graphs and doodles are doodles. When people use doodles for reasoning with graphs, this is for economy of words, the famous “a picture is worth a thousand words”.  There is nothing wrong with using doodles for reasoning with graphs, as long as you know the convention used. Perhaps the convention is so intuitive that it would need 1000000 words to make it clear (for a machine), but there is a simple criterion which helps those who don’t trust their sight: you got it right if you understand what the doodle means at the graph level.

Look again at the previous picture, which shows you what a generic move looks like. The move (from LEFT to RIGHT) consists of:

  • cutting the arrows and naming them by free port variables (here 1,…, 5)
  • replacing the LEFT pattern by the RIGHT pattern such that it respects the free port variables (1 to 1, 2 to 2, …)
  • gluing back the arrows with the rest of the graph, which is not depicted in the move.

How simple is that?

To make it even simpler, we use the following visual trick: use the relative placements of the free ports in the doodle  as the port variables.

If you look carefully at the previous picture, then you notice that you may redraw it (without affecting what the doodle means at the graph level) by representing the free ports of the RIGHT in the same relative positions as the free ports of the LEFT.

The drawing would then look like this:

move_generic_2

Then you may notice that you don’t need to write the port variables on the doodles, because they have the same relative positions, so you may as well describe the move as:

move_generic_3

This is the convention used everywhere in the doodles from this blog (and it’s nothing special, it’s used everywhere).

I shall close the pedantic rant by saying that there is a deep hypocrisy in the claim that there is ANY need to spend so many words to make clear things clear, like the distinction between graphs and doodles, and relative positions and so on. I ask those who think that text on a page is clear and (a well done) doodle is vague: do you attach to your text an introduction which explains that you are going to use Latin letters, that no, the letter and its image in the mirror are not the same, that words are to be read from left to right, that space is the character which separates two words, that if you hit the end of a text line then you should pass to the next line, that a line is a sequence of characters separated by an invisible character eol, …..? All this is good info for making a text editor, but you don’t need to program a text editor first in order to read a book (or to program a text editor).  It would be just crazy, right?  Our brains use exactly the same mechanisms to parse a doodle as a text page and a doodle as a depiction of a graph. Our brains understand very well that if you change the text fonts then you don’t change the text, and so on. A big hypocrisy, which I believe has big effects in the divide between various nerd subcultures, like IT and geometers, with a handicapping effect which manifests in real life, in the form of the products IT is offering. Well, end of rant.

 

Combing moves. These moves are not present in the original chemlambda formalism, because they are needed at the level of the g-patterns. Recall from part I that Louis Kauffman proposed to use commutative polynomials as graphical elements, which brings the need to introduce the Arrow element Arrow[x,y]. This is the same as introducing invisible nodes in the chemlambda molecules (hence the passage from molecules to patterns). The combing moves are moves which eliminate (or add) invisible nodes in patterns. This corresponds in the graphical version to decorations (of those invisible nodes) on arrows of the molecules.

A combing move eliminates an invisible node (in the + direction) or adds an invisible node (in the – direction).

A first combing move is this:

Arrow[x,y] Arrow[y,z]  <–comb–> Arrow[x,z]

or graphically remove (or add) a (decoration of an) invisible node :

comb_1

Another combing move is:

Arrow[x,x] <–comb–> loop

or graphically an arrow with the in and out ports connected  is a loop.

Another family of combing moves is that if you connect an arrow to a port of a node then you can absorb the arrow into the port:

L[x,y,z] Arrow[u,x]  <–comb–> L[u,y,z]

L[x,y,z] Arrow[y,u] <–comb–> L[x,u,z]

L[x,y,z] Arrow[z,u] <–comb–> L[x,y,u]

______________________________________

FO[x,y,z] Arrow[u,x]  <–comb–> FO[u,y,z]

FO[x,y,z] Arrow[y,u] <–comb–> FO[x,u,z]

FO[x,y,z] Arrow[z,u] <–comb–> FO[x,y,u]

______________________________________

A[x,y,z] Arrow[u,x]  <–comb–> A[u,y,z]

A[x,y,z] Arrow[u,y] <–comb–> A[x,u,z]

A[x,y,z] Arrow[z,u] <–comb–> A[x,y,u]

______________________________________

FI[x,y,z] Arrow[u,x]  <–comb–> FI[u,y,z]

FI[x,y,z] Arrow[u,y] <–comb–> FI[x,u,z]

FI[x,y,z] Arrow[z,u] <–comb–> FI[x,y,u]

______________________________________
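This absorption family does not have to be memorized rule by rule. In code (an illustration of mine, with the same list representation as in the other sketches of this series), one pass of it is simply:

def comb_absorb(g):
    # one pass of Arrow absorption into node ports, + direction; in a
    # g-pattern an Arrow end glued to a node necessarily meets a port of
    # the opposite orientation, so no direction check is needed
    g = [(n, list(p)) for n, p in g]
    for arr in [e for e in g if e[0] == "Arrow"]:
        u, v = arr[1]
        for node in g:
            if node[0] == "Arrow" or node is arr:
                continue
            if u in node[1]:             # node out port u: absorb the Arrow
                node[1][node[1].index(u)] = v
                g.remove(arr)
                break
            if v in node[1]:             # node in port v: absorb the Arrow
                node[1][node[1].index(v)] = u
                g.remove(arr)
                break
    return g

print(comb_absorb([("L", ["x", "y", "z"]), ("Arrow", ["u", "x"])]))
# [('L', ['u', 'y', 'z'])]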

Now, more interesting moves.

The beta move.     The name is inspired by the beta reduction of lambda calculus (explanations later).

L[a1,a2,x] A[x,a4,a3]   <–beta–> Arrow[a1,a3] Arrow[a4,a2]

or graphically

beta_move_exp

If we use the visual trick from the pedantic rant, we may depict the beta move as:

beta_move_exp_2

i.e. we use as free port variables the relative positions  of the ports in the doodle.  Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graphical level. The chemlambda graphs are not planar graphs.

The FAN-IN move. This is a move which resembles the beta move.

FI[a1,a4,x] FO[x,a2,a3]

<–FAN-IN–>

Arrow[a1,a3]  Arrow[a4,a2]

(I wrote it like this because it does not fit in one line)

Graphically, with the obvious convention from the pedantic rant, the move is this:

fanin_move_exp

The FAN-OUT moves.  There are two moves: CO-COMM (because it resembles a diagram which expresses co-commutativity) and CO-ASSOC (same reason, but for co-associativity).

FO[x,a1,a2]  <–CO-COMM–> FO[x,a2,a1]

and

FO[a1,u,a2] FO[u,a3,a4]

<-CO-ASSOC->

FO[a1,a3,v] FO[v,a4,a2]

or graphically:

convention_3

The DIST moves. These are called distributivity moves. Remark that the LEFT pattern is simpler than the RIGHT pattern in both moves.

A[a1,a4,u] FO[u,a2,a3]

<–DIST–>

FO[a1,a,b] FO[a4,c,d] A[a,c,a2] A[b,d,a3]

and

L[a1,a4,u] FO[u,a2,a3]

<–DIST–>

FO[a1,a,b] FI[c,d,a4] L[a,c,a2] L[b,d,a3]

 

or graphically:

convention_6

The LOCAL PRUNING moves. These are used with the termination node. There are four moves:

FO[a1,a2,x] T[x] <–LOC-PR–> Arrow[a1,a2]

L[a1,x,y] T[x] T[y] <–LOC-PR–> T[a1]

FI[a1,a2,x] T[x] <–LOC-PR–> T[a1] T[a2]

A[a1,a2,x] T[x] <–LOC-PR–> T[a1] T[a2]

 

or graphically

convention_4
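To close the list: except the combing family, all the moves of this post fit in a small table of LEFT/RIGHT pairs of g-patterns, and the condition from the definition of a move, that LEFT and RIGHT share the same free in and out port variables, can be checked mechanically (a sketch of mine; the orientation table just transcribes the grammar from part I):

PORT_DIRS = {"L": "ioo", "FO": "ioo", "A": "iio", "FI": "iio",
             "Arrow": "io", "T": "i"}

MOVES = {
    "beta":      ([("L", ["a1","a2","x"]), ("A", ["x","a4","a3"])],
                  [("Arrow", ["a1","a3"]), ("Arrow", ["a4","a2"])]),
    "FAN-IN":    ([("FI", ["a1","a4","x"]), ("FO", ["x","a2","a3"])],
                  [("Arrow", ["a1","a3"]), ("Arrow", ["a4","a2"])]),
    "CO-COMM":   ([("FO", ["x","a1","a2"])], [("FO", ["x","a2","a1"])]),
    "CO-ASSOC":  ([("FO", ["a1","u","a2"]), ("FO", ["u","a3","a4"])],
                  [("FO", ["a1","a3","v"]), ("FO", ["v","a4","a2"])]),
    "DIST-A":    ([("A", ["a1","a4","u"]), ("FO", ["u","a2","a3"])],
                  [("FO", ["a1","a","b"]), ("FO", ["a4","c","d"]),
                   ("A", ["a","c","a2"]), ("A", ["b","d","a3"])]),
    "DIST-L":    ([("L", ["a1","a4","u"]), ("FO", ["u","a2","a3"])],
                  [("FO", ["a1","a","b"]), ("FI", ["c","d","a4"]),
                   ("L", ["a","c","a2"]), ("L", ["b","d","a3"])]),
    "LOC-PR-FO": ([("FO", ["a1","a2","x"]), ("T", ["x"])],
                  [("Arrow", ["a1","a2"])]),
    "LOC-PR-L":  ([("L", ["a1","x","y"]), ("T", ["x"]), ("T", ["y"])],
                  [("T", ["a1"])]),
    "LOC-PR-FI": ([("FI", ["a1","a2","x"]), ("T", ["x"])],
                  [("T", ["a1"]), ("T", ["a2"])]),
    "LOC-PR-A":  ([("A", ["a1","a2","x"]), ("T", ["x"])],
                  [("T", ["a1"]), ("T", ["a2"])]),
}

def free_ports(pat):
    # (FV_in, FV_out): the free port variables on "in" and on "out" ports
    occ = {}
    for name, ports in pat:
        for v, d in zip(ports, PORT_DIRS[name]):
            occ.setdefault(v, []).append(d)
    free = {v: ds[0] for v, ds in occ.items() if len(ds) == 1}
    return ({v for v, d in free.items() if d == "i"},
            {v for v, d in free.items() if d == "o"})

for name, (left, right) in MOVES.items():
    assert free_ports(left) == free_ports(right), name
print("every move preserves FV_in and FV_out")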

 

____________________________________________________________

Lambda calculus and the fixed point combinator in chemlambda (II)

This is the second  (continuing from part I)  in a series of expository posts where we put together in one place the pieces from various places about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which was accepted at the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

1.  The chemlambda formalism continued: molecules, patterns and g-patterns.

 

Chemlambda works with graphs which are called “molecules”. In the last post a grammar version of molecules was also proposed, based on the idea that a molecule is made of a finite number of graphical elements, each graphical element having ports, the ports being marked with “port variables”; two ports being connected (in the graphical version) is the same as the repetition of a port variable in the grammar version of a molecule.

Here are the graphical elements, along with their grammar versions:

  • lambda node

lambda_graph_elem

 

  • fanout node

fanout_graph_elem

  • application node

appl_graph_elem

  • fanin node

fanin_graph_elem

  • arrow

arrow_graph_elem

  • termination node

termin_graph_elem

  • loop

loop_graph_elem

There is only one loop element. The orientation of the loop, as represented in a drawing, is not relevant!

_________________________________________________________

A chemlambda  graph   is any graph made by a finite number of graphical elements, such that there is no conflict of orientation of arrows (i.e. the ports named “in” may connect only with the ports named “out”).

By convention, an arrow graphical element which has no free port (i.e. which has the middle.in port and the middle.out port connected) is represented as an arrow between “invisible nodes”. The ports of an arrow element which are connected are called “invisible ports”.

A molecule is a chemlambda graph without invisible nodes.

By convention, we identify a chemlambda graph with the molecule obtained  by erasing the invisible nodes.

A pattern is a chemlambda graph with the free ports and invisible nodes decorated with different port variables.

Let’s give a name for the grammar version of a pattern.

A mess is any finite multiset of graphical elements in grammar version. The port variables of a mess are the port variables which appear as arguments in the graphical elements of the mess. The set of port variables of a mess M is denoted by V(M).

A g-pattern is a mess with the properties:

  • any port variable appears at most twice,
  • if a port variable appears twice then it appears in a pair of “in” and “out” ports.

Simple examples:

  • A[x,x,y] is a mess which is not a g-pattern, because the port variable x appears in TWO “in” ports
  • A[x,y,x]  and A[y,x,x] are g-patterns
  • FO[x,y,y] is a mess which is not a g-pattern, because the port variable y appears in TWO “out” ports.
  • L[a,b,c] A[b,d,a] A[x,y,d] is a g-pattern
  • L[a,b,c] A[b,d,a] A[a,y,d]  is a mess which is not a g-pattern, because the port variable “a” appears in 3 places
  • L[u,b,c] A[b,d,a] A[z,y,d] FO[a,u,z] is a g-pattern (all these examples are checked mechanically in the sketch right after this list)
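These examples can be checked mechanically. A minimal Python sketch (my illustration; the orientation table just transcribes the BNF grammar of the graphical elements, with "i" for an in port and "o" for an out port):

PORT_DIRS = {"L": "ioo", "FO": "ioo", "A": "iio", "FI": "iio",
             "Arrow": "io", "T": "i", "loop": ""}

def is_g_pattern(mess):
    # every port variable appears at most twice, and a double occurrence
    # pairs one "in" port with one "out" port
    seen = {}
    for name, ports in mess:
        for v, d in zip(ports, PORT_DIRS[name]):
            seen.setdefault(v, []).append(d)
    return all(len(ds) == 1 or sorted(ds) == ["i", "o"]
               for ds in seen.values())

print(is_g_pattern([("A", ["x", "x", "y"])]))    # False
print(is_g_pattern([("A", ["x", "y", "x"])]))    # True
print(is_g_pattern([("L", ["a", "b", "c"]), ("A", ["b", "d", "a"]),
                    ("A", ["x", "y", "d"])]))    # True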

 

 

The set of free variables of a g-pattern M is the set of port variables which appear only once. This set is denoted by FV(M) and it decomposes into a disjoint union of FV_in(M) and FV_out(M): the free port variables which appear in an “in” port and the free port variables which appear in an “out” port.

There are g-patterns which have an empty set of port variables: for example   V(loop)  is the empty set.

The set of invisible variables of a g-pattern M, denoted by Inv(M), is made of those variables of M which are not free and which appear in one of the ports of an Arrow element.
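Continuing the sketch (the PORT_DIRS table is repeated so that the snippet runs on its own), FV_in(M), FV_out(M) and Inv(M) are computed like this; the g-pattern used in the demo is a hypothetical one with the same port data as the example in the figure below:

PORT_DIRS = {"L": "ioo", "FO": "ioo", "A": "iio", "FI": "iio",
             "Arrow": "io", "T": "i", "loop": ""}

def occurrences(g):
    occ = {}
    for name, ports in g:
        for v, d in zip(ports, PORT_DIRS[name]):
            occ.setdefault(v, []).append((name, d))
    return occ

def FV_in(g):
    return {v for v, o in occurrences(g).items() if len(o) == 1 and o[0][1] == "i"}

def FV_out(g):
    return {v for v, o in occurrences(g).items() if len(o) == 1 and o[0][1] == "o"}

def Inv(g):
    # the non-free variables with an occurrence on a port of an Arrow element
    return {v for v, o in occurrences(g).items()
            if len(o) == 2 and any(name == "Arrow" for name, _ in o)}

g = [("Arrow", ["a1", "a2"]), ("FO", ["a2", "a4", "a3"]), ("Arrow", ["a3", "a5"])]
print(FV_in(g), FV_out(g), Inv(g))
# FV_in {'a1'}, FV_out {'a4', 'a5'}, Inv {'a2', 'a3'} (set printing order may vary)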

As an illustration, consider this:

ex_pattern

  • up is a pattern, with free ports decorated by a1, a4, a5 and with invisible nodes decorated by a2, a3
  • in the middle is the corresponding g-pattern, with one free port variable of type “in”, a1, the invisible variables a2, a3, and two free port variables of type “out”, a4, a5
  • down is the molecule obtained from the pattern at the top, with invisible nodes erased and with no decoration of its free ports.

 

_________________________________________________________

Lambda calculus and the fixed point combinator in chemlambda (I)

This is the first in a series of expository posts where we put together in one place the pieces from various places about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation as easy to follow as possible, particularly by trying to make it self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which was accepted at the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

1.  The chemlambda formalism.

Chemlambda has been introduced with the name “chemical concrete machine” (as an allusion to Berry and Boudol’s CHAM)  in the article M. Buliga, Chemical concrete machine, arXiv:1309.6914 [cs.FL] , also available on figshare here: (doi).

 

Chemlambda is a graph-rewrite system.  From the linked wiki page, only slightly edited:

“Graph transformation, or graph rewriting, concerns the technique of creating a new graph out of an original graph algorithmically.

Graph transformations can be used as a computation abstraction. The basic idea is that the state of a computation can be represented as a graph, further steps in that computation can then be represented as transformation rules on that graph. Such rules consist of an original graph, which is to be matched to a subgraph in the complete state, and a replacing graph, which will replace the matched subgraph.

Formally, a graph rewriting system usually consists of a set of graph rewrite rules of the form L \rightarrow R, with L being called pattern graph (or left-hand side) and R being called replacement graph (or right-hand side of the rule). A graph rewrite rule is applied to the host graph by searching for an occurrence of the pattern graph (pattern matching) and by replacing the found occurrence by an instance of the replacement graph.”

In order to define chemlambda we need two ingredients:

  • the graphs used in the formalism
  • the graph rewrites.

 

A chemlambda graph (aka a molecule) is any graph made by a finite number of the following graphical elements:

4_chem

A BNF form would be:

<graphical-element> ::= <lambda> | <fanout> |  <appl> | <fanin> | <arrow>  | <loop> | <termin>

<middle.in>::= port  variable

<middle.out>::= port  variable

<left.in> ::= port variable

<left.out> ::= port variable

<right.in> ::= port variable

<right.out>::= port variable

<lambda>::= L[<middle.in>,<left.out>,<right.out>]

<fanout>::= FO[<middle.in>,<left.out>,<right.out>]

<appl>::= A[<left.in>,<right.in>,<middle.out>]

<fanin>::=FI[<left.in>,<right.in>,<middle.out>]

<arrow>::= Arrow[<middle.in>,<middle.out>]

<loop>::= loop

<termin>::= T[<middle.in>]

This notation is inspired by Louis Kauffman’s proposal to use commutative polynomials for the graphical elements (then, the variables of the polynomials play the roles of the ports). On July 3rd 2014 Louis hacked the Mathematica symbolic reduction for commutative polynomial algebra in order to reduce the fixed point combinator in GLC automatically.  I take his approach and use it here for the grammar notation of chemlambda molecules.

Let’s see first the names of the elements, then let’s discuss more details about them.

  • (a) is the lambda node, it has one in arrow, called the port middle.in and two out arrows, which are NOT interchangeable, called left.out (the one from the left of the middle.in arrow port) and right.out (the one from the right of middle.in port).
  • (b) is the fanout node. Like the lambda node, it has one in arrow, called the port middle.in, and two out arrows, called the ports left.out and right.out, following the same graphical conventions for representing it as previously.
  • (c) is the application node. It has only one arrow out, called the port middle.out, and two in arrows, which are NOT interchangeable. The two arrow ports are called left.in (the one from the left of the middle.out port) and right.in (the one from the right of the middle.out port).
  • (d) is the fanin node. It has the same configuration of ports as the application node
  • (e) there are three graphical elements there, let’s take them in order.  (up) is an arrow. In chemlambda arrows with one or both ends free (i.e. not connected to a chemlambda node) are allowed. (middle) Loops with no nodes are also accepted.  (down) The termination node, with only one in port.

A  shorter, geometrical way to say all this is that the 3valent nodes are all locally planar.

A chemlambda graph is called a “molecule”, and it is made of a finite number of graphical elements, i.e.

<molecule> ::= <graphical-element> | <molecule> <molecule>

with the convention that:

  • the order of the writing of the graphical elements of the molecule does NOT matter (one can permute the graphical elements in the grammar notation of the molecule)
  • any in port is connected with at most one out port
  • any out port is connected with at most one in port
  • any concatenation of two arrows is an arrow or a loop
  • any concatenation of an arrow and a port is a port.

If we use a grammar for this (some might like the 1000 words instead of the picture) that means that a <molecule> is well written if:

  • any port variable appears at most twice; when it appears twice then one of the appearances is a  port in (i.e. <middle.in> |  <left.in>  | <right.in>) and the other appearance is a port out (i.e. <middle.out> | <left.out> | <right.out>)
  • when a port variable appears only once then we call it a free port variable. The collection of free port variables (in grammar notation) is denoted by FV(<molecule>).  In the graphical notation we shall add, when needed, the free port variables in blue. Notice however that the graphical notation will make a difference between molecules, which are graphs with free ports NOT tagged with a port variable, and patterns, which are graphs with free ports tagged with different port variables. This difference does not exist at the level of the grammar notation (alas!).
  • for the concatenation of arrows, or of arrows and ports, this means that at the grammar level we have to introduce some combing rewrites, more about this later.
  • at the level of the grammar we have all the kitchen work with port variables, which have to be well managed and renamed consistently. At the graphic level this problem does not exist, with the exception of port variables which are free, which may be used for the definition of graph rewrites. (A small parser doing this bookkeeping is sketched right after this list.)
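For those who prefer the 1000 words in executable form, here is a small sketch (mine; the loop element, having no ports, is simply ignored by this parser) which reads the grammar notation and does the port variable bookkeeping:

import re

def parse(text):
    # read e.g. "L[a,b,e] A[e,d,c]" into a list of (node, ports) pairs
    return [(name, ports.replace(" ", "").split(","))
            for name, ports in re.findall(r"(\w+)\[([^\]]*)\]", text)]

def free_port_variables(text):
    counts = {}
    for name, ports in parse(text):
        for v in ports:
            counts[v] = counts.get(v, 0) + 1
    return sorted(v for v, n in counts.items() if n == 1)

# the molecule from the first row of the examples below:
print(free_port_variables("L[a,b,e] A[e,d,c]"))    # ['a', 'b', 'c', 'd']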

Question: why are only two colours needed for the 3valent nodes in the graphical notation?

Answer: because the lambda node and the fanout node have the same configuration of ports, so they must be distinguished by colour: we colour the first red and the second green. Likewise the fanin node and the application node have the same configuration of ports, so we colour the first red and the second green.

Let’s see some simple examples. (You can click on the figure to make it bigger.)

molecules_ex

First row: at the left is a molecule in graphical notation. At the right is the same molecule in grammar notation.  Look at the arrow which appears vertical in the graphical notation from the left, where is it in the grammar notation? Well look at the port variable “e”. It appears in the right.out port of the node lambda L[a,b,e] and also in the left.in port of the node application A[e,d,c].

In the grammar notation the same graph may be written as

L[a,b,e] Arrow[e,f] A[f,d,c]

but this is for the next time, when we shall talk about the combing rewrites. (What will happen is that the Arrow[e,f] will disappear, because its middle.in port is connected with the out port named “e” of the lambda node, which makes the Arrow element redundant.)

Second row:   At the left a graph made by two arrows and two loops. Notice two things:

  • it does not matter if the arrows cross; there is no node there, only an artifact of drawing the graph in the plane of the screen,
  • also, even if on the screen we see two oriented loops with opposed orientations in the plane of the screen, as graphical elements the two loops (which have no node, in particular no 3valent node) are the same. The “locally planar” constraint acts only as a cyclical order of the ports of the 3valent nodes (which, combined with the fact that each of the four 3valent nodes has one special port, the one called “middle”, gives a non-ambiguous notion of “left” and “right” ports).

At the right we see the grammar notation of the same graph.

Third row: Here is a more substantial graph (at the left) and its grammar notation at the right. There are 3 nodes: fanout, lambda and termination nodes. Notice that the port variable “a” appears twice in L[a,b,a], which corresponds to the fact that the right.out port and the middle.in port of that lambda node are connected (both being tagged with the port variable “a”).

_________________________________________________________

Here are the 1000 words which are needed to properly explain the notion of a molecule in chemlambda (i.e. if you don’t trust your sight). Such a description is of course useful for writing programs.

After this straightforward, and perhaps extremely boring, description (it can be immediately deduced from the short definition), next time we shall see the graph rewrites of chemlambda.

_________________________________________________________

 

Halfcross way to pattern recognition (in zipperlogic)

Inspired by the Zipper logic and knot diagrams post, here is an alternate encoding of chemlambda.

This is part of the series on zipper logic (branded in this post as  “zipperlogic”).  The last post is Zipper logic (VI) latest version, detailed.

As can be seen in that post, zipperlogic is equivalent with chemlambda, but it has two interesting qualities: is more intuitive and it has the CLICK move.

The CLICK move transforms a pair of opposed half-zippers into a zipper, which is then unzipped with the ZIP move. While the ZIP move is equivalent with the graphic beta move, there is apparently no correspondent of the CLICK move.

The CLICK move becomes useful when we use other realizations of the zipperlogic than chemlambda.    In the one where half-zippers are realized as towers of crossings, the CLICK move turns out to be a pattern recognition move, and the ZIP move becomes the familiar R2 (Reidemeister 2) move, applied to that pattern.

That is why CLICK is interesting: because in order to apply moves in chemlambda, we have first to identify the patterns where these moves may be used.

Now, I want to justify this in the following.  I shall not aim for another realization of zipperlogic, but for one of chemlambda, inspired by the one of zipperlogic seen as acting on towers of crossings.

I shall use half-crossings.  Recall that in the post Slide equivalence of knots and lambda calculus (I) I wrote:

Louis Kauffman proposes in his book Knots and Physics  (part II, section “Slide equivalence”), the notion of slide equivalence. In his paper “Knotlogic” he uses slide equivalence (in section 4) in relation to the self-replication phenomenon in lambda calculus. In the same paper he is proposing to use knot diagrams as a notation for the elements and operation of a combinatory algebra (equivalent with untyped lambda calculus).

[…]

Obviously, we have four gates, like in the lambda calculus sector of the graphic lambda calculus. Is this a coincidence?

 

So this post can be seen as a try to answer this question.

But the halfcrossings which I use here are different than the ones defined by Louis Kauffman. There might be a way to transform ones into the others, but I have not found it yet.

____________________________

Here is the encoding of chemlambda by halfcrossings:

halfcross_1

Remark that each of the halfcrossings has a dangling, vanishing thread, like in the previous post Bacterial conjugation is beta reduction.

[I shall come back in later posts to the relevance of this formalism for horizontal gene transfer.]

Look at this as a new notation for chemlambda nodes and just replace  the green and red nodes by these halfcrossings in order to get the right moves for the halfcrossings.

With an exception: the CLICK move. This move consists of joining neighbouring dangling threads, in two situations, one related to the beta move, the other related to the FAN-IN move.

Look how the beta move appears with halfcrossings and the CLICK move used for pattern recognition (in the figure this is called “pattern id”):

halfcross_2

Nice, right?

 

Now, the other CLICK move, involved in the identification of the pattern appearing in the FAN-IN move.

halfcross_3

In a future post I shall look at the DIST moves, in this encoding.

_________________________________

 

Bacterial conjugation is beta reduction

I come back to the idea from the post   Click and zip with bacterial conjugation , with a bit more detail. It is strange, maybe, but perhaps it is less strange than many other ideas circulating on the Net around brains and consciousness.

 

The thing is that bacteria can’t act based on semantics; they are more stupid than us. They have physical or chemical mechanisms which obviate the need to use semantics filters.

Bacteria are much simpler than brains, of course, but the discussion is relevant to brains as collections of cells.

The idea: bacterial conjugation is a form of  beta reduction!

On one side we have a biological phenomenon, bacterial conjugation. On the other side we have a logic world concept, beta reduction, which is the engine that moves lambda calculus, one of the two pillars of computation.

What is the relation between semantics, bacterial conjugation and beta reduction?

Lambda calculus is a rewrite system, with the main rewrite being beta reduction. A rewrite system, basically, says that whenever you see a certain pattern in front of you then you can replace this pattern by another.

Graphic lambda calculus is a graph rewrite system which is more general than lambda calculus. A graph rewrite system is like a rewrite system which uses graphs instead of lines of text, or words. If you see certain graphical patterns then you can replace them by others.

Suppose that Nature uses (graphical) rewrite systems in the biological realm, for example suppose that bacteria interactions can be modeled by a graph rewrite system. Then there has to be a mechanism which realizes the recognition of a pattern involving two bacteria in interaction.

When two bacteria interact there are at least two ingredients:  spatial proximity (SP) and chemical interaction (CI).

SP is something we can describe and think about easily, but from the point of view of a microbe our easy description is void. Indeed, two bacteria in SP can’t be described as pairs of coordinate numbers which are numerically close, unless each of the microbes has an internal representation of a coordinate system, which is stupid to suppose. Moreover, I think it is too much to suppose that each microbe has an internal representation of itself and of its neighbouring microbes. This is a kind of bacterial cartesian theater.

You see, even trying to describe what could be SP for a pair of bacteria does not make much sense.

CI happens when SP is satisfied (i.e. for bacteria in spatial proximity). There is of course a lot of randomness in this, which has to be taken into account, but it does not change the fact that SP is something hard to make sense of from the pov of bacteria.

In Distributed GLC we think about bacteria as actors (and not agents) and about SP as connections between actors. Those connections between actors change in a local, asynchronous way, during the CI (which is the proper graph rewrite, after the pattern between two actors in SP is identified).

In this view, SP between actors, this mysterious almost philosophical relation which is forced upon us after we renounce at the God eye point of view, is described as an edge in the actors diagram.

Such an edge, in Distributed GLC, is always related to an oriented edge (arrow) in the GLC (or chemlambda) graph which is doing the actual computation. Therefore, we see that arrows in GLC or chemlambda graphs (may) have more interpretations than being chemical bonds in (artificial) chemistry molecules.

Actually, this is very nice, but hard to grasp: there is no difference between CI and SP!

Now, according to the hypothesis from this post and from the previous one, the mechanism which is used by bacteria for graph rewrite is to grow pili.

The following image (done with the tools I have access to right now) explains more clearly how bacterial conjugation may be (graphic) beta reduction.

Image002

In the upper part of the figure we see the lambda abstraction node (red) and the application node (green) as encoded by crossings. They are strange crossings, see the post  Zipper logic and knot diagrams . Here the crossings are represented with a half of the upper passing thread erased.

Now, suppose that the lambda node is (or is managed by) a bacterial cell and that the application node is (managed by) another bacterial cell. The fact that they are in SP is represented in the first line under the blue separation line in the picture. At the left of the first row (under the blue horizontal line), SP is represented by the arrow which goes from the lambda node (of the bacterium at left) to the application node (of the bacterium at right). At the right of the first row, this SP arrow is represented as the upper arc which connects the two crossings.

Now the process of pattern recognition begins. In Nature, this is asymmetric: one of the bacterial cells grows a pilus. In this diagrammatic representation, things are symmetric (maybe a weakness of the description). The pilus growth is represented as the CLICK move.

This brings us to the last row of the image. Once the pattern is recognized (or in place) the graph reduction may happen by the ZIP move. In the crossing diagram this is represented by an R2 move, which itself is one of the ways to represent the (graphic) beta move.

Remark that in this process we have two arcs: the upper arc from the RHS crossing diagram (i.e. the arc which represents the SP) and the lower arc which appeared after the CLICK move (i.e. the pilus connecting the two bacteria).

After the ZIP move we get two (physical) pili; this corresponds to the last row in the diagram of bacterial conjugation, let me reproduce it again here from the wiki source:

 

661px-Conjugation.svg

After the ZIP move the arc which represents SP is representing a pilus as well!

____________________________________

How to use chemlambda for understanding DNA manipulations

UPDATE: Probably RNA is better than DNA. The best entry point is that document. More in the references at the end of that doc. Or in the quine graphs repository. Or, if you like just to see animations, there are aplenty (more than 350) in this collection.

________________

… or the converse: how to use DNA manipulations to understand chemlambda, this is a new thread starting with this post.

This is a very concrete, nice project, I already have some things written, but it is still in a very fluid form.

Everybody is invited to work with me on this. It would be very useful to collaborate with people who have knowledge about DNA and the enzymes involved in the processes around DNA.

So, if you want to contribute, then you can do it in several ways:

  • by dedicating a bit of your brain power to concrete parts of this
  • by sending me links to articles which you have previously  read and understood
  • by asking questions about concrete aspects of the project
  • by proposing alternative ideas, in a clear form
  • by criticizing the ideas from here.

I am not interested just to discuss about it, I want to do it.

Therefore, if you think that there is this other project which does this and that with DNA and computation, please don’t mention it here unless you have clear explanations about the connections with this project.

Don’t use authority arguments and name dropping, please.

____________________________________

Now, if anybody is still interested to learn what is this about, after the frightening introduction, here is what I am thinking.

There is a full load of enzymes like this and that,  which cut, link, copy, etc. strings of DNA.  I want to develop a DNA-to-chemlambda dictionary which translates what happens in one world into the other.

This is rather easy to do. We need a translation of arrows and the four nodes from chemlambda into some DNA form.

Like this one, for example:

dna_1,fig

Then we need a translation of the chemlambda moves (or some version of those, see later) into processes involving DNA.

There is plenty of stuff in the DNA world to do the simple things from chemlambda. In turn, because chemlambda is universal, we get a very cheap way of defining DNA processes as computations.

Not as boolean logic computations. Forget about TRUE, FALSE and AND gates.  Think about translating DNA processes into something like lambda calculus.

I know that there is plenty of research about using DNA for computation, and there is also plenty of research about relations between lambda calculus and chemistry.

But I am not after some overarching theory which comprises everything DNA, chemistry and lambda calculus.

Instead, I am after a very concrete look at tiny parts of the whole huge field, based on a specific formalism of chemlambda.

It will of course turn out that there are many articles relevant for what will be found here and there will be a lot of overlap with research already done.

Partially, this is one of the reasons I am searching collaborations around this, in order to not invent wheels all the time, due to my ignorance.

__________________________________________

Tibit game with two players: trickster and webster

Here is a version of the tibit game with two players, called “trickster” and “webster”.

The webster is the first player. The trickster is the second player.

The webster manipulates (i.e. modifies) a “web”, which is a

  • trivalent graph
  • with oriented arrows
  • with a cyclic orientation of arrows around any node (i.e. locally planar)
  • possibly with free arrows, i.e. ones with a free tail or a free tip.

Tokens.   The webster  has one type of token, called a “termination token”.  The trickster has two types of tokens, one yellow, another magenta.

Moves.  When his turn comes,  any player can do one of the moves listed further, or he may choose to pass.

tibbit_5

Some of the webster moves are “reversible”, meaning that the webster may do them in both directions. Let’s say that the “+” direction is the one from left to right in the graphical moves; it also means that the webster may put a termination token (but not take one). The “-” direction is from right to left in the graphical moves; it also means that the webster may take a termination token (but not put one).

tibbit_6

The loop rule.  It is possible that, after a move by one of the players, the web is no longer a trivalent graph, because a loop (an arrow which closes itself, without any node or token on it) appears. In such a case the loop is erased before the next move.

_________________________

This is a collaborative game.

There are two webs, with tokens on them, the first called the “datum” and the second called the “goal”.

The players have to modify the datum into the goal, by playing collaboratively, in the following way.

The game has two parts.

Preparation.   The players start from a given web with given tokens placed on it (called the “datum”). Further, the webster builds a web which contains the datum and the trickster places tokens on it, but nowhere in the datum.

Eventually they obtain a larger web which contains the datum.

Alternatively, the players may choose an initial web, with tokens on it, which contains the datum.

Play.   Now the webster can do only the “+” moves and the trickster can’t put any token on the web.  The players try to obtain a web, with tokens on it, which contains the goal by using their other moves.

_____________________________________

Un-currying with Actors

This is an illustration of a sequential computation with actors and graphic lambda calculus, based on the ideas from Actors for the Ackermann machine. It describes the inverse of the currying process described in Currying by using zippers and an allusion to the cartesian theater.
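As a side note in plain functional-programming terms (an analogy only, since here the computation happens on graphs, not on terms), currying and un-currying are the standard conversions below, shown as a small Haskell sketch; the play which follows is, roughly, the graph-rewrite counterpart of uncurry', performed step by step by the actors.

    -- curry' and uncurry' mirror the Prelude functions curry and uncurry.
    curry' :: ((a, b) -> c) -> a -> b -> c
    curry' f x y = f (x, y)

    uncurry' :: (a -> b -> c) -> (a, b) -> c
    uncurry' f (x, y) = f x y

    -- Example: uncurry' (+) (2, 3) evaluates to 5.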

Without further ado, let’s watch the play.

The preparation consists in introducing the cast and the initial actors diagram.

actor_curry_prep

The play starts. The actor c introduces a to b and dies, or maybe goes playing elsewhere.

actor_curry_1

We see the states of the actors a and b; the other main character, d, is waiting for his line, while the figuration b1, ..., bN wait in the back of the stage, in artful positions.

The rest of the process consists in unzipping a graphic lambda calculus zipper. Zippers are very useful, because the unzipping process is sequential, i.e. it can happen in only one way. Therefore, by using zippers we can force a naturally distributed parallel computation to happen in a sequential way!

actor_curry_N

At the end of the computation, the two actors a and b, which represent the two parts of the zipper, cancel one another. The graph D is now connected with its output to the address :out and with its inputs to the (addresses of the) figuration actors b1, ..., bN.

The graph D corresponds to the graph A from the post Currying by using zippers and an allusion to the cartesian theater, and the computation just described is the reverse of the one described in that post.

__________________________

As a side remark, graphic lambda calculus suggests that currying and un-currying should be eliminated at the preparation stage of a distributed computation. However, I took this example of computation with actors and GLC as a simple illustration and also for stressing that zippers can impose, if needed, an order of reduction moves in GLC.

__________________________

Hewitt Actor Model, lambda calculus and graphic lambda calculus

In Actor Model of Computation: Scalable Robust Information Systems, arXiv:1008.1459, Hewitt writes, at p. 7, that “lambda calculus cannot implement concurrency”. At pp. 13-14 he writes:

… the semantics of the lambda calculus were expressed using variable substitution in which the values of parameters were substituted into the body of an invoked lambda expression. The substitution model is unsuitable for concurrency because it does not allow the capability of sharing of changing resources.

Correct. However, graphic lambda calculus does not have this problem, because reductions don’t need evaluations. See The graphic beta move, with details to understand this better.
All the arguments listed further also apply verbatim to graphic lambda calculus! (Hewitt, loc. cit., p. 14)

Inspired by the lambda calculus, the interpreter for the programming language Lisp [McCarthy et. al. 1962] made use of a data structure called an environment so that the values of  parameters did not have to be substituted into the body of an invoked lambda expression.
Note that in the definition in ActorScript [Hewitt 2011] of the lambda-calculus below:

  • All operations are local.
  • The definition is modular in that each lambda calculus programming language construct is an Actor.
  • The definition is easily extensible since it is easy to add additional programming language constructs.
  • The definition is easily operationalized into efficient concurrent implementations.
  • The definition easily fits into more general concurrent computational frameworks for many-core and distributed computation.

Hewitt continues:

That Actors which behave like mathematical functions exactly correspond with those definable in the lambda calculus provides an intuitive justification for the rules of the lambda  calculus:

  • Lambda identifiers: each identifier is bound to the address of an Actor. The rules for free and bound identifiers correspond to the Actor rules for addresses.
  • Beta reduction: each beta reduction corresponds to an Actor receiving a message. Instead of performing substitution, an Actor receives addresses of its arguments.

Further I try to understand this, as a particular case of the  Chemical actors   sketch.

We have Actors, we have addresses and we have lambda identifiers (it’s about usual lambda calculus, not the better graphic lambda calculus). The idea, as far as I understand, is to bind lambda identifiers to the address of an Actor. The second idea is “each beta reduction corresponds to an Actor receiving a message”.

I want a way to identify Actors with graphs in GRAPH, or with molecules in MOLECULES if I wish to use chemlambda. Actually, after some thinking, it is better to think about Actors as a sort of decoration of the arrows of a graph, almost like a decoration with types. (I shall not use the epsilon gates further, concentrating on the pure logic side.)

actor_decor

We shall have actors A, B, etc., with actor names :A, :B. There are other names, like :?, which is a placeholder for indetermination (that’s just a name; if you don’t like this one then change it with 😕, or whatever); there are also the names :talk and :listen. These are not names of actors.

By using these rules we can decorate all half-arrows (meaning that we may have arrows with two decorations). For concreteness, look at the example of the combinator K.

actor_decor_K

I shall take another Actor named A and make it interact with the Actor K, in order to mimic the K U V \rightarrow U couple of reductions.
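For reference, the reductions being mimicked are the usual ones, with K = \lambda x . \lambda y . x (and with the standard capture-avoiding conventions):

K U V = (\lambda x . \lambda y . x) U V \rightarrow (\lambda y . U) V \rightarrow U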

actor_decor_another

Here “1” and “2” mean anything you want, from “they can be names of actors” to “I don’t care what follows next”.

Now, they are in position to communicate:

actor_decor_int

What happens next? The :listen:talk decoration indicates that a graphic beta reduction can be applied (cf. Hewitt 2nd suggestion). But we need a rule for decorations with names before and after the graphic beta move. Here is what I propose:

actor_decor_beta

We apply this for our example:

actor_decor_int_2

Is that nice or what?  The actor K lost one of its “lambda identifiers”, the actor A also simplified a bit, but they still talk to one another:

actor_decor_int_3

They did their job and disappeared. What happens next is a matter of other actors.

___________________________

Remark that graphic lambda calculus easily did what Hewitt wants. This example does not show any concurrency, though, because properly speaking there is none involved here. I only wanted to implement the 2 suggestions of Hewitt with graphic lambda calculus. But take examples where concurrency does appear, like here, to see that everything works really nicely. This will be explained in another post.

___________________________

Computing with space in the internet of things

My primary interest is “computing with space”. Some remarks by Allen Francom make me believe that it might be a way to give a “space” to the internet of things by using graphic lambda calculus. By that I mean the following: the internet of things is a huge collection of objects which are somewhere on Earth, each carrying a very small processor. So the objects are in the physical space, but they change their places all the time, and their processors are tiny, so one had better use tiny programs and simple executions for synchronizing their relative places with respect to one another.

That is a kind of “computing with space” and it fits well with the following problem with e-books raised by Mark Changizi, who says that e-books lack space, and gives as an example his (physical) library. He says he knows that book A is the one close to that calculus book he uses when he needs some formulae, and that’s the way he is going to find book A. In a virtual library, by contrast, there is no space; the relative position of an e-book with respect to others is irrelevant, because when you click a link it is like you teleport (and ignore space) from one place to another. My interpretation of this is: the way Changizi finds a book in his library is different from the way a robot Changizi would find the book, because the robot Changizi would need the coordinates of the book A with respect to a corner of the bookshelf, or some information like this, then it would go and fetch the book. On the contrary, the human Changizi(’s brain) acquired a geometric competence by accumulating these simple bits of knowledge about his library, namely that book A is near the calculus book, and the calculus book is used when he needs formulae, etc.

Is this a problem which the internet of things will have? Surely. Is this solved? I have no idea, but I know how to solve it, while also satisfying the “tiny programs and executions” constraint. This involves exactly that part (called by me the “emergent algebra sector”) which gives the kind of programs for using space in a very concentrated form, not needing “geometric expertise” to generate and manipulate them. Then, imagine that each object from the internet of things has a relative representation of some of its neighbours, under the form of a small graph, like the graphs from graphic lambda calculus or the molecules from the chemical concrete machine. These graphs are generated from simple rules concerning proximity relations (like the keys thing is on the couch thing, or the socks thing is in the drawer thing), more on that later, and changes in proximity relations are like moves acting on such tiny graph molecules. Then, if you want to know where you lost the keys thing, instead of endowing the keys with a GPS, you just need to evaluate (i.e. decorate, like in the discussions about decorations of graphic lambda calculus graphs) these tiny graphs and find out that the keys thing is in the drawer thing.
Here is a more mathematical explanation. Suppose you have a set T of things, objects. You want to describe how they are in space without using any “geometric expertise” (like a GPS coordinate system, for example).

As you know, the set of things T is the same as the trivial groupoid of things T^{2}, with arrows being pairs of things (a,b) and with composition of arrows being (a,b)(b,c) = (a,c). Now, let us first define what the “space” around one object from T is; call the object “e”. (The same works for each object.)

For defining this you take a commutative group (it could be (0,\infty), or it could be the free commutative group over an alphabet containing for example “NEAR” and “FAR”, such that NEAR^{-1} = FAR, etc.). Call this group the “scale group”. Then to any pair of objects (a,b) and any scale \varepsilon, associate a “virtual pair at scale epsilon” (a,b)_{\varepsilon} = (b(\varepsilon)a, b), where b(\varepsilon)a is like a term in lambda calculus constructed from the variable names “a” and “b”, only that it does not satisfy the lambda calculus definition, but the emergent algebra definition. Of course you can see such terms as graphs in graphic lambda calculus made with the epsilon gates, along with the emergent algebra moves from graphic lambda calculus.
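To have one concrete model in mind (my own illustration, not part of the construction above): in the simplest, Euclidean model of an emergent algebra, the dilation b(\varepsilon)a is the point b + \varepsilon (a - b), so at scale \varepsilon = 1 we recover a, and at small scales everything collapses towards b. A minimal Haskell sketch of such term-like objects with this particular evaluation (the names, and the use of Double for scales, are placeholders):

    -- Terms built from things and scales; Dil b eps a stands for b(eps)a.
    data Term = Thing String | Dil Term Double Term

    -- Evaluation in a hypothetical 1d Euclidean model: positions of things
    -- are given by pos, and b(eps)a = b + eps * (a - b).
    evalE :: (String -> Double) -> Term -> Double
    evalE pos (Thing x)   = pos x
    evalE pos (Dil b e a) = vb + e * (va - vb)
      where
        vb = evalE pos b
        va = evalE pos a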

Moreover, you can mix them with lambda calculus terms, as it is possible in graphic lambda calculus.

So we have (in our platonic universe, because in practice we can’t “have” an infinite set of all things possible) a collection of term-like objects made from a, b, etc and from scales epsilon, mu, etc. We may imagine them visually either as binary trees, like when we represent graphically compositions of operations,
or as graphs in graphic lambda calculus, if we apply to them the algorithm for eliminating variable names. This is a kind of collection of all programs for using a space (each one is a me for space). (Mind you that there is no constraint, like euclidean space, or other.) The space around one object, say e, is obtained from this construction applied to the groupoid of pairs of pairs, with arrows ((a,e),(b,e)), where the scale acts like this: ((a,e),(b,e))_{\varepsilon} = ((a,e)_{\varepsilon}, (b,e)_{\varepsilon}). (Remark: in the epsilon-deformed groupoid the composition of arrows is transported by the epsilon deformation, so the composition changes from trivial to more interesting.)
If you are still reading this and it is too long, the shorter description is that the space around an object e is defined in terms of virtual geometric relations (mes for space) between all the other things from the set T (which is finite, of course), with respect to e, and that each such virtual geometric relation is a small graph in graphic lambda calculus. More than this, suppose you want to translate from a virtual geometric relation to another (or from a virtual place to another). Such a “translation” is itself a small graph, actually made of only 4 nodes.
And here comes the final part: in order to know where the things from T are, one with respect to the others, it is enough to attribute to each pair of objects (a,b) a “virtual geometric relation” in the space around b, i.e. a graph. If the things move one with respect to the other, they can update their relative geometric relations by moves in graphic lambda calculus.
If you want to know where a thing is in the physical space, then you have to start from somewhere (some object) and decorate the virtual space relations, as they appear as graphs. This is like the evaluation of lambda calculus terms.

In conclusion, I am very interested in practical applications of this, and I can at least formulate a list of needs, some of them with an IT flavour, others applied math. As for the pure math needs, they are pure fun and play. All this together would lead to really interesting things. One of the things to keep in mind is that this would also endow the internet of things with a synthetic chemical connectome, as an extension of The chemical connectome of the internet.

___________________________

Better than extended beta move

A better alternative to the extended graphic beta move (proposed here and declared still in beta version in the graphic lambda calculus tutorial) is to couple the Reidemeister 2 move R2 with the graphic beta move. Here is how it can be done.

Let us define, for any scale parameter \varepsilon \in \Gamma, the following versions of the application gate and abstraction gate:

space_3

Remark that when \varepsilon =1 we can recover the usual application gate

space_4

and the usual abstraction gate

space_5

The graphic beta move and the move R2 can be coupled into the following nice move:

space_6

__________________

This will be used for making clear where and how space is encoded in graphic lambda calculus. Of course, it has to do with emergent algebras, seen in graphic lambda calculus. If you want to get ahead of the explanations and figure it out by yourself, then the following posts will help:

___________________

(in case you see ads here: they have nothing to do with me or with this blog)

Example: if-then-else in the chemical concrete machine

… along with a short discussion about what the chemical concrete machine can compute. I start with this and then go to the example.

Until now I proved that the graph rewriting system called the “chemical concrete machine” can do combinatory logic. Therefore it can compute anything a Turing machine can. I shall be more specific, because to the eyes of somebody who is not used to functional programming, this might seem outrageous.

It can compute anything which can be computed by using any programming language based on lambda calculus, like Haskell or Scala. And then some more (as regards only the things those languages can do without using any syntactic sugar, of course). The procedure is straightforward: translate the (essentially) combinatory logic terms into graphs and then let the enzymes (moves) do their magic. I shall give the example of the if-then-else structure a bit later.

There are, of course, interesting questions to ask, among them:

  • do there exist real molecules and real enzymes which react with one another like in the chemical concrete machine formalism? Here I need your help, dear chemists.
  • is this an example of a sequential or parallel computation?
  • what evaluation procedure is used in the chemical concrete machine?

The second question is the easiest to answer: it is a parallel computation. The molecules (graphs) are mixed in a reactor with a choice of enzymes and then all possible reactions can occur in parallel, at all times. (Realistic models will have a probabilistic part attached, but there is nothing special here, because the chemical concrete machine, initialized with molecules and enzymes, is just a chemical reaction network, so any procedure for attaching the probabilistic part to a chemical reaction network can also be used in conjunction with the chemical concrete machine.)
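To make the probabilistic picture concrete, here is a minimal sketch (entirely my own illustration; Molecule as a String and all the names used are hypothetical placeholders, not part of the formalism): the reactor is a multiset of molecules, enzymes are partial local rewrites, and one step fires one randomly chosen reaction site.

    import System.Random (randomRIO)

    type Molecule = String                      -- placeholder for a graph
    type Enzyme   = Molecule -> Maybe Molecule  -- a partial, local rewrite

    -- One reduction step: collect all reaction sites, fire one at random.
    step :: [Enzyme] -> [Molecule] -> IO [Molecule]
    step enzymes soup = do
      let sites = [ (i, m') | (i, m) <- zip [0 ..] soup
                            , enz <- enzymes
                            , Just m' <- [enz m] ]
      if null sites
        then pure soup                          -- no reaction is possible
        else do
          n <- randomRIO (0, length sites - 1)
          let (i, m') = sites !! n
          pure [ if j == i then m' else m | (j, m) <- zip [0 ..] soup ]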

But we can also prepare the initial molecules such that the computation is sequential. It suffices to use zippers. More on this later. For the moment it is worth mentioning that the chemical concrete machine (as well as graphic lambda calculus) is already proven to be more powerful than lambda calculus. Why? For at least two reasons:

  • lambda calculus, or combinatory logic, are just sectors of the formalisms, i.e. they correspond to only a part of what the formalisms can do. There are other sectors, as well, for example the tangle diagram sector.
  • they are naturally parallel, as long as we use only local moves, as is the case for the chemical concrete machine, for example. Indeed, I wrote that a graph is a “molecule”, but this is only a way of speaking, because molecules could be better identified with connected graphs. But in these formalisms the graphs are not supposed to be connected: any (finite) collection of graphs (i.e. molecules) is also a graph. The moves being local, there is no interference in the simultaneous application of several instances of the (local) moves in different places, for different molecules (connected subgraphs), or even for the same molecule, as long as the places where the moves are applied are different from one another. On the other hand, lambda calculus and combinatory logic are naturally sequential.

The third question, concerning the evaluation procedure, will also be explored in further posts. Care has to be taken here because there are no variables in these formalisms (which translates into less demand for different species of real molecules just for the need to name variables). So it is about the order of moves, right? The short answer is that it depends: sometimes the computation done by the chemical machine can be seen as greedy evaluation, sometimes as lazy evaluation.

Let me make again the point that somehow the chemical concrete machine formalism should be seen as part of the beautiful idea of algorithmic chemistry. So, it’s not so unearthly.

Finally, it is well known that lambda calculus and Turing machines are the two pillars of computation. For historical reasons chemists seem to concentrate only on the emulation of Turing machines (please correct me if I’m wrong). The main idea of algorithmic chemistry, as far as I understand, is that a sufficiently complex chemical network has the capacity to do lambda calculus. But if you are determined to use only Turing machines for chemical computing, then, supposing that the algorithmic chemistry idea is true, you have to translate the natural language of lambda calculus into the Turing machine frame. This is a tarpit. Very quickly it becomes very hard to follow. Instead, why not use lambda calculus as it is for the really powerful applications of chemical computing, and in parallel use one of the excellent languages for simulating chemical computing in silico?

_________________________

The if-then-else construct has already been explored in graphic lambda calculus, see Teaser: B-type neural networks in graphic lambda calculus (II). Here I shall do a much simpler example, just by making the right translations, as explained in the beginning of this post.

In lambda calculus, there are terms called TRUE, FALSE and IFTHENELSE, which are the Church encodings of the booleans true, false and if-then-else. The associated graphs in the chemical concrete machine are:

bckw_111

Take two other molecules A, B, with one exit each; in particular you may think that they correspond to terms in lambda calculus (but it is not mandatory). Then IFTHENELSE TRUE A B should become A. In the chemical concrete machine, with only beta+ enzymes, look what happens:

bckw_112

Along this chain of reactions there is no other choice than the one from the figure. Why? Because essentially at every step there is only one reaction site available to the enzyme beta+ (of course, in the region of the reactor represented in the figure). The result is, unsurprisingly, compatible with the lambda calculus version, with the exception that A and B are not supposed to be (graphs corresponding to) lambda terms. They can be anything, for example from the family of “other molecules”.

In lambda calculus IFTHENELSE FALSE A B should become (by reductions) B. In the chemical concrete machine look what happens:

bck_113

The previous remarks apply here as well.
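For comparison, here is the same computation in term form, as a small Haskell sketch (my illustration only; the machine works on graphs, not on terms, and ifThenElse below is just the Church encoding):

    -- Church booleans: true selects its first argument, false its second.
    true, false :: a -> a -> a
    true  x _ = x
    false _ y = y

    -- IFTHENELSE applies the boolean to the two branches.
    ifThenElse :: (a -> a -> a) -> a -> a -> a
    ifThenElse p x y = p x y

    -- ifThenElse true  "A" "B"  reduces to "A"
    -- ifThenElse false "A" "B"  reduces to "B"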

With a little bit of imagination, if we look closer at what TRUE and FALSE are doing, then we can adapt the IFTHENELSE to what I’ve called a B-type NN synapse, and obtain a molecule which releases the medicine A under the detection of a certain molecule, and the medicine B under the detection of another.

_____________________

Return to the chemical concrete machine tutorial.

Chemical concrete machine, detailed (VI)

Let’s continue from  the post  Chemical concrete machine, detailed (V) , by adding some more facts and remarks.

_________________

We have seen that, in order not to have to apply a controlled sequence of CO-ASSOC molecules (i.e. moves), it is better to introduce a composite move, saying that a certain simple molecule is a propagator.

bckw_9

With this move the formalism of the chemical concrete machine is exactly as powerful as without it. The reason is that the move called “PROP+” can be seen as a sequence of CO-ASSOC moves and DIST+ moves.

Let’s accept this move (i.e. as if there is an enzyme associated to it) and let’s revisit the proof that the W molecule is a multiplier.

bckw_10

_________________

Besides the B, C, K, W molecules (which correspond to the  B,C,K,W system  ), other molecules are also interesting: in the following figure are presented the molecules F, I, S’ and S.

bckw_12

The molecule F may represent FALSE in the Church encoding of the booleans, along with K which encodes TRUE. The molecule I is the identity and the molecule S corresponds to the combinator S from the SKI combinator calculus, which is

a computational system that may be perceived as a reduced version of untyped lambda calculus. It can be thought of as a computer programming language, though it is not useful for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language.

These correspondences are established by using the algorithm for transforming lambda calculus terms into graphs in graphic lambda calculus, then by using the dictionary between the gates from graphic lambda calculus and the essential molecules from the chemical concrete machine.
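In equational form (standard combinatory logic, written as a small Haskell sketch for readers who prefer terms to graphs):

    -- Reduction behaviour of the S, K, I combinators.
    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    k :: a -> b -> a
    k x _ = x

    i :: a -> a
    i x = x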

What about the S’ molecule?  It is a version of the molecule S, i.e. it transforms into S by this reaction:

bckw_13

Also, there is a proof, analogous to the one for W, of the fact that S’ is a multiplier, by using the PROP+ move. This proof is better than the proof for S which was given at the end of the post Local FAN-IN eliminates global FAN-OUT (I).

_____________________

Return to the chemical concrete machine tutorial.

Chemical concrete machine, detailed (V)

The B,C,K,W system

   is a variant of combinatory logic that takes as primitive the combinators B, C, K, and W. This system was discovered by Haskell Curry in his doctoral thesis Grundlagen der kombinatorischen Logik, whose results are set out in Curry (1930).

In this post I shall explain what the correspondents of the B, C, K, W combinators are in the formalism of the chemical concrete machine, then I shall prove that they are multipliers. In a future post I shall prove that their correspondents in the chemical machine make the machine Turing complete.

Remark: In a sense, this was already established, by using the S, K, I combinators, in the post Local FAN-IN eliminates global FAN-OUT (I). You only have to pass from the black and white drawing conventions of graphic lambda calculus to the drawings coloured with red and green of the chemical concrete machine, and then use the result which says that combinatory logic can be implemented in graphic lambda calculus with global FAN-OUT. All in all, this gives that in the chemical concrete machine combinatory logic can be implemented without any global move.

However, I think it is more instructive to see first why combinators other than S, K, I are multipliers (which is a reformulation of the result mentioned in the remark).

___________________

Here are the four “molecules” which represent the combinators B, C, K, W. (Via the red-green vs black-white change of notation, they can be deduced from their expressions in lambda calculus by the algorithm described here.)

bckw_2
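For reference, the reduction rules of these combinators are B x y z = x (y z), C x y z = x z y, K x y = x and W x y = x y y; as a small Haskell sketch (an aside of mine, for comparison with the molecules above):

    -- Reduction behaviour of the B, C, K, W combinators.
    b :: (y -> z) -> (x -> y) -> x -> z
    b f g x = f (g x)

    c :: (x -> y -> z) -> y -> x -> z
    c f p q = f q p

    k :: x -> y -> x
    k x _ = x

    w :: (x -> x -> y) -> x -> y
    w f x = f x x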

Now, let’s prove that they are multipliers!  The proofs for B and C are very much alike, therefore I put here only the proof that B is a multiplier:

bckw_3

By the conventions of the chemical concrete machine, I mention here the enzymes which are involved in the reactions, instead of writing the moves, like in the graphic lambda calculus.

The proof that K is a multiplier is the following:

bckw_6

Notice how, in both cases, the reactions seem feasible, in the sense that there is no case analysis: they can be accomplished by a linear process. Pour some DIST enzymes, then some FAN-IN, for B; or DIST, LOC PRUNING and then FAN-IN, for K. More than that, remark that the order of reactions does not matter, i.e. they can be accomplished by pouring all the needed enzymes at the same moment.

For the W combinator (molecule), things get a bit more complex, as was the case for the combinator S:

bckw_4

There is a reaction (or move) which needs explanation. I called it the DISENTANGLE (CO-ASSOC) reaction. It is this:

bckw_5

It can clearly be done by a controlled succession of CO-ASSOC moves (reactions). From the point of view of feasibility in the real world (provided a real implementation of the chemical concrete machine appears), it seems hard to control the exact order of applications of CO-ASSOC moves which gives the DISENTANGLE move as an effect. So, probably, we shall need a “disentangle enzyme” dedicated to this.

As an alternative, remark that for proving that W is a multiplier we need an application of the DISENTANGLE composite move, described in the next figure:

bckw_7

For practical (or theoretical) purposes, it is enough to take this as a move. If you are lost and don’t understand what the correspondent of this move in lambda calculus is, it is like this: for any term T, if you apply a FAN-OUT gate to T T then you obtain two copies of T T. (Recall that this is practically invisible in lambda calculus, because there is no FAN-OUT; instead, all variables have names and the pattern matching works by some magic outside that formalism.)

In other words, what would get rid of the need for a controlled sequence of CO-ASSOC reactions for multiplying the molecule W is this: assume that the molecule which is connected to the “y” essential molecule (i.e. to the input of a FAN-OUT gate) is a “propagator”. Propagators are defined in the next figure:

bckw_8

Propagators are different from multipliers, because they are molecules with one selected input and one selected output which “propagate along a FAN-OUT gate”, while multipliers are multiplied by a FAN-OUT gate. Propagators can serve in fact as labels, or names, which propagate along a tree of FAN-OUT gates.

_________________

Return to the chemical concrete machine tutorial.

Ancient Turing machine helps Chemical concrete machine

In this post I want to prove that, only with arrow molecules and enzyme moves, we can generate all we need for the chemical concrete machine, by using the minimal (?) formalism from Chemical concrete machine, short list of gates and moves. This is done with the help of ideas from the old post Ancient Turing machines (I): the three Moirai, which contains a discussion about generating moves for graphic lambda calculus.

In the post Local FAN-IN eliminates GLOBAL FAN-OUT (II), at the end, I proved a switch move from the CO-COMM and FAN-IN moves. In the next figure I call this move (which is, recall, a “macro move”, made by a succession of three moves) SWITCH. In the same figure appears a move CREA’. The name is justified by the resemblance with the CREA move proposed in Ancient Turing machines (I): the three Moirai.

recrea_2

Before giving the proof of CREA’, let me remark that from these macro moves, together with the ones selected in the post Chemical concrete machine, short list of gates and moves, we can recover all the generating moves of the three Moirai described in the ancient Turing machine post. For example, CREA’, used for a triple of arrows, gives both Clotho’s original CREA move and Atropos’ GARB move.

Here is the proof of CREA’:

recrea_1

From a bag of arrows at our discretion, this move gives us fan-out gates and termination gates.  Moreover, it is pleasant to see that, as the SWITCH move is a consequence of CO-COMM and FAN-IN, the CREA’  move is a consequence of CO-ASSOC and FAN-IN.

We need the other three trivalent gates; let’s see how we can obtain them (generate them from arrows by using moves; recall that moves are enzymes in the world of the chemical concrete machine).

The fan-in gate is generated like this (we already generated termination gates):

recrea_3

The lambda abstraction and application gates can be generated by using the graphic beta move and a SWITCH move.

recrea_4

Now we have all the gates we need and we can proceed to constructing whatever combinator we want from them, mainly by using SWITCH moves. One elegant way of doing this would be to construct zippers first, by using the graphic beta moves, then construct the S, K, I combinators from zippers, as indicated in Combinators and zippers.

Probably, the real chemical concrete machine will function with terms (molecules) created by some more natural chemistry tricks, but it is pleasant to see that in principle we can also construct what we need, before passing to the “computation” part, only by using arrows and moves.

Why build a chemical concrete machine, and how?

Asked about this, I spent a bit of time thinking and I arrived at this rather blunt answer:

What I have in mind can be split in two.

There is a first part concerning graphic lambda calculus, seen as if it were about real molecules interacting in space. For this part I would like to construct graphs which are like a Turing machine (or other computing devices); then the important step is to eliminate everything which is due to our human 1d writing conventions and thinking (see details in the rant from the first part of this post), and to simplify such “machines”, in the same way, for example, as I saw is possible.

A serious tool for doing this would be, for example, a program which allows one to visualize and perform moves (automatically and from the keyboard) on graphs in GRAPH.

The second part, which goes in parallel, would be to try to find in the real world (here DNA origami looks like a possible tool) ways to construct such machines chemically, physically; these are naturally adapted to the real world because they are geometrical (an object or a property is geometrical if its description is invariant under an arbitrary change of parametrization, in our case an arbitrary renaming). For this part I want to first understand how DNA origami works, to the level of being able to absorb information and understand some of the hurdles. This leads to applications, which are still vague in my head, but I was impressed by this video

as well as by research of Jean-Pierre Banatre and Daniel Le Metayer.

In conclusion, I can’t imagine what a syringe with 10^9 nano geometrical Turing machines graphs representing lambda terms [see UPDATE] can’t do.

______________

UPDATE: Corrections to this post are made in Chemical concrete machine not algorithmic self-assembly of DNA, nor Turing machine, where it is stressed that the “chemical concrete machine”, even if it has the Turing universality property, is not intended to be a Turing machine (nor an automaton), as is wrongly suggested in this post.

Pair of synapses, one controlling the other (B-type NN part III)

In Teaser (I) and Teaser (II) I discussed the possibility of constructing neural networks with controlled connections, resembling the B-type neural networks of Turing, in the following sense:

  • they are formed by “neurons”, which in Turing’s description are boolean logic gates, and here are graphs from graphic lambda calculus which correspond to lambda calculus terms. But there’s no real restriction to that: graphic lambda calculus being more general and more powerful than untyped lambda calculus, one may consider “neurons” which are outside the lambda calculus sector. The fact is that a “neuron” here should be interpreted as any graph in GRAPH with at least one input but only one output. Later, when we shall speak about the order of performing moves in such neural networks, it will be seen that each “neuron” is a bag containing graphs which are modified (or reduced, as people say) independently of one another. Neurons are packing conventions from a formal viewpoint, but this viewpoint may not be the best one; a better one would be to see neurons, as well as synapses (described further), as real constructs, like in the related project of the chemical concrete machine. In particular, neurons may do other things than computing in the lambda calculus sense (classical computation); they may do some more geometrical or physical or robot-like activities, not usually considered computations.
  • the connections between neurons are controlled in a B-type NN by TRUE-FALSE control wires or inputs. Turing explains that the controls themselves can be realized as constructions with boolean neurons (i.e. in his language B-type NNs are particular cases of his A-type NNs). In Teaser (II) it is explained how the connections between graphic lambda calculus neurons can be constructed by using a 2-zipper and a switch, such that a connection is controlled (by the same TRUE-FALSE mechanism, but this time TRUE and FALSE are literally the graphs of the corresponding terms in lambda calculus) by another connection.

There is an important difference, besides the power of the calculi used (boolean vs. graphic lambda), namely that there is no signal transmitted through the wires  of the network. That is because graphic lambda calculus does not need variable names. I shall come back to this very important point (which is a big advantage for implementing such networks in reality) in a future post, but let me mention two simple facts:

  • an analogy: think about hardware and software; a difference between them is that software may be associated with the pattern of signals which circulates in the hardware. The hardware is the machine inhabited by the software; they are not in the same plane of physical reality, right? Well, in this implementation there is literally no difference between the two, exactly because there are no signals (software) transmitted through the hardware.
  • this use of names, software, variables and signals is a pain for those trying to make sense of how to implement computing constructs in reality. But, if you think about it, the need to name this stuff “x” and that other stuff “y”, to make alpha conversions in order to avoid name clashes, or even to describe computations as a linear string of symbols: all this has no relation to physical reality; these are only cultural constraints of humans (and of their artificial constructs). It all has to do with the fact that humans communicate science through writing, and of course it has to do with the enumeration techniques of the cartesian method, which “is designed as a technique for understanding performed by one mind in isolation, severely handicapped by the bad capacity of communication with other isolated minds”. Molecules in a cell or in a solution do not align in a string according to the experimenter’s writing habits, from left to right, right to left, or vertically. They just don’t wait for an external clock to give rhythm to their ballet, nor for the experimenter to draw his familiar coordinate system. These are all limitations of the method, not of nature.

_____________

Further in this post I shall use “synapse” instead of “connection”. Now that we know we can implement synapses by a TRUE-FALSE mechanism, let’s see if we can do it more simply. Also, let’s see if it is possible to make the “synapses control other synapses” concept more symmetric.

Instead of the little magenta triangles used in the Teaser I, II posts (which are not the same as the magenta triangles used in the chemical concrete machine post), I shall use a clearer notation:

synapse_1

The design of a synapse proposed in Teaser (II) involved two switches and a zipper, justified by the direct translation of the TRUE-FALSE lambda calculus terms in the freedom sector of graphic lambda calculus. Think now that in fact we have there two synapses, one controlling the other. The zipper between them is only a form of binding them together in an unambiguous way.

A simplified form of this idea of a pair of synapses is the following:

synapse_2

In the upper part of the figure you see the pair of synapses, taking a form like this: ><. There are two synapses there: one is >, which is controlled by the other, <. The connection between the blue 1 and the red 3′ is controlled by the other synapse. In order to see how, let’s perform a graphic beta move and obtain the graph from the lower part of the figure.

The red 3 can connect, in the sense explained by the switch mechanism from Teaser (II), with blue 1′ or blue 2′, all three of them belonging to the controlling synapse. Say red 3 connects with blue 1′, and look what happens:

synapse_3

OK, so we obtain a connection between blue 1 and red 3′. Suppose now that red 3 connects with blue 2′, look:

synapse_4

Yes, blue 1 does not connect with red 3′; they just exchanged a pair of termination gates. That’s how the pair of synapses works.

Finally, let me remark that the use of termination gates is a bit ugly: it breaks the symmetry and is not really needed.

A chemical concrete machine for lambda calculus

UPDATE: See the Chemical concrete machine article (2013), followed by Molecular computers (2015), (arXiv 2018).

This post is a call for building one. More precisely, for building one for graphic lambda calculus, which satisfies all the requests  from section 5 of The chemical abstract machine by Berry and Boudol  for a calculus more general than lambda calculus.

First a short comment on “The chemical abstract machine”. This is a very interesting article, thanks are due  to Mike Stay for telling me about it, in relation to what is described in my previous post  Can graphic lambda calculus be implemented in some form of DNA computing?  The following quote describes exactly what I was looking for (p.1 from the linked file):

Intuitively, the state of a system is like a chemical solution in which floating molecules can interact with each other according to reaction rules … Solutions are finite multisets of molecules …

But from here I will not jump to identify terms in lambda calculus with solutions, like it’s done in section 5.2 on the \gamma calculus. Nor shall I use membranes and airlocks, as useful as they seem (and probably there is a possible implementation of those in graphic lambda calculus).

Finally, there is something which I don’t understand in the article, concerning variables and substitutions. I might be wrong, please tell me if so, but apparently an abstract chemical machine still uses names for variables, which appear as “ions”. What I don’t get is: how are two solutions, each one corresponding to a lambda term, the same if the two lambda terms are the same up to renaming of variables?

In graphic lambda calculus there is no such problem, because there are no variable names. Moreover, lambda terms (which appear as certain graphs in graphic lambda calculus) are molecules and the moves between them (like the graphic beta move) are chemical reactions.

How can this be done? Here is a sketch; mind you that I propose things which I believe are possible from a chemical perspective, but I don’t have any chemistry knowledge. If you do, and if you are interested in making a chemical concrete machine for graphic lambda calculus, then please contact me.

The graphs from GRAPH which we need for the lambda calculus sector of graphic lambda calculus are made of three gates corresponding to lambda abstraction, application and fan-out (I shall ignore the termination gate). So we need three molecules.

enzyme_1

The five coloured triangles are parts of the molecules which bond with one another. Say there is a bond strength associated with each pairing, such that the bigger the bond strength, the more easily the bond is made and the stronger it is.

Remark that the molecules from the right of the figure don’t seem to have oriented arrows inside. That is why we need several types of bonding places. The table of bond strengths is this:

enzyme_3

The strength coefficients are taken out of … my imagination, such that they satisfy a number of mental experiments I did with them.  As you see there are:

  • two bonding places — yellow and red, which correspond to input half-arrows,
  • two bonding places — blue and green, which correspond to output half-arrows,
  • one bonding place – magenta, which can be seen as well as input or output half-arrow.

The bipartite graph from the lower part of the figure shows which bonding places CAN bond (i.e. they have bonding strength not equal to 0).  In the upper part of the figure there is a table with strengths.

As you see, this solves the problem of representing decorated nodes of oriented graphs. Nice.

Now, let’s pass to the main move, the graphic beta move. This should be a chemical reaction. Imagine that we have an enzyme called “beta”, which recognizes magenta-magenta bonds. When it does recognize such a bond, it does like in the next figure:

enzyme_2

So, the beta enzyme cuts the other bonds, like in the middle part of the picture, then the bonding places bond according to the strength table. Voila!

If you want to play with, here is a “chemical” representation of the 2-zipper, try to see how it works.

enzyme_4

I hope this adds more details to my call for building a real, concrete chemical machine (or for recognizing one with at least the expressivity of untyped lambda calculus). Can anybody do it?

Teaser: B-type neural networks in graphic lambda calculus (II)

The connections in a B-type neural network can be trained.  The following  quote and figure are taken from the article  Turing’s Neural Networks of 1948, by Jack Copeland and Diane Proudfoot:

Turing introduced a type of neural network that he called a ‘B-type unorganised machine’, consisting of artificial neurons, depicted below as circles, and connection-modifiers, depicted as boxes. A B-type machine may contain any number of neurons connected together in any pattern, but subject always to the restriction that each neuron-to-neuron connection passes through a connection-modifier.

connecmod

A connection-modifier has two training fibres (coloured green and red in the diagram). Applying a pulse to the green training fibre sets the box to pass its input–either 0 or 1–straight out again. This is pass mode. In pass mode, the box’s output is identical to its input. The effect of a pulse on the red fibre is to place the modifier in interrupt mode. In this mode, the output of the box is always 1, no matter what its input. While it is in interrupt mode, the modifier destroys all information attempting to pass along the connection to which it is attached. Once set, a connection-modifier will maintain its function unless it receives a pulse on the other training fibre. The presence of these modifiers enables a B-type unorganised machine to be trained, by means of what Turing called ‘appropriate interference, mimicking education’.

Let’s try to construct such a connection in graphic lambda calculus.  I shall use the notations from the previous post  Teaser: B-type neural networks in graphic lambda calculus (I).

3. Connections.   In lambda calculus, Church booleans are the following terms: TRUE = \lambda x . \lambda y .x and FALSE = \lambda x. \lambda y. y (remark that TRUE is the combinator K).  By using the algorithm for transforming lambda calculus terms into graphs in GRAPH, we obtain the following graphs:

switch_5

They act on other graphs (A, B) like this:

switch_6

The graphs are almost identical: they are both made of a 2-zipper with an additional termination gate and a wire. See the post Combinators and zippers for more explanations about TRUE, or K.
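In term notation, the figure says (with the usual capture-avoiding conventions):

TRUE A B = (\lambda x . \lambda y . x) A B \rightarrow (\lambda y . A) B \rightarrow A

FALSE A B = (\lambda x . \lambda y . y) A B \rightarrow (\lambda y . y) B \rightarrow B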

I am going to exploit this structure in the construction of a connection. We are going to need the following ingredients: a 2-zipper, an INPUT BOX (otherwise called a “switch”, see further) and an OUTPUT BOX, which is almost identical with a switch (it is identical as a graph, but we are going to connect it with other graphs at each labelled edge):

switch_2

I start with the following description of objects and moves from the freedom sector of graphic lambda calculus (the magenta triangles were used also in the previous post).  I call the object from the middle of the picture a switch.

switch_1

As you can see, a switch can be transformed into one of the two graphs (the up and down parts of the figure). We can exploit the switch in relation with the TRUE and FALSE graphs. Indeed, look at the next figure, which describes graphs almost identical with the TRUE and FALSE graphs (as represented by using zippers), with an added switch:

switch_4

Now we are ready for describing a connection like the one from the B-type neural networks (only better, because it’s done in graphic lambda calculus, thus much more expressive than boolean expressions). Instead of training the connection by a boolean TRUE or FALSE input (coming through one of the green or red wires in the first figure of the post), we replace the connection by an OUTPUT BOX (should I call it a “synapse”? I don’t know yet) which is controlled by a switch. The graph of a connection is the following:

switch_3

The connection between an axon and a dendrite is realized by having the axon at “1” and the dendrite at “3”. We may add a termination gate at “2”, but this is somewhat irrelevant. At the top of the figure we have a switch, which can take either of the two positions corresponding, literally, to TRUE or FALSE. This will transform the OUTPUT BOX into one of the two possible graphs which can be obtained from a switch.

You may ask why I did not put a switch directly, instead of an OUTPUT BOX. Because, in this way, the switch itself may be replaced by the OUTPUT BOX of another connection. The second reason is that, by separating the graph of the connection into a switch, a 2-zipper and an OUTPUT BOX, I proved in a rigorous way that what makes the switch function is the TRUE-FALSE-like input. Finally, I recall that in graphic lambda calculus the green dashed ovals are only visual aids, without intrinsic significance. By separating the OUTPUT BOX from the INPUT BOX (i.e. the switch) with a zipper, the graph now has an unambiguous structure.

Packing and unpacking arrows in graphic lambda calculus

In this post, which continues Currying by using zippers and an allusion to the Cartesian Theater, I want to explain how we may pack and unpack two arrows into one, in the realm of graphic lambda calculus. An algebrization of graphic lambda calculus may be deduced from this, but one step of it, namely arbitrarily enumerating the nodes of a graph in GRAPH, suffers from the same cartesian disease which was exposed in the previously mentioned post. But never mind, it is at least funny to show that the usual ways of CS thinking may be used to transform this apparently more general frame of graphic lambda calculus into a 1D string submitted to local algebraic manipulations.

We start from the following sequence of three graphic beta moves.

reg_1

In words, this figure means: we pack the 1, 2 entries into a list, we pass it through one arrow, then we unpack the list into the outputs 3, 4. This packing-unpacking trick may of course be used for more than a pair of arrows, in obvious ways, therefore it is not a restriction of generality to write only about two arrows.
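In lambda calculus terms, this packing trick is essentially Church pairing (my transcription, as an illustration): two arguments are packed into a single function, which unpacks them when applied to a consumer.

    -- Church pairing: pack two values into one function.
    pack :: a -> b -> ((a -> b -> c) -> c)
    pack x y f = f x y

    -- Unpacking: apply the packed value to a consumer of both components.
    unpackFst p = p (\x _ -> x)
    unpackSnd p = p (\_ y -> y)

    -- Example: unpackSnd (pack 3 4) evaluates to 4.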

We may apply the trick to a  pair of graphs in GRAPH, call them A and B, which are connected by a pair of arrows, like in the following figure.

reg_2

With the added packing and unpacking triples of gates, the graphs A, B interact only through the intermediary of one arrow.

In particular, we may use this trick for the elementary gates of abstraction and application,  transforming them into graphs with one input and one output, like this:

reg_3

Let’s look now at the graphic beta move:

beta_move_1

If we use the elementary gates transformed into graphs with one input and one output, the move becomes this almost algebraic, 1D rule:

reg_4

Finally, the packing-unpacking trick described in the first figure becomes this:

reg_5

Fixed points in graphic lambda calculus

Background: the page Graphic lambda calculus.

Let A be a graph in GRAPH with one input and one output. For any  graph B with one output, we denote by A(B) the graph obtained  by grafting the output of B to the input of A.

Problem:  Given A, find B such that A(B) \leftrightarrow B, where \leftrightarrow means any finite sequence of moves in graphic lambda calculus.

The solution is in principle the same as in usual lambda calculus, can you recognize it? Here it is. We start from the following:

fixpoint_1

That’s almost done. It suffices to do this:

fixpoint_3

to get a good graph B:

fixpoint_2
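In term notation, the solution is the usual fixed point (Y) combinator trick (with x not occurring in A):

Y = \lambda f . (\lambda x . f (x x)) (\lambda x . f (x x))

and, for our problem, the term B = (\lambda x . A (x x)) (\lambda x . A (x x)) satisfies B \rightarrow A(B) in one beta reduction, hence A(B) \leftrightarrow B.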