Tag Archives: combinators

Birth and death of zipper combinators (II)

Continues from   Birth and death of zipper combinators (I).

In the following, another mechanism for the birth/death of zipper combinators is presented, one which does not use a garbage-collecting arrow and termination node.

For any zipper combinator A there is a number n and a succession of n CLICK, ZIP and LOC PRUNING moves such that



So this time   we transform a zipper combinator connected to a termination node into a bunch of loops.

Proof. The first part is identical to the one from the previous post: we remark that we can reduce the initial zipper combinator, with the exit connected to a termination node, into a finite collection of simpler zippers.


This is done by a succession of LOC PRUNING moves for (+) half-zippers.

We shall use then the following succession of moves, in order to “unlock” (-) half-zippers.


If we use this for the I zipper combinator then we can transform it into one loop.


The same trick is used for transforming the zipper combinator K into two loops.


The zipper combinator S is transformed into three loops, starting by the same trick.


The proof is done.


It follows that we can replace garbage collection by ELIM LOOPS, a move which appeared in earlier formulations of chemlambda.

Seen from the birth point of view, if we have enough loops then we can construct any zipper combinator (with the exit arrow connected to a termination node).


Birth and death of zipper combinators (I)

Because zipper logic is a graph rewriting system which does not use variable names, just like GLC and chemlambda, it needs ways to multiply, distribute or kill things.

See  Chemlambda, universality and self-multiplication  for multiplication, distribution or propagation phenomena.

In this post we shall see how zipper combinators die or are born, depending on the sense of reading the moves. This can happen in several ways, one of them explained in this post.

This can also be seen as the statement: we can do garbage collection in chemlambda, GLC or zipper logic.



Zipper combinators.   The set of zipper combinators is the smallest set of zipper graphs with the properties:

  •  it contains the S, K, I zipper graphs defined in  the next figure


  • for any natural number n >0 and for any n+1 zipper combinators, the zipper graph obtained by connecting the out arrows of the  zipper combinators to the in arrows of the (+n) half-zipper is a zipper combinator.


  • any zipper graph which is obtained by applying a zipper logic move to a zipper combinator is a zipper combinator.
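The inductive definition above can be mirrored at the term level. The following is a minimal sketch under an assumed representation of my own (the real objects are zipper graphs, and the moves are not modelled; only the inductive shape of the definition is captured):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str            # one of the S, K, I zipper graphs

@dataclass(frozen=True)
class HalfZipper:
    args: tuple          # the n+1 zipper combinators plugged into a (+n) half-zipper

def is_zipper_combinator(t) -> bool:
    """Membership in the smallest set described above (moves not modelled)."""
    if isinstance(t, Atom):
        return t.name in ("S", "K", "I")
    if isinstance(t, HalfZipper):
        # a (+n) half-zipper needs n >= 1, hence at least 2 arguments
        return len(t.args) >= 2 and all(map(is_zipper_combinator, t.args))
    return False
```

For example, `HalfZipper((Atom("S"), Atom("K"), Atom("I")))` stands for three combinators plugged into a (+2) half-zipper.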



I want to prove that for any zipper combinator A, there is a number n and  a succession of n  LOCAL PRUNING,  CLICK and ZIP moves such that:


Proof. Because of the LOC PRUNING move satisfied by (+) half-zippers, it follows that by a finite number of these moves we can achieve the effect described in the next figure, for any zipper combinator A (of course, the number of these moves depends on A):


It is left to prove that the free arrow ending with a termination node can “eat”, one by one, all the I, K, S zipper combinators.

For the I combinator, it happens like this:


The case of the combinator K is described next:


The combinator S is absorbed by the following succession of moves:



The proof is done.


We can read all of this backwards, saying that an arrow connected to a termination node can give birth to any zipper combinator, with the out arrow connected to a termination node.





Knot diagrams of the SKI (zipper logic V)

Continuing from  Zipper logic and knot diagrams, here are the  S,K,I combinators when expressed in this convention of the zipper logic:


Besides crossings (which satisfy at least the Reidemeister 2 move), there are also fanout nodes. There are associated DIST moves which self-reproduce the half-zippers as expressed with crossings.

Where do the DIST moves come from? Well, recall that there are at least two different ways to express crossings as macros in GLC or chemlambda: one with application and abstraction nodes, the other with fanout and dilation nodes.

This is in fact the point: I am interested to see if the emergent algebra sector of GLC, or the corresponding one in chemlambda, is universal, and when I am using crossings I am thinking secretly about dilations.

The DIST moves (which will be displayed in a future post) come from the DIST moves from the emergent algebra sector of chemlambda (read this post and links therein).

There is though a construct which is strange, namely the left-to-right arrow which has attached a stack of right-to-left arrows,  and the associated CLICK move which connects these stacks of arrows.

Actually, these stacks look like half-zippers themselves and the CLICK move looks like (un)zipping a zipper.

So, are we back to square one?

No, because even if we replace those stacks by some other half-zippers and the CLICK move by unzipping, we still have the property that those constructs and moves, which are external to knot diagrams, are very localized.

Anyway, I can cheat by saying that I can do the CLICK move, if the crossings are expressed in the emergent algebra sector of chemlambda (therefore dilation nodes, fanout and fanin nodes), with the help of ELIM LOOPS and SWITCH.

But I am interested in simple, mindless ways to do this.



Zipper logic (III). Universality

Continues from Zipper logic (II). Details . In this post we shall see that zipper logic is Turing universal.

The plan is the following. I shall define combinators in the zipper logic framework, as being made of two ingredients:

  • instead of a tree of application nodes we have a combination of + half zippers (i.e. there is a clear way to transform a binary tree of application nodes into a collection of + zippers)
  • each combinator is then seen as being made of a combination of + half zippers (replacing the tree of application nodes) and the combinators S, K, I, seen also as particular combinations of + and – zippers.

1. Converting trees of application nodes into combinations of + zippers is straightforward.  Imagine such a tree with the root upwards and leaves downwards.  Each application node has then an exit arrow  and two input arrows, one at left, the other one at right.

Cut in two each right input arrow which is not a leaf or the root arrow. We obtain a collection of towers of application nodes. Replace each tower from this collection with the corresponding + half zipper.

Then glue back along the arrows which have been cut in two.
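Under a hypothetical encoding of application trees as nested `("app", left, right)` tuples with leaves `"S"`, `"K"`, `"I"`, the cutting procedure above can be sketched as follows (a term-level sketch only, not the graph rewriting itself):

```python
def spine(t):
    """Left spine of an application tree: t = ((...(a1 a2)...) a_{n+1})
    is returned as the list [a1, a2, ..., a_{n+1}]."""
    args = []
    while isinstance(t, tuple):      # ("app", left, right)
        args.append(t[2])            # cut the right input arrow
        t = t[1]                     # continue down the tower of applications
    args.append(t)                   # the leftmost leaf
    return list(reversed(args))

def to_half_zippers(t):
    """Replace each tower of application nodes by a (+n) half-zipper,
    then recurse into the right subtrees that were cut off."""
    if not isinstance(t, tuple):
        return t
    inputs = spine(t)                # the n+1 inputs of a (+n) half-zipper
    return ("plus", len(inputs) - 1, [to_half_zippers(a) for a in inputs])
```

For instance, the tree for (S K)(K I) becomes a (+2) half-zipper whose third input is itself a (+1) half-zipper, matching the "cut, replace, glue back" recipe.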

An example is given in the following figure.


2. The S, K, I combinators in zipper logic are defined in the next figure.


3. Combinators in zipper logic are by definition trees of application nodes translated into + half zippers, as explained at point 1, with leaves the S, K, I combinators defined at point 2.

4. Computation in zipper logic means the application of the moves (graph rewrites) defined in the previous post  Zipper logic (II). Details .

In particular the computation with a combinator in zipper logic means the reduction according to the graph rewrites of zipper logic of the respective combinator, as defined at point 3.

5. Zipper logic is Turing universal. In order to prove this we have to check the usual reductions of the S, K, I combinators.

Let A be a  combinator, then the zipper combinator which corresponds to IA reduces to the zipper combinator of A:


Let A, B be two combinators. Then the zipper combinator corresponding to the combinator (KA)B reduces as in the following figure.


The combinator (SK)K reduces to the combinator I, as in the next figure.


Now, let A, B, C be three combinators. We see then how the combinator ((SA)B)C reduces to (AC)(BC), when expressed as zipper combinators.
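The four reductions checked above can be cross-checked against a plain SKI term reducer. The following is a sketch in ordinary term rewriting (strings and `("app", f, x)` tuples of my own devising), not zipper graphs, included only to confirm the expected normal forms:

```python
def step(t):
    """One leftmost-outermost reduction step on an SKI term; None if normal."""
    if not isinstance(t, tuple):
        return None
    f, x = t[1], t[2]
    if f == "I":
        return x                                  # I a -> a
    if isinstance(f, tuple) and f[1] == "K":
        return f[2]                               # (K a) b -> a
    if isinstance(f, tuple) and isinstance(f[1], tuple) and f[1][1] == "S":
        a, b, c = f[1][2], f[2], x                # ((S a) b) c -> (a c)(b c)
        return ("app", ("app", a, c), ("app", b, c))
    r = step(f)
    if r is not None:
        return ("app", r, x)
    r = step(x)
    return ("app", f, r) if r is not None else None

def normalize(t, limit=100):
    """Iterate step() until a normal form is reached (bounded)."""
    while limit:
        r = step(t)
        if r is None:
            return t
        t, limit = r, limit - 1
    return t
```

With `app = lambda f, x: ("app", f, x)`, one checks that `I A` normalizes to `A`, `(K A) B` to `A`, `((S K) K) x` to `x` (so SKK behaves as I), and `((S A) B) C` to `(A C)(B C)`.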


We see here a move called “FAN-OUT”. This move is a composite of DIST and FAN-IN, like in chemlambda. It is left to prove that indeed any zipper combinator is a multiplier (i.e. that the move FAN-OUT makes sense when applied to a zipper combinator). The proof is the same as the one needed in a step of the proof that chemlambda is Turing universal. This is left to the reader, or for another time, if necessary.


Question: why is CLICK needed? After all, I used it all the time with ZIP.

We will see this next time, when I shall prove that tangle diagrams are Turing universal, via zipper logic.


I-don’t-always advantage of the chemical concrete machine

… over other computing formalisms, is contained in the following wise words:


Indeed, usually a FAN-OUT gate is something which has a variable as an input and two copies of it as an output. That is why FAN-OUT gates are not available in every model of computation; in quantum computing, for example, an unknown state cannot be copied (no-cloning), so there is no FAN-OUT gate.

But if you don’t use variable names and there’s nothing circulating through the wires of your computer model, then you can use the FAN-OUT gate with impunity, on the condition that you have something which replaces the FAN-OUT behaviour without its bad sides. Consider graph rewriting systems for your new computer.

This is done in the chemical concrete machine, with the help of DIST enzymes and associated moves (chemical reactions). (“DIST” comes from distributivity.)

In graphic lambda calculus, the parent of the chemical concrete machine, I proved that combinatory logic can be done by using the available local moves and one global move, called GLOBAL FAN-OUT. This global move is the one that most resembles the behaviour of a usual FAN-OUT gate: a graph A connected to the input of a FAN-OUT gate is replaced by two copies of the graph.

That’s bad, I think, so in the chemical concrete machine I managed to prove that GLOBAL FAN-OUT can be replaced, as concerns graphs (or molecules, in the chemical concrete machine formalism) which represent combinators, by successions of local DIST moves (and some other local moves).

It is possible exactly because there are no variable names. Moreover, there’s something almost biological in the succession of moves: we see how combinators reproduce.

As an illustration, the following is taken from the post  Chemical concrete machine, detailed (V) :

Here are the four “molecules” which represent the combinators B, C, K, W.  (Via the red-green vs black-white change of notation, they can be deduced from their expressions in lambda calculus by the algorithm described here . )


Let’s see how the “molecule” K behaves, when connected to a FAN-OUT gate (green node with one input and two outputs):


The “reproduction” of the molecule B is more impressive:


In the formalism of the chemical concrete machine, \delta^{+} is a distributivity move (or “enzyme” which facilitates the move in one direction, preferentially), and \phi^{+} is a FAN-IN move (facilitated in one direction).


See more about this in the Chemical concrete machine tutorial.


This makes me believe that, as long as we don’t reason in terms of states (or any other variables), it is possible to have FAN-OUT gates in quantum computation.

Example: if-then-else in the chemical concrete machine

… along with a short discussion about what the chemical concrete machine can compute. I start with this and then go to the example.

Until now I proved that the graph rewriting system called the “chemical concrete machine” can do combinatory logic. Therefore it can compute anything a Turing machine can. I shall be more specific, because to the eyes of somebody who is not used to functional programming, this might seem outrageous.

It can compute anything which can be computed by using any programming language based on lambda calculus, like Haskell or Scala. And then, some more (as regards only the things those languages can do without using any syntactic sugar, of course). The procedure is straightforward, namely translate the (essentially) combinatory logic terms into graphs and then let the enzymes (moves) do their magic. I shall give the example of the if-then-else structure a bit later.

There are, of course, interesting questions to ask, among them:

  • do there exist real molecules and real enzymes which react with one another as in the chemical concrete machine formalism? Here I need your help, dear chemists.
  • is this an example of a sequential or parallel computation?
  • what evaluation procedure is used in the chemical concrete machine?

The second question is the easiest to answer: it is a parallel computation. The molecules (graphs) are mixed in a reactor with a choice of enzymes and then all possible reactions can occur in parallel, at all times. (Realistic models will have a probabilistic part attached, but there is nothing special here, because the chemical concrete machine, initialized with molecules and enzymes, is just a chemical reaction network, so any procedure for attaching the probabilistic part to a chemical reaction network can also be used in conjunction with the chemical concrete machine.)

But we can also prepare the initial molecules so that the computation is sequential. It suffices to use zippers. More later. For the moment, it is worth mentioning that the chemical concrete machine (as well as graphic lambda calculus) is already proven to be more powerful than lambda calculus. Why? For at least two reasons:

  • lambda calculus, or combinatory logic, are just sectors of the formalisms, i.e. they correspond to only a part of what the formalisms can do. There are other sectors, as well, for example the tangle diagram sector.
  • they are naturally parallel, as long as we use only local moves, as is the case for the chemical concrete machine. Indeed, I wrote that a graph is a “molecule”, but this is only a way of speaking, because molecules would be better identified with connected graphs. But in these formalisms the graphs are not supposed to be connected: any (finite) collection of graphs (i.e. molecules) is also a graph. The moves being local, there is no interference in the simultaneous application of several instances of the (local) moves in different places, for different molecules (connected subgraphs), or even for the same molecule, as long as the places where the moves are applied are distinct from one another. On the other hand, lambda calculus and combinatory logic are naturally sequential.

The third question, concerning the evaluation procedure, will also be explored in further posts. Care has to be taken here because there are no variables in these formalisms (which translates into less demand for different species of real molecules only for the need to name variables). So it is about the order of moves, right? The short answer is that it depends: sometimes the computation done by the chemical machine can be seen as greedy evaluation, sometimes as lazy evaluation.

Let me make again the point that somehow the chemical concrete machine formalism should be seen as part of the beautiful idea of algorithmic chemistry. So, it’s not so unearthly.

Finally, it is well known that lambda calculus and Turing machines are the two pillars of computation. For historical reasons chemists seem to concentrate only on the emulation of Turing machines (please correct me if I’m wrong). The main idea of algorithmic chemistry, as far as I understand, is that a sufficiently complex chemical network has the capacity to do lambda calculus. But, if you are determined to use only Turing machines for chemical computing, then, supposing that the algorithmic chemistry idea is true, you have to translate the natural language of lambda calculus into the Turing machine frame. This is a tarpit: very fast, it becomes very hard to follow. Instead, why not use lambda calculus as it is, for the really powerful applications of chemical computing, and in parallel use one of the excellent languages for simulating chemical computing in silico.


The if-then-else construct has already been explored in graphic lambda calculus, see Teaser: B-type neural networks in graphic lambda calculus (II). Here I shall do a much simpler example, just by making the right translations, as explained in the beginning of this post.

In lambda calculus, there are terms called TRUE, FALSE and IFTHENELSE, which are the Church encodings of the booleans true, false and if-then-else. The associated graphs in the chemical concrete machine are:


Take two other molecules A, B, with one exit each; in particular you may think that they correspond to terms in lambda calculus (but it is not mandatory). Then IFTHENELSE TRUE A B should become A. In the chemical concrete machine, with only beta+ enzymes, look what is happening:


Along this chain of reactions, there is no other choice than the one from the figure. Why? Because essentially at every step there is only one reaction site available to the enzyme beta+ (of course, in the region of the reactor represented in the figure). The result is, unsurprisingly, compatible with the lambda calculus version, with the exception that A and B are not supposed to be (graphs corresponding to) lambda terms. They can be anything, as for example, from the family of “other molecules”.

In lambda calculus IFTHENELSE FALSE A B should become (by reductions) B. In the chemical concrete machine look what happens:


The previous remarks apply here as well.
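The same behaviour can be checked with the ordinary Church encodings written as plain Python lambdas (ordinary lambda calculus, not the chemical concrete machine graphs; a sketch for comparison only):

```python
# Church booleans: each boolean is a function choosing one of two arguments.
TRUE = lambda a: lambda b: a        # selects its first argument
FALSE = lambda a: lambda b: b       # selects its second argument
IFTHENELSE = lambda p: lambda a: lambda b: p(a)(b)
```

Here IFTHENELSE TRUE A B evaluates to A and IFTHENELSE FALSE A B to B, matching the two reaction chains above.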

With a little bit of imagination, if we look closer at what TRUE and FALSE are doing, then we can adapt the IFTHENELSE to what I’ve called a B-type NN synapse and obtain a molecule which releases, upon the detection of a certain molecule, the medicine A, and upon the detection of another, the medicine B.


Return to the chemical concrete machine tutorial.

Chemical concrete machine, detailed (VI)

Let’s continue from  the post  Chemical concrete machine, detailed (V) , by adding some more facts and remarks.


We have seen that in order not to have to apply a controlled sequence of CO-ASSOC molecules (i.e. moves), it is better to introduce a composite move, saying that a certain simple molecule is a propagator.


With this move the formalism of the chemical concrete machine is exactly as powerful as without it. The reason is that the move called “PROP+” can be seen as a sequence of CO-ASSOC moves and DIST+ moves.

Let’s accept this move (i.e. like if there is an associated enzyme to it) and let’s revisit the proof that the W molecule is a multiplier.



Besides the B, C, K, W molecules (which correspond to the  B,C,K,W system  ), other molecules are also interesting: in the following figure are presented the molecules F, I, S’ and S.


The molecule F may represent FALSE in the Church encoding of the booleans, along with K which encodes TRUE. The molecule I is the identity and the molecule S corresponds to the combinator S from the SKI combinator calculus, which is

a computational system that may be perceived as a reduced version of untyped lambda calculus. It can be thought of as a computer programming language, though it is not useful for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language.

These correspondences are established by using the algorithm for transforming lambda calculus terms into graphs in the graphic lambda calculus, then by using the dictionary between the gates from graphic lambda calculus to the essential molecules from the chemical concrete machine.

What about the S’ molecule?  It is a version of the molecule S, i.e. it transforms into S by this reaction:


Also, there is a proof, analogous to the one for W, of the fact that S’ is a multiplier, using the PROP+ move. This proof is better than the proof for S which was given at the end of the post Local FAN-IN eliminates global FAN-OUT (I).


Return to the chemical concrete machine tutorial.