Take a better look at the knotted S combinator (zipper logic VI)

Continuing from Knot diagrams of the SKI (zipper logic V), here are some more drawings of the S combinator, which was described in the last post by means of crossings:




Seen like this, it looks very close to the drawings from section 5 of arXiv:1103.6007.

I am not teasing you: many things are now clear precisely because of all these detours. There is a lot to write and explain now, pretty much straightforward, a day-after-day effort to produce something which describes well the end of the journey. When in fact the most mysterious, creative part is the journey itself.


Knot diagrams of the SKI (zipper logic V)

Continuing from Zipper logic and knot diagrams, here are the S, K, I combinators when expressed in this convention of zipper logic:


Besides crossings (which satisfy at least the Reidemeister 2 move), there are also fanout nodes. There are associated DIST moves by which the half-zippers, as expressed with crossings, self-reproduce.

Where do the DIST moves come from? Well, recall that there are at least two different ways to express crossings as macros in GLC or chemlambda: one with application and abstraction nodes, the other with fanout and dilation nodes.

This is in fact the point: I am interested to see if the emergent algebra sector of GLC, or the corresponding one in chemlambda, is universal, and when I am using crossings I am thinking secretly about dilations.

The DIST moves (which will be displayed in a future post) come from the DIST moves from the emergent algebra sector of chemlambda (read this post and links therein).

There is, though, one strange construct: the left-to-right arrow which has attached a stack of right-to-left arrows, and the associated CLICK move which connects these stacks of arrows.

Actually, these stacks look like half-zippers themselves and the CLICK move looks like (un)zipping a zipper.

So, are we back to square one?

No, because even if we replace those stacks by some other half-zippers and the CLICK move by unzipping, we still have the property that those constructs and moves, which are external to knot diagrams, are very localized.

Anyway, I can cheat: the CLICK move can be done, if the crossings are expressed in the emergent algebra sector of chemlambda (therefore with dilation, fanout and fanin nodes), with the help of the ELIM LOOPS and SWITCH moves.

But I am interested in simple, mindless ways to do this.



Why Distributed GLC is different from Programming with Chemical Reaction Networks

I use the occasion to bookmark the Azimuth post Programming with Chemical Reaction Networks, most of all because of its beautiful bibliography, which links to great articles that can be freely downloaded. Thank you John Baez for putting in one place such an essential list of articles.

Also, I want to explain very briefly why CRNs are not used in Distributed GLC.

Recall that Distributed GLC is a distributed computing model based on an artificial chemistry called chemlambda, itself a slightly different variant of graphic lambda calculus, or GLC.

There are two stages of the computation:

  1. Define the initial participants in the computation, each one called an “actor”. Each actor is in charge of a chemlambda molecule. Molecules of different actors may be connected, each such connection being interpreted as a connection between the actors. If we put together all the molecules of all the actors, then we can glue them into one big molecule. Imagine this big molecule as a map of the world and the actors as countries, each coloured with a different colour. Neighbouring countries correspond to connected actors. This big molecule is a graph in the chemlambda formalism. The graph which has the actors as nodes and the neighbouring relations as edges is called the “actors diagram”; it is a different graph than the big molecule graph.
  2. Each actor has a name (like a mail address, or like the colour of a country). Each actor knows only the names of its neighbouring actors. Moreover, each actor behaves only as a function of the molecule it holds and of what it knows about its neighbouring actors' behaviour. From this point on (the proper part of the computation), each actor is on its own. So, from this point on, we use the viewpoint of the Actor Model of Carl Hewitt, not the point of view of Process Algebra. (See Actor model and process calculi.) OK, each actor has 5 behaviours, most of them consisting of graph rewrites of its own molecule or between the molecules of neighbouring actors. These graph rewrites are like chemical reactions between molecules and enzymes, one enzyme per graph rewrite. Finally, the connections between actors may change as a result of these graph rewrites.

That is the Distributed GLC model, very briefly.
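The two stages above can be sketched in a few lines of Python. This is my own toy illustration, not the actual formalism: the `Actor` class, the list encoding of a molecule and the beta-like rewrite below are all hypothetical stand-ins for the real chemlambda graphs and moves.

```python
# Toy sketch of the Distributed GLC setup (hypothetical encoding):
# an actor holds its own molecule, knows only neighbour names, and
# acts purely locally via graph rewrites.

class Actor:
    def __init__(self, name, molecule, neighbours):
        self.name = name                    # like a mail address / country colour
        self.molecule = molecule            # this actor's part of the big molecule
        self.neighbours = set(neighbours)   # names of neighbouring actors only

    def local_rewrite(self):
        # Stand-in for one of the 5 behaviours: a rewrite acting only on
        # this actor's own molecule (a toy beta-like reduction removing
        # adjacent 'app', 'lambda' pairs; one "enzyme" per rewrite).
        reduced, i = [], 0
        while i < len(self.molecule):
            if (i + 1 < len(self.molecule)
                    and self.molecule[i] == 'app'
                    and self.molecule[i + 1] == 'lambda'):
                i += 2                      # the enzyme consumes the redex
            else:
                reduced.append(self.molecule[i])
                i += 1
        self.molecule = reduced

a = Actor('red', ['fanout', 'app', 'lambda', 'fanin'], neighbours=['blue'])
a.local_rewrite()
print(a.molecule)   # ['fanout', 'fanin']
```

The point of the sketch is only the locality: the actor never inspects the whole big molecule, just its own part and its neighbour names.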

It is different from Programming with CRNs for several reasons.

1. Participants in the computation are individual molecules. This may be unrealistic for real chemistry and lab measurements of chemical reactions, but that is not the point, because the artificial chemistry chemlambda is designed to be used on the Internet. (However, see the research project on the single-molecule quantum computer.)

2. There is no explicit stochastic behaviour. Each actor in charge of its molecule behaves deterministically. (Though nothing stops the model from being modified by introducing some randomness into the behaviour of each actor; that is not the point here.) There are no huge numbers of actors, nor any averaged behaviour over them.

That is because of point 1 (we stay at the level of individual molecules and their puppeteers, the actors) and also because we use the Actor Model style, not Process Algebra.

So there is an implicit randomness, coming from the fact that the computation is designed Actor Model style, i.e. it may work differently depending on the particular physical timing of the messages sent between actors.

3. The computation is purely local. It is also decentralized. There is no global point of view which counts the number of identical molecules, or their proportion in a global-space, global-time solution. This is reasonable from the point of view of a distributed computation over the Internet.


All this being said, of course it would be interesting to see what happens with CRNs of reactions of chemlambda molecules. It may be very instructive, but it would be a different model.

That is why Distributed GLC does not use the CRN point of view.


The first thread of the Moirai

I just realized that maybe +Carl Vilbrandt put the artificial life thread of ideas in my head, with this old comment at the post Ancient Turing machines (I): the three Moirai:

Love the level of free association  between computation and Greek philosophy. Very creative.
In this myth computation = looping cycles of life. by the Greek goddess of Mnemosyne (one of the least rememberer of the gods requires the myth of the Moirai to recall how the logic of life / now formalized as Lisp works.
As to the vague questions:
1. Yes they seem to be the primary hackers of necessity.
2. Yes The emergent the time space of spindle of necessity can only be by the necessary computational facts of matter.
3. Of course at this scale of myth of wisdom it a was discreet and causal issue. Replication of them selfs would have been no problem.
Lisp broken can’t bring my self write in any other domain. So lisp is the language of life. With artifical computation resources science can at last define life and design creatures.

Thank you!

When I wrote that post, things were not as clear to me as now. Basically I was just trying to generate all graphs of GLC (in the newer version all molecules of the artificial chemistry called “chemlambda“) by using the three Moirai as a machine.

This thread is now a big dream and a project in the making, to unify the meatspace with the virtual space by using the IoT as a bridge between the real chemistry of the real world and the artificial chemistry of the virtual world.

Zipper logic and knot diagrams

In this post I want to show how to do zipper logic with knot diagrams. Otherwise said, I want to define zippers and their moves in the realm of knot diagrams.

“Knot diagrams” is just a way of speaking; in fact I shall use oriented tangle diagrams (i.e. the wires are oriented and there may be in or out wires) which moreover are only locally planar (i.e. we also admit “virtual crossings”, in the sense that wires may cross without creating a 4-valent crossing node) and not, as usual, globally planar.

Here is the half zippers definition:


As you see, each half zipper has some arrows which are not numbered; that is because we don’t need this information, which can be deduced from the given numbering just by following the arrows.

We now have to define the CLICK move. For the zippers from chemlambda that was easy: the CLICK move is trivial there. No longer here:


The figure illustrates a CLICK move between a (-m)Z and a (+n)Z with m>n.  It’s clear how to define CLICK for the other cases m = n and m<n.

Finally, the ZIP move is nothing but repeated application of an R2 move:


It works very nicely and it has a number of very interesting consequences, which will be presented in future posts.
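As a toy illustration of the bookkeeping (my own simplification, not the diagrams themselves), a half zipper can be recorded as just a signed length: a (+n)Z as +n and a (-m)Z as -m. The function below is a hypothetical sketch of CLICK followed by ZIP in this arithmetic model.

```python
# Toy arithmetic model of CLICK + ZIP (hypothetical encoding):
# CLICK pairs up min(m, n) crossings into a full zipper, and ZIP
# then removes them by that many R2 moves, leaving the excess
# half zipper.

def click_then_zip(minus_m, plus_n):
    """minus_m = -m for a (-m)Z, plus_n = +n for a (+n)Z."""
    m, n = -minus_m, plus_n
    assert m > 0 and n > 0
    r2_moves = min(m, n)   # ZIP = repeated R2, one per paired crossing
    leftover = n - m       # the sign tells which half zipper survives
    return leftover, r2_moves

# The illustrated case m > n: a (-m)Z clicks with a (+n)Z,
# n R2 moves fire, and a (-(m-n))Z half zipper remains.
print(click_then_zip(-5, 3))  # (-2, 3)
```

Of course this records only the lengths; the actual moves live on locally planar tangle diagrams.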

For the moment, let me close by recalling the post Two halves of beta, two halves of chora.


Zipper logic (III). Universality

Continues from Zipper logic (II). Details. In this post we shall see that zipper logic is Turing universal.

The plan is the following. I shall define combinators in the zipper logic framework, as being made of two ingredients:

  • instead of a tree of application nodes we have a combination of + half zippers (i.e. there is a clear way to transform a binary tree of application nodes into a collection of + half zippers)
  • each combinator is then seen as being made of a combination of + half zippers (replacing the tree of application nodes) and the combinators S, K, I, seen also as particular combinations of + and – zippers.

1. Converting trees of application nodes into combinations of + half zippers is straightforward. Imagine such a tree with the root upwards and the leaves downwards. Each application node then has an exit arrow and two input arrows, one at the left and the other at the right.

Cut in two each right input arrow which is not a leaf or the root arrow. We obtain a collection of towers of application nodes. Replace each tower of application nodes from this collection with the corresponding + half zipper.

Then glue back along the arrows which have been cut in two.

An example is given in the following figure.
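The same decomposition can be sketched programmatically. The encoding below (nested pairs for application trees, leaves as strings) is my own, chosen only to illustrate step 1: cutting every right input arrow leaves maximal left spines of application nodes, and each spine of length k becomes a (+k)Z half zipper.

```python
# Hypothetical sketch of step 1: decompose an application tree into
# the lengths of its + half zippers.  A tree is a nested pair
# (left, right); a leaf is a string.

def half_zippers(tree):
    """Return the lengths of the + half zippers for `tree`."""
    if isinstance(tree, str):
        return []                       # a leaf: no application nodes
    # Walk down the left spine, collecting the cut-off right subtrees.
    length, cut_right = 0, []
    node = tree
    while not isinstance(node, str):
        length += 1
        cut_right.append(node[1])       # cut the right input arrow
        node = node[0]
    result = [length]                   # one (+length)Z for this spine
    for sub in cut_right:               # recurse into the cut pieces
        result.extend(half_zippers(sub))
    return result

# ((a b) c) d : one left spine of 3 application nodes -> one (+3)Z
print(half_zippers(((('a', 'b'), 'c'), 'd')))   # [3]
# a (b c) : two spines of length 1 -> two (+1)Z
print(half_zippers(('a', ('b', 'c'))))          # [1, 1]
```

Gluing back along the cut arrows is then just the reverse bookkeeping.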


2. The S, K, I combinators in zipper logic are defined in the next figure.


3. Combinators in zipper logic are by definition trees of application nodes translated into + half zippers, as explained at point 1, with leaves the S, K, I combinators defined at point 2.

4. Computation in zipper logic means the application of the moves (graph rewrites) defined in the previous post Zipper logic (II). Details.

In particular the computation with a combinator in zipper logic means the reduction according to the graph rewrites of zipper logic of the respective combinator, as defined at point 3.

5. Zipper logic is Turing universal. In order to prove this we have to check the usual reductions of the S, K, I combinators.

Let A be a combinator; then the zipper combinator which corresponds to IA reduces to the zipper combinator of A:


Let A, B be two combinators. Then the zipper combinator corresponding to the combinator (KA)B reduces as in the following figure.


The combinator (SK)K reduces to the combinator I, as in the next figure.


Now, let A, B, C be three combinators. We see then how the combinator ((SA)B)C reduces to (AC)(BC), when expressed as zipper combinators.


We see here a move called “FAN-OUT”. This move is a composite of DIST and FAN-IN, like in chemlambda. It remains to prove that, indeed, any zipper combinator is a multiplicator (i.e. that the FAN-OUT move makes sense when applied to a zipper combinator). The proof is the same as the one needed in a step of the proof that chemlambda is Turing universal. It is left to the reader, or for another time, if necessary.


Question: why is CLICK needed? After all, I used it all the time together with ZIP.

We will see this next time, when I shall prove that tangle diagrams are Turing universal, via zipper logic.


Zipper logic (II). Details

Continuing from Zipper logic, let’s make the following notation for half zippers:


As you see, each half zipper has a leading strand, i.e. 1->1″ for the (-n)Z and 1″->1′ for the (+n)Z.

The half zippers are just towers of nodes, therefore if we put one tower over another then we get a bigger tower, provided it is made of the same kind of nodes. We turn this into two moves.


We make, from two half zippers with opposed signs, a zipper and a half zipper (that is the CLICK move), then we ZIP the zipper (this corresponds to the graphic beta move). This is explained in the next figure, for the particular case n<m.


The CLICK move and the (full) zippers are not necessary, but they help to visualize more clearly what is happening. Moreover, you shall see that there are other zipper constructs, and the introduction of both full zippers and the CLICK move helps to structure future explanations.


The zipper logic formalism also uses the fanout node, the termination node and the fanin node (so we are seeing the zippers as if we were in chemlambda).

The fanout and fanin nodes satisfy the FAN-IN move.  The fanout satisfies CO-ASSOC and CO-COMM.


I mentioned in the previous post that half zippers are distributors. Indeed, they satisfy the following DIST moves when connected at the leading strand with a fanout:



The LOC PRUNING moves for half zippers are:



In the next post we shall see that zipper logic is universal.


As given here, some of the moves are not local, because there is no restriction on the length of the half zippers to which the moves apply. But this is not a serious problem because, as we shall see, it is enough to use half zippers and moves of bounded length (at least 3).