This is the third post (continuing from part I and part II) in a series of expository posts where we put together, in one place, pieces from various sources about:

- how lambda calculus is treated in chemlambda
- how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation self-contained. (However, see this page for links to online tutorials, as well as to the many posts already written on these subjects, which you may discover either by clicking on the tag cloud at the left or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

- The chemlambda formalism
- Chemlambda and lambda calculus
- Propagators, distributors, multipliers and guns
- The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], accepted at the ALIFE 14 conference, 7/30 to 8/2, 2014, Javits Center / SUNY Global Center, New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

**1. The chemlambda formalism continued: graph rewrites.**

Now we have all we need to talk about graph rewrites.

For clarity, see part II for the notion of “pattern”. Its meaning depends on which version we use: the graphical version or the grammar version. In the graphical version, a pattern is a chemlambda graph with the free ports and invisible nodes decorated with port variables. In the grammar version we have the equivalent notion of a g-pattern, which is a way to write a pattern as a multiset of graphical elements.

It is allowed to rename the port variables in a g-pattern, as long as after the renaming we still get a g-pattern. That means that if M is a g-pattern and f is any one-to-one function from V(M) to another set A of port variables, then we may replace any port variable x from V(M) by f(x). We shall not regard this (a sort of alpha renaming for g-patterns) as a graph rewrite.
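To fix ideas, here is a minimal sketch of this alpha-like renaming in Python. The encoding of graphical elements as tuples (node name, then port variables) is my own shorthand for illustration, not part of the chemlambda formalism itself.

```python
# A g-pattern as a multiset of graphical elements, each encoded as a tuple
# (node name, port variables...). This encoding is an illustration only.

def rename(pattern, f):
    """Apply a one-to-one port-variable renaming f (a dict) to every element;
    port variables not in f are left unchanged."""
    return [(e[0],) + tuple(f.get(p, p) for p in e[1:]) for e in pattern]

# Rename a1 -> b1 and x -> y in the g-pattern L[a1,a2,x] A[x,a4,a3]:
m = [("L", "a1", "a2", "x"), ("A", "x", "a4", "a3")]
print(rename(m, {"a1": "b1", "x": "y"}))
# -> [('L', 'b1', 'a2', 'y'), ('A', 'y', 'a4', 'a3')]
```

Note that as long as f is one-to-one, the renamed multiset describes the same pattern, exactly as described above.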

I shall use the following names interchangeably in what follows:

- “graph rewrite”
- “move”
- “reduction”

In simple words, a graph rewrite is a rule which says: “replace the LEFT pattern by the RIGHT pattern”.

Let’s see more precisely what a graph rewrite is. (Technically, this is a simple form of graph rewrite, one which does not depend on context; later we may speak about more involved forms. First let’s understand exactly this simple form!)

In order to define a graph rewrite, or move, we need two g-patterns, call them LEFT and RIGHT, such that (perhaps after a renaming of port variables):

- they have the same set of FV_in port variables
- they have the same set of FV_out port variables

A move is a pair of such g-patterns: the first is called the LEFT pattern of the move, the second is called the RIGHT pattern of the move.

The move can be performed from LEFT to RIGHT, called sometimes the “+” direction: replace the LEFT pattern by the RIGHT pattern.

Likewise, the move can be performed from RIGHT to LEFT, called sometimes the “-” direction: replace the RIGHT pattern with the LEFT pattern.

Technically, what I describe here can be made fully explicit as a double pushout (DPO) graph rewrite.

Even if the moves are reversible (they can be performed in the “+” or “-” direction), there is a preference for using only the “+” direction (and for embedding, if needed, a move performed in the “-” direction into a sequence of moves, called a “macro”; more about this later).

The choice of the “+” direction is not arbitrary.
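To make the LEFT-to-RIGHT replacement concrete, here is a hypothetical Python sketch, again encoding each graphical element as a tuple (node name, ports…), a convention of my own for these posts. It performs a move in the “+” direction on a given instance; the demo uses the first combing move, Arrow[x,y] Arrow[y,z] <–comb–> Arrow[x,z], which is listed later in the post.

```python
# A move is a pair (LEFT, RIGHT) of g-patterns over the same free port
# variables; in this example y is internal to the LEFT pattern.
COMB = (
    [("Arrow", "x", "y"), ("Arrow", "y", "z")],   # LEFT
    [("Arrow", "x", "z")],                        # RIGHT
)

def apply_move(graph, move, subst):
    """Perform a move in the '+' direction: cut out the instance of LEFT
    given by the port substitution subst, then glue in the corresponding
    instance of RIGHT."""
    left, right = move
    inst = lambda pat: [(e[0],) + tuple(subst.get(p, p) for p in e[1:])
                        for e in pat]
    g = list(graph)
    for e in inst(left):
        g.remove(e)            # cut the LEFT instance out of the graph
    return g + inst(right)     # glue the RIGHT instance back in

g = [("Arrow", "1", "u"), ("Arrow", "u", "2")]
print(apply_move(g, COMB, {"x": "1", "y": "u", "z": "2"}))
# -> [('Arrow', '1', '2')]
```

The “-” direction is obtained by swapping LEFT and RIGHT in the pair.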

_________________________________________________________

OK, enough with these preparations, let’s see the moves.

We shall write the moves in two ways, which are equivalent.

When expressed with g-patterns, they are written as

LEFT pattern <–name of move–> RIGHT pattern

When expressed with patterns (i.e. graphically), they appear as in the picture below.

The port names appear in blue. The name of the move appears in blue, the LEFT is on the left, the RIGHT is on the right, the move is figured by a blue arrow.

**Pedantic, but perhaps useful rant.** For some reason, there are people who confuse graphs (which are clearly defined mathematical objects) with their particular representations (i.e. doodles), taking the doodles “literally”. Graphs are graphs and doodles are doodles. When people use doodles for reasoning about graphs, it is for economy of words, the famous “a picture is worth a thousand words”. There is nothing wrong with using doodles to reason about graphs, as long as you know the convention being used. Perhaps the convention is so intuitive that it would take 1,000,000 words to make it clear (to a machine), but there is a simple criterion which helps those who don’t trust their sight: you got it right if you understand what the doodle means at the graph level.

Look again at the previous picture, which shows what a generic move looks like. The move (from LEFT to RIGHT) consists of:

- cutting the arrows and naming them by free port variables (here 1, …, 5)
- replacing the LEFT pattern by the RIGHT pattern so that the free port variables are respected (1 to 1, 2 to 2, …)
- gluing the arrows back to the rest of the graph, which is not depicted in the move.

How simple is that?

To make it even more simple, we use the following visual trick: **use the relative placements of the free ports in the doodle as the port variables**.

If you look carefully at the previous picture, you will notice that you may redraw it (without affecting what the doodle means at the graph level) by representing the free ports of the RIGHT in the same relative positions as the free ports of the LEFT.

The drawing would then look like this:

Then you may notice that you don’t need to write the port variables on the doodles, because they have the same relative positions, so you may as well describe the move as:

This is the convention used everywhere in the doodles from this blog (and it’s nothing special, it’s used everywhere).

I shall close the pedantic rant by saying that there is a deep hypocrisy in the claim that there is ANY need to spend so many words to make clear things clear, like the distinction between graphs and doodles, relative positions and so on. I ask those who think that text on a page is clear while (a well done) doodle is vague: do you attach to your text a sound introduction which explains that you are going to use Latin letters, that no, a letter and its image in the mirror are not the same, that words are to be read from left to right, that a space is the character which separates two words, that if you hit the end of a text line then you should continue on the next line, that a line is a sequence of characters terminated by an invisible character eol, …? All this is good info for building a text editor, but you don’t need to program a text editor first in order to read a book (or to program a text editor). It would be just crazy, right? Our brains use exactly the same mechanisms to parse a doodle of a text page and a doodle depicting a graph. Our brains understand very well that if you change the fonts then you don’t change the text, and so on. A big hypocrisy, which I believe has big effects in the divide between various nerd subcultures, like IT and geometers, with a handicapping effect which manifests in real life, in the form of the products IT is offering. Well, **end of rant**.

**Combing moves.** These moves are not present in the original chemlambda formalism; they are needed at the level of the g-patterns. Recall from part I that Louis Kauffman proposed to use commutative polynomials as graphical elements, which brings the need to introduce the Arrow element Arrow[x,y]. This is the same as introducing invisible nodes in the chemlambda molecules (hence the passage from molecules to patterns). The combing moves are moves which eliminate (or add) invisible nodes in patterns. In the graphical version this corresponds to decorations (of those invisible nodes) on arrows of the molecules.

A combing move eliminates an invisible node (in the + direction) or adds an invisible node (in the – direction).

A first combing move is this:

Arrow[x,y] Arrow[y,z] <–comb–> Arrow[x,z]

or graphically: remove (or add) a (decoration of an) invisible node:

Another combing move is:

Arrow[x,x] <–comb–> loop

or graphically: an arrow with its in and out ports connected is a loop.

Another family of combing moves says that if an arrow is connected to a port of a node, then the arrow can be absorbed into the port:

L[x,y,z] Arrow[u,x] <–comb–> L[u,y,z]

L[x,y,z] Arrow[y,u] <–comb–> L[x,u,z]

L[x,y,z] Arrow[z,u] <–comb–> L[x,y,u]

______________________________________

FO[x,y,z] Arrow[u,x] <–comb–> FO[u,y,z]

FO[x,y,z] Arrow[y,u] <–comb–> FO[x,u,z]

FO[x,y,z] Arrow[z,u] <–comb–> FO[x,y,u]

______________________________________

A[x,y,z] Arrow[u,x] <–comb–> A[u,y,z]

A[x,y,z] Arrow[u,y] <–comb–> A[x,u,z]

A[x,y,z] Arrow[z,u] <–comb–> A[x,y,u]

______________________________________

FI[x,y,z] Arrow[u,x] <–comb–> FI[u,y,z]

FI[x,y,z] Arrow[u,y] <–comb–> FI[x,u,z]

FI[x,y,z] Arrow[z,u] <–comb–> FI[x,y,u]

______________________________________
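The absorption family above follows one uniform rule: the port variable the Arrow shares with the node disappears, and the arrow’s other port name survives on the node. A hedged Python illustration, with graphical elements encoded as tuples (node name, ports…), a convention of my own rather than of the formalism:

```python
def absorb(graph):
    """Perform one absorption combing move in the '+' direction, if any
    applies: an Arrow element sharing a port variable with a node
    (L, FO, A, FI) is absorbed into that port; the arrow's other port
    name survives on the node. Otherwise return the graph unchanged."""
    for arrow in graph:
        if arrow[0] != "Arrow":
            continue
        _, a, b = arrow
        for node in graph:
            if node[0] == "Arrow":       # only node-arrow absorption here
                continue
            for shared, other in ((a, b), (b, a)):
                if shared in node[1:]:
                    rest = [e for e in graph
                            if e is not arrow and e is not node]
                    rest.append((node[0],)
                                + tuple(other if p == shared else p
                                        for p in node[1:]))
                    return rest
    return graph

print(absorb([("L", "x", "y", "z"), ("Arrow", "u", "x")]))
# -> [('L', 'u', 'y', 'z')]
print(absorb([("FO", "x", "y", "z"), ("Arrow", "y", "u")]))
# -> [('FO', 'x', 'u', 'z')]
```

The two printed examples reproduce the first L rule and the second FO rule from the lists above.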

Now, more interesting moves.

**The beta move.** The name is inspired by the beta reduction of lambda calculus (explanations later).

L[a1,a2,x] A[x,a4,a3] <–beta–> Arrow[a1,a3] Arrow[a4,a2]

or graphically

If we use the visual trick from the pedantic rant, we may depict the beta move as:

i.e. we use as free port variables the relative positions of the ports in the doodle. Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graph level. The chemlambda graphs are not planar graphs.
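As a sketch, the beta move can be performed wherever the third port of an L node matches the first port of an A node. The tuple encoding of graphical elements used below is my own illustration convention, not part of the formalism:

```python
def beta(graph):
    """Perform the beta move, '+' direction, at the first place it applies:
    L[a1,a2,x] A[x,a4,a3]  ->  Arrow[a1,a3] Arrow[a4,a2].
    If no L-A pair matches, return the graph unchanged."""
    for l in graph:
        if l[0] != "L":
            continue
        a1, a2, x = l[1:]
        for a in graph:
            if a[0] == "A" and a[1] == x:
                _, _, a4, a3 = a
                rest = [e for e in graph if e is not l and e is not a]
                return rest + [("Arrow", a1, a3), ("Arrow", a4, a2)]
    return graph

print(beta([("L", "a1", "a2", "x"), ("A", "x", "a4", "a3")]))
# -> [('Arrow', 'a1', 'a3'), ('Arrow', 'a4', 'a2')]
```

Observe that x is internal to the LEFT pattern and disappears, while the free ports a1, a2, a3, a4 are preserved, exactly as the move prescribes.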

**The FAN-IN move.** This is a move which resembles the beta move.

FI[a1,a4,x] FO[x,a2,a3]

<–FAN-IN–>

Arrow[a1,a3] Arrow[a4,a2]

(I wrote it like this because it does not fit on one line.)

Graphically, with the obvious convention from the pedantic rant, the move is this:
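As data, FAN-IN has exactly the shape of the beta move, with FI in place of L and FO in place of A. A small sanity check on the free ports, in a tuple encoding of my own (node name, ports…):

```python
# FAN-IN as a (LEFT, RIGHT) pair of g-patterns; the free ports a1, a4,
# a2, a3 are shared between the two sides, while x is internal to LEFT.
FAN_IN = (
    [("FI", "a1", "a4", "x"), ("FO", "x", "a2", "a3")],   # LEFT
    [("Arrow", "a1", "a3"), ("Arrow", "a4", "a2")],       # RIGHT
)

def ports(pattern):
    """The set of port variables occurring in a g-pattern."""
    return {p for e in pattern for p in e[1:]}

left, right = FAN_IN
assert ports(right) <= ports(left)          # RIGHT uses only ports of LEFT
assert ports(left) - ports(right) == {"x"}  # x is the internal port
```

This is the well-formedness condition from the definition of a move: the two g-patterns agree on their free port variables.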

**The FAN-OUT moves.** There are two moves: CO-COMM (because it resembles a diagram which expresses co-commutativity) and CO-ASSOC (for the same reason, but with co-associativity).

FO[x,a1,a2] <–CO-COMM–> FO[x,a2,a1]

and

FO[a1,u,a2] FO[u,a3,a4]

<-CO-ASSOC->

FO[a1,a3,v] FO[v,a4,a2]

or graphically:
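In a tuple encoding of graphical elements (my own convention, not the post’s), CO-COMM is a swap of the two out ports of one FO node, and CO-ASSOC re-associates two chained FO nodes, using a fresh port variable for the new internal arrow:

```python
def co_comm(graph, x):
    """CO-COMM: swap the two out ports of the FO node whose in port is x."""
    return [("FO", e[1], e[3], e[2]) if e[0] == "FO" and e[1] == x else e
            for e in graph]

def co_assoc(graph, fresh="v"):
    """CO-ASSOC, '+' direction:
    FO[a1,u,a2] FO[u,a3,a4] -> FO[a1,a3,v] FO[v,a4,a2],
    where fresh is a port variable not used elsewhere in the graph."""
    for top in graph:
        if top[0] != "FO":
            continue
        a1, u, a2 = top[1:]
        for bot in graph:
            if bot[0] == "FO" and bot is not top and bot[1] == u:
                _, _, a3, a4 = bot
                rest = [e for e in graph if e is not top and e is not bot]
                return rest + [("FO", a1, a3, fresh), ("FO", fresh, a4, a2)]
    return graph

print(co_comm([("FO", "x", "a1", "a2")], "x"))
# -> [('FO', 'x', 'a2', 'a1')]
print(co_assoc([("FO", "a1", "u", "a2"), ("FO", "u", "a3", "a4")]))
# -> [('FO', 'a1', 'a3', 'v'), ('FO', 'v', 'a4', 'a2')]
```

Note that CO-ASSOC, unlike CO-COMM, trades one internal port variable (u) for another (v), so a fresh name must be supplied.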

**The DIST moves.** These are called distributivity moves. Remark that the LEFT pattern is simpler than the RIGHT pattern in both moves.

A[a1,a4,u] FO[u,a2,a3]

<–DIST–>

FO[a1,a,b] FO[a4,c,d] A[a,c,a2] A[b,d,a3]

and

L[a1,a4,u] FO[u,a2,a3]

<–DIST–>

FI[a1,a,b] FO[a4,c,d] L[c,b,a2] L[d,a,a3]

or graphically:
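Note that the RIGHT pattern of a DIST move introduces fresh internal port variables (a, b, c, d above). A sketch of the A-node DIST move, with graphical elements encoded as tuples (a convention of mine, not the post’s):

```python
def dist_a(graph, fresh=("a", "b", "c", "d")):
    """DIST for the application node, '+' direction:
    A[a1,a4,u] FO[u,a2,a3] -> FO[a1,a,b] FO[a4,c,d] A[a,c,a2] A[b,d,a3],
    where fresh supplies four port variables not used elsewhere."""
    for app in graph:
        if app[0] != "A":
            continue
        a1, a4, u = app[1:]
        for fo in graph:
            if fo[0] == "FO" and fo[1] == u:
                _, _, a2, a3 = fo
                a, b, c, d = fresh
                rest = [e for e in graph if e is not app and e is not fo]
                return rest + [("FO", a1, a, b), ("FO", a4, c, d),
                               ("A", a, c, a2), ("A", b, d, a3)]
    return graph

print(dist_a([("A", "a1", "a4", "u"), ("FO", "u", "a2", "a3")]))
# -> [('FO', 'a1', 'a', 'b'), ('FO', 'a4', 'c', 'd'),
#     ('A', 'a', 'c', 'a2'), ('A', 'b', 'd', 'a3')]
```

The L-node DIST move has the same structure, with the RIGHT pattern using an FI and an FO node followed by two L nodes, as written above.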

**The LOCAL PRUNING moves.** These are used with the termination node. There are four moves:

FO[a1,a2,x] T[x] <–LOC-PR–> Arrow[a1,a2]

L[a1,x,y] T[x] T[y] <–LOC-PR–> T[a1]

FI[a1,a2,x] T[x] <–LOC-PR–> T[a1] T[a2]

A[a1,a2,x] T[x] <–LOC-PR–> T[a1] T[a2]

or graphically
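The four pruning rules can be sketched in one function, again with graphical elements encoded as tuples (my own illustration convention):

```python
def loc_pr(graph):
    """Perform one LOCAL PRUNING move, '+' direction, if a termination
    node T is attached at the right port; otherwise return the graph."""
    ts = {e[1] for e in graph if e[0] == "T"}   # ports carrying a T node
    for e in graph:
        kind = e[0]
        if kind == "FO" and e[3] in ts:
            # FO[a1,a2,x] T[x] -> Arrow[a1,a2]
            rest = [g for g in graph if g is not e and g != ("T", e[3])]
            return rest + [("Arrow", e[1], e[2])]
        if kind == "L" and e[2] in ts and e[3] in ts:
            # L[a1,x,y] T[x] T[y] -> T[a1]
            rest = [g for g in graph
                    if g is not e and g != ("T", e[2]) and g != ("T", e[3])]
            return rest + [("T", e[1])]
        if kind in ("FI", "A") and e[3] in ts:
            # FI[a1,a2,x] T[x] -> T[a1] T[a2], and the same for A
            rest = [g for g in graph if g is not e and g != ("T", e[3])]
            return rest + [("T", e[1]), ("T", e[2])]
    return graph

print(loc_pr([("A", "a1", "a2", "x"), ("T", "x")]))
# -> [('T', 'a1'), ('T', 'a2')]
```

As the names of the moves suggest, pruning either erases a node outright or propagates termination nodes toward the rest of the graph.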

____________________________________________________________
