Tag Archives: graph rewriting systems

A comparison of two models of computation: chemlambda and Turing machines

The purpose is to understand clearly what this story is about. The simplest stuff, OK? So that we can feel it in familiar situations.

Proceed.
Chemlambda is a collection of rules about rewrites performed on pieces of files in a certain format. Without an algorithm which tells which rewrite to use, where and when, chemlambda does nothing.

In the sophisticated version of the Distributed GLC proposal, this algorithmic part uses the Actor Model idea. Too complicated! Let’s go simpler!

The simplest algorithm for using the collection of rewrites from chemlambda is the following:
  1. take a file (in the format called “mol”, see later)
  2. look for all patterns in the file which can be used for rewrites
  3. if there are different patterns which overlap, then pick a side (by using an ordering of graph rewrites, like the precedence rules in arithmetic)
  4. apply all the rewrites at once
  5. repeat (either until there is no rewrite possible, or a given number of times, or forever)
 To spice things up just a bit, consider the next simple algorithm, which is like the one described, except that at step 2 we add:
  •  for every identified pattern, flip a coin to decide whether to keep it or ignore it for the rest of the algorithm
The reason is that randomness is the simplest way to say: who knows if I can do this rewrite when I want? Maybe I have only a part of the file on my computer, or maybe a friend has one half of the pattern and I have the other, so I have to talk with him first and agree to make the rewrite together. Who knows? Flip a coin, then.
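To fix ideas, here is a toy sketch of both algorithms in Python (my own illustration, not the actual scripts mentioned later), restricted to a single rewrite: the beta move L[a1,x,a] A[a,c,b]  – – >  Arrow[a1,b] Arrow[c,x], in the form it appears later in this notebook.

  import random

  # a "mol" is a list of nodes; a node is (name, tuple of port names)

  def find_beta(mol):
      # step 2: find all (L, A) pairs where the right.out port of the L node
      # is wired to the left.in port of the A node
      pats = []
      for i, (ni, pi) in enumerate(mol):
          for j, (nj, pj) in enumerate(mol):
              if ni == "L" and nj == "A" and pi[2] == pj[0]:
                  pats.append((i, j))
      return pats

  def reduce_mol(mol, steps=100, coin=False):
      for _ in range(steps):
          pats = find_beta(mol)
          if not pats:
              break  # no rewrite is possible anymore
          if coin:
              # the random variant: flip a coin for every identified pattern
              pats = [p for p in pats if random.random() < 0.5]
          # step 3: solve overlaps by a simple precedence (first found wins)
          used, chosen = set(), []
          for i, j in pats:
              if i not in used and j not in used:
                  chosen.append((i, j))
                  used.update((i, j))
          # step 4: apply all the chosen rewrites at once
          new = [nd for k, nd in enumerate(mol) if k not in used]
          for i, j in chosen:
              (_, (a1, x, a)), (_, (_, c, b)) = mol[i], mol[j]
              new += [("Arrow", (a1, b)), ("Arrow", (c, x))]
          mol = new  # step 5: repeat
      return mol

  # (Lx.x) applied to something at port p, as the g-pattern L[a,a,o] A[o,p,u]
  print(reduce_mol([("L", ("a", "a", "o")), ("A", ("o", "p", "u"))]))
  # prints [('Arrow', ('a', 'u')), ('Arrow', ('p', 'a'))]; two COMB moves
  # would then merge these into Arrow[p,u]: the argument comes out unchanged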

Now, proven facts.

Chemlambda with the stupid deterministic algorithm is Turing universal. This means that implicitly it is a model of computation: everything is prescribed from top to bottom. It is on a par with a Turing machine, or with a RAM model.

Chemlambda with the stupid random algorithm seems to be Turing universal as well, but I don’t have a proof for this yet. There is a reason why it should be as powerful as the stupid deterministic version, but I won’t go there.

So the right image to have is that chemlambda with the  described algorithm can do anything any computer can.

The first question is: how? For example, how does chemlambda compare with a Turing machine? If it sits at this basic level, then it is incomprehensible, because we humans can’t make sense of a scroll of bytecode unless we are highly trained in this very specialised task.

All computers do the same thing: they crunch machine code. No matter which high-level language you use to write a program, it is then compiled and eventually there is machine code which is executed, and that is the level we speak about.

It does not matter which language you use; eventually all is machine code. There is a huge architectural tower and we are at the top of it, but in the basement everything looks the same. The tower is there to make it easy for us to use the superb machine. But it is not needed otherwise; it is only for our comfort.

This is very puzzling when we look at chemlambda, because it is claimed that chemlambda has something to do with lambda calculus, and lambda calculus is the prototype of a functional programming language. So it appears that chemlambda should be associated with higher meaning and clever thinking, and abstraction of the abstraction of the abstraction.

No, from the point of view of the programmer.

Yes, from the point of view of the machine.

In order to compare chemlambda with a TM we have to put them in the same terms. One can easily put a TM in terms of a rewrite system, such that it works with the same stupid deterministic algorithm. http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

It is not yet put there, but the conclusion is obvious: chemlambda can do lambda calculus with one rewrite, while a Universal Turing Machine needs about 20 rewrites to do what TMs do.

Unbelievable!
Wait, what about distributivity, propagation, the fanin, all the other rewrites?
They are common, they just form a mechanism for signal transduction and duplication!
Chemlambda is much simpler than TM.

So you can use chemlambda directly, at this metal level, to perform lambda calculus. It is explained here:
https://chorasimilarity.wordpress.com/2015/04/21/all-successful-computation-experiments-with-lambda-calculus-in-one-list/

And I highly recommend trying to play with it by following the instructions.

You need a linux system, or any system where you have sh and awk.

Then

  1. download the repository as a zip archive
  2. unzip it and go to the directory “dynamic”
  3. open a shell and write:  bash moving_random_metabo.sh
  4. you will get a prompt and a list of files with the extension .mol ; write the name of one of them, in the form file.mol
  5. you get file.html. Open it with a browser with js enabled. For reasons I don’t understand, it works much better in Safari, Chromium or Chrome than in Firefox.

When you look at the result of the computation you see an animation, which is the equivalent of seeing a TM head running here and there on a tape. It does not make much sense at first, but you can convince yourself that it works and get a feeling for how it does it.

Once you get this feeling I will be very glad to discuss more!

Recall that all this is related  to the most stupid algorithm. But I believe it helps a lot to understand how to build on it.
____________________________________________________

Turing Machines, chemlambda style (I)

UPDATE: First animation and a more detailed version here. There will be many additions, as well as programs available from the gh-pages of the repo.

Once again: why do chemists try to reproduce silicon computers? Because that’s the movie.

    At first sight they look very different. Therefore, if we want to compare them, we have to reformulate them in ways which are similar. Rewrite systems are appropriate here.
    The Turing Machine (TM) is a model of computation which is well known as a formalisation of what a computer is. It is a machine which has some internal states (from a set S) and a head which reads/writes symbols (from a set A) on a tape. The tape is seen as an infinite word made of letters from A. The set A has a special letter (call it “b”, from blank) and the infinite words which describe tapes have to be such that all but a finite number of letters of the word are equal to “b”. Imagine a tape, infinite in both directions, which has symbols written on it, such that “b” represents an empty space. Only a finite part of the tape is filled with letters from A other than the blank letter.
    The action of the machine depends on the internal state and on the symbol read by the head. It is therefore a function of the internal state of the machine (an element of S) and of the letter read from the tape (an element of A); it outputs a letter from the alphabet A (which is written in the cell where the read letter was), a new internal state, and a move of the head one step along the tape, to the left (L) or right (R), or no move at all (N).
    The TM can be seen as a rewrite system.
    For example, one could see a TM as follows. (Pedantically this is seen as a multi-headed TM without internal states; the only interest in this distinction is that it raises the question whether there is really any meaning in discerning internal states from the tape symbols.) We start from a set (or type) of internal states S. Such states are denoted by A, B, C (thus exhibiting their type by the type of the font used). There are 3 special symbols: < is the symbol of the beginning of a list (i.e. word), > is the symbol of the end of a list (word) and M is the special symbol for the head move (or of the type associated to head moves). There is an alphabet A of external states (i.e. tape symbols), with b (the blank) being in A.
    A tape is then a finite word (list) of one of the forms < w A w’ > , < w M A w’ > , < w A M w’ >, where A is an internal state and w and w’ are finite words written with the alphabet A, which can be empty words as well.
    A rewrite replaces a left pattern (LT) by a right pattern (RT); these are denoted as LT – – > RT . Here LT and RT are sub-words of the tape word. It is supposed that all rewrites are context independent, i.e. LT is replaced by RT regardless of the place where LT appears in the tape word. The rewrite is called “local” if the lengths (i.e. numbers of letters) of LT and RT are bounded a priori.
    A TM is given as a list of Turing instructions, which have the form (current internal state, tape letter read, new internal state, tape letter written, move of the head). In the terms explained here, all this can be expressed via local rewrites.

  • Rewrites which introduce blanks at the extremities of the written tape:
    • < A   – – >   < b A
    • A >   – – >   A b >
  • Rewrites which describe how the head moves:
    • A M a   – – >   a A
    • a M A   – – >   A a
  • Turing instructions rewrites:
    • a A c   – – >   d B M c   , for the Turing instruction (A, a, B, d, R)
    • a A c   – – >   d M B c   , for the Turing instruction (A, a, B, d, L)
    • a A c   – – >   d B c   , for the Turing instruction (A, a, B, d, N)
    Together with the algorithm “at each step apply all the rewrites which are possible, else stop” we obtain the deterministic TM model of computation. For any initial tape word, the algorithm explains what the TM does to that tape. < don’t forget to link that to a part of the Cartesian method “to be sure that I made an exhaustive enumeration” which is clearly going down today > Other algorithms are of course possible. Before mentioning some very simple variants of the basic algorithm, let’s see when it works.
    If we start from a tape word as defined here, there is never a conflict of rewrites. This means that it is never the case that the LTs of two different rewrites overlap. It might be the case, though, if we formulate some rewrites a bit differently. For example, suppose that the Turing rewrites are modified to:
  • a A   – – >   d B M   , for the Turing instruction (A, a, B, d, R)
  • a A   – – >   d M B   , for the Turing instruction (A, a, B, d, L)
    Therefore, the LT of the Turing rewrites is no longer of the form “a A c”, but of the form “a A”. Then it may enter into conflict with the other rewrites, as in the cases:

  • a A M c where two overlapping rewrites are possible
    • Turing rewrite: a A M c   – – >   d M B M c , which will later produce two possibilities for the head move rewrites, due to the string M B M
    • head moves rewrite: a A M c   – – >   a c A , which then produces an LT for a Turing rewrite for c A, instead of the previous Turing rewrite for a A
  • a A > where one may apply a Turing rewrite on a A, or a blank rewrite on A >
    The list is not exhaustive. Let’s turn back to the initial formulation of the Turing rewrites and instead change the definition of a tape word. For example, suppose we allow multiple TM heads on the same tape; more precisely, suppose that we accept initial tape words of the form < w1 A w2 B w3 C … wN >. Then we shall surely encounter conflicts between head move rewrites, for patterns such as a A M B c.
    The simplest solution for these conflicts is to introduce a priority of rewrites. For example, we may impose that blank rewrites take precedence over head move rewrites, which take precedence over Turing rewrites. More such structure can be imposed (like some head move rewrites having precedence over others). Even new rewrites may be introduced, for example rewrites which allow multiple TMs on the same tape to switch places.
    Let’s see an example: the 2-symbol, 3-state busy beaver machine. Following the conventions from this work, the tape letters (i.e. the alphabet A) are “b” and “1”, and the internal states are A, B, C, HALT. (The state HALT may get special treatment, but this is not mandatory.) The rewrites are:

  • Rewrites which introduce blanks at the extremities of the written tape:
    • < X   – – >   < b X   for every internal state X
    • X >   – – >   X b >   for every internal state X
  • Rewrites which describe how the head moves:
    • X M a   – – >   a X   , for every internal state X and every tape letter a
    • a M X   – – >   X a   , for every internal state X and every tape letter a
  • Turing instructions rewrites:
    • b A c   – – >   1 B M c   , for every tape letter c
    • b B c   – – >   b C M c   , for every tape letter c
    • b C c   – – >   1 M C c   , for every tape letter c
    • 1 A c   – – >   1 HALT M c   , for every tape letter c
    • 1 B c   – – >   1 B M c   , for every tape letter c
    • 1 C c   – – >   1 M A c   , for every tape letter c
    We can enhance this by adding the priority of rewrites, for example in the previous list, any rewrite has priority over the rewrites written below it. In this way we may relax the definition of the initial tape word and allow for multiple heads on the same tape. Or for multiple tapes.
    Suppose we put the machine to work on an infinite tape with all symbols blank. This corresponds to the tape word < A >. Here are the steps of the computation:
  • < A >   – – >   < b A >
  • < b A >   – – >   < b A b >
  • < b A b >   – – >   < 1 B M b >
  • < 1 B M b >   – – >   < 1 b B >
  • < 1 b B >   – – >   < 1 b B b >
  • < 1 b B b >   – – >   < 1 b C M b >
  • < 1 b C M b >   – – >   < 1 b b C >
  • < 1 b b C >   – – >   < 1 b b C b >
  • < 1 b b C b >   – – >   < 1 b 1 M C b >
  • < 1 b 1 M C b >   – – >   < 1 b C 1 b >
  • < 1 b C 1 b >   – – >   < 1 1 M C 1 b >
  • < 1 1 M C 1 b >   – – >   < 1 C 1 1 b >
  • < 1 C 1 1 b >   – – >   < 1 M A 1 1 b >
  • < 1 M A 1 1 b >   – – >   < A 1 1 1 b >
  • < A 1 1 1 b >   – – >   < b A 1 1 1 b >
  • < b A 1 1 1 b >   – – >   < 1 B M 1 1 1 b >
  • < 1 B M 1 1 1 b >   – – >   < 1 1 B 1 1 b >
  • < 1 1 B 1 1 b >   – – >   < 1 1 B M 1 1 b >
  • < 1 1 B M 1 1 b >   – – >   < 1 1 1 B 1 b >
  • < 1 1 1 B 1 b >   – – >   < 1 1 1 B M 1 b >
  • < 1 1 1 B M 1 b >   – – >   < 1 1 1 1 B b >
  • < 1 1 1 1 B b >   – – >   < 1 1 1 1 B M b >
  • < 1 1 1 1 B M b >   – – >   < 1 1 1 1 b B >
  • < 1 1 1 1 b B >   – – >   < 1 1 1 1 b B b >
  • < 1 1 1 1 b B b >   – – >   < 1 1 1 1 b C M b >
  • < 1 1 1 1 b C M b >   – – >   < 1 1 1 1 b b C >
  • < 1 1 1 1 b b C >   – – >   < 1 1 1 1 b b C b >
  • < 1 1 1 1 b b C b >   – – >   < 1 1 1 1 b 1 M C b >
  • < 1 1 1 1 b 1 M C b >   – – >   < 1 1 1 1 b C 1 b >
  • < 1 1 1 1 b C 1 b >   – – >   < 1 1 1 1 1 M C 1 b >
  • < 1 1 1 1 1 M C 1 b >   – – >   < 1 1 1 1 C 1 1 b >
  • < 1 1 1 1 C 1 1 b >   – – >   < 1 1 1 1 M A 1 1 b >
  • < 1 1 1 1 M A 1 1 b >   – – >   < 1 1 1 A 1 1 1 b >
  • < 1 1 1 A 1 1 1 b >   – – >   < 1 1 1 HALT M 1 1 1 b >
  • < 1 1 1 HALT M 1 1 1 b >   – – >   < 1 1 1 1 HALT 1 1 b >
    At this stage there are no possible rewrites. Otherwise said, the computation stops. Remark that the priority of rewrites imposed a path of rewrite applications. Also, at each step there was only one possible rewrite, even if the algorithm does not ask for this.
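    For readers who want to check this, here is a sketch in Python (my own illustration, not the author’s programs) of the priority-ordered rewrite system above; run as it is, it prints the same list of tape words as in the steps above.

  LETTERS = ["b", "1"]
  STATES = ["A", "B", "C", "HALT"]

  def rules():
      rs = []
      # blank rewrites (highest priority)
      for X in STATES:
          rs.append((["<", X], ["<", "b", X]))
          rs.append(([X, ">"], [X, "b", ">"]))
      # head move rewrites
      for X in STATES:
          for a in LETTERS:
              rs.append(([X, "M", a], [a, X]))
              rs.append(([a, "M", X], [X, a]))
      # Turing instruction rewrites (lowest priority)
      for c in LETTERS:
          rs.append((["b", "A", c], ["1", "B", "M", c]))
          rs.append((["b", "B", c], ["b", "C", "M", c]))
          rs.append((["b", "C", c], ["1", "M", "C", c]))
          rs.append((["1", "A", c], ["1", "HALT", "M", c]))
          rs.append((["1", "B", c], ["1", "B", "M", c]))
          rs.append((["1", "C", c], ["1", "M", "A", c]))
      return rs

  def step(tape, rs):
      # apply the first rule (in priority order) whose LT occurs in the tape
      for lt, rt in rs:
          for i in range(len(tape) - len(lt) + 1):
              if tape[i:i + len(lt)] == lt:
                  return tape[:i] + rt + tape[i + len(lt):]
      return None  # no rewrite possible: the computation stops

  tape, rs = ["<", "A", ">"], rules()
  while tape is not None:
      print(" ".join(tape))
      tape = step(tape, rs)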
    More possibilities appear if we see the tape words as graphs. In this case we pass from rewrites to graph rewrites. Here is a proposal for this.
    I shall use the same kind of notation as in chemlambda: the mol format. It goes like this, explained for the busy beaver TM example. We have 9 symbols, which can be seen as nodes in a graph:

  • < which is a node with one “out” port. Use the notation FRIN out
  • > which is a node with one “in” port. Use the notation FROUT in
  • b, 1, A, B, C, HALT, M, which are nodes with one “in” and one “out” port. Use the notation: name of the node, in, out
    The rule is to connect “in” ports with “out” ports, in order to obtain a tape word. Or a tape graph, with many busy beavers on it.
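    For instance, the tape word < b A b > might be encoded as the following list of nodes (a sketch of the intended encoding, since the post stops here; the edge names 1, 2, 3, 4 connect each “out” port to the following “in” port):

FRIN 1
b 1 2
A 2 3
b 3 4
FROUT 4

(TO BE CONTINUED…)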

How to simulate an enzyme with a quine in chemlambda (II)

Continues from part (I).

How to simulate a move with a quine. Let’s think.

The starting point is a mol file which describes the graph. Then there is a program which does the reductions. We should see the program (or the specific part of it which does a move) as the enzyme of that move. OK, but a program is something and a mol file is a different thing.

Recall though that chemlambda is Turing universal. That means in particular that for any particular mol file A.mol and any particular program P which reduces mol files there is a chemlambda molecule and a reduction algorithm which simulate how the program reduces the mol file. Kind of a bootstrapping, right?

Yes, but let’s say it again: given A.mol and P, there exists B.mol such that (P reduces B.mol) simulates the computation (P reduces A.mol).

In particular there is a part E.mol of the B.mol file (or more abstractly a subgraph of the molecule B) whose reductions in the “context” B simulate the part of the program P acting on A and doing a reduction.

That part is actually a molecule, just as A is, which can be thought of as the enzyme of the move (which is implemented in P).

Then, instead of making a (theoretically possible) algorithm which generates the file B.mol from A.mol and P, we may do something simpler.

We would like to use a quine as E.mol.

That is because it respects the chemical analogy in the sense that the reduction of E.mol preserves the number of nodes (atoms).

For this we look again at what a move is, and also at what new primitive we need in order to make all this function.

A move is described by a pair of patterns (i.e. molecules), called LEFT and RIGHT pattern, with the property that there is a pairing between the free ports of the LEFT pattern and the RIGHT pattern.

The move then is implemented by an algorithm which:

  • does a pattern matching for the LEFT pattern
  • then replaces the LEFT pattern by the RIGHT pattern.

(I should add that not every instance of the LEFT pattern in A.mol is replaced by a RIGHT pattern, because there may be conflicts created by the fact that a node of A.mol appears in two different LEFT patterns, so there is a need for a criterion which decides which move to do. I neglect this, you’ll see later why; roughly, I concentrate on the move, because the criterion (or priority choice) is a different problem.)
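For instance, in the g-pattern notation used in the posts below, the beta move fits this description: LEFT is L[a1,x,a] A[a,c,b], RIGHT is Arrow[a1,b] Arrow[c,x], and the pairing matches the free ports a1, x, c, b of LEFT with the ports with the same names in RIGHT.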

Aha! We need the possibility to make pairings of edges, or even better (given the mol format) we need a way to make pairings of ports.

I shall use the notation a:b for such a pairing. Notice that:

  • we see such pairings when we say that x is of type a, i.e. x:a (recall that one can see types as pairings between edges of molecules and edges of other molecules which represent types from the Calculus of Constructions)
  • the pairing a:A, saying that the edge a belongs to the actor A, is a pairing (induces a pairing) between the edges of a molecule and the edges of an actor diagram in distributed GLC
  • the pairing a:b, where a is an edge of a molecule and b is an edge of another molecule which represents an emergent algebra term (see the thread on projective conical spaces), says that edge “a” is in place “b”
  • the pairing a:b can be the consequence of a communication; non-exclusively, one may think of a:b as a consequence of c?(b) | c!(a)
  • in conclusion, moves are particular pairings coupled with an algorithm for associating a LEFT pattern to a RIGHT pattern, pretty much as typing is a pairing with another algorithm, and places (i.e. emergent algebras and their reductions) are of the same kind; the algorithm which does the pairing is a matter of communication procedure and it can be implemented by process calculi, actor models or whatever you like best.

So, let’s pick a small quine as E.mol, like a 9- or 10-quine which has the property that it has one LEFT pattern for (say) the beta move, and let’s do the following:

  • pair a LEFT pattern from A.mol with the same pattern (by pattern matching) from E.mol
  • reduce E.mol (bootstrapping) and get E’.mol, which is identical to E.mol up to renaming some of the port (i.e. edge) names
  • update the pairing (which is akin to the intention of replacing the LEFT pattern by the RIGHT pattern)
  • let the program in charge of A.mol rewire (apply the update)

It’s perhaps too short an explanation; details from A to Z will come next year.

Wish you the best for 2015!

__________________________________________________________________________

Y again: conclusion!

In the post Y again: conflict! I took the most obvious reduction strategy (sequential) and applied it to the reduction of “the Y combinator applied to something”. The reduction ended in a conflict between two moves which compete for the same g-pattern.

Then, in the post Y again: compete! I took in parallel the two possible outcomes of the conflict. The contenders were branded as fast-shooting cowboys, offering a show.

Surprisingly, both possible paths of reduction ended in a very simple version of the Y combinator.

Only  that the very simple version is not one coming from lambda calculus!

Indeed, let’s recall what the Y combinator is, seen as a g-pattern in chemlambda.

In lambda calculus the Y combinator is

Lx.( (Ly.(x(yy))) (Ly.(x(yy))) )

As a molecule, it looks like this.

 

[image: yagain_2]

As g-pattern, it looks like this (see this post  and this post for the conversion of lambda terms into g-patterns):

L[a,x,o] A[b,c,a] FO[x,y,z]

L[e,d,b] FO[d,f,g] A[f,g,h] A[y,h,e]

L[j,i,c] FO[i,l,m] A[l,m,k] A[z,k,j]

 

Applied to something means we add to this g-pattern  the following:

A[o,p,u]

with the meaning that Y applies to whatever links to the port “p”. (But mind that in chemlambda there is no variable or term passing, nor evaluation! So this is a way of speaking in the lambda calculus realm only.)

The two mentioned posts about Y again led to the conclusion that the g-pattern “Y applied to something” behaves (eventually, after several reductions) as the far more simple g-pattern:

A[o,p,u]                (i.e. applied to something at port “p”)

L[b,a,o]

FO[a,c,d] A[c,d,b]

Now, this means that the Y combinator g-pattern may be safely replaced in computations by

L[b,a,o]

FO[a,c,d] A[c,d,b]

or, in graphical version, by

 

[image: yagain_5]

But this is outside lambda calculus.

So what?

It is far simpler than the Y combinator from lambda calculus.

The same happens with other lambda terms and reductions. (See for example the post Actors for the Ackermann machine for another example. Incidentally, the analysis of the Ackermann machine, i.e. the graph which behaves like the Ackermann function, gave me the idea of using the actor model with GLC. This evolved into arXiv:1312.4333.)

This shows that chemlambda, even with the dumbest sequential reduction strategy (OK, enhanced in obvious ways so that it solves conflicts), can do more with less fuss than lambda calculus.

By looking around the net (recall that I’m a mathematician, so excuse my ignorance of well-known CS people; I’m working on this), I can’t help but wonder what chemlambda would give in relation with, for example:

Of course, the dream is to go much, much further. Why? Because of  the List of Ayes/Noes of the artificial chemistry chemlambda.

__________________________________________________________

 

Lambda calculus and the fixed point combinator in chemlambda (VIII)

This is the 8th  (continuing from part I  and part II  and part III and part IV and part V and part VI  and part VII) in a series of expository posts where we put together in one place the pieces from various places about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  presented  by Louis Kauffman in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York.  Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

Tags. I shall use the name “tag” instead of “actor” or “type”, because it is more generic (and because in future developments we shall talk more about actors and types, continuing from the post Actors as types in the beta move, tentative).

Every port of a graphical element (see part II)  and the graphical element itself can have tags, denoted by :tagname.

There is a null tag “null” which can be omitted in the g-patterns.

As an example, we may see, in the most ornate way, graphical elements like this one:

L[x:a,y:b,z:c]:d

where of course

L[x:null,y:null,z:null]:null    means L[x,y,z]

The port names are tags; in particular “in”, “out”, “middle”, “left” and “right” are tags.

Any concatenation of tags is a tag.  Concatenation of tags is denoted by a dot, for example “left.right.null.left.in”.  By the use of “null” we have

a.null –concat–> a

null.a –concat–> a

I shall not regard concat as a move in itself (maybe I should, but that is for later).
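As a tiny illustration (my own sketch, not part of the formalism), concat with “null” as the neutral element can be written in Python:

  def concat(tags):
      # drop the neutral element "null"; an empty result is "null" itself
      parts = [t for t in tags if t != "null"]
      return ".".join(parts) if parts else "null"

  print(concat(["left", "null", "right"]))   # left.right
  print(concat(["null", "a"]))               # a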

Further in this post I shall not use tags for nodes.

Moves with tags. We can use tags in the moves, according to a predefined convention. I shall give several examples.

1. The FAN-IN move with tags. If the tags a and b are different then

FI[x:a, y:b, z:c] FO[z:c,u:b, v:a]

–FAN-IN–>

Arrow[x:a,v:a] Arrow[y:b,u:b]

Remark that the move is not reversible.

It means that you can do FAN-IN only if the right tags are there.

2. COMB with tags.

L[x:a, y:b, z:c] Arrow[y:b, u:d]

–COMB–>

L[x:a, u:d,z:c]

and so on for all the comb moves which involve two graphical elements.

3. DIST with tags.  There are two DIST moves, here with tags.

A[x:a,y:b,z:c] FO[z:c,u:d,v:e]

–DIST–>

FO[x:a, w:left.d, p:right.e]   FO[y:b, s:left.d, t:right.e]

A[w:left.d, s:left.d, u:d]   A[p:right.e, t:right.e, v:e]

In graphical version

 

[image: dist_with_tags]

 

and the DIST move for the L node:

L[y:b, x:a, z:c] FO[z:c, u:d, v:e]

–DIST–>

FI[p:right, w:left, x:a] FO[y:b, s:left, t:right]

L[s:left, w:left,u:d]  L[t:right, p:right, v:e]

In graphical version:

 

[image: dist_tags_lambda]

 4. SHUFFLE. This move replaces CO-ASSOC and CO-COMM. (It can be done as a sequence of CO-COMM and CO-ASSOC moves; conversely, CO-COMM and CO-ASSOC can be done by SHUFFLE and LOC PRUNING; explanations another time.)

FO[x:a, y:b, z:c]  FO[y:b, w:left, p:right] FO[z:c, s:left, t:right]

–SHUFFLE–>

FO[x:a, y:left, z:right]  FO[y:left, w, s] FO[z:right, p, t]

In graphical version:

 

[image: shuffle_with.tags]

 

____________________________________________________________

 

 

 

Lambda calculus and the fixed point combinator in chemlambda (VII)

This is the 7th  (continuing from part I  and part II  and part III and part IV and part V and part VI)  in a series of expository posts where we put together in one place the pieces from various places about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which is accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York, (go see the presentation of Louis Kauffman if you are near the event.) Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

In this post I take a simple example which contains beta reduction and self-multiplication.

Maybe self-multiplication is too long a word. A short one would be “dup”; any tacit programming language has it. However, chemlambda only superficially resembles tacit programming (and it’s not a language, arguably, but a GRS, never mind).

Or “self-dup” because chemlambda has no “dup”, but a mechanism of self-multiplication, as explained in part VI.

Enough with the problem of the right denomination, because

“A rose by any other name would smell as sweet”

as somebody wrote, clearly not  believing that the limit of his world is the limit of his language.

Let’s consider the lambda term (Lx.xx)(Ly.yz). In lambda calculus there is the following string of reductions:

(Lx.xx)(Ly.yz) -beta-> (Ly.yz) (Lu.uz) -beta-> (Lu.uz) z -beta-> zz

What do we see? Let’s take it slower. Denote C = xx and B = Ly.yz. Then:

(Lx.C)B -beta-> C[x:=B] = (xx)[x:=B] = (x)[x:=B] (x)[x:=B] = BB = (Ly.yz) B -beta-> (yz)[y:=B] = (y)[y:=B] (z)[y:=B] = Bz = (Lu.uz)z -beta-> (uz)[u:=z] = (u)[u:=z] (z)[u:=z] = zz
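As a quick sanity check of this chain, one can mimic it with Python closures (my illustration; the free variable z is kept as a symbolic string):

  def app(f, a):
      # apply f to a; if f is not a function, keep the application symbolic
      return f(a) if callable(f) else ("app", f, a)

  z = "z"
  term = (lambda x: app(x, x))(lambda y: app(y, z))
  print(term)   # ('app', 'z', 'z'), i.e. zz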

Now, with chemlambda and its moves performed  only from LEFT to RIGHT.

The g-pattern which represents (Lx.xx)(Ly.yz) is

L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b]  L[w,y,c] A[y,z,w]

We can only do a beta move:

L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b]  L[w,y,c] A[y,z,w]

<–beta–>

Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]

We can do two COMB moves

Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]

2 <–COMB–>

FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]

Now look: that is not a representation of a lambda term, because FO[c,u,v] is “in the middle”, i.e. the middle.in port of FO[c,u,v] is the out port of B, i.e. the right.out port of the lambda node L[w,y,c]. At the same time, the out ports of FO[c,u,v] are the in ports of A[u,v,b].

The only move which can be performed is DIST, which starts the self-dup or self-multiplication of B = L[w,y,c] A[y,z,w] :

FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]

<–DIST–>

FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]

This is still not a representation of a lambda term. Notice also that the g-pattern which represents B has not yet self-multiplied. However, we can already perform a beta move for L[g,f,u] A[u,v,b], and we get (after 2 COMB moves as well)

FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]

<–beta–>

FI[e,f,y] FO[w,g,h] L[h,e,v] Arrow[g,b] Arrow[v,f] A[y,z,w]

2 <–COMB–>

FI[e,f,y] FO[w,b,h] L[h,e,f] A[y,z,w]

This looks like a weird g-pattern. Clearly it is not a g-pattern coming from a lambda term, because it contains the fanin node FI[e,f,y]. Let’s write the g-pattern again as

L[h,e,f]  FI[e,f,y]  A[y,z,w] FO[w,b,h]

(for our own pleasure, the order of the elements in the g-pattern does not matter)  and remark that A[y,z,w] is “conjugated” by the FI[e,f,y] and FO[w,b,h].

We can apply another DIST move

L[h,e,f]  FI[e,f,y]  A[y,z,w] FO[w,b,h]

<–DIST–>

A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]

and now there is only one move which can be done, namely a FAN-IN:

A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]

<–FAN-IN–>

Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]

which gives after 2 COMB moves:

Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]

2 <–COMB–>

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

The g-pattern

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

is a representation of a lambda term, finally: the representation of (Le.ez)z. Great!

From here, though, we can apply only a beta move at the g-pattern A[f,k,b]  L[h,e,f]

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

<–beta–>

Arrow[h,b] Arrow[k,e] A[e,l,h] FO[z,k,l]

2 <–COMB–>

FO[z,k,l] A[k,l,b]

which represents zz.

_____________________________________________________

 

Lambda calculus and the fixed point combinator in chemlambda (VI)

This is the 6th  (continuing from part I  and part II  and part III and part IV and part V)   in a series of expository posts where we put together in one place the pieces from various places about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which is accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York, (go see the presentation of Louis Kauffman if you are near the event.) Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

In this post I want to concentrate on the mechanism of self-multiplication for g-patterns coming from lambda terms (see  part IV   where the algorithm of translation from lambda terms to g-patterns is explained).

Before that, please notice that there is a lot to say about an important problem which shall be described later in detail. But here it is, to keep an eye on it.

Chemlambda in itself is only a graph rewriting system. In part V it is explained that the beta reduction from lambda calculus needs an evaluation strategy in order to be used. We noticed that in chemlambda self-multiplication is needed in order to prove that one can do beta reduction as the beta move.

We are heading towards the obvious conclusion that in chemlambda reduction (i.e. the beta move) and self-multiplication are just names used for parts of the computation. Indeed, the clear conclusion is that there is a computation which can be done with chemlambda, which has some parts where we use the beta move (and possibly some COMB, CO-ASSOC, CO-COMM, LOC PRUNING moves) and some other parts where we use DIST and FAN-IN (and possibly some of the moves COMB, CO-ASSOC, CO-COMM, LOC PRUNING). These two parts bear the names reduction and self-multiplication respectively, but in the big computation they mix into a whole. There are only moves, graph rewrites applied to a molecule.

Which brings the problem: chemlambda in itself is not sufficient for having a model of computation. We need to specify how, where and when the rewrites apply to molecules.

There may be many variants, roughly described as: sequential, parallel, concurrent, decentralized, random, based on chemical reaction network models, etc

Each model of computation (which can be made compatible with chemlambda) gives a different whole when used with chemlambda. Until now, in this series there has been no mention of a model of computation.

There is another aspect of this. It is obvious that chemlambda graphs form a larger class than lambda terms, and also that the graph rewrites apply to more general situations than beta reduction (and eventually an evaluation strategy). It means that the important problem of defining a model of computation over chemlambda will have an influence on the way chemlambda molecules “compute” in general.

The model of computation which I prefer is not based on chemical reaction networks, nor on process calculi, but on a new model, inspired by the Actor Model, called the distributed GLC. I shall explain why I believe that the Actor Model of Hewitt is superior to those mentioned previously (with respect to decentralized, asynchronous computation in the real Internet, and also in the real world), I shall explain my understanding of that model, and eventually the distributed GLC proposal by me and Louis Kauffman will be exposed in all details.

4.  Self-multiplication of a g-pattern coming from a lambda term.

For the moment we concentrate on the self-multiplication phenomenon for g-patterns which represent lambda terms. In the following there is a departure from the ALIFE 14 article: I shall not use the path which consists in going to combinator patterns, nor shall I discuss in this post why the self-multiplication phenomenon is not confined to the world of g-patterns coming from lambda terms. That is for a future post.

In this post I want to give an image of how these g-patterns self-multiply, in the sense that most of the self-multiplication process can be explained independently of the computing model. Later on we shall come back to this, we shall look outside lambda calculus as well, and we shall also explore the combinator molecules.

OK, let’s start. In part V it was noticed that after an application of the beta rule to the g-pattern

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

we obtain (via COMB moves)

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

and the problem is that we have a g-pattern which is not coming from a lambda term, because it has a FOTREE in the middle of it. It looks like this (recall that FOTREEs are figured in yellow and the syntactic trees in light blue):

[image: structure_12]

The question is: what can happen next? Let’s simplify the setting by taking the FOTREE in the middle as a single fanout node; then we ask what moves can be applied further to the g-pattern

C[x] FO[x,a,b]

Clearly we can apply DIST moves. There are two DIST moves, one for the application node, the other for the lambda node.

There is a chain of propagation of DIST moves through the syntactic tree of C, which is independent of the model of computation chosen (i.e. of the rules about which, when and where rewrites are used), because the syntactic tree is a tree.

Look what happens. We have the propagation of DIST moves (for the application nodes, say) first, which produces two copies of a part of the syntactic tree which contains the root.

[image: structure_7]

At some point we arrive at a pattern which allows the application of a DIST move for a lambda node. We do the rewrite:

[image: structure_8]

We see that fanins appear! … and then the propagation of DIST moves through the syntactic tree continues until eventually we get this:

structure_9So the syntactic tree self-multiplied, but the two copies are still connected by FOTREEs  which connect to left.out ports of the lambda nodes which are part of the syntactic tree (figured only one in the previous image).

Notice that now (or even earlier, it does not actually matter; why will be explained rigorously when we talk about the computing model, for the moment we only want to see if it is possible) we are in a position to apply the FAN-IN move. Also, it is clear that by using CO-COMM and CO-ASSOC moves we can shuffle the arrows of the FOTREE, which is “conjugated” with a fanin at the root and with fanouts at the leaves, so that eventually we get this:

[image: structure_10]

The self-multiplication is achieved! It looks strikingly like the anaphase [source]

[image: 800px-Anaphase.svg]

followed by telophase [source]

[image: Telophase.svg]

____________________________________________________