Tag Archives: beta reduction

Lambda calculus and the fixed point combinator in chemlambda (VIII)

This is the 8th post (continuing from part I, part II, part III, part IV, part V, part VI and part VII) in a series of expository posts where we put together in one place pieces that are scattered in various other places, about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation self-contained. (However, look up this page: there are links to online tutorials, as well as many earlier posts on these subjects, which you may discover either by clicking on the tag cloud at left or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], presented by Louis Kauffman at the ALIFE 14 conference (July 30 to August 2, 2014, Javits Center / SUNY Global Center, New York). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

Tags. I shall use the name “tag” instead of “actor” or “type”, because it is more generic (and because in future developments we shall talk more about actors and types, continuing from the post Actors as types in the beta move, tentative).

Every port of a graphical element (see part II)  and the graphical element itself can have tags, denoted by :tagname.

There is a null tag “null” which can be omitted in the g-patterns.

As an example, we may write graphical elements in the most ornate way, like this one:

L[x:a,y:b,z:c]:d

where of course

L[x:null,y:null,z:null]:null    means L[x,y,z]

The port names are tags; in particular “in”, “out”, “middle”, “left” and “right” are tags.

Any concatenation of tags is a tag. Concatenation of tags is denoted by a dot, for example “left.right.null.left.in”. Using “null” we have

a.null –concat–> a

null.a –concat–> a
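For concreteness, here is a minimal Python sketch of tag concatenation with the “null” rules above. It is my own illustration, not part of the formalism, and it applies the null rules eagerly, so the “null” components are dropped right away:

# tags are plain strings; "null" is the neutral tag for concatenation
def concat(*tags):
    """Concatenate tags with '.', applying a.null -> a and null.a -> a."""
    parts = [t for t in tags if t != "null"]
    return ".".join(parts) if parts else "null"

assert concat("a", "null") == "a"
assert concat("null", "a") == "a"
assert concat("left", "right", "null", "left", "in") == "left.right.left.in"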

I shall not regard concat as a move in itself (maybe I should, but that is for later).

Further in this post I shall not use tags for nodes.

Moves with tags. We can use tags in the moves, according to a predefined convention. I shall give several examples.

1. The FAN-IN move with tags. If the tags a and b are different then

FI[x:a, y:b, z:c] FO[z:c,u:b, v:a]

–FAN-IN–>

Arrow[x:a,v:a] Arrow[y:b,u:b]

Remark that the move is not reversible.

It means that you can do FAN-IN only if the right tags are there.
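As an illustration, here is a minimal Python sketch of the tagged FAN-IN move. The encoding of nodes as tuples of (port variable, tag) pairs is my own choice for this post, not the official chemlambda scripts:

def fan_in_with_tags(fi, fo):
    """FI[x:a, y:b, z:c] FO[z:c, u:b, v:a] --FAN-IN--> Arrow[x:a, v:a] Arrow[y:b, u:b],
    allowed only if the tags a and b are different."""
    _, (x, a), (y, b), (z1, c1) = fi   # FI[x:a, y:b, z:c]
    _, (z2, c2), (u, b2), (v, a2) = fo # FO[z:c, u:b, v:a]
    if a == b or (z1, c1) != (z2, c2) or a != a2 or b != b2:
        return None                    # wrong tags: the move does not apply
    return [("Arrow", (x, a), (v, a)), ("Arrow", (y, b), (u, b))]

fi = ("FI", ("x", "a"), ("y", "b"), ("z", "c"))
fo = ("FO", ("z", "c"), ("u", "b"), ("v", "a"))
print(fan_in_with_tags(fi, fo))  # two Arrow elements, as in the move above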

2. COMB with tags.

L[x:a, y:b, z:c] Arrow[y:b, u:d]

–COMB–>

L[x:a, u:d,z:c]

and so on for all the comb moves which involve two graphical elements.

3. DIST with tags.  There are two DIST moves, here with tags.

A[x:a,y:b,z:c] FO[z:c,u:d,v:e]

–DIST–>

FO[x:a, w:left.d, p:right.e]   FO[y:b, s:left.d, t:right.e]

A[w:left.d, s:left.d, u:d]   A[p:right.e, t:right.e, v:e]

In the graphical version:

[figure: dist_with_tags]

and the DIST move for the L node:

L[y:b, x:a, z:c] FO[z:c, u:d, v:e]

–DIST–>

FI[p:right, w:left, x:a] FO[y:b, s:left, t:right]

L[s:left, w:left,u:d]  L[t:right, p:right, v:e]

In the graphical version:

[figure: dist_tags_lambda]
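Here is a minimal Python sketch of the first DIST move above (the one for the application node), in the same tuple encoding as before. The fresh port variables are generated locally, which is my choice for the illustration, not part of the formalism:

import itertools
_counter = itertools.count()

def fresh():
    return "w%d" % next(_counter)

def dist_A_with_tags(a_node, fo_node):
    """A[x:a,y:b,z:c] FO[z:c,u:d,v:e] --DIST--> two FO and two A nodes,
    with the new ports tagged left.d and right.e, as in the move above."""
    _, (x, a), (y, b), zc1 = a_node
    _, zc2, (u, d), (v, e) = fo_node
    assert zc1 == zc2, "the out port of A must be the in port of FO"
    w, p, s, t = fresh(), fresh(), fresh(), fresh()
    return [("FO", (x, a), (w, "left." + d), (p, "right." + e)),
            ("FO", (y, b), (s, "left." + d), (t, "right." + e)),
            ("A",  (w, "left." + d), (s, "left." + d), (u, d)),
            ("A",  (p, "right." + e), (t, "right." + e), (v, e))]

a  = ("A",  ("x", "a"), ("y", "b"), ("z", "c"))
fo = ("FO", ("z", "c"), ("u", "d"), ("v", "e"))
print(dist_A_with_tags(a, fo))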

4. SHUFFLE. This move replaces CO-ASSOC and CO-COMM. (It can be done as a sequence of CO-COMM and CO-ASSOC moves; conversely, CO-COMM and CO-ASSOC can be done by SHUFFLE and LOC PRUNING; explanations another time.)

FO[x:a, y:b, z:c]  FO[y:b, w:left, p:right] FO[z:c, s:left, t:right]

–SHUFFLE–>

FO[x:a, y:left, z:right]  FO[y:left, w, s] FO[z:right, p, t]

In the graphical version:

[figure: shuffle_with.tags]
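And a minimal Python sketch of SHUFFLE, read directly off the g-pattern rule above. The untagged ports of the result are given the “null” tag here, which is my convention for the illustration:

def shuffle(fo1, fo2, fo3):
    """FO[x:a,y:b,z:c] FO[y:b,w:left,p:right] FO[z:c,s:left,t:right]
    --SHUFFLE--> FO[x:a,y:left,z:right] FO[y:left,w,s] FO[z:right,p,t]"""
    _, (x, a), (y, b), (z, c) = fo1
    _, yb, (w, _), (p, _) = fo2
    _, zc, (s, _), (t, _) = fo3
    assert yb == (y, b) and zc == (z, c)
    return [("FO", (x, a), (y, "left"), (z, "right")),
            ("FO", (y, "left"),  (w, "null"), (s, "null")),
            ("FO", (z, "right"), (p, "null"), (t, "null"))]

The point of the shuffle is visible in the last two lines: the two “left” branches (w and s) end up on one fanout and the two “right” branches (p and t) on the other.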

____________________________________________________________

 

 

 


Lambda calculus and the fixed point combinator in chemlambda (VII)

This is the 7th post (continuing from part I, part II, part III, part IV, part V and part VI) in a series of expository posts where we put together in one place pieces that are scattered in various other places, about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation self-contained. (However, look up this page: there are links to online tutorials, as well as many earlier posts on these subjects, which you may discover either by clicking on the tag cloud at left or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which was accepted at the ALIFE 14 conference (July 30 to August 2, 2014, Javits Center / SUNY Global Center, New York; go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

In this post I take a simple example which contains beta reduction and self-multiplication.

Maybe “self-multiplication” is too long a word. A short one would be “dup”; any tacit programming language has it. However, chemlambda only superficially resembles tacit programming (and it is arguably not a language but a GRS, a graph rewriting system, never mind).

Or call it “self-dup”, because chemlambda has no “dup”, but rather a mechanism of self-multiplication, as explained in part VI.

Enough with the problem of finding the right name, because

“A rose by any other name would smell as sweet”

as somebody wrote, clearly not  believing that the limit of his world is the limit of his language.

Let’s consider the lambda term (Lx.xx)(Ly.yz). In lambda calculus there is the following string of reductions:

(Lx.xx)(Ly.yz) -beta-> (Ly.yz) (Lu.uz) -beta-> (Lu.uz) z -beta-> zz

What do we see? Let’s take it slower. Denote C = xx and B = Ly.yz. Then:

(Lx.C)B -beta-> C[x:=B] = (xx)[x:=B] = (x)[x:=B] (x)[x:=B] = BB = (Ly.yz) B -beta-> (yz)[y:=B] = (y)[y:=B] (z)[y:=B] = Bz = (Lu.uz)z -beta-> (uz)[u:=z] = (u)[u:=z] (z)[u:=z] = zz
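For comparison, here is a minimal Python sketch of the same reduction done the classical way, with explicit substitution. Alpha-renaming is omitted, which is harmless for this particular example because the bound variables x, y and the free variable z are all distinct; the term encoding is my own:

def subst(term, x, val):
    """Replace the free occurrences of variable x in term by val."""
    kind = term[0]
    if kind == 'var':
        return val if term[1] == x else term
    if kind == 'lam':
        _, y, body = term
        return term if y == x else ('lam', y, subst(body, x, val))
    _, f, a = term
    return ('app', subst(f, x, val), subst(a, x, val))

def beta_step(term):
    """Reduce the leftmost-outermost redex, if there is one."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, (_, x, body), arg = term
        return subst(body, x, arg)
    if term[0] == 'app':
        return ('app', beta_step(term[1]), term[2])
    return term

# (Lx.xx)(Ly.yz)
t = ('app',
     ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))),
     ('lam', 'y', ('app', ('var', 'y'), ('var', 'z'))))
for _ in range(3):
    t = beta_step(t)
print(t)   # ('app', ('var', 'z'), ('var', 'z')), i.e. zz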

Now, with chemlambda and its moves performed  only from LEFT to RIGHT.

The g-pattern which represents (Lx.xx)(Ly.yz) is

L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b]  L[w,y,c] A[y,z,w]

We can only do a beta move:

L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b]  L[w,y,c] A[y,z,w]

<–beta–>

Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]

We can do two COMB moves

Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]

2 <–COMB–>

FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]

Now look: that is not a representation of a lambda term, because FO[c,u,v] is “in the middle”, i.e. the middle.in port of FO[c,u,v] is connected to the out port of B, that is to the right.out port of the lambda node L[w,y,c]. At the same time, the out ports of FO[c,u,v] are connected to the in ports of A[u,v,b].
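To make the bookkeeping concrete, here is a minimal Python sketch which reproduces the beta move and the two COMB moves above, with a g-pattern encoded as a list of node tuples. The encoding and the helper names are mine, not the official chemlambda scripts:

def beta(g, i, j):
    """Apply L[a1,a2,x] A[x,a4,a3] <-beta-> Arrow[a1,a3] Arrow[a4,a2]
    to the L node at index i and the A node at index j of the g-pattern g."""
    _, a1, a2, x = g[i]
    _, x2, a4, a3 = g[j]
    assert x == x2, "the right.out of L must be the left.in of A"
    rest = [n for k, n in enumerate(g) if k not in (i, j)]
    return rest + [("Arrow", a1, a3), ("Arrow", a4, a2)]

def comb(g, k, keep_second=False):
    """Eliminate the Arrow at index k by renaming one of its port variables
    to the other throughout g (the COMB move); keep_second picks the survivor."""
    _, a, b = g[k]
    old, new = (a, b) if keep_second else (b, a)
    rest = [n for i, n in enumerate(g) if i != k]
    return [(n[0],) + tuple(new if p == old else p for p in n[1:]) for n in rest]

# the g-pattern of (Lx.xx)(Ly.yz), as above
g = [("L", "a1", "x", "a"), ("FO", "x", "u", "v"), ("A", "u", "v", "a1"),
     ("A", "a", "c", "b"), ("L", "w", "y", "c"), ("A", "y", "z", "w")]
g = beta(g, 0, 3)                 # the pair L[a1,x,a] A[a,c,b]
g = comb(g, 4, keep_second=True)  # Arrow[a1,b]: keep the free out port b
g = comb(g, 4, keep_second=False) # Arrow[c,x]:  keep c
print(g)  # [('FO','c','u','v'), ('A','u','v','b'), ('L','w','y','c'), ('A','y','z','w')]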

The only move which can be performed is DIST, which starts the self-dup or self-multiplication of B = L[w,y,c] A[y,z,w] :

FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]

<–DIST–>

FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]

This is still not a representation of a lambda term. Notice also that the g-pattern which represents B has not yet self-multiplied. However, we can already perform a beta move  for L[g,f,u] A[u,v,b] and we get (after 2 COMB moves as well)

FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]

<–beta–>

FI[e,f,y] FO[w,g,h] L[h,e,v] Arrow[g,b] Arrow[v,f] A[y,z,w]

2 <–COMB–>

FI[e,f,y] FO[w,b,h] L[h,e,f] A[y,z,w]

This looks like a weird g-pattern. Clearly it is not a g-pattern coming from a lambda term, because it contains the fanin node FI[e,f,y]. Let’s write the g-pattern again as

L[h,e,f]  FI[e,f,y]  A[y,z,w] FO[w,b,h]

(for our own pleasure; the order of the elements in a g-pattern does not matter) and remark that A[y,z,w] is “conjugated” by FI[e,f,y] and FO[w,b,h].

We can apply another DIST move

L[h,e,f]  FI[e,f,y]  A[y,z,w] FO[w,b,h]

<–DIST–>

A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]

and now there is only one move which can be done, namely a FAN-IN:

A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]

<–FAN-IN–>

Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]

which gives after 2 COMB moves:

Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]

2 <–COMB–>

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

The g-pattern

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

is a representation of a lambda term, finally: the representation of (Le.ez)z. Great!

From here, though, we can apply only a beta move, at the pair A[f,k,b] L[h,e,f] of the g-pattern:

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

<–beta–>

Arrow[h,b] Arrow[k,e] A[e,l,h] FO[z,k,l]

2 <–COMB–>

FO[z,k,l] A[k,l,b]

which represents zz.

_____________________________________________________

 

Lambda calculus and the fixed point combinator in chemlambda (VI)

This is the 6th post (continuing from part I, part II, part III, part IV and part V) in a series of expository posts where we put together in one place pieces that are scattered in various other places, about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation self-contained. (However, look up this page: there are links to online tutorials, as well as many earlier posts on these subjects, which you may discover either by clicking on the tag cloud at left or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which was accepted at the ALIFE 14 conference (July 30 to August 2, 2014, Javits Center / SUNY Global Center, New York; go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

In this post I want to concentrate on the mechanism of self-multiplication for g-patterns coming from lambda terms (see  part IV   where the algorithm of translation from lambda terms to g-patterns is explained).

Before that, please notice that there is a lot to say about an important problem, which shall be described later in detail. But here it is, so that we keep an eye on it.

Chemlambda in itself is only a graph rewriting system. In part V it is explained that the beta reduction from lambda calculus needs an evaluation strategy in order to be used. We noticed that in chemlambda self-multiplication is needed in order to prove that one can do beta reduction via the beta move.

We head towards the obvious conclusion that in chemlambda reduction (i.e. the beta move) and self-multiplication are just names used for parts of the computation. Indeed, the clear conclusion is that there is a computation which can be done with chemlambda, which has some parts where we use the beta move (and possibly some COMB, CO-ASSOC, CO-COMM, LOC PRUNING moves) and some other parts where we use DIST and FAN-IN (and possibly some of the moves COMB, CO-ASSOC, CO-COMM, LOC PRUNING). These two parts are named reduction and self-multiplication respectively, but in the big computation they mix into a whole. There are only moves, graph rewrites applied to a molecule.

Which brings us to the problem: chemlambda in itself is not sufficient for having a model of computation. We need to specify how, where and when the reductions apply to molecules.

There may be many variants, roughly described as: sequential, parallel, concurrent, decentralized, random, based on chemical reaction network models, etc.

Each model of computation (which can be made compatible with chemlambda) gives a different whole when used with chemlambda. Until now, in this series there has been no mention of a model of computation.

There is another aspect of this. It is obvious that chemlambda graphs form a larger class than lambda terms, and also that the graph rewrites apply to more general situations than beta reduction (together with an evaluation strategy). It means that the important problem of defining a model of computation over chemlambda will have an influence on the way chemlambda molecules “compute” in general.

The model of computation which I prefer is not based on chemical reaction networks, nor on process calculi, but on a new model, inspired by the Actor Model, called the distributed GLC. I shall explain why I believe that the Actor Model of Hewitt is superior to those mentioned previously (with respect to decentralized, asynchronous computation in the real Internet, and also in the real world), I shall explain my understanding of that model, and eventually the distributed GLC proposal by me and Louis Kauffman will be exposed in all its details.

4.  Self-multiplication of a g-pattern coming from a lambda term.

For the moment we concentrate on the self-multiplication phenomenon for g-patterns which represent lambda terms. What follows is a departure from the ALIFE 14 article: I shall not use the path which consists in passing to combinator patterns, nor shall I discuss in this post why the self-multiplication phenomenon is not confined to the world of g-patterns coming from lambda terms. That is for a future post.

In this post I want to give an image of how these g-patterns self-multiply, in the sense that most of the self-multiplication process can be explained independently of the computing model. Later on we shall come back to this; we shall look outside lambda calculus as well and we shall also explore the combinator molecules.

OK, let’s start. In part V it was noticed that after an application of the beta rule to the g-pattern

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

we obtain (via COMB moves)

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

and the problem is that we have a g-pattern which does not come from a lambda term, because it has a FOTREE in the middle of it. It looks like this (recall that FOTREEs are figured in yellow and the syntactic trees are figured in light blue):

[figure: structure_12]

The question is: what can happen next? Let’s simplify the setting by taking the FOTREE in the middle as a single fanout node, then we ask what moves can be applied further to the g-pattern

C[x] FO[x,a,b]

Clearly we can apply DIST moves. There are two DIST moves, one for the application node, the other for the lambda node.
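As a reminder, here are the two DIST moves written as g-pattern rewrites, in a small Python sketch. The fresh port variables w, p, s, t would have to be new names in the ambient g-pattern, and the tuple encoding is mine, not the official chemlambda scripts:

def dist_A(a, fo, w, p, s, t):
    """A[x,y,z] FO[z,u,v] --DIST--> FO[x,w,p] FO[y,s,t] A[w,s,u] A[p,t,v]"""
    (_, x, y, z), (_, z2, u, v) = a, fo
    assert z == z2
    return [("FO", x, w, p), ("FO", y, s, t), ("A", w, s, u), ("A", p, t, v)]

def dist_L(l, fo, w, p, s, t):
    """L[y,x,z] FO[z,u,v] --DIST--> FI[p,w,x] FO[y,s,t] L[s,w,u] L[t,p,v]"""
    (_, y, x, z), (_, z2, u, v) = l, fo
    assert z == z2
    return [("FI", p, w, x), ("FO", y, s, t), ("L", s, w, u), ("L", t, p, v)]

# the lambda-node DIST is the one which makes a fanin (FI) node appear
print(dist_L(("L", "w0", "y0", "c"), ("FO", "c", "u", "v"), "w", "p", "s", "t"))

The second rule is the one responsible for the fanins which show up a bit further below.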

There is a chain of propagation of DIST moves through the syntactic tree of C, which is independent of the model of computation chosen (i.e. of the rules about which, when and where moves are used), because the syntactic tree is a tree.

Look what happens. We have the propagation of DIST moves (for the application nodes, say) first, which produces two copies of the part of the syntactic tree which contains the root.

[figure: structure_7]

At some point we arrive at a pattern which allows the application of a DIST move for a lambda node. We apply the rule:

[figure: structure_8]

We see that fanins appear! … and then the propagation of DIST moves through the syntactic tree continues until eventually we get this:

[figure: structure_9]

So the syntactic tree has self-multiplied, but the two copies are still connected by FOTREEs which connect to the left.out ports of the lambda nodes belonging to the syntactic tree (only one of them is figured in the previous image).

Notice that now (or even earlier, it does not matter actually; why will be explained rigorously when we talk about the computing model, for the moment we only want to see that it is possible) we are in a position to apply the FAN-IN move. Also, it is clear that by using CO-COMM and CO-ASSOC moves we can shuffle the arrows of the FOTREE, which is “conjugated” with a fanin at the root and with fanouts at the leaves, so that eventually we get this:

[figure: structure_10]

The self-multiplication is achieved! It looks strikingly like the anaphase [source]

[figure: 800px-Anaphase.svg]

followed by telophase [source]

[figure: Telophase.svg]

____________________________________________________

 

 

Lambda calculus and the fixed point combinator in chemlambda (V)

This is the 5th post (continuing from part I, part II, part III and part IV) in a series of expository posts where we put together in one place pieces that are scattered in various other places, about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation self-contained. (However, look up this page: there are links to online tutorials, as well as many earlier posts on these subjects, which you may discover either by clicking on the tag cloud at left or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which was accepted at the ALIFE 14 conference (July 30 to August 2, 2014, Javits Center / SUNY Global Center, New York; go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

2. Lambda calculus terms as seen in chemlambda, continued.

Let’s look at the structure of a molecule coming from the process of translation of a lambda term described in part IV.

Then I shall make some comments which should be obvious after the fact, but useful later, when we shall discuss the relation between the graphic beta move (i.e. the beta rule for g-patterns) and the beta reduction and evaluation strategies.

That will be a central point of the exposition; it is very important to understand it!

So, a molecule (i.e. a pattern with the free port names erased, see part II for the names used) which represents a lambda term looks like this:

 

[figure: structure_1]

In light blue is the part of the molecule which is essentially the syntactic tree of the lambda term.  The only peculiarity is in the orientation of the arrows of lambda nodes.

Practically, this part of the molecule is a tree, which has as nodes the lambda and application nodes, but no fanouts and no fanins.

The arrows are directed towards the top of the figure. There is no need to draw it like this, i.e. there is no global rule for the edge orientations, contrary to the ZX calculus, where the edge orientations are deduced from the global down-to-up orientation.

We see a lambda node figured, which is part of the syntactic tree. It has the right.out  port connecting to the rest of the syntactic tree and the left.out port connecting to the yellow part of the figure.

The yellow part of the figure is a FOTREE (fanout tree). There might be many FOTREEs; only one appears in the figure. By looking at the algorithm of conversion of a lambda term into a g-pattern, we notice that in the g-patterns which represent lambda terms the FOTREEs may appear in two places:

  • with the root connected to the left.out port of a lambda node, as in the g-pattern which correspond to Lx.(xx)
  • with the root connected to the middle.out port of an Arrow which has the middle.in port free (i.e. the port variable of the middle.in of that Arrow appears only once in that g-pattern), for example for the term  (xx)(Ly.(yx))

As a consequence of this observation, here are two configurations of nodes which NEVER appear in a molecule which represents a lambda term:

[figure: structure_2]

Notice that these two patterns are EXACTLY those which appear as the LEFT side of the DIST moves! More about this later.

Remark also the position of the insertion points of the FOTREE which comes out of the left.out port of the figured lambda node: the out ports of the FOTREE connect with the syntactic tree somewhere lower than where the lambda node is. This is typical for molecules which represent lambda terms. For example the following molecule, which can be described as the g-pattern L[a,b,c] A[c,b,d]

[figure: structure_3]

(but with the port variables deleted) cannot appear in a molecule which corresponds to a lambda term.

 

Let’s go back to the first image and continue with “TERMINATION NODE (1)”. Recall that termination nodes are used to cap the left.out port of a lambda node which corresponds to a term Lx.A with x not occurring in A.

Finally, “FREE IN PORTS (2)” represents free in ports which correspond to the free variables of the lambda term. As observed earlier, but not figured in the picture, we MAY have free in ports which are ports of a FOTREE (fanout tree).

I collect here some facts which are obvious in retrospect:

  • there are no other variables in the g-pattern of a lambda term than the port variables, and every port variable appears at most twice. In the graphic version (i.e. as graphs) the port variables which appear twice are replaced by edges of the graph; therefore the bound lambda calculus variables disappear in chemlambda.
  • moreover, the free in port variables, which correspond to the free variables of the lambda term, appear only once. Their multiple occurrences in the lambda term are replaced by FOTREEs. All in all, this means that there are no values at all in chemlambda.
  • … As a consequence, the application and lambda abstraction nodes are not gates. That means that even though the arrows of the pattern graphs appear in the grammar version as (twice-repeated) port variables, the chemlambda formalism has no equational side: no equations between the port variables are needed. Indeed, no such equation appears in the definition of g-patterns, nor in the definition of the graph rewrites.
  • In the particular case of g-patterns coming from lambda terms it is possible to attach equations to the application, lambda abstraction and fanout nodes, exactly because of the particular form that such g-patterns have. But this is not possible, in a coherent way (i.e. such that the equations attached to the nodes have a global solution), for all molecules!
  • after we pass from lambda terms to chemlambda molecules, we are going to use the graph rewrites, which don’t use any equations attached to the nodes, nor give any meaning to the port variables other than that they serve to connect nodes. Thus chemlambda molecules are not flowcharts. There is nothing going through arrows and nodes. Hence the “molecule” image proposed for chemlambda graphs, with nodes as atoms and arrows as bonds.
  • the FOTREEs appear in certain positions in a molecule which represents a lambda term, but not in every possible position. That is further evidence that not every molecule represents a lambda term.
  • in particular the patterns which appear at the LEFT of the DIST graph rewrites don’t occur in molecules which represent lambda terms. Therefore one can’t apply DIST moves in the + direction to a molecule which represents a lambda term.
  • Otherwise said, any molecule which contains the LEFT pattern of a DIST move is not one which represents a lambda term.

_______________________________________________________

3. The beta move. Reduction and evaluation. 

I explain now in what sense the graphic beta move, or beta rule from chemlambda, corresponds to the beta reduction in the case of molecules which correspond to lambda terms.

Recall from part III the definition of the beta move

L[a1,a2,x] A[x,a4,a3]   <–beta–> Arrow[a1,a3] Arrow[a4,a2]

or graphically

[figure: beta_move_exp]

If we use the visual trick from the pedantic rant, we may depict the beta move as:

[figure: beta_move_exp_2]

i.e. we use as free port variables the relative positions of the ports in the doodle. Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graphical level. The chemlambda graphs are not planar graphs.

The beta reduction in lambda calculus looks like this:

(Lx.B) C –beta reduction–> B[x:=C]

Here B and C are lambda terms and B[x:=C] denotes the term which is obtained from B after we replace all the occurrences of x in B by the term C.

I want to make clear the relation between the beta move and the beta reduction. Several things deserve to be mentioned.

It is of course expected that if we translate (Lx.B)C and B[x:=C] into g-patterns, then the beta move transforms the g-pattern of (Lx.B)C into the g-pattern of B[x:=C]. This is not exactly true; in fact, it is true in a more detailed and interesting sense.

Before that, it is worth mentioning that the beta move applies even to patterns which don’t correspond to lambda terms. Hence the beta move has a range of application greater than that of beta reduction!

Indeed, look at the third figure from this post, which can’t be a pattern coming from a lambda term. Written as a g-pattern this is L[a,b,c] A[c,b,d]. We can apply the beta move and it gives:

L[a,b,c] A[c,b,d]  <-beta-> Arrow[a,d] Arrow[b,b]

which can be followed by a COMB move

Arrow[a,d] Arrow[b,b] <-comb-> Arrow[a,d] loop

Graphically it looks like this:

[figure: structure_4]

In particular this explains the need to have the loop and Arrow graphical elements.
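A tiny Python check of this computation, in the same node-tuple encoding as the sketches in the posts above (the helper names are mine):

def beta(l, a):
    """L[a1,a2,x] A[x,a4,a3] <-beta-> Arrow[a1,a3] Arrow[a4,a2]"""
    (_, a1, a2, x), (_, x2, a4, a3) = l, a
    assert x == x2
    return [("Arrow", a1, a3), ("Arrow", a4, a2)]

def comb_loops(g):
    """The COMB move for an Arrow with both ends on the same port: it becomes a loop."""
    return [("loop",) if n[0] == "Arrow" and n[1] == n[2] else n for n in g]

print(comb_loops(beta(("L", "a", "b", "c"), ("A", "c", "b", "d"))))
# [('Arrow', 'a', 'd'), ('loop',)]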

In chemlambda we make no effort to stay inside the collection of graphs which represent lambda terms. This is very important!

Another reason for this is related to the fact that we can’t check if a pattern comes from a lambda term in a local way, in the sense that there is no local (i.e. involving an a priori bound on the number of graphical elements used) criterion which describes the patterns coming from lambda terms. This is obvious from the previous observation that FOTREEs connect  to the syntactic tree lower than their roots.

Or, chemlambda is a purely local graph rewrite system, in the sense that there is a bound on the number of graphical elements involved in any move.

This has as a consequence: there is no correct graph in chemlambda. Hence there is no correctness enforcement in the formalism. In this respect chemlambda differs from any other graph rewriting system used in relation to lambda calculus or, more generally, to functional programming.

Let’s go back to the beta reduction

(Lx.B) C –beta reduction–> B[x:=C]

Translated into g-patterns the term from the LEFT looks like this:

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

where

  • C[c] is the translation as a g-pattern of the term C, with the out port “c”
  • FOTREE[x,a1,…,aN] is the FOTREE which connects to the left.out port of the node L[a,x,b] and to the ports a1, …, aN which represent the places where the lambda term variable “x” occurs in B
  • B[a1,…,aN,a] is a notation for the g-pattern of B, with the ports a1,…,aN (where the FOTREE connects) mentioned, and with the out port “a”

The beta move does not need all this context, but we need it in order to explain in what sense the beta move does what the beta reduction does.

The beta move needs only the piece L[a,x,b] A[b,c,d]. It is a local move!

Look how the beta move acts:

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

<-beta->

Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,…,aN] B[a1,…,aN, a]

and then 2 comb moves:

Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,…,aN] B[a1,…,aN, a]

<-2 comb->

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

Graphically this is:

[figure: structure]

The graphic beta move, as it looks on syntactic trees of lambda terms, was discovered in

Wadsworth, Christopher P. (1971). Semantics and Pragmatics of the Lambda Calculus. PhD thesis, Oxford University

This work is the origin of lazy, or call-by-need, evaluation in lambda calculus!

Indeed, the result of the beta move is not B[x:=C], because no substitution x:=C is performed in the reduction step.

In the lambda calculus world, as is well known, one has to supplement the lambda calculus with an evaluation strategy. Call-by-need evaluation explains how to perform the substitution x:=C in B in an optimized way.

From the chemlambda point of view on lambda calculus, a very interesting thing happens. The g-pattern obtained after the beta move (and obvious comb moves) is

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

or graphically

[figure: structure_5]

As you can see this is not a g-pattern which corresponds to a lambda term.  That is because it has a FOTREE in the middle of it!

Thus the beta move applied to a g-pattern which represents a lambda term gives a g-pattern which can’t represent a lambda term.

The g-pattern which represents the lambda term B[x:=C] is

C[a1] …. C[aN]  B[a1,…,aN,d]

or graphically

[figure: structure_6]

In graphic lambda calculus, or GLC, which is the parent of chemlambda, we pass from the graph which corresponds to the g-pattern

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

to the g-pattern of B[x:=C]

C[a1] …. C[aN]  B[a1,…,aN,d]

by a GLOBAL FAN-OUT move, i.e. a graph rewrite which looks like this:

if C[x] is a g-pattern with no other free ports than “x” then

C[x] FOTREE[x, a1, …, aN]

<-GLOBAL FAN-OUT->

C[a1] …. C[aN]

As you can see this is not a local move, because there is no a priori bound on the number of graphical elements involved in the move.
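To see the non-locality concretely, here is a minimal Python sketch of GLOBAL FAN-OUT: the whole g-pattern C has to be copied, with its internal port variables renamed, once per leaf of the FOTREE, so the number of graphical elements touched by the move is not bounded in advance. The encoding and the names are mine, for illustration only:

import itertools
_c = itertools.count()

def rename_internal(pattern, keep):
    """Copy a g-pattern, giving fresh names to every port variable not in `keep`."""
    table = {}
    def r(p):
        if p in keep:
            return p
        if p not in table:
            table[p] = "f%d" % next(_c)
        return table[p]
    return [(n[0],) + tuple(r(p) for p in n[1:]) for n in pattern]

def global_fan_out(C, x, leaves):
    """C[x] FOTREE[x, a1, ..., aN]  <-GLOBAL FAN-OUT->  C[a1] ... C[aN],
    assuming C has no free ports other than x."""
    out = []
    for a in leaves:
        copy = rename_internal(C, keep={x})
        out += [(n[0],) + tuple(a if p == x else p for p in n[1:]) for n in copy]
    return out

# C = the g-pattern of the term Ly.y, with out port "x"
C = [("L", "e", "e", "x")]
print(global_fan_out(C, "x", ["a1", "a2"]))
# [('L','f0','f0','a1'), ('L','f1','f1','a2')]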

That is why I invented chemlambda, which has only local moves!

The evaluation strategy needed in lambda calculus to know when and how to do the substitution x:=C in B is replaced in chemlambda by SELF-MULTIPLICATION.

Indeed, this is because the g-pattern

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

surely has places where we can apply DIST moves (and perhaps later FAN-IN moves).

That is for the next post.

___________________________________________________

 

Bacterial conjugation is beta reduction

I come back to the idea from the post Click and zip with bacterial conjugation, with a bit more detail. It is strange, maybe, but perhaps less strange than many other ideas circulating on the Net around brains and consciousness.

 

The thing is that bacteria can’t act based on semantics; they are more stupid than us. They have physical or chemical mechanisms which obviate the need to use semantic filters.

Bacteria are much simpler than brains, of course, but the discussion is relevant to brains as collections of cells.

The idea: bacterial conjugation is a form of  beta reduction!

On one side we have a biological phenomenon, bacterial conjugation. On the other side we have a logic world concept, beta reduction, which is the engine that moves lambda calculus, one of the two pillars of computation.

What is the relation between semantics, bacterial conjugation and beta reduction?

Lambda calculus is a rewrite system, with the main rewrite being beta reduction. A rewrite system, basically, says that whenever you see a certain pattern in front of you then you can replace this pattern by another.

Graphic lambda calculus is a graph rewrite system which is more general than lambda calculus. A graph rewrite system is like a rewrite system which uses graphs instead of lines of text, or words. If you see certain graphical patterns then you can replace them by others.

Suppose that Nature uses (graphical) rewrite systems in the biological realm, for example suppose that bacteria interactions can be modeled by a graph rewrite system. Then there has to be a mechanism which performs the recognition of a pattern involving two bacteria in interaction.

When two bacteria interact there are at least two ingredients:  spatial proximity (SP) and chemical interaction (CI).

SP is something we can describe and think about easily, but from the point of view of a microbe our easy description is void. Indeed, two bacteria in SP can’t be described as pairs of coordinate numbers which are numerically close, unless each of the microbes has an internal representation of a coordinate system, which is stupid to suppose. Moreover, I think it is too much to suppose that each microbe has an internal representation of itself and of its neighbouring microbes. That would be a kind of bacterial cartesian theater.

You see, even trying to describe what could be SP for a pair of bacteria does not make much sense.

CI happens when SP is satisfied (i.e. for bacteria in spatial proximity). There is of course a lot of randomness in this, which has to be taken into account, but that does not change the fact that SP is something hard to make sense of from the pov of bacteria.

In Distributed GLC we think about bacteria as actors (and not agents) and about SP as connections between actors. Those connections between actors change in a local, asynchronous way, during the CI (which is the proper graph rewrite, after the pattern between two actors in SP is identified).

In this view, SP between actors, this mysterious, almost philosophical relation which is forced upon us after we renounce the God’s eye point of view, is described as an edge in the actors diagram.

Such an edge, in Distributed GLC, is always related to an oriented edge (arrow) in the GLC (or chemlambda) graph which does the actual computation. Therefore, we see that arrows in GLC or chemlambda graphs (may) have more interpretations than being chemical bonds in (artificial) chemistry molecules.

Actually, this is very nice, but hard to grasp: there is no difference between CI and SP!

Now, according to the hypothesis from this post and from the previous one, the mechanism which is used by bacteria for graph rewrite is to grow pili.

The following image (done with the tools I have access to right now) explains more clearly how bacterial conjugation may be (graphic) beta reduction.

[figure: Image002]

In the upper part of the figure we see the lambda abstraction node (red) and the application node (green) as encoded by crossings. They are strange crossings, see the post Zipper logic and knot diagrams. Here the crossings are represented with the upper, passing thread half-erased.

Now, suppose that the lambda node is (or is managed by) a bacterial cell and that the application node is (managed by) another bacterial cell. The fact that they are in SP is represented in the first line under the blue separation line in the picture. At the left of the first row (under the blue horizontal line), SP is represented by the arrow which goes from the lambda node (of the bacterium at left) to the application node (of the bacterium at right). At the right of the first row, this SP arrow is represented as the upper arc which connects the two crossings.

Now the process of pattern recognition begins. In Nature, that is asymmetric: one of the bacterial cells grows a pilus. In this diagrammatic representation, things are symmetric (maybe a weakness of the description). The pilus growth is represented as the CLICK move.

This brings us to the last row of the image. Once the pattern is recognized (or in place), the graph reduction may happen by the ZIP move. In the crossing diagram this is represented by an R2 move, which itself is one of the ways to represent (graphic) beta moves.

Remark that in this process we have two arcs: the upper arc from the RHS crossing diagram (i.e. the arc which represents the SP) and the lower arc which appeared after the CLICK move (i.e. the pilus connecting the two bacteria).

After the ZIP move we get two (physical) pili; this corresponds to the last row in the diagram of bacterial conjugation, which I reproduce again here from the wiki source:

 

[figure: 661px-Conjugation.svg]

After the ZIP move the arc which represents SP represents a pilus as well!

____________________________________

A question and a comment about untyped lambda calculus

Question: Is there any formulation in terms of category theory of pure untyped lambda calculus, only with alpha-equivalence and beta-reduction, but not with eta-reduction? Please provide links to the relevant sources, which have to contain proofs.

My impression (which could be wrong) is that the answer is NO. It becomes YES with eta-reduction, am I right?

Comment: This is again about pure untyped lambda calculus, only with alpha-equivalence and beta-reduction, compared with my graphic lambda calculus. So, in pure untyped lambda calculus there are only two moves: beta-reduction and (a collection of moves grouped under) alpha-equivalence. At first view, graphic lambda calculus has many more moves. Let us neglect the emergent algebra moves, which are exterior to lambda calculus. Still, in graphic lambda calculus there are the fan-out moves and the pruning moves. Compare these moves with the long prose one has to write in order to really explain how alpha-equivalence works and how terms are written, with all rules and moves included so that a non-human computer might apply them.

UPDATE: The question, formulated as “is lambda-beta representable?”, is an open problem posed by Barendregt and others. See, for the precise language, the paper Graph models of lambda-calculus at work, by C. Berline, Math. Struct. Comput. Sci. 16:1-37, 2006, link to ps.

The zipper macro and zipper moves

I continue from the post Generating set of Reidemeister moves for graphic lambda crossings , where the “crossing macros” over graphic lambda calculus were discussed.

Another interesting macro  (over graphic lambda calculus) is the zipper, together with its associated zipper moves.

Let’s take a natural number n \geq 2 and let’s consider the following graph in GRAPH, called the n-zipper:

At the left is the n-zipper graph; at the right is a NOTATION for it, or a macro.  We could as well take n =2, with obvious modifications of the figure, so the 2-zipper exists. Even n=1 makes sense, but the 1-zipper is kind of degenerate, see later.

There is a graphic beta move which we can perform on the n-zipper graph. In the following picture I figured in red the place where the graphic beta move is applied.

In terms of zipper notation this graphic beta move has the following appearance:

We see that an n-zipper transforms into an (n-1)-zipper plus an arrow. We may repeat this move, as long as we can. What is the result? An n-zipper move:

The 1-zipper move, called ZIP_{1}, is just the graphic beta move, which transforms the 1-zipper into two arrows.
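Since the figures are missing here, a small Python sketch may help. It builds the n-zipper as a g-pattern, under my reading of the n-zipper as the graph of the term (Lx1. ... Lxn.M) N1 ... Nn, with the ports of the body M and of the arguments N1, ..., Nn left free; the port names are mine:

def n_zipper(n):
    """The n-zipper as a g-pattern: n application nodes and n lambda nodes."""
    g = []
    left = "q"
    for i in range(1, n + 1):            # application chain, innermost first
        out = "out" if i == n else "p%d" % i
        g.append(("A", left, "n%d" % i, out))
        left = out
    up = "q"
    for i in range(1, n + 1):            # lambda chain, outermost first
        mid = "body" if i == n else "r%d" % i
        g.append(("L", mid, "x%d" % i, up))
        up = mid
    return g

print(n_zipper(2))
# [('A','q','n1','p1'), ('A','p1','n2','out'), ('L','r1','x1','q'), ('L','body','x2','r1')]
# The only graphic beta redex is the pair L[r1,x1,q] A[q,n1,p1]; applying the beta
# move (plus COMB) leaves the 1-zipper L[body,x2,p1] A[p1,n2,out], together with
# Arrow[n1,x1] which wires the first argument to the first bound variable.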

Nice, now what can we do with zippers and crossings?