Tag Archives: graphic lambda calculus

Lambda calculus and the fixed point combinator in chemlambda (VI)

This is the 6th post (continuing from part I, part II, part III, part IV and part V) in a series of expository posts where we put together, in one place, pieces which are scattered in various places, about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation self-contained. (However, look up this page: there are links to online tutorials, as well as many posts already written on these general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which was accepted at the ALIFE 14 conference, 7/30 to 8/2 2014, Javits Center / SUNY Global Center, New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

In this post I want to concentrate on the mechanism of self-multiplication for g-patterns coming from lambda terms (see  part IV   where the algorithm of translation from lambda terms to g-patterns is explained).

Before that, please notice that there is an important problem, about which there is a lot to say, and which shall be described later in detail. But here it is, so that you can keep an eye on it.

Chemlambda in itself is only a graph rewriting system. In part V it is explained that the beta reduction from lambda calculus needs an evaluation strategy in order to be used. We noticed that in chemlambda self-multiplication is needed in order to prove that one can do beta reduction as the beta move.

We go towards the obvious conclusion that in chemlambda reduction (i.e. the beta move) and self-multiplication are just names used for parts of the computation. Indeed, the clear conclusion is that there is a computation which can be done with chemlambda, which has some parts where we use the beta move (and possibly some COMB, CO-ASSOC, CO-COMM, LOC PRUNING moves) and some other parts where we use DIST and FAN-IN (and possibly some of the moves COMB, CO-ASSOC, CO-COMM, LOC PRUNING). These two parts have as names reduction and self-multiplication respectively, but in the big computation they mix into a whole. There are only moves, graph rewrites applied to a molecule.

Which brings us to the problem: chemlambda in itself is not sufficient for having a model of computation. We need to specify how, where and when the reductions apply to molecules.

There may be many variants, roughly described as: sequential, parallel, concurrent, decentralized, random, based on chemical reaction network models, etc

Each model of computation (which can be made compatible with chemlambda) gives a different whole when used with chemlambda. Until now, in this series there has been no mention of a model of computation.

There is another aspect of this. It is obvious that chemlambda graphs form a larger class than lambda terms, and also that the graph rewrites apply to more general situations than beta reduction (together with an evaluation strategy). It means that the important problem of defining a model of computation over chemlambda will influence the way chemlambda molecules “compute” in general.

The model of computation which I prefer is not based on chemical reaction networks, nor on process calculi, but on a new model, inspired from the Actor Model, called the distributed GLC. I shall explain why I believe that the Actor Model of Hewitt is superior to those mentioned previously (with respect to decentralized, asynchronous computation in the real Internet, and also in the real world), I shall explain what my understanding of that model is, and eventually the distributed GLC proposal by me and Louis Kauffman will be exposed in all its details.

4.  Self-multiplication of a g-pattern coming from a lambda term.

For the moment we concentrate on the self-multiplication phenomenon for g-patterns which represent lambda terms. What follows is a departure from the ALIFE 14 article. I shall not take the path which consists of going to combinator patterns, nor shall I discuss in this post why the self-multiplication phenomenon is not confined to the world of g-patterns coming from lambda terms. That is for a future post.

In this post I want to give an image of how these g-patterns self-multiply, in the sense that most of the self-multiplication process can be explained independently of the computing model. Later on we shall come back to this: we shall look outside lambda calculus as well, and we shall also explore the combinator molecules.

OK, let’s start. In part V it was noticed that after an application of the beta rule to the g-pattern

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

we obtain (via COMB moves)

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

and the problem is that we have a g-pattern which is not coming from a lambda term, because it has a FOTREE in the middle of it. It looks like this (recall that FOTREEs are figured in yellow and the syntactic trees are figured in light blue)

structure_12

The question is: what can happen next? Let’s simplify the setting by taking the FOTREE in the middle as a single fanout node, then we ask what moves can be applied further to the g-pattern

C[x] FO[x,a,b]

Clearly we can apply DIST moves. There are two DIST moves, one for the application node, the other for the lambda node.

There is a chain of propagation of DIST moves through the syntactic tree of C, which is independent of the model of computation chosen (i.e. of the rules about which, when and where moves are used), because the syntactic tree is a tree.

Look what happens. We have the propagation of DIST moves (for the application nodes, say) first, which produces two copies of the part of the syntactic tree which contains the root.

structure_7

At some point we arrive at a pattern which allows the application of a DIST move for a lambda node. We apply the rule:

structure_8

We see that fanins appear! … and then the propagation of DIST moves through the syntactic tree continues until eventually we get this:

structure_9

So the syntactic tree has self-multiplied, but the two copies are still connected by FOTREEs which connect to the left.out ports of the lambda nodes which are part of the syntactic tree (only one such FOTREE is figured in the previous image).

Notice that now (or even earlier, it does not matter actually; why will be explained rigorously when we talk about the computing model, for the moment we only want to see that it is possible) we are in a position to apply the FAN-IN move. Also, it is clear that by using CO-COMM and CO-ASSOC moves we can shuffle the arrows of the FOTREE, which is “conjugated” with a fanin at the root and with fanouts at the leaves, so that eventually we get this.

structure_10

The self-multiplication is achieved! It looks strikingly like the anaphase [source]

800px-Anaphase.svg

followed by telophase [source]

Telophase.svg

____________________________________________________

 

 

Lambda calculus and the fixed point combinator in chemlambda (V)

This is the 5th post (continuing from part I, part II, part III and part IV) in a series of expository posts where we put together, in one place, pieces which are scattered in various places, about:

  • how lambda calculus is treated in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation self-contained. (However, look up this page: there are links to online tutorials, as well as many posts already written on these general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which was accepted at the ALIFE 14 conference, 7/30 to 8/2 2014, Javits Center / SUNY Global Center, New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

2.  Lambda calculus terms as seen in  chemlambda  continued.

Let’s look at the structure of a molecule coming from the process of translation of a lambda term described in part IV.

Then I shall make some comments which should be obvious after the fact, but which will be useful later, when we discuss the relation between the graphic beta move (i.e. the beta rule for g-patterns) and the beta reduction and evaluation strategies.

That will be a central point in the exposition, it is very important to understand it!

So, a molecule (i.e. a pattern with the free port names erased, see part II for the terminology) which represents a lambda term looks like this:

 

structure_1

In light blue is the part of the molecule which is essentially the syntactic tree of the lambda term.  The only peculiarity is in the orientation of the arrows of lambda nodes.

Practically, this part of the molecule is a tree which has as nodes the lambda and application nodes, but no fanouts, nor fanins.

The arrows are directed towards the top of the figure. There is no need to draw it like this, i.e. there is no global rule for the edge orientations, contrary to the ZX calculus, where the edge orientations are deduced from the global down-to-up orientation.

We see a lambda node figured, which is part of the syntactic tree. It has the right.out  port connecting to the rest of the syntactic tree and the left.out port connecting to the yellow part of the figure.

The yellow part of the figure is a FOTREE (fanout tree). There might be many FOTREEs; only one appears in the figure. By looking at the algorithm of conversion of a lambda term into a g-pattern, we notice that in the g-patterns which represent lambda terms the FOTREEs may appear in two places:

  • with the root connected to the left.out port of a lambda node, as in the g-pattern which corresponds to Lx.(xx) (written out just after this list)
  • with the root connected to the middle.out port of an Arrow which has the middle.in port free (i.e. the port variable of the middle.in of that Arrow appears only once in that g-pattern), for example for the term (xx)(Ly.(yx))
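For example (following the translation algorithm from part IV; the particular port variable names are my own choice), the term Lx.(xx) becomes, up to renaming of port variables, the g-pattern

L[a,x,b] FO[x,x1,x2] A[x1,x2,a]

with the free out port “b”. The FOTREE here is the single fanout node FO[x,x1,x2]: its root is the left.out port “x” of the lambda node, and its leaves x1, x2 are glued to the two occurrences of the bound variable in the body (xx).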

As a consequence of this observation, here are two configurations of nodes which NEVER appear in a molecule which represents a lambda term:

structure_2

Notice that these two patterns are EXACTLY those which appear as the LEFT side of the moves DIST! More about this later.

Remark also the position of the insertion points of the FOTREE which comes out of the left.out port of the figured lambda node: the out ports of the FOTREE connect with the syntactic tree somewhere lower than where the lambda node is. This is typical for molecules which represent lambda terms. For example the following molecule, which can be described as the g-pattern L[a,b,c] A[c,b,d]

structure_3

(but with the port variables deleted) cannot appear in a molecule which corresponds to a lambda term.

 

Let’s go back to the first image and continue with “TERMINATION NODE (1)”. Recall that termination nodes are used to cap the left.out port of a lambda node which corresponds to a term Lx.A with x not occurring in A.

Finally, “FREE IN PORTS (2)” represents the free in ports, which correspond to the free variables of the lambda term. As observed earlier, but not figured in the picture, we MAY have free in ports which are themselves roots of FOTREEs.

I collect here some obvious, in retrospect, facts:

  • there are no other variables in a g-pattern of a lambda term than the port variables. However, every port variable appears at most twice. In the graphic version (i.e. as graphs) the port variables which appear twice are replaced by edges of the graph, therefore the bound lambda calculus variables disappear in chemlambda.
  • moreover, the port variables of the free in ports, which correspond to the free variables of the lambda term, appear only once. Their multiple occurrences in the lambda term are replaced by FOTREEs. All in all, this means that there are no values at all in chemlambda.
  • … As a consequence, the application and lambda abstraction nodes are not gates. That means that even if the arrows of the pattern graphs appear in the grammar version as (twice repeated) port variables, the chemlambda formalism has no equational side: there are no equations needed between the port variables. Indeed, no such equation appears in the definition of g-patterns, nor in the definition of the graph rewrites.
  • In the particular case of g-patterns coming from lambda terms it is possible to attach equations to the nodes application, lambda abstraction and fanout, exactly because of the particular form that such g-patterns have. But this is not possible, in a coherent way (i.e. such that the equations attached to the nodes have a global solution) for all molecules!
  • after we pass from lambda terms to chemlambda molecules, we are going to use the graph rewrites, which don’t use any equations attached to the nodes, nor give any meaning to the port variables other than that they serve to connect nodes. Thus the chemlambda molecules are not flowcharts. There is nothing going through arrows and nodes. Hence the “molecule” image proposed for chemlambda graphs, with nodes as atoms and arrows as bonds.
  • the FOTREEs appear in certain positions in a molecule which represents a lambda term, but not in every possible position. That is further evidence that not every molecule represents a lambda term.
  • in particular the patterns which appear at LEFT in the DIST graph rewrites don’t occur in molecules which represent lambda terms. Therefore one can’t apply DIST moves in the + direction to a molecule which represents a lambda term.
  • Said otherwise, any molecule which contains the LEFT pattern of a DIST move is not one which represents a lambda term.

_______________________________________________________

3. The beta move. Reduction and evaluation. 

I explain now in what sense the graphic beta move, or beta rule from chemlambda, corresponds to the beta reduction in the case of molecules which correspond to lambda terms.

Recall from part III the definition of the beta move

L[a1,a2,x] A[x,a4,a3]   <–beta–> Arrow[a1,a3] Arrow[a4,a2]

or graphically

beta_move_exp

If we use the visual trick from the pedantic rant, we may depict the beta move as:

beta_move_exp_2

i.e. we use as free port variables the relative positions of the ports in the doodle. Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graphical level. The chemlambda graphs are not planar graphs.
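To make the purely local character of this rewrite concrete, here is a minimal sketch of the beta move acting on g-patterns. The encoding (a g-pattern as a Python list of (node type, port list) tuples) and the function name are my own choices, made only for illustration; they are not part of the chemlambda formalism.

# L[a1,a2,x] A[x,a4,a3] is encoded as
#   [("L", ["a1", "a2", "x"]), ("A", ["x", "a4", "a3"])]

def beta_move(gpattern):
    """Apply one beta move, if a redex is present:
    L[a1,a2,x] A[x,a4,a3]  <-beta->  Arrow[a1,a3] Arrow[a4,a2]
    where the shared port variable x glues the right.out port of the
    lambda node to the left.in port of the application node."""
    for i, (kind_l, pl) in enumerate(gpattern):
        if kind_l != "L":
            continue
        a1, a2, x = pl
        for j, (kind_a, pa) in enumerate(gpattern):
            if j == i or kind_a != "A" or pa[0] != x:
                continue
            a4, a3 = pa[1], pa[2]
            rest = [el for k, el in enumerate(gpattern) if k not in (i, j)]
            return rest + [("Arrow", [a1, a3]), ("Arrow", [a4, a2])]
    return gpattern  # no redex: the g-pattern is left unchanged

print(beta_move([("L", ["a1", "a2", "x"]), ("A", ["x", "a4", "a3"])]))
# -> [("Arrow", ["a1", "a3"]), ("Arrow", ["a4", "a2"])]

Note that the function only ever looks at the two nodes glued along x; whatever else is in the g-pattern is carried along untouched, which is exactly the locality of the move.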

The beta reduction in lambda calculus looks like this:

(Lx.B) C –beta reduction–> B[x:=C]

Here B and C are lambda terms and B[x:=C] denotes the term which is obtained from B after we replace all the occurrences of x in B by the term C.

I want to make clear what is the relation between the beta move and the beta reduction.  Several things deserve to be mentioned.

It is of course expected that if we translate (Lx.B)C and B[x:=C] into g-patterns, then  the beta move transforms the g-pattern of (Lx.B)C into the g-pattern of B[x:=C]. This is not exactly true, in fact it is  true in a more detailed and interesting sense.

Before that it is worth mentioning that the beta move applies even to patterns which don’t correspond to lambda terms. Hence the beta move has a range of application greater than that of the beta reduction!

Indeed, look at the third figure from this post, which can’t be a pattern coming from a lambda term. Written as a g-pattern this is L[a,b,c] A[c,b,d]. We can apply the beta move and it gives:

L[a,b,c] A[c,b,d]  <-beta-> Arrow[a,d] Arrow[b,b]

which can be followed by a COMB move

Arrow[a,d] Arrow[b,b] <-comb-> Arrow[a,d] loop

Graphically it looks like that.

structure_4

In particular this explains the need to have the loop and Arrow graphical elements.
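For completeness, here is a companion sketch (with the same disclaimers as before: an ad hoc Python encoding, covering only the instances of the COMB moves which appear in this post) showing how Arrow elements are combed away and how the loop element shows up.

def comb_moves(gpattern):
    """Repeatedly eliminate Arrow elements from a g-pattern, encoded as a
    list of (node type, port list) tuples:
    - an Arrow[u,v] whose port u also occurs in another element is removed,
      and that occurrence of u is renamed into v;
    - an Arrow[b,b] whose port occurs nowhere else becomes a loop."""
    changed = True
    while changed:
        changed = False
        for i, (kind, ports) in enumerate(gpattern):
            if kind != "Arrow":
                continue
            u, v = ports
            rest = gpattern[:i] + gpattern[i + 1:]
            used = [p for _, ps in rest for p in ps]
            if u == v and u not in used:
                gpattern = rest + [("loop", [])]   # Arrow[b,b] -> loop
                changed = True
                break
            if u in used:                          # comb the arrow away
                gpattern = [(k, [v if p == u else p for p in ps])
                            for k, ps in rest]
                changed = True
                break
    return gpattern

# the example above:  Arrow[a,d] Arrow[b,b]  ->  Arrow[a,d] loop
print(comb_moves([("Arrow", ["a", "d"]), ("Arrow", ["b", "b"])]))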

In chemlambda we make no effort to stay inside the collection of graphs which represent lambda terms. This is very important!

Another reason for this is related to the fact that we can’t check if a pattern comes from a lambda term in a local way, in the sense that there is no local (i.e. involving an a priori bound on the number of graphical elements used) criterion which describes the patterns coming from lambda terms. This is obvious from the previous observation that FOTREEs connect  to the syntactic tree lower than their roots.

Now, chemlambda is a purely local graph rewrite system, in the sense that there is a bound on the number of graphical elements involved in any move.

This has as a consequence: there is no notion of a correct graph in chemlambda. Hence there is no correctness enforcement in the formalism. In this respect chemlambda differs from any other graph rewriting system which is used in relation to lambda calculus or, more generally, to functional programming.

Let’s go back to the beta reduction

(Lx.B) C –beta reduction–> B[x:=C]

Translated into g-patterns the term from the LEFT looks like this:

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

where

  • C[c] is the translation as a g-pattern of the term C, with the out port “c”
  • FOTREE[x,a1,…,aN] is the FOTREE which connects to the left.out port of the node L[a,x,b] and to the ports a1, …, aN which represent the places where the lambda term variable “x” occurs in B
  • B[a1,…,aN,a] is a notation for the g-pattern of B, with the ports a1,…,aN (where the FOTREE connects) mentioned, and with the out port “a”

The beta move does not need all this context, but we need it in order to explain in what sense the beta move does what the beta reduction does.

The beta move needs only the piece L[a,x,b] A[b,c,d]. It is a local move!

Look how the beta move acts:

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,…,aN] B[a1,…,aN, a]

<-beta->

Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,…,aN] B[a1,…,aN, a]

and then 2 comb moves:

Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,…,aN] B[a1,…,aN, a]

<-2 comb->

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

Graphically this is:

structure

The graphic beta move, as it looks on syntactic trees of lambda terms, has been discovered  in

Wadsworth, Christopher P. (1971). Semantics and Pragmatics of the Lambda Calculus. PhD thesis, Oxford University

This work is the origin of the lazy, or call-by-need evaluation in lambda calculus!

Indeed, the result of the beta move is not B[x:=C], because no substitution x:=C is performed in this reduction step.

In the lambda calculus world, as is well known, one has to supplement the lambda calculus with an evaluation strategy. Call-by-need evaluation explains how to do the substitution x:=C in B in an optimized way.

From the chemlambda point of view on lambda calculus, a very interesting thing happens. The g-pattern obtained after the beta move (and obvious comb moves) is

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

or graphically

structure_5

As you can see this is not a g-pattern which corresponds to a lambda term.  That is because it has a FOTREE in the middle of it!

Thus the beta move applied to a g-pattern which represents a lambda term gives a g-pattern which can’t represent a lambda term.

The g-pattern which represents the lambda term B[x:=C] is

C[a1] …. C[aN]  B[a1,…,aN,d]

or graphically

structure_6

In graphic lambda calculus, or GLC, which is the parent of chemlambda, we pass from the graph which corresponds to the g-pattern

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

to the g-pattern of B[x:=C]

C[a1] …. C[aN]  B[a1,…,aN,d]

by a GLOBAL FAN-OUT move, i.e. a graph rewrite which looks like that

if C[x] is a g-pattern with no other free ports than “x” then

C[x] FOTREE[x, a1, …, aN]

<-GLOBAL FAN-OUT->

C[a1] …. C[aN]

As you can see this is not a local move, because there is no a priori bound on the number of graphical elements involved in the move.
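To make the non-locality palpable, here is a small sketch of what GLOBAL FAN-OUT has to do (the Python encoding of g-patterns as lists of (node type, port list) tuples is the same ad hoc choice as in the earlier sketches): it must copy the whole of C, however large it is, once for every leaf of the FOTREE, inventing fresh port variables for each copy.

import itertools

_counter = itertools.count()

def fresh(port):
    # invent a new, unused port variable for a copy of C (illustration only)
    return "{}_copy{}".format(port, next(_counter))

def global_fan_out(c_elements, out_port, leaves):
    """GLOBAL FAN-OUT:  C[x] FOTREE[x, a1, ..., aN]  ->  C[a1] ... C[aN]
    c_elements : the graphical elements of C, whose only free port is out_port
    leaves     : the out ports a1, ..., aN of the FOTREE
    One renamed copy of C is produced per leaf; the amount of work grows with
    the size of C, which is why this is not a local move."""
    result = []
    for leaf in leaves:
        renaming = {out_port: leaf}
        for kind, ports in c_elements:
            new_ports = []
            for p in ports:
                if p not in renaming:
                    renaming[p] = fresh(p)
                new_ports.append(renaming[p])
            result.append((kind, new_ports))
    return result

# toy example: C[x] is a single lambda node whose left.out port is glued back
# into its middle.in port (which, if I am not mistaken, is how the identity
# term Lx.x translates), with out port "x"
print(global_fan_out([("L", ["a", "a", "x"])], "x", ["a1", "a2", "a3"]))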

That is why I invented chemlambda, which has only local moves!

The evaluation strategy needed in lambda calculus in order to know when and how to do the substitution x:=C in B is replaced in chemlambda by SELF-MULTIPLICATION.

Indeed, this is because the g-pattern

C[x] FOTREE[x,a1,…,aN] B[a1,…,aN,d]

surely has places where we can apply DIST moves (and perhaps later FAN-IN moves).

That is for the next post.

___________________________________________________

 

Notes for gamification of chemlambda

Here are some obvious notes, filed here, about chemlambda graphs in GraphML. Hope they may be used with … ? … why not Liviz.js, to quickly get something open source to play with.

In the previous post, The Quantomatic GUI may be useful for chemlambda (NTC vs TC, III), the Quantomatic GUI is mentioned, which can easily do all this, but is not free. Moreover, the goals for the GUI proposed here are more modest and easily attainable. Don’t care about compatibility with this or that notion from category theory, because our approach is different and because the GUI is just step 0. So any quick and dirty, but effective, code will do, I believe.

______________________________

Recall the idea: gamification of chemlambda.

  • quickly get a chemlambda GUI, as described in A user interface for GLC, and use it, with a collection of predefined goals, to make a gamification of chemlambda.
  • obviously that means choosing a format for representing the graphs and moves; there are many variants here, and in this post one such choice is described: GraphML. The work consists of transforming the definitions of the graphs and moves into the chosen format.
  • the various buttons and options from the post describing the GUI are simple scripts, perhaps in javascript, of the kind: find and replace predefined patterns by others, all involving up to 4 nodes and 8 arrows. So it is almost a no brainer, consisting in writing, in the chosen format, which patterns (i.e. the configurations of at most 4 nodes and 8 arrows) are involved in the chemlambda moves and what the replacement rules are (i.e. transforming the definition of the chemlambda moves into code); see the sketch just after this list.
  • open the possibility to define arbitrary patterns (named here “macros”) and arbitrary moves (named here “macro moves”)
  • transform into code the algorithm described in Conversion of lambda calculus terms into graphs, taking care to use chemlambda instead. This is good for having a button for importing lambda terms into chemlambda easily.
  • with all this done, pick a visualization tool which is open, looks good and is easy to use, like the one previously mentioned.
  • write some scripts for converting between GraphML (or whatever other choice one likes) and the input of the viz tool.
  • pick some examples, transform them into challenges and make public the game.
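To fix ideas, here is one possible way to record a move as such a find-and-replace rule, shown for the beta move (an illustration only; the Python field names and the rule format are my own invention, not a settled convention). The node and edge entries follow the GraphML encoding given further below, with “?1” … “?4” standing for the unknown neighbour nodes and ports, to be matched against the actual graph.

# the beta move as a find-and-replace rule: the LEFT pattern (2 nodes, 5 edges)
# and its replacement (no nodes, 2 edges)
BETA_RULE = {
    "left": {
        "nodes": [
            {"id": "n101", "color": "green", "indegree": 2, "outdegree": 1},  # application
            {"id": "n102", "color": "red", "indegree": 1, "outdegree": 2},    # lambda
        ],
        "edges": [
            {"source": "n102", "sourceport": "right_out", "target": "n101", "targetport": "left_in"},
            {"source": "?1", "target": "n102", "targetport": "middle_in"},
            {"source": "n102", "sourceport": "left_out", "target": "?2"},
            {"source": "?3", "target": "n101", "targetport": "right_in"},
            {"source": "n101", "sourceport": "middle_out", "target": "?4"},
        ],
    },
    "right": {
        "nodes": [],
        "edges": [
            {"source": "?1", "target": "?4"},
            {"source": "?3", "target": "?2"},
        ],
    },
}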

___________________________

For those with a non functional right hemisphere, here is the GraphML description of chemlambda graphs and of what it would mean to do a move.

I use this source for GraphML.

A chemlambda graph is any directed graph with four types of 3-valent nodes (colored with two colors) and one type of 1-valent node, the nodes having some attributes. [For mathematicians: it is an oriented graph made of trivalent, locally planar nodes and of some 1-valent nodes, with arrows which may have free starts or free ends, or even loops.]

Moreover, we accept arrows (i.e. directed edges) with free starts, free ends, or both, or with start=end and no node (i.e. loops with no node). For all those we need, in GraphML, to use some “invisible” nodes [multiple variants here, only one is described.]

Here are the trivalent, locally planar nodes:

 

  • application node

<node id="n0" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>

  • fanin node

<node id="n3" parse.indegree="2" parse.outdegree="1">
<data key="d0">red</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>

  • lambda node

<node id="n1" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>

  • fanout node

<node id="n2" parse.indegree="1" parse.outdegree="2">
<data key="d0">green</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>

  • termination node

<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">term</data> </node>

(where “term” denotes a color we agree to use for the termination node, in xfig made drawings this node  appears like this  –>–|  )
  • two invisible nodes 1 valent

<node id="n5" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data> </node>

<node id="n6" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data> </node>

(where “invisible” denotes a color we agree to use)

  • invisible node 2 valent

<node id="n7" parse.indegree="1" parse.outdegree="1">
<data key="d0">invisible</data> </node>

Uses of  invisible nodes:

  • in chemlambda graphs we admit arrows which start from one of the 3-valent nodes but whose end is free. Instead of the free end we may use the invisible node with id="n6" from before.

<edge source="n101" target="n6"/>

  • There are also accepted arrows which end in a 3-valent node or in a 1-valent termination node, but whose start is free. For those we may use the invisible node with id="n5" from before.

<edge source="n5" target="n102"/>

  • There are accepted arrows with free starts and ends, so we represent them with the help of the invisible nodes with id="n5" and id="n6".

<edge source="n5" target="n6"/>

  • Finally, we accept also loops with no nodes; these are represented with the help of 2-valent invisible nodes like the one with id="n7"

<edge source="n7" target="n7"/>

____________________________________________

 

Examples:
(a) – recognize pattern for beta move
(b) – perform (when called) the beta move

  • (a) recognize pattern for beta move.

– input is a chemlambda graph (in GraphML)

– output is the same graph and some supplementary file of annotations of the graph.

  • This program looks in the input graph for all patterns where the beta move can be applied.  This pattern looks like this in the GraphML convention:

<node id="n101" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n102" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<edge source="n102" target="n101" sourceport="right_out" targetport="left_in"/>
<edge source="n106" target="n102" sourceport="???_1" targetport="middle_in"/>
<edge source="n102" target="n103" sourceport="left_out" targetport="???_2"/>
<edge source="n105" target="n101" sourceport="???_3" targetport="right_in"/>
<edge source="n101" target="n104" sourceport="middle_out" targetport="???_4"/>

Here “???_i” means any port name.

If such a pattern is found, then the collection of id’s (of edges and nodes) from it is stored in some format in the annotations file which is the output.
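A minimal sketch of such a pattern finder, in Python with the standard xml.etree.ElementTree module (the function names and the output format are mine; GraphML namespaces, port declarations and error handling are ignored to keep it short), could look like this:

import xml.etree.ElementTree as ET

def node_info(root):
    """Map node id -> (color, indegree, outdegree), read from the GraphML."""
    info = {}
    for node in root.iter("node"):
        data = node.find("data")
        info[node.get("id")] = (data.text if data is not None else None,
                                node.get("parse.indegree"),
                                node.get("parse.outdegree"))
    return info

def find_beta_patterns(graphml_file):
    """Find the pairs (lambda node, application node) joined by an edge from
    the lambda's right_out port to the application's left_in port, together
    with all the edges touching the two nodes.  These are the instances of
    the LEFT side of the beta move."""
    root = ET.parse(graphml_file).getroot()
    info = node_info(root)
    edges = list(root.iter("edge"))
    found = []
    for e in edges:
        src, tgt = e.get("source"), e.get("target")
        if (info.get(src) == ("red", "1", "2")           # lambda node
                and info.get(tgt) == ("green", "2", "1")     # application node
                and e.get("sourceport") == "right_out"
                and e.get("targetport") == "left_in"):
            touching = [f for f in edges
                        if src in (f.get("source"), f.get("target"))
                        or tgt in (f.get("source"), f.get("target"))]
            found.append({"lambda": src, "application": tgt, "edges": touching})
    return found

The annotation file mentioned above could then simply be a dump of the list returned by find_beta_patterns.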

  • (b) perform (when called) the beta move

When called, this program takes as input the graph and the annotation file for the beta move patterns and then waits for the user to pick one of these patterns (i.e. perhaps the patterns are numbered in the annotation file, the user interface shows them on screen and the user clicks on one of them).

When the instance of the pattern is chosen by the user, the program erases from the input graph the pattern (or just comments it out, or makes a copy first of the input graph and then works on the copy, …) and replaces it by the following:

<edge source="n106" target="n104" sourceport="???_1" targetport="???_4"/>
<edge source="n105" target="n103" sourceport="???_3" targetport="???_2"/>

It works only when the nodes n101 and n102 are different from the nodes n103, n104, n105, n106, because if not, then erasing them leads to trouble. See the post Graphic beta move with details.

As an alternative one may proceed by introducing invisible nodes which serve as connection points for the arrows from n106 to n104 and from n105 to n103, and then erase those nodes among n101, n102 which are not among n103, n104, n105, n106.
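For illustration only, here is a sketch in Python (again with xml.etree.ElementTree, ignoring namespaces; the helper name is mine) of the replacement step in the simple case where n101 and n102 are distinct from their four neighbours. It assumes that the full LEFT pattern of the beta move is present and that nodes and edges are direct children of the element passed in.

import xml.etree.ElementTree as ET

def perform_beta_move(graph, lambda_id, app_id):
    """Erase the lambda node, the application node and the five edges touching
    them, then add the two rewiring edges of the beta move:
      (source of the lambda's middle_in) -> (target of the application's middle_out)
      (source of the application's right_in) -> (target of the lambda's left_out)"""
    edges = list(graph.iter("edge"))
    ends = {}
    for e in edges:   # locate the four outer endpoints of the pattern
        if e.get("target") == lambda_id and e.get("targetport") == "middle_in":
            ends["in_lambda"] = (e.get("source"), e.get("sourceport"))
        if e.get("source") == lambda_id and e.get("sourceport") == "left_out":
            ends["out_lambda"] = (e.get("target"), e.get("targetport"))
        if e.get("target") == app_id and e.get("targetport") == "right_in":
            ends["in_app"] = (e.get("source"), e.get("sourceport"))
        if e.get("source") == app_id and e.get("sourceport") == "middle_out":
            ends["out_app"] = (e.get("target"), e.get("targetport"))
    # erase the two nodes and every edge touching them
    for node in list(graph.iter("node")):
        if node.get("id") in (lambda_id, app_id):
            graph.remove(node)
    for e in edges:
        if lambda_id in (e.get("source"), e.get("target")) \
                or app_id in (e.get("source"), e.get("target")):
            graph.remove(e)
    # add the two new edges (the RIGHT side of the beta move)
    for (src, sp), (tgt, tp) in [(ends["in_lambda"], ends["out_app"]),
                                 (ends["in_app"], ends["out_lambda"])]:
        attrs = {"source": src, "target": tgt}
        if sp:
            attrs["sourceport"] = sp
        if tp:
            attrs["targetport"] = tp
        ET.SubElement(graph, "edge", attrs)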

___________________________________________

A concrete example:

reg_1

  • The input graph is the one from the first row of the figure:

<node id="n1" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n2" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n3" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>
<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>

[comment: these are the four free ends of arrows which are numbered in the figure by “1”, … , “4”.   So you have a graph with two inputs and two outputs,  definitely not a graph of a  lambda term! ]

<node id="n105" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n106" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n108" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>

[comment: these are 3 application nodes]

<node id="n107" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<node id="n109" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<node id="n110" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>

[comment: these are 3 lambda abstraction nodes]

<edge source="n1" target="n105" sourceport="out" targetport="right_in"/>
<edge source="n107" target="n105" sourceport="left_out" targetport="left_in"/>
<edge source="n105" target="n106" sourceport="middle_out" targetport="left_in"/>

<edge source="n2" target="n106" sourceport="out" targetport="right_in"/>
<edge source="n106" target="n107" sourceport="middle_out" targetport="middle_in"/>
<edge source="n107" target="n108" sourceport="right_out" targetport="left_in"/>

<edge source="n108" target="n109" sourceport="middle_out" targetport="middle_in"/>
<edge source="n110" target="n108" sourceport="right_out" targetport="right_in"/>
<edge source="n109" target="n110" sourceport="right_out" targetport="middle_in"/>

<edge source="n109" target="n4" sourceport="left_out" targetport="in"/>
<edge source="n110" target="n3" sourceport="left_out" targetport="in"/>

  • This graph reduces, via 3 beta reductions and arrow combing, to

<node id="n1" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n2" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n3" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>
<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>

<edge source="n1" target="n3" sourceport="out" targetport="in"/>
<edge source="n2" target="n4" sourceport="out" targetport="in"/>

<node id="n7" parse.indegree="1" parse.outdegree="1">
<data key="d0">invisible</data> </node>
<edge source="n7" target="n7"/>

[comment: the last of the 3 beta reductions also leaves a loop with no nodes, represented here, as agreed above, by a 2 valent invisible node with an edge from itself to itself]

____________________________________________

For this choice, which consists of using invisible nodes, a new program is needed (which may be useful for other purposes later):

(c) arrow combing

The idea is that a sequence of arrows  connected via 2-valent invisible nodes should count as an arrow in a chemlambda graph.

So this program does exactly this: if n1001 is different from n1002, then it replaces the pattern

<node id="n7" parse.indegree="1" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="in"/>
<port name="out"/>
</node>
<edge source="n1001" target="n7" sourceport="???_1" targetport="in"/>
<edge source="n7" target="n1002" sourceport="out" targetport="???_2"/>

by

<edge source="n1001" target="n1002" sourceport="???_1" targetport="???_2"/>
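A sketch of this combing step, in the same illustrative Python style as above (namespaces ignored; the 2-valent invisible nodes are recognised by their color and degrees; nodes and edges are assumed to be direct children of the element passed in):

import xml.etree.ElementTree as ET

def comb_arrows(graph):
    """Splice out every 2-valent invisible node n7: replace the pair of edges
    n1001 -> n7 -> n1002 by a single edge n1001 -> n1002, keeping the outer
    ports, provided n1001 and n1002 are different from n7."""
    def is_invisible_2valent(node):
        data = node.find("data")
        return (data is not None and data.text == "invisible"
                and node.get("parse.indegree") == "1"
                and node.get("parse.outdegree") == "1")
    changed = True
    while changed:
        changed = False
        edges = list(graph.iter("edge"))
        for node in list(graph.iter("node")):
            n_id = node.get("id")
            if not is_invisible_2valent(node):
                continue
            incoming = [e for e in edges if e.get("target") == n_id]
            outgoing = [e for e in edges if e.get("source") == n_id]
            if len(incoming) != 1 or len(outgoing) != 1:
                continue
            src, sp = incoming[0].get("source"), incoming[0].get("sourceport")
            tgt, tp = outgoing[0].get("target"), outgoing[0].get("targetport")
            if n_id in (src, tgt):
                continue   # a loop through the invisible node: leave it alone
            graph.remove(incoming[0])
            graph.remove(outgoing[0])
            graph.remove(node)
            attrs = {"source": src, "target": tgt}
            if sp:
                attrs["sourceport"] = sp
            if tp:
                attrs["targetport"] = tp
            ET.SubElement(graph, "edge", attrs)
            changed = True
            break   # the edge list changed, so re-scan from the start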

 

_________________________________________

 

The Quantomatic GUI may be useful for chemlambda (NTC vs TC, III)

Since I discovered Quantomatic (mentioned in NTC vs TC I), I have continued to look at it, and it seems that it already has, or almost has, the GUI which is step 0 in the Distributed GLC project.

This is very good news for me, because I tend to consider that possibly complex tasks are simple; therefore, as any lazy mathematician will tell you, it is always good if some piece of the work has been done before.

In the post A user interface for GLC I describe what we would need and this corresponds, apparently, to a part of what the Quantomatic GUI can do.

See Aleks Kissinger Pictures of Processes: Automated Graph Rewriting for Monoidal Categories and Applications to Quantum Computing, DPhil thesis, [arXiv:1203.0202] , Chapter 9.

Without much ado, I shall comment on the differences, with the hope that the similarities are clear.

Differences. I shall use as an anchor for explaining the differences the homepage of Aleks Kissinger, because it is well written and straightforward to read. Of course, this will not mean much if you don’t know what I am talking about concerning NTC vs TC, chemlambda or distributed GLC.

1. Aleks explains

“But “box-and-wire” diagrams aren’t just for physics and multilinear algebra. In fact, they make sense whenever there is a notion a “map”, a procedure for composing maps (i.e. vertical composition), and a procedure for putting two maps “side-by-side” (horizontal composition). That is, this notation makes sense in any monoidal category.”

Here is a first difference from chemlambda: the chemlambda graphs are open graphs (in the sense that they also have arrows with one or both ends free, as well as loops), but otherwise a chemlambda graph does not have any preferred external order of examination. The arrows of the graph are not “wires” and the nodes are not “boxes”. There is no meaning to vertical or horizontal composition, because there is no vertical and no horizontal.

2. Another quote from Aleks page:

“In Open Graphs and Monoidal Theories, Lucas Dixon and I defined the category of open-graphs and described how double-pushout graph rewriting, as defined in e.g. Ehrig et al5, can be applied to open-graphs.”

This marks the second basic difference, which consists of the effort we make to NOT go global. There is a very delicate boundary between staying local and taking a God’s pov, and algebra passes that boundary very quickly. Not that it breaks a law, but it adds extra baggage to the formalism, only for the need to explain things in the algebraic way.

Mind that it is very interesting to algebraize from God’s pov, but it might also be interesting to see how far one can go without taking the global pov.

 

3.  Maybe as an effect of 1., they think in terms of processes, while we think in terms of Actors. This is not the same, but OMG how hard it is to effect, only via exchanges of words, the brain rewiring which is needed to understand this!

But this is not directly related to the GUI, which is only step 0 for the distributed GLC.

_______________________________________

Putting these differences aside, it is still clear that:

  • chemlambda graphs are what they call “open graphs”
  • there are some basic tools in the Quantomatic gui which we need
  • mixing our minimal, purely local constraint on reasoning with their much more developed global point of view can only be fruitful, even if the goals and applications are different.

_______________________________________

What do you think about this?

_______________________________________

 

 

My first NSF experience and the future of GLC

Just learned that the project “Secure Distributed Computing with Graphic Lambda Calculus” will not be funded by NSF.

I read the reviews and my conclusion is that they are well done. The 6 reviewers all make good points and do a good job of detecting the strong points and the weaknesses of the project.

Thank you NSF for this fair process. As the readers of this blog know, I don’t have the habit of hiding my opinions about bad reviews, which sometimes may be harsh. Seen from this point of view, my thanks look, I hope, even more sincere.

So, what was the project about? Distributed computing, like in “GLC actors, artificial chemical connectomes, topological issues and knots”, arXiv:1312.4333 [cs.DC], which was branded as useful for secure computing. The project was submitted in January to the Secure and Trustworthy Cyberspace (SaTC) NSF program.

The point was to get funding which would allow the study of the Distributed GLC, which is for the moment fundamental research. There are reasons to believe that distributed GLC may be good for secure computing, principal among them being that GLC (and chemlambda, actually the main focus of the research) is not based on the IT paradigm of gates and wires, but instead on something which can be described as signal transduction, see How is different signal transduction from information theory? There is another reason, now described by the words “no semantics“.

But basically, this is not naturally a project in secure computing. It may become one, later, but for the moment the project consists of understanding the asynchronous, decentralized computations performed by GLC actors and their biological-like behaviour. See What is new in distributed GLC?

Together with Louis Kauffman, we are about to study this; he will present our paper Chemlambda, universality and self-multiplication, arXiv:1403.8046, at the ALIFE 14 conference.

There is much more to tell about this, parts were told already here at chorasimilarity.

From this moment I believe that instead of thinking security and secrecy, the project should be open to anybody who wishes to contribute, to use or to criticize. That’s the future anyway.

______________________________________________________

 

 

Zipper logic appeared in math.CO (combinatorics)

Today, a week after the submission, the article Zipper Logic, arXiv:1405.6095, appeared in math.CO, math.GT, and not in math.LO, math.GT.

Actually, combinatorics, especially lately, is everywhere, being probably one of the most dynamic research fields in mathematics.

There is a small world connection of zipper logic with combinatorics, because zipper logic is a variant of chemlambda, which is a variant of GLC, which has an emergent algebra sector (arXiv:1305.5786, section 5), which is a sort of approximate algebraic structure, alike, but not the same as, the approximate groups of arXiv:1110.5008.

____________________________________

 

Birth and death of zipper combinators (I)

Because zipper logic is a graph rewriting system which does not use variable names, just like GLC and chemlambda, it needs ways to multiply, distribute or kill things.

See  Chemlambda, universality and self-multiplication  for multiplication, distribution or propagation phenomena.

In this post we shall see how zipper combinators die or are born, depending on the sense of reading the moves. This can happen in several ways, one of them explained in this post.

This can also be seen as the statement: we can do garbage collection in chemlambda, GLC or zipper logic.

____________________________________

 

Zipper combinators.   The set of zipper combinators is the smallest set of zipper graphs with the properties:

  •  it contains the S, K, I zipper graphs defined in  the next figure

zipper_not_6

  • for any natural number n >0 and for any n+1 zipper combinators, the zipper graph obtained by connecting the out arrows of the  zipper combinators to the in arrows of the (+n) half-zipper is a zipper combinator.

appli

  • any zipper graph which is obtained by applying a zipper logic move to a zipper combinator is a zipper combinator.

 

 

I want to prove that for any zipper combinator A, there is a number n and  a succession of n  LOCAL PRUNING,  CLICK and ZIP moves such that:

zipd_0

Proof.  Because of the LOC PRUNING move satisfied by (+) half-zippers, it follows that by a finite number of these moves we can achieve the effect described in the next figure, for any zipper combinator A (of course, the number of these moves depends on A):

zipd_2

It is left to prove that the free arrow ending with a termination node can “eat”, one by one, all the I, K, S zipper combinators.

For the I combinator, it happens like this:

zipd_1

The case of the combinator K is described next:

zipd_3

The combinator S is absorbed by the following succession of moves:

zipd_4

 

The proof is done.

_____________________________________

We can read all of this backwards, by saying that an arrow connected to a termination node can give birth to any zipper combinator, with its out arrow connected to a termination node.

_____________________________________