Tag Archives: Lafont

Graphic lambda calculus and chemlambda (IV)

This post continues with chemlambda v2. For the last post in the series see here.

Instead of putting even more material here, I thought it saner to make a clear page with all the details about the nodes and rewrites of chemlambda v2. Down that page there are examples of conflicts.

Not included in that page is the extension of chemlambda v2 with nodes for Turing machines. The scripts have them, in the particular case of a busy beaver machine. You can find this extension explained in the article Turing machines, chemlambda style.

Turing machines appear here differently from the translation technique of Lafont (discussed here, see also (1), (2) for other relations between interaction combinators and chemlambda). Recall that he proves that interaction combinators are Turing universal by:

  • first proving a different kind of universality among interaction nets, to me much more interesting than Turing universality because it is purely graph related;
  • then proving that any Turing machine can be turned into an interaction nets graph rewrite system.

In this extension of chemlambda v2 the nodes for Turing machines are not translated from chemlambda, i.e. they are not given as chemlambda graphs. However, what’s interesting is that the chemlambda and Turing realms can work harmoniously together, even though they are based on different nodes.

An example is given in the Chemlambda for the people slides, under the name Virus structure with Turing machines, builds itself:

[animation: spiral_boole_bb_short]

but mind that the source link is no longer available, since I deleted the chemlambda G+ collection. The loops you see are busy beaver Turing machines; the structure in the middle is pure chemlambda.

 

 


The shuffle trick in Lafont’s Interaction Combinators

For the shuffle trick see The illustrated shuffle trick… In a way, it also appears in Lafont’s Interaction Combinators article, in the semantics part.

[image: lafont-4]

It’s in the left part of Figure 14.

In chemlambda the pattern involves one FO node and two FOE nodes. In this pattern there is first a FO-FOE rewrite and then a FI-FOE one. After these rewrites we have a FOE node instead of the FO node and two FO nodes instead of the previous two FOE nodes. There is also a swap of ports, as in the figure.

You can see it all in the linked post; here is an animation:

[animation: shuffle]

For previous posts about Lafont’s paper and its relations with chemlambda see:

If the nodes FO and FOE were dilations of arbitrary coefficients a and b, in an emergent algebra, then the equivalent rewrite would be possible if and only if we are in a vector space. (Hint: it implies linearity, which implies we are in a conical group; therefore we can use the particular form of dilations in the general shuffle trick and we obtain the commutativity of the group operation. The only commutative conical groups are vector spaces.)

In particular the em-convex axiom implies the shuffle trick, via Theorem 8.9 from arXiv:1807.02058. So the shuffle trick is a sign of commutativity. Hence chemlambda alone is still not general enough for my purposes.

You may find interesting the post Groups are numbers (1). Together with the em-convex article, it may indeed be deduced that [the use of] one-parameter groups [in Gleason-Yamabe and Montgomery-Zippin] is analogous to the Church encoding of naturals. One-parameter groups are numbers. The em-convex axiom could be weakened to the statement that 2 is invertible, and we would still obtain Theorem 8.9. So that’s when the vector space structure appears in the solution of Hilbert’s 5th problem. But if you are in a general group with dilations, where only the “em” part of the em-convex rewrite system applies (together with some torsor rewrites, because it’s a group), then you can’t invert 2, or generally any number other than 1, so you get only a structure of conical group at the infinitesimal level. However, naturals exist even there, but they are not related to one-parameter groups.

Graphic lambda calculus and chemlambda (III)

This post introduces chemlambda v2. I continue from the last post, which describes the fact that chemlambda v1, even though it has only local rewrites, does not work well when used with the dumbest possible reduction algorithms.

Nature has to work with the dumbest algorithms, or else we live in a fairy tale.

Chemlambda v2 is an artificial chemistry, in the following sense:

  • it is a graph rewrite system over oriented fatgraphs made of a finite number of nodes, from the following list: 5 types of 3-valent nodes: A (application), L (lambda abstraction), FO (fanout), FI (fanin), FOE (external fanout); 1 type of 2-valent node: Arrow; 3 types of 1-valent nodes: FRIN (free in), FROUT (free out), T (termination). Compared to chemlambda v1, there is a new node, the FOE. The nodes, but not the rewrites, are described in this early explanation called Welcome to the soup. (Mind that the gallery of examples available at the end of that explanation mixes chemlambda v1 and chemlambda v2 examples.)
  • the rewrites CO-COMM and CO-ASSOC of chemlambda v1 are no longer available; instead there are several new DIST rewrites: FO-FOE, L-FOE, A-FOE, FI-FO, and a new beta-like rewrite FI-FOE. As in chemlambda v1, the patterns of the rewrites fit with the interaction combinator rewrites if we forget the orientation of the edges, but the 3-valent nodes don’t have a principal port, so they don’t form interaction nets. Moreover, there are conflicts among the rewrites, i.e. there are configurations of 3 nodes in which one node belongs to two pairs of nodes, each pair admitting a rewrite. The order of application of rewrites may matter for such conflicts.
  • there is an algorithm for applying the rewrites, which is either the deterministic greedy algorithm with a priority list of rewrites (for example, beta rewrites have priority over DIST rewrites whenever there is a conflict), or the random application algorithm; a sketch of both algorithms follows after this list.
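To make the last point concrete, here is a minimal sketch in Python of the two application algorithms. It is not taken from the chemlambda scripts: the graph representation, the find_matches and apply_rewrite helpers, the pattern.nodes attribute and the particular priority list are illustrative assumptions.

import random

# Illustrative priority list (an assumption): beta-like rewrites before DIST rewrites.
PRIORITY = ["BETA", "FI-FOE", "FO-FOE", "L-FOE", "A-FOE", "FI-FO"]
RANK = {name: i for i, name in enumerate(PRIORITY)}

def greedy_step(graph, find_matches, apply_rewrite):
    """One deterministic greedy step: collect all left-pattern matches,
    resolve conflicts (overlapping patterns) by the priority list,
    then apply the surviving, pairwise disjoint rewrites."""
    matches = sorted(find_matches(graph),            # [(rewrite_name, pattern), ...]
                     key=lambda m: RANK.get(m[0], len(PRIORITY)))
    used, chosen = set(), []
    for name, pattern in matches:
        if used.isdisjoint(pattern.nodes):           # skip patterns overlapping a chosen one
            used.update(pattern.nodes)
            chosen.append((name, pattern))
    for name, pattern in chosen:
        graph = apply_rewrite(graph, name, pattern)
    return graph, bool(chosen)                       # (new graph, did anything happen?)

def random_step(graph, find_matches, apply_rewrite, p=0.5):
    """One random application step: each matched, non-overlapping
    pattern is rewritten with probability p, independently."""
    used = set()
    for name, pattern in find_matches(graph):
        if used.isdisjoint(pattern.nodes) and random.random() < p:
            used.update(pattern.nodes)
            graph = apply_rewrite(graph, name, pattern)
    return graph

The only difference that matters here is that the greedy version commits to a fixed conflict resolution, while the random version leaves the outcome of conflicts to chance.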

 

Sources for chemlambda v2:

 

The goal of chemlambda v2: to explore the possibility of molecular computers in this artificial chemistry.

This needs explanations. Indeed, does the system work with the simplest random algorithm? We are not interested in semantics, because it is, or relies on, a global notion. We are not (very) interested in reduction strategies for lambda terms, because they are not as simple as the dumbest algorithms we use here. Likewise for readback, etc.

So, does chemlambda v2 work well enough for making molecular computers? Pure untyped lambda calculus reduction problems are an inspiration. If the system works for the particular case of graphs related to lambda terms, then this is a bonus for this project.

As you see, instead of searching for an algorithm which could implement, say in a decentralized way, a lambda calculus reduction strategy, we ask whether a particular system reduces (graphs related to) terms with one algorithm from the fixed class of dumbest ones.

That is why universality in the sense of Lafont is fascinating. In this post I argued that Lafont’s universality property of interaction combinators means, in this pseudo-chemical sense, that the equivalent molecular computer based on interaction combinator reactions (though not on the translations) works for implementing a big enough class of reactions, which are in particular Turing universal (Lafont shows concretely that he can implement Turing machines).

(continues with part IV)

A quine in Lafont’s Interaction combinators

I continue with a second post about Y. Lafont’s Interaction combinators. Here is the first one.

Figure 3 of the article gives an example of a nonterminating computation:

[image: lafont-3]

This is a quine. Indeed it is a graph which has a periodic evolution under the deterministic greedy reduction algorithm, using the interaction rules (i.e. the graph rewrites) of interaction combinators.

By comparison, a chemlambda quine is a molecule (a graph in chemlambda) which has a periodic evolution under the deterministic greedy reduction algorithm using the chemlambda graph rewrites, with the priority of the rewrites set to “viral”, i.e. the DIST family of rewrites comes first. In chemlambda a priority of rewrites is needed (for the deterministic algorithm) because there exist conflicts between the rewrites, i.e. overlapping left patterns.
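Read operationally, the definition above can be checked by running the deterministic reduction and looking for a repeated state. A minimal sketch, assuming a hypothetical step function (for instance the greedy_step sketched in part (III) above, with its helpers fixed) and a hypothetical canonical function giving an isomorphism-invariant encoding of a graph:

def is_quine(graph, step, canonical, max_steps=1000):
    """A graph is (detected as) a quine if its deterministic evolution
    revisits a previous state; a reduction that simply stops is not periodic."""
    seen = {canonical(graph): 0}
    for t in range(1, max_steps + 1):
        graph, changed = step(graph)          # one deterministic greedy step
        if not changed:
            return False                      # no rewrite applies any more
        key = canonical(graph)
        if key in seen:
            return True                       # periodic, with period t - seen[key]
        seen[key] = t
    return False                              # undecided within max_steps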

About a third of the molecules from the library are chemlambda quines and they are interesting mostly when reduced with the random reduction algorithm. While for interaction combinators the random reduction algorithm brings nothing new (the system is confluent), for chemlambda with the random reduction algorithm the system is not confluent and the chemlambda quines may die. All of the quines from the library are at best immortal, i.e. the probability of death does not depend on the age of the molecule.
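The claim that the probability of death does not depend on the age of the molecule can be probed numerically: if it holds, the per-step death rate estimated over many random reductions should be roughly flat in time. A rough sketch, where random_step and is_dead are hypothetical stand-ins for one random reduction step and for the test that a molecule has died:

import copy

def hazard_profile(molecule, random_step, is_dead, runs=1000, horizon=500):
    """For each age t, estimate the fraction of still-alive runs that die
    at step t. A roughly constant profile means age-independent death."""
    alive = [0] * horizon
    died = [0] * horizon
    for _ in range(runs):
        g = copy.deepcopy(molecule)           # independent run on a fresh copy
        for t in range(horizon):
            alive[t] += 1
            g = random_step(g)
            if is_dead(g):
                died[t] += 1
                break
    return [d / a if a else 0.0 for d, a in zip(died, alive)]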

A reason for this phenomenon is that these chemlambda quines don’t use termination nodes (which correspond to the epsilon nodes of interaction combinators). The smallest chemlambda quine without T nodes is the 9_quine, which has 9 nodes. But we may use termination nodes and produce a quine which is similar to Lafont’s example:

[animation: lafont-quine-1-det]

In mol notation this quine is:

FO 1 2 3
T 2
FOE 3 4 1
T 4
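
As a quick illustration of the mol format (one node per line: the node type followed by its port identifiers, each identifier normally shared by two entries, which makes an edge), here is a small hypothetical Python checker; the valence table just restates the node list from part (III):

# Valence (number of ports) of each chemlambda v2 node type.
VALENCE = {"A": 3, "L": 3, "FO": 3, "FI": 3, "FOE": 3,
           "Arrow": 2, "FRIN": 1, "FROUT": 1, "T": 1}

def parse_mol(text):
    """Parse 'NODE port port ...' lines, checking node valences and
    reporting port identifiers that do not occur exactly twice."""
    nodes, count = [], {}
    for line in text.strip().splitlines():
        fields = line.split()
        if not fields:
            continue
        kind, ports = fields[0], fields[1:]
        if VALENCE.get(kind) != len(ports):
            raise ValueError("bad valence: " + line)
        nodes.append((kind, ports))
        for p in ports:
            count[p] = count.get(p, 0) + 1
    dangling = [p for p, c in count.items() if c != 2]
    return nodes, dangling

quine = "FO 1 2 3\nT 2\nFOE 3 4 1\nT 4"
nodes, dangling = parse_mol(quine)
print(len(nodes), "node entries;", "all ports matched" if not dangling else dangling)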

The animation is obtained with the scripts from the chemlambda repository, in the same way as those used for the comparison with the Dynamic GOI Machine. In order to obtain a deterministic reduction, all the weights (i.e. all the parameters “wei_*”) from the relevant awk script were set to 0.

You see that this is really a 6-node quine. Why? Because even in Lafont’s example, a deterministic greedy reduction would lead at step 2 to the simultaneous application of rewrites which increase the number of nodes and of rewrites which decrease it, so a correct application of the deterministic greedy algorithm would be similar to the example from chemlambda.

Maybe it would be interesting (as a tool) and straightforward to modify the chemlambda scripts into an interaction combinators version.

 

 

Universality of interaction combinators and chemical reactions

In the foundational article Interaction combinators, Yves Lafont describes interaction rules as having the form

[image: lafont-1]

He then gives three examples of particular families of interaction rules which can be used to simulate Turing Machines, Cellular Automata and Unary Arithmetics.

The main result of his article (Theorem 1) is that there is an algorithm which allows one to translate any interaction system (i.e. a collection of interaction rules which satisfies some natural conditions) into the very simple system of his interaction combinators:

[image: lafont-2]

In plain words, he proves that there is a way to replace the nodes of a given interaction system by networks of interaction combinators in such a way that any of the interaction rules of that interaction system can be achieved (in a finite number of steps) by the interaction rules of the interaction combinators.

Because he has the example of Turing Machines as an interaction system, it follows that the interaction combinators are universal in the Turing sense.

The most interesting thing for me is that Lafont has a notion of universality for interaction systems, the one he uses in his Theorem 1. This universality of interaction combinators is somehow larger than the universality in the sense of Turing. It is a notion of universality at the level of graph rewrite systems, or, if you want, at the level of chemical reactions!

Indeed, why not proceed as in chemlambda and see an interaction rule as if it’s a chemical reaction? We may add an “enzyme” per interaction rule, or we may try to make the reaction conservative (in the number of nodes and wires) as we did in chemlambda strings.

Probably the rewrites of chemlambda are also universal in the class of directed interaction networks. If we take seriously the idea that graph rewrites are akin to chemical reactions, then universality in the sense of Lafont means, more or less:

any finite collection of chemical reactions among a finite number of patterns of chemical molecules can be translated into reactions among chemlambda molecules

But why keep talking about chemlambda and not about the original interaction combinators of Lafont? Let’s make the same hypothesis as in the article Molecular computers and deduce that:

such molecular computers, which embody the interaction combinator rewrites as chemical reactions, can indeed simulate any other finite collection of chemical reactions, in particular life.

For me that is the true meaning of Lafont universality.