Tag Archives: chemlambda

9-quine string animation

I use the chemlambda strings version to show how the 9-quine works. [What is a quine in chemlambda? See here.]

 

9-quine-string-anim

 

 

The 9-quine is the smallest quine in chemlambda which does not have a termination node. There exist smaller quines if termination nodes are admitted, for example the chemlambda equivalent of a quine from Interaction Combinators which appears in Lafont's foundational article.

As you can see, this version is conservative and there are no enzymes.

I shall come back with a post which explains why and how the 9-quine dies. It is, of course, due to the conflicts in chemlambda; see the examples from the page on chemlambda v2.

Lambda calculus inspires experiments with chemlambda

In the now deleted chemlambda collection I told several stories about how lambda calculus can bring inspiration for experiments with chemlambda. I select for this post a sequence of such experiments. For previous related posts here see this tag and this post.

Let’s go directly to the visuals.

Already in chemlambda v1 I noticed the interesting behaviour of the graph (or molecule) which is obtained from the lambda term of the predecessor applied to a Church number. With the deterministic greedy algorithm of reductions, after the first stages of reduction there is a repeating pattern of reduction, (almost) up to the end. The predecessor applied to the Church number molecule looks almost like a closed loop made of pairs A-FO (because that's how a Church number appears in chemlambda), except for a small region which contains the graph of the predecessor, or what it becomes after a few rewrites.

In chemlambda v2 we have two kinds of fanouts: FO and FOE. The end result of the reduction of the same molecule, under the same algorithm, is different: where in chemlambda v1 we had FO nodes (at the end of the reduction), now we have FOE nodes. Otherwise it is the same phenomenon.

Here it is, with black and white visuals:

pred

Made by recording this live (js) demo.
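For reference, here is the lambda calculus side of the same computation, sketched in Python with Church numerals and the standard pair trick for the predecessor (the names church, PRED, etc. are mine, not those of the chemlambda scripts):

# Church numerals and the predecessor, the lambda calculus side of the molecule.
def church(n):
    # the Church numeral n: a function applying f exactly n times
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def to_int(c):
    return c(lambda k: k + 1)(0)

# pred via the standard pair trick: step (a, b) -> (b, b+1), start from (0, 0),
# apply the step n times and keep the first component.
PAIR = lambda a: lambda b: lambda s: s(a)(b)
FST = lambda p: p(lambda a: lambda b: a)
SND = lambda p: p(lambda a: lambda b: b)
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PRED = lambda n: FST(n(lambda p: PAIR(SND(p))(SUCC(SND(p))))(PAIR(church(0))(church(0))))

print(to_int(PRED(church(10))))   # prints 9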

1. What happens if we start not from the initial graph, but from the graph after a small number of rewrites? We just have to cut the “out” root of the initial graph, and some nodes from its neighbourhood, and glue the ends back, so that we obtain a repeating pattern walking on a circular train track.

Here it is, this time with the random reduction algorithm:

bigpred_train-opt

I previously called this graph an “ouroboros”. Or a walker.

2. That is interesting: it looks like a creature (it can keep its “presence”) which walks in a single direction in a 1-dimensional world. What could be the mechanism?

Penrose comes to mind, so in the next animation I also use a short demonstration from a movie by Penrose.

bigpred_penrose-opt

 

3. Let's go back to the lambda calculus side and recall that the algorithm for the translation of a lambda term to a chemlambda molecule is the same as the one from GLC, i.e. the one from Section 3 here. There is a freedom in this algorithm, namely that trees of FO nodes can be rewired as we wish. On one hand this is normal for GLC and chemlambda v1, which have the CO-COMM and CO-ASSOC rewrites

convention_3

In chemlambda v2 we don't have these rewrites at all, which means that in principle two different molecules, obtained from the same lambda term, which differ only by the rewiring of the FO nodes, may reduce differently.
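To make the rewiring freedom concrete, here is a toy Python sketch (my own; the port names and node tuples are hypothetical, not the actual mol format) of two ways of fanning out the same source to the same leaves: a comb of FO nodes and a balanced tree of FO nodes. In chemlambda v1 the CO-COMM and CO-ASSOC rewrites make such wirings interchangeable; in chemlambda v2 they are, a priori, different molecules.

import itertools

_counter = itertools.count()
def port():
    # fresh port names, purely illustrative
    return "p{}".format(next(_counter))

def comb(src, leaves):
    """Fan out src to the leaves with a left comb of FO nodes.
    Returns (nodes, wires), where wires are direct port-to-port connections."""
    nodes = []
    for leaf in leaves[:-1]:
        rest = port()
        nodes.append(("FO", src, leaf, rest))
        src = rest
    return nodes, [(src, leaves[-1])]

def balanced(src, leaves):
    """Fan out src to the same leaves with a balanced binary tree of FO nodes."""
    if len(leaves) == 1:
        return [], [(src, leaves[0])]
    mid = len(leaves) // 2
    left, right = port(), port()
    nl, wl = balanced(left, leaves[:mid])
    nr, wr = balanced(right, leaves[mid:])
    return [("FO", src, left, right)] + nl + nr, wl + wr

leaves = [port() for _ in range(4)]
print(comb("src", leaves))       # 3 FO nodes chained as a comb
print(balanced("src", leaves))   # 3 FO nodes arranged as a balanced tree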

In our case it would be interesting to see if the same is true for the FOE nodes as well. For example, remark that the closed loop, excepting the walker, is made of a tree of FOE nodes, a very simple one. What happens if we perturb this tree, say by permuting some of the leaves of the tree, i.e. by rewiring the connections between FOE and A nodes?

bigpred_train_perm-opt

The “creature” survives and now it walks in a world which is no longer 1 dimensional.

Let’s play more: two permutations, this time let’s not glue the ends of the loop:

bigpred_train_egg

It looks like a signal transduction from the first glob to the second. Can we make it more visible, say by making the old nodes invisible and the new ones visible? Also, let's fade the links by making them very large and almost transparent.

bigpred_train_egg_mist_blue

Signal transduction! (Recall that we don't have a proof that two molecules obtained from the same lambda term, but with rewired FO trees, reduce to the same molecule. Actually this is false in general, and true only for a class of lambda terms. The math of this is both fascinating and somewhat useless, unless we either use chemlambda in practice or we build chemlambda-like molecular computers.)

4.  Another way to rewire the tree of FOE nodes is to transform it into another tree with the same leaves.

bigpred_tree-opt

 

5. Wait, if we understand how exactly this works, then we realize that we don't really need this topology; it should also work for topologies like generalized Petersen graphs, for example a dodecahedron.

dodecahedron_walker

 

This is a walker creature which walks in a dodecahedral “world”.

6. Can the creature eat? Let's put something on its track and see if it eats it and if it modifies the track, while keeping its shape.

walker_bit-opt

So the creature seems to have a metabolism.

We can use this for remodeling the world of the creature. Look what happens after many passes of the creature:

walker_bit_new

 

7. What if we combine the “worlds” of two creatures, identical otherwise? Will they survive the encounter, will they interact, or will they pass through each other like solitons?

bigpred_bif

 

Well, they survive. Why?

8. What happens if we shorten the track of the walker as much as possible? We obtain a graph with the following property: after one (or a given finite number of) steps of the greedy deterministic algorithm we obtain an isomorphic graph. A quine! A chemlambda quine.

At first it looks as if we obtained a 28-node quine. After some analysis we see that we can reduce this quine to a 20-node quine. A 20-quine.

Here is the first observation of the 20-quine under the random algorithm

20_quine_50steps

According to this train of thought, a chemlambda quine is a graph which has a periodic evolution under the greedy deterministic algorithm, with the priority list of rewrites giving the DIST rewrites (which add nodes) greater priority than the beta and FI-FOE rewrites (which remove nodes), and which does not have termination nodes (because termination nodes lead to some trivial quines).
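As a hedged sketch of this definition: if greedy_step were a function implementing one pass of that greedy deterministic algorithm (it is not written here, it lives in the chemlambda scripts), then checking the quine property amounts to a periodicity test up to graph isomorphism. The node "type" attribute below is an assumption about how the molecule would be stored; networkx is used only for the isomorphism test.

import networkx as nx

def quine_period(molecule, greedy_step, max_period=16):
    """Return the smallest k <= max_period such that k greedy steps bring the
    molecule back to a graph isomorphic to the initial one, or None."""
    same_type = lambda a, b: a.get("type") == b.get("type")
    current = molecule
    for k in range(1, max_period + 1):
        current = greedy_step(current)
        if nx.is_isomorphic(current, molecule, node_match=same_type):
            return k
    return None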

These quines are interesting under the random reduction algorithm, which transforms them into mortal living creatures with a metabolism.

____________

So this is an example of how lambda calculus can inspire chemlambda experiments, as well as interesting mathematical questions.

A project in chemical computing and Lafont universality

The post Universality of interaction combinators and chemical reactions ends with the idea that Lafont's universality notion for interaction systems may be the right one for chemical computing.

These days are strange, every one comes with some call from one of my old projects. (About new ones soon, I have so many things.) Today is even more special because there were two such calls. One of them was from what I wrote in the A project in chemical computing page from April 2015. It ends with:

    If you examine what happens in this chemical computation, then you realise that it is in fact a means towards self-building of chemical or geometrical structure at the molecular level. The chemlambda computations are not done by numbers, or bits, but by structure processing. Or this structure processing is the real goal!
     Universal structure processing!

There is even this video about an Ackermann function molecular computer, which I had forgotten about.

The idea is that the creation of a real molecule to compute Ackermann(2,2) would be the coolest thing ever made in chemical computing. If that is possible, then an Ackermann goo made from Ackermann(4,4) would be just as possible:

ackermann_4_4_75steps
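For scale, here is the usual two-argument Ackermann function in Python (the standard definition, nothing chemlambda-specific): Ackermann(2,2) is a modest 7, while Ackermann(4,4) is a tower of exponentials far beyond anything that could be computed atom by atom, hence the goo.

def ack(m, n):
    # standard Ackermann-Peter recursion
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 2))   # 7
# ack(4, 4) = 2^(2^(2^65536)) - 3: do not try to evaluate it.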

In Graphic lambda calculus and chemlambda (III) I comment again on Lafont:

    • Lafont universality property of interaction combinators means, in this pseudo-chemical sense, that the equivalent molecular computer based on interaction combinators reactions (though not the translations) works for implementing a big enough class of reactions, which are Turing universal in particular (Lafont shows concretely that he can implement Turing machines).

 

In the series about Lafont interaction combinators and chemlambda (1) (2) (3), as well as in the paper version of the article Molecular computers, an effort is made to reconnect chemlambda research with much older work by Lafont. [UPDATE: I retrieved this, which I had forgotten about; it's mostly chemlambda v1 to chemlambda v2, see also this post.]

The numberphile microbe and the busy beaver

This is another weirdly named but contentful post, after this one. During an attempt to launch myself into video explanations, I made a post on the numberphile microbe.

The numberphile microbe is the chemlambda version of a multiplication of two Church numbers, in this case 5×5=25. I called the creature evolving in the video a “numberphile microbe” because it really consumes copies of the number 5, metabolizes them and eventually produces 25. In a very careful way, though, which inspired the following description (but you have to see the video from that post):

“The numberphile microbe loves Church numbers. His strategy is this: never one without the other. When he finds one Church number he looks around for the second one. Then he chains the first to the second and only after that he starts to slowly munch the head of the first. Meanwhile the second Church number watches the hapless first Church number entering, atom by atom, in the numberphile mouth.

Only the last Church number survives, in the form of the numberphile’s tail.”

The mol file used is times_only.mol. Yes, all right, it is the mol version of the AST of a lambda term.
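For comparison, the lambda calculus content of that mol file can be sketched in Python with Church numerals (this is the standard encoding of multiplication, not the mol file itself):

def church(n):
    # the Church numeral n: apply f to x exactly n times
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

TIMES = lambda m: lambda n: lambda f: m(n(f))   # composing iterators = multiplication

def to_int(c):
    return c(lambda k: k + 1)(0)

print(to_int(TIMES(church(5))(church(5))))   # 25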

You can see the numberphile also in this animation, together with a busy beaver Turing machine (the chemlambda version is explained here):

times_only_bb

 

In the first half of the animation you see the “numberphile” on the left and the busy beaver as a reddish loop on the right.

What happens is that the lambda term for 5×5 reduces to 25 while at the same time the busy beaver machine works too. Meanwhile, the Church number 25 in the making already makes the small loop replicate and grow bigger and bigger, eventually 25 times bigger.

So that explains the title.

The mol file used is times_only_bb.mol. Open it and see how it is different from the first.

You can see a simulation (js) of a Church number applied to a busy beaver here.
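For readers who want the non-chemlambda baseline, here is a plain Python simulation of the 2-state, 2-symbol busy beaver; I am assuming this is the machine meant, the demo may use a different one. It halts after 6 steps with 4 ones on the tape.

from collections import defaultdict

# transition table: (state, read symbol) -> (write, move, next state)
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run():
    tape = defaultdict(int)   # blank symbol is 0
    pos, state, steps = 0, "A", 0
    while state != "HALT":
        write, move, state = RULES[(state, tape[pos])]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

print(run())   # (6, 4): halts after 6 steps with 4 ones written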

And the most important is: during the making of this short movie, no human director was present to stage the act.

An extension of hamiltonian mechanics

This is an introduction to the ideas of the article arXiv:1902.04598.

UPDATE: If you think about a billiard-ball computer, the computer is in the expression of the information gap. The model also applies to chemlambda: molecules have a hamiltonian as well, and the graph rewrites, aka chemical reactions, have a description in the information gap. That's part of the kaleidos project 🙂

__

Hamiltonian mechanics is the mechanism of the world. Indeed, the very simple equations (here the dot means a time derivative)

hamiltonian-1

govern everything. Just choose an expression for the function H, called the hamiltonian, and then solve these equations to find the evolution in time of the system.
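Written out, for a state z = (q, p), these are the Hamilton equations \dot{q} = \partial H / \partial p and \dot{p} = - \partial H / \partial q.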

Quantum mechanics is in a very precise sense the same thing. The equations are the same, only the formalism is different. There is a hamiltonian which gives the evolution of the quantum system…

Well, until measurement, which is an addition to the beautiful formalism. So we can say that hamiltonian mechanics, in the quantum version, and the measurement algorithm are, together, the basis of the quantum world.

Going back to classical mechanics, the same thing happens. Hamiltonian mechanics can be used as is in astronomy, or when we model the behavior of a robotic arm, or another purely mechanical system. However, in real life there are behaviors which go beyond this. Among them: viscosity, plasticity, friction, damage, unilateral contact…

There is always, in almost all applications of mechanics, this extra ingredient: the system does not only have a hamiltonian, there are other quantities which govern it and which, most of the time, make the system behave irreversibly.

Practically every  object, machine or construction made by humans needs knowledge beyond hamiltonian mechanics. Or beyond quantum mechanics. This is the realm of applied mathematics, of differential equations, of numerical simulations.

In this classical mechanics for the real world we need the hamiltonian and we also need to explain in which way the object or material we study is different from all the other objects or materials. This one is viscous, plastic, elasto-plastic, elasto-visco-plastic, there is damage, you name it; these differences are studied and they add to hamiltonian mechanics.

They should add, but in practice they don't. Instead, what happens is that the researchers interested in such studies choose to renounce the beautiful hamiltonian mechanics formalism and to go back to Newton, adding their knowledge about irreversible behaviours there.

(There is another aspect to be considered if you think about mechanical computers. They are mostly nice thought experiments, very powerful idea generators. Take for example a billiard-ball computer. It can't be described by hamiltonian mechanics alone because of the unilateral contact of the balls with the billiard table and of the balls with one another. So we can study it, but we have to add to the hamiltonian mechanics formalism.)

From all this we see that it may be interesting to study if there is any information content in the deviation from hamiltonian mechanics.

We can measure this deviation by a gap vector, defined by

hamiltonian-2

and we need new equations for the gap vector \eta. Very simple, then: suppose we have the other ingredient we need, a likelihood function \pi \in [0,1], and we add that

hamiltonian-3

where z = z(t) = (q(t), p(t)). That is, we ask that if the system is in the state z then the velocity \dot{z} and the gap vector \eta maximize the likelihood \pi.
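(In symbols, and modulo the exact conventions of the article: \eta measures the difference between the actual velocity \dot{z} and the hamiltonian vector field X_H(z), and the condition reads (\dot{z}(t), \eta(t)) \in \mathrm{argmax} \, \pi(z(t), \cdot, \cdot).)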

Still too general: how can we choose the likelihood? We may take the following condition

hamiltonian-4

that is, we can suppose that the algorithm max gives a categorical answer when applied to either the 2nd or the 3rd argument of the likelihood.

(It’s Nature’s business to embody the algorithm max…)

We define then the information content associated to the likelihood as

hamiltonian-5

So now we have a principle of minimal information content of the difference from hamiltonian evolution: minimize

hamiltonian-6
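(A natural reading, which may differ in details from the article: take the information content I = - \log \pi and minimize the integral of I(z(t), \dot{z}(t), \eta(t)) over the time interval of the evolution.)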

 

In arXiv:1902.04598 I explain how this extension of hamiltonian mechanics works wonderfully with viscosity, plasticity, damage and unilateral contact.

[see also this]

Graphic lambda calculus and chemlambda (IV)

This post continues with chemlambda v2. For the last post in the series see here.

Instead of putting even more material here, I thought it saner to make a clear page with all the details about the nodes and rewrites of chemlambda v2. Further down that page there are examples of conflicts.

Not included in that page is the extension of chemlambda v2 with nodes for Turing machines. The scripts have them, in the particular case of a busy beaver machine. You can find this extension explained in the article Turing machines, chemlambda style.

Turing machines appear here differently from the translation technique of Lafont (discussed here; see also these (1), (2) for other relations between interaction combinators and chemlambda). Recall that he proves that interaction combinators are Turing universal by:

  • first proving a different kind of universality among interaction nets, to me much more interesting than Turing universality, because it is purely graph related
  • then proving that any Turing machine can be turned into an interaction net graphical rewrite system.

In this extension of chemlambda v2 the nodes for Turing machines are not translated from chemlambda, i.e. they are not given as chemlambda graphs. However, what's interesting is that the chemlambda and Turing realms can work harmoniously together, even if based on different nodes.

An example is given in the Chemlambda for the people slides, with the name Virus structure with Turing machines, builds itself

spiral_boole_bb_short

but mind that the source link is no longer available, since I deleted the chemlambda g+ collection. The loops you see are busy beaver Turing machines; the structure in the middle is pure chemlambda.

 

 

The shuffle trick in Lafont's Interaction Combinators

For the shuffle trick see The illustrated shuffle trick… In a way, it's also in Lafont's Interaction Combinators article, in the semantics part.

lafont-4

It’s in the left part of Figure 14.

In chemlambda the pattern involves one FO node and two FOE nodes. In this pattern there is first a FO-FOE rewrite and then a FI-FOE one. After these rewrites we see that we now have a FOE node instead of the FO node and two FO nodes instead of the previous two FOE nodes. Also there is a swap of ports, as in the figure.

You can see it all in the linked post; an animation is this:

shuffle

For previous posts about Lafont's paper and its relations with chemlambda see:

If the nodes FO and FOE were dilations of arbitrary coefficients a and b, in an emergent algebra, then the equivalent rewrite is possible if and only if we are in a vector space. (Hint: it implies linearity, which implies we are in a conical group, therefore we can use the particular form of dilations in the general shuffle trick and we obtain the commutativity of the group operation. The only commutative conical groups are vector spaces.)
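For concreteness, in a vector space the dilation of coefficient a based at x is \delta^{x}_{a} y = x + a(y - x); this is the model to keep in mind for the statement above.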

In particular the em-convex axiom implies the shuffle trick, via theorem 8.9 from arXiv:1807.02058. So the shuffle trick is a sign of commutativity. Hence chemlambda alone is still not general enough for my purposes.

You may find the post Groups are numbers (1) interesting. Together with the em-convex article, it may indeed be deduced that [the use of] one-parameter groups [in Gleason-Yamabe and Montgomery-Zippin] is analogous to the Church encoding of naturals. One-parameter groups are numbers. The em-convex axiom could be weakened to the statement that 2 is invertible and we would still obtain theorem 8.9. So that's when the vector space structure appears in the solution of Hilbert's 5th problem. But if you are in a general group with dilations, where only the “em” part of the em-convex rewrite system applies (together with some torsor rewrites, because it's a group), then you can't invert 2, or generally any number other than 1, so you get only a structure of conical group at the infinitesimal level. However, naturals exist even there, but they are not related to one-parameter groups.