Ouroboros predecessor (IV): how the walker eats

Continues from Ouroboros predecessor (III): the walking machine. This time you have to imagine that the walker machine sits on a long enough train track.

The regularity of the train track is corrupted by a bit of food (appearing as an L node connected to a termination node); see the next (BIG) picture. The food sits at the right of the walker.

You can see (maybe if you click on the image to make it bigger) that the walker ingests the food. The ingested part travels through the walker organism and is eventually expelled as a pair of L and A nodes.




Perhaps, by clever modifications of the walker (and some experiments with its food) one can make a Turing machine.

This would give a direct proof that chemlambda with the sequential strategy is universal as well. (Well, that’s only of academic interest, and to build trust, before going to the really nice part, i.e. distributed, decentralized, alife computations.)


Ouroboros predecessor (III): the walking machine

In the post Ouroboros predecessor (II): start of the healing process there is a curious phenomenon: there are 3 quasi-identical reduction steps, each involving 8 rewrites.

That is because there is a walking machine in those graphs.

Explanations follow.

Recall the reduction strategy:

  • at each step we do all the possible (non-conflicting) graph rewrites involving the moves BETA, FAN-IN, DIST, LOC PRUNING, from left to right only. See the definition of moves post.
  • then we do all the COMB moves until there is no COMB move possible (also from left to right only).
  • then we repeat.

In the drawings the COMB moves are not shown explicitly.
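As a sketch of the control flow only, the strategy can be written as a loop. The names `find_redexes`, `rewrite` and `comb` below are placeholders of my own for the real chemlambda pattern matchers; to keep the sketch runnable, it is demonstrated on a toy string-rewriting system, not on actual graphs.

```python
# Sketch of the sequential strategy's control flow. `find_redexes`,
# `rewrite` and `comb` stand in for the real chemlambda matchers;
# here they are exercised on a toy string rewriter.

def run_strategy(state, find_redexes, rewrite, comb):
    while True:
        redexes = find_redexes(state)   # all LEFT patterns, left to right
        if not redexes:
            return state
        for r in redexes:               # all non-conflicting moves of this step
            state = rewrite(state, r)
        state = comb(state)             # then COMB moves until exhaustion

# toy system: the only move rewrites "ab" to "b"; comb does nothing
result = run_strategy(
    "aab",
    lambda s: ["ab"] if "ab" in s else [],
    lambda s, r: s.replace(r, "b", 1),
    lambda s: s,
)
```

The loop mirrors the three bullets above: find all moves, do them, then do COMB moves, and repeat until nothing matches.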

Let’s come back to the walking machine. You can see it in the following figure.



In the upper side of the figure we see one of the graphs from the reduction of the “ouroboros predecessor”, taken from the last post.

In the lower side there is a part of this graph which contains the walking machine, with the same port names as in the upper side graph.

What I claim is that in a single reduction step the machine “goes to the right” on the train track made by pairs of FO and A nodes. That is why some of the reduction steps from the last post look alike.

One reduction step will involve:

  • 8 reduction moves, namely 4 DIST, 2 BETA, 2 FAN-IN
  • followed by some COMB moves.

Let’s start afresh, with the walking machine on tracks, with new port names (numbers).



For the sake of explanation only, I shall first do the two BETA and the two FAN-IN moves, followed by the four DIST moves. There is nothing restrictive here, because the moves are all independent; moreover, according to the reduction strategy, these are all the moves which can be done in this step, so they could equally be done at once.

OK, what do we see? In the upper side of this figure there is the walking machine on tracks, with a new numbering of ports. We notice some patterns:

  • the pair of L and A nodes, i.e. L[1,2,3] A[2,35,1], which in the figure appears over the A node A[3,4,5]. Remark that A[3,4,5] would make a good pair (i.e. a part of the “track”) with FO[38,4,36] if it had the ports “3” and “5” switched.
  • the pattern of 5 red FI and L nodes from the middle upper side of the walking machine
  • the 3 green and 2 red nodes which form a kind of pentagon at the right side of the walking machine
  • the 2 right-hand DIST patterns for the application node (green) and the 2 right-hand DIST patterns for the lambda node (3 red, one green), which are like 4 train cars on the track.

In the lower part of the figure we see what the graph looks like after the application of the 2 BETA moves and the 2 FAN-IN moves which are possible.

Let’s look closer. The next figure takes the graph from the lower part of the previous figure. Beneath it is the same graph, rearranged on the page so that the patterns become simpler to see. Here is the figure:




Recall that we are working with graphs (called g-patterns, or molecules), not with particular embeddings of the graphs in the plane. The two graphs are the same; only the drawings in the plane are different. Chemlambda neither cares about nor uses embeddings. The rearrangement is only for you, the reader, to help you see things better.

OK, what do we see:

  • some arrows (edges) carry 2 names on them; these are Arrow elements which still exist because we have not yet done the COMB moves
  • there is already a new pair of A and FO nodes (in green, at the left of the lower graph); the piece of train track missing at the right of the lower graph has “magically” appeared at the left
  • then, at the right of the track piece which appeared at the left, the walking machine already looks like it did before the moves, in the sense that there is an A node with “switched” ports and a pair of green and red nodes hovering over it,
  • moreover the pattern of 5 red nodes is there again, …

… but all these patterns are not the old ones, but new ones!

The 4 train cars made by DIST patterns are missing! Well, they appear again after we do the remaining 4 DIST moves.

In the next figure we see the result of these 4 DIST moves. I did not number the new edges which appear.




I also did the COMB moves; if you look closely you will see that every arrow now has either one number or none. The arrows without numbers are those which appeared after the DIST moves.

Let’s compare the initial and final graphs, in the next figure.




We see that indeed, the walking machine went to the right! It did not move as one piece; instead, the walking machine dismembered itself and reconstructed itself again.

This is of course like the guns from the Game of Life, but with a big difference: here there is no external grid!

Moreover, the machine destroyed 8 nodes and 16 arrows (by the BETA, FAN-IN and COMB moves) and reconstructed 8 nodes and 16 arrows by the DIST moves. But look, the old arrows and nodes migrated inside and outside of the machine, assembling in the same patterns.

This is like a metabolism…





Ouroboros predecessor (II): start of the healing process

Let’s continue from the place we stopped in Ouroboros predecessor, another chemlambda reduction. After the first reduction stage, which involves 6 rewrites, we see that the signal is given for the start of the healing process.



The signal for the healing is given by the beta reduction

L[59,59,23] A[23,27,14] –beta–>

Arrow[59,14] Arrow[27,59]

The COMB moves are not shown in the figures, but in this case they go like this:

Arrow[59,14] Arrow[27,59] –COMB–>

Arrow[27,14]

and then

A[52,54,27] Arrow[27,14] –COMB–>

A[52,54,14]

In the third graph we see the element A[19,54,27], which comes from yet another COMB move

A[52,54,27] Arrow[19,52] –COMB–>

A[19,54,27]

where the Arrow[19,52] comes from the FAN-IN move

FI[6,19,2] FO[2,52,53] –FAN-IN–>

Arrow[6,53] Arrow[19,52]

There are 8 rewrites per reduction step, starting from the 2nd figure. The repeating patterns are:

  • a train of 4 DIST patterns which travels along the external ring of FO elements (external in our arbitrary embedding in the plane).
  • a “healing” pair which appears for the first time in the 3rd figure as A[30,50,31] L[31,30,19]. This healing pair is consumed (by a beta move) each time, but is recreated. In the 4th figure we see it again as A[47,69,46] L[46,47,29], and in the 5th figure as A[67,89,65] L[65,67,42]. Its role is to switch the ports of an A element into the right position, forming A and FO pairs which heal the graph back.
  • FI nodes arranged so that at any step of the reduction only two FAN-IN moves are possible: one coming from a newly created DIST pattern, the other from an older FI node.

The number of nodes is the same from the 2nd figure to the 5th.

What will happen next?


Ouroboros predecessor, another chemlambda reduction

Well, let’s see if I learned a trick from the predecessor function in the post Answer to “what reduction is this?”.

I make an ouroboros from something like the Pred 8:




We’re in the middle of the computation. What will it eventually give, can you guess?

Next time!



Answer to “what reduction is this?”

In lambda calculus the predecessor function is surprisingly difficult, compared, of course, with the successor, addition or multiplication.

In the post What reduction is this? I used chemlambda with the stupid sequential reduction strategy stated in Y again: conflict!, namely:

  • we start with a g-pattern and we reduce it sequentially
  • at each step we identify first all the LEFT patterns of the following moves: beta, DIST, FAN-IN, LOC PRUNING
  • we do the moves only from LEFT to RIGHT
  • repeat until no move is available OR until conflict.

… And there is no conflict in the predecessor reduction.
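To illustrate what “conflict” means in this strategy, here is a minimal, hypothetical sketch of my own: two LEFT patterns conflict when they claim a common node, so performing one move destroys the other’s pattern. The function and the encoding of redexes as tuples of node indices are assumptions for illustration, not the blog’s code.

```python
# Hypothetical sketch: a "conflict" arises when two LEFT patterns overlap,
# i.e. two candidate moves match a common node of the g-pattern.
# Each redex is represented as a tuple of the node indices it matches.

def find_conflicts(redexes):
    owner = {}      # node index -> the redex that claimed it first
    clashes = []
    for redex in redexes:
        for node in redex:
            if node in owner:
                clashes.append((owner[node], redex))
            else:
                owner[node] = redex
    return clashes

no_clash = find_conflicts([(0, 1), (2, 3)])   # independent moves
clash = find_conflicts([(0, 1), (1, 2)])      # both moves claim node 1
```

When `find_conflicts` returns a non-empty list, the strategy above would stop; in the predecessor reduction this never happens.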

In the post “What reduction is this?” I asked some questions, let me answer:

  • what kind of evaluation strategy is this?  NO EVALUATION STRATEGY, BECAUSE THERE ARE NO VALUES
  • are there reduction steps and self-multiplication steps? NO, THEY MIX, WITH NO EXTERNAL CONTROL
  • is this in lambda calculus? NO, IT IS INSPIRED BY LAMBDA CALCULUS, BUT BETTER

This is a streamlined version of the reduction hidden in

PRED(3) –> 2

where numbers appear as stacks of paired FO and A nodes. They are “bare” numbers, in the sense that all the currying has been eliminated.
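For reference, here is the same computation in plain lambda calculus, sketched with Python closures as Church numerals. This is the standard pairs-based predecessor; the chemlambda graph encodes a decurried variant of the same idea, so take this only as a reference point, not as the graph’s literal term.

```python
# Church numerals as Python closures, and the classic predecessor via
# pairs: PRED n = fst( n (lambda (a, b): (b, succ b)) (0, 0) ).

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def pred(n):
    step = lambda p: (p[1], succ(p[1]))     # (a, b) -> (b, b + 1)
    return n(step)((zero, zero))[0]         # apply step n times, keep first

def to_int(n):                              # decode a Church numeral
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
assert to_int(pred(three)) == 2             # PRED(3) -> 2
```

The pair trick is why the predecessor is so much harder than the successor: the numeral can only iterate forward, so it rebuilds the count while keeping the previous value one step behind.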

Admire the mechanical, or should I say chemical, precision of the reduction process (in chemlambda, with the stupid sequential strategy). In the following figure I have eliminated all the unnecessary nodes and arrows, so we are left with the pure phenomenon.



I find it amazing that it works even with this stupidest strategy. It shows that chemlambda is much better than anything on the market.

Let me say it again: this is outside IT’s fundamental assumption that everything reduces to signals sent through wires, then processed by gates.

It is how nature works.


Is this a conflict of interest, if you are an editor?

David Roberts made notes here about the following event:

“MU Panel 2. Future of Publishing
Date & Time : 18:00 – 19:30, August 19 (Tue), 2014
Moderator: Jean-Pierre Bourguignon, European Research Council, Belgium

Rajendra Bhatia, Indian Statistical Institute, New Delhi, India and Sungkyunkwan University, Suwon, Korea
Jean-Pierre Demailly, Institut Fourier, France
Chris Greenwell, Elsevier, The Netherlands
Thomas Hintermann, European Mathematical Society Publishing House, Switzerland
Nalini Joshi, University of Sydney, Australia
Ravi Vakil, Stanford University, USA”

I am extremely intrigued by this part:

“E[lsevier?]  does pay its editors-in-chief (=academics) and sometimes associate editors – doesn’t go all the way to reimburse them for the time they spend. Q from floor: where are these figures published? A: “We don’t generally make that available, mostly because the individual editors probably don’t want their colleagues to know” (~http://youtu.be/RbIBrE0vepM?t=1h14m30s) Q: this is unfair A: depends on editors. There’s nothing in the contract stopping them from telling people. Most of them probably wouldn’t want to tell you. Averages out at about $100 per paper handled.”

This practice may be OK from the point of view of the publisher, but, in my opinion, the paid editors HAVE to tell in order to avoid a conflict of interest.

The conflict of interest appears when an editor sits on a jury, or otherwise takes part in any process which rewards publication in journals like the ones where the guy is a paid editor (hiring, PhD supervising, grant dispensing). This is something worth discussing, I guess. It is not specific to math.

It is not a matter of the editor “wouldn’t want to tell you”, as cynically put by the E[lsevier?] speaker. It is a matter of being honest.

Recall in this context the post

We have met the enemy: part I, pusillanimous editors, by Mark C. Wilson

“My conclusions, in the absence of further information: senior researchers by and large are too comfortable, too timid, too set in their ways, or too deluded to do what is needed for the good of the research enterprise as a whole. I realize that this may be considered offensive, but what else are the rest of us supposed to think, given everything written above? I have not even touched on the issue of hiring and promotions committees perpetuating myths about impact factors of journals, etc, which is another way in which senior researchers are letting the rest of us down”…

Are we living in a research banana republic?

Apparently (some of) the publishers think we are morons, because they have secured the collaboration of (some of) the academic bosses.

I think there is no difference between this situation and the one of a medical professional who has to disclose payment by pharmaceutical companies.

What do you think?


What reduction is this?

It ends in the Church encoding of 2.

Can you guess what this is? (click on the big image to see it better)




As you see, you may ask:

  • what kind of evaluation strategy is this?
  • are there reduction steps and self-multiplication steps?
  • is this in lambda calculus?
  • what can we learn from this particular example?