History of chemlambda status: on hold

Starting from today, July 9, the status of the arXiv version

Graph rewrites, from graphic lambda calculus, to chemlambda, to directed interaction combinators

of the online version

Graph rewrites, from emergent algebras to chemlambda

is “on hold”:

Screenshot from 2020-07-09 09:13:06

I don’t understand what part of the content can be controversial in any way. Truth on hold.

Among my articles on arXiv, this is only the third time this has happened. In all previous cases there was only a delay of a few days or weeks. This made me start using figshare for alternative archiving. One should keep several copies of the same documents in long term archives, to be sure.

It is however significant in my eyes that the other two cases were (with arXiv and figshare sources)

Chemical concrete machine  (arXiv) (figshare)

Zipper logic (arXiv) (figshare)

Mind that on arXiv you can see the submission date but not the date when the article appears.

As you know, I consider arXiv a great thing; this is not a criticism of arXiv. I just don’t understand what is happening, and who is upset, and why, so consistently.

Pure See working draft

is now available at this link. It is a working draft, meaning that: the exposition will change, there are (many) missing parts, and there are not yet enough explanations linking it to previous work.

I could finish it in a few days, but I think I shall write at a slower pace, which will allow me to weed it better.

In the end it should be at least as usable as lambda calculus (well, that’s not very much compared with languages which are widespread), but better in some respects. If you are a geometer you may like it from a point of view different from that of lambda calculus.

If you are interested, please use this working draft as the main source of information.

Maps of spaces into other spaces and non-euclidean analysis

I posted to figshare the presentation Non-euclidean analysis of dilation structures (which is also hosted on this blog). It explains how the process of making maps of spaces into other spaces leads to a more general analysis (and differential calculus) than the usual one. It was the starting point in the direction of “computing with space”, which will soon reach a conclusion with the “pure see” construction.

A graphical history of chemlambda

I made a page


which contains all the graph rewrites which were considered in the various versions of chemlambda and GLC. I should have done this a long time ago and I have no doubt it will be useful.

It complements

Alife properties of directed interaction combinators vs. chemlambda.  Marius Buliga (2020), https://mbuliga.github.io/quinegraphs/ic-vs-chem.html#icvschem

which is an interactive version of the article arXiv:2005.06060, one which provides means of validation.


Quarantine garden

During these two months I spent a lot of time in a garden. A year ago there was nothing there. I worked the few patches of earth, threw away the debris and started to plant stuff. This year the efforts begin to show.


There are now roses and jasmin, a small grapevine,


and a lot of ivy, in the downtown of a big city


I also made some garden drawings




Lots of things waiting to grow.







16 animations which supplement arXiv:2005.06060

Here is a list of 16 animations (which you can play with!) which nicely supplement the article

Artificial life properties of directed interaction combinators vs. chemlambda

and its associated page of experiments.


From the page of experiments you can travel to other directions, too!

Alife properties of directed interaction combinators vs. chemlambda

UPDATE: see arXiv:2005.06060.

The chemlambda project has a new page:

Alife properties of directed interaction combinators vs. chemlambda

This page allows experiments with graph quines under two artificial chemistries: directed interaction combinators and chemlambda. The main conclusion of these experiments is that graph rewrite systems which allow conflicting rewrites are better, as concerns their artificial life properties, than those which don’t. This is in contradiction with the search for good graph rewrite systems for decentralized computing, where non-conflicting graph rewrite systems have historically been preferred. Therefore we propose conflicting graph rewrite systems, together with a very simple rewrite algorithm, as a model for molecular computers.

Here we compare two graph rewrite systems. The first is chemlambda. The second is Directed Interaction Combinators, which has a double origin. Lafont’s Interaction Combinators paper also describes a directed version. We used this proposal to provide a parsing from IC to chemlambda,



which works well if essentially one chemlambda rewrite is modified. Indeed, we replace the chemlambda A-FOE rewrite by a FI-A rewrite (which is among the graph rewrites of Asperti’s BOHM machine). Some termination rewrites also have to be modified.

For the purposes of this work, the main difference between these two graph rewrite systems is that chemlambda has conflicting rewrite patterns and Directed IC does not have such patterns. The cause of this difference is that some chemlambda nodes have two active ports, while all Directed IC nodes (like the original IC nodes) have only one active port.
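The active-port difference can be made concrete. A rewrite pattern is a pair of nodes matched through their active ports, and two patterns conflict exactly when they share a node, which can only happen if some node type has more than one active port. A minimal sketch (the function name and data layout are hypothetical, not the project’s actual code):

```javascript
// Sketch: detecting conflicting rewrite patterns.
// A pattern is a pair of node ids matched through active ports;
// two patterns conflict when they share a node.

function findConflicts(patterns) {
  const seen = new Map(); // node id -> index of first pattern using it
  const conflicts = [];
  patterns.forEach((pair, i) => {
    for (const node of pair) {
      if (seen.has(node) && seen.get(node) !== i) {
        conflicts.push([seen.get(node), i]);
      } else {
        seen.set(node, i);
      }
    }
  });
  return conflicts;
}

// A chemlambda node like A (application) has two active ports,
// so the same node can appear in two matched patterns at once:
const chemlambdaMatches = [["L1", "A1"], ["A1", "FI1"]];
// Every Directed IC node has one active port, so matches are disjoint:
const dicMatches = [["GAMMA1", "DELTA1"], ["GAMMA2", "GAMMA3"]];
```

Running findConflicts on the chemlambda-style matches reports the pair sharing the A node, while the Directed-IC-style matches report none.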


Internet of Smells, remember?

Now is a very productive time of work. Until the next article, let’s remember the Internet of Smells short story from 2017.

Wouldn’t it be nice to have something like this now? Maybe one day we’ll have it and it will not be only for the rich.

The closest equivalent to a graphical LISP repl I have is this.

There is another short story in this series, Home remodeling, which is a Proust pastiche.

More web-talks, more discussions, a new open revolution?

An unexpected gain of the quarantines is that more people realize that we can collaborate more with the available web tools.

As a small example, together with Ionut Tutu we made the site imaronline to help our colleagues share and learn about others’ activities online. We are happy that the site has now bloomed and we hope for more.

By the way, I would be happy to explain more about chemlambda or pure see or metric geometry or … or … discuss with you, via a web-conference. Contact me if you want this.

I look around and I see many people who are starting to seriously consider online communication for work. I think this will change the way we see work, just as open science and open access changed the way we think about science communication. Only in this case the number of people pushing for a change will be much bigger!

There are lots of unexpected advantages of the transition from paper scientific journals to online journals, obvious only in retrospect. One of them, for example, is that in online communication some old constraints no longer apply. In the case of research articles, the length of the article is no longer a constraint. Moreover, the content of the article is no longer limited to what can be printed on paper.

Of course, these limitations (length, content type) are still used for no other reason than that they are familiar. Or for reasons which hide some dark patterns. As an example consider the case of bibliographies, or references, which in their majority still ignore the possibilities. References are often not given as links to available sources, and they often ignore the sources which are not under the control of parasitic publishers.

In the same way, these new web talks will evolve, I believe, into new ways of interactive collaboration.

The human presence in a discussion is a very strong motivator. An online video or chat discussion cannot have an arbitrary length, true. It is also, past a point, a more shallow exchange than a written one, in the sense that many details in the minds of the participants do not pass to the other participants. Such a discussion should instead be alternated with written, more rigorous ones. Working material should be made available and follow-ups should be considered.

I know people complain that these online activities are not used more. For example, parents would like more from the teachers of their kids. There are many situations where the work is half stunted, although possible via web-talks. We have to remember this when the quarantines are over. Next time, try not to ignore the “foolish” ones who propose new ways of communication for professional purposes (like research, teaching, etc). Now that we need more teachers, researchers, etc, competent with these new means of communication, we need to accept that the previous general indifference, or even hostility, towards those who tried was wrong.


Zoom in between

Work progresses well, and continuations of arXiv:2003.14332 will come soon, but now there is a time in between. What to do, other than have some talks and look at some links?

Re talks: I’ll finally install Zoom (I know), and if you want to organize an AMA, or if you want to talk about computation, hamiltonians or bipotentials, let me know.

Re links, two which I think are funny:

  • at the gossip blog Not even wrong there is a new post about Mochizuki. The good part of the post is the links to articles. Fesenko is a realist. The post is aggressively against Mochizuki, but there is no knowledge to back up the tone. Among the comments, though, there are 2 or 3 from people who are competent, and those are interesting to read. [update: more interesting comments appeared.]
  • I didn’t know about this when I wrote a post about Alexandra Elbakyan back in February, but I learned about news concerning Alexandra’s fight against the bullying she suffered during her studies. In short, after she appealed to the ethics commission, she spat on them during the hearing. Not behaviour I approve of, but the pressure on her is probably so huge that she made a public relations mistake. Let us not forget that tens or hundreds of thousands of academic researchers, among them most of the academic managers, side with the parasites who suck the blood of research for short term commercial reasons. Shame on them! Maybe this pirate gesture (she is a pirate, right? according to the parasites) awakens some of them to the gravity of their complicity.


lambdalife.html, a local version

lambdalife.html is the locally hosted version of chemlambda.github.io. There you can find the same pages, except the animation collection, which has bigger animations. You can also download the source scripts which power the pages.

So, for those who wait for the finale concerning anharmonic lambda or the alternative linear logic, this is the same base to build on. The new parts will come.

Here is why I hosted it locally as well (1) and here is why I insist on making clear the basis before going further (2).

(1) Because of so many weird happenings around this project, I checked these days to see if my gmail messages which contain related stuff arrive (by using two channels). They don’t, or they arrive half a day later, after I used the second channel. So, basically, I have in general zero trust in any corporate channel. If you think I overdramatize, then how do you explain this or this? Sorry that I can’t speak about private exchanges.

The alternative to filtering is open science.

(2) During these years, when I tried to absorb previous knowledge in the field, I was very rarely helped by the professionals in the field. I am a professional too, but in other fields, like geometry or convex analysis. As far as I am aware, in my general mathematical areas, which are huge compared with this new scientific field, open, fair collaboration is the rule. Ask me something and I’ll answer to the best of my ability. It is true that when it comes to walking on another’s lawn, things are not nice in mathematics either. But with time, we are confident, we shall be better: more human, less ape.

OK, so why tf did nobody among hundreds of (very good in their field) specialists mention Bawden, for example? Instead, all sorts of questions, down to “I don’t know what I am looking at”. When you have the programs and you know your field? I know that in CS the wheel is reinvented all the time, but probably the wheel is proprietary most of the time, because this science is so young compared with mathematics. People don’t yet behave as scientists; they need to educate themselves in these matters.

So next time you wonder why tf I don’t proceed further with explanations, please ask these categorical or linear logic snake-oil sellers if they understand what they are looking at.

(Don’t get me started with linear logic! WTF is linear in your version? Nothing. Any mathematician can embed anything into a Banach space, come on! Mathematicians to the rescue soon. Oh, but wait, nobody gives a fuck about this subject, except the practitioners and the programmers who want to look intelligent. They should, despite this select public, because this is a subject at the core of mathematics. As everybody knows and dislikes, mathematics is in everything and the best way to produce new science.)

Nature vs nurture in lambda calculus terms

This is one of the experiments among those accessible from arXiv:2003.14332, or from figshare as well.

Btw, if you want to know how you can contribute, read especially section 2.

Imagine that the lambda term

(\a.a a)(\x.((\b.b b)(\y.y x)))

is the gene of a future organism. How will it grow? Does only nature (the term, the gene) matter, or does nurture (random evolution) matter too?

Maybe you are familiar with this term; I first saw it in arXiv:1701.04691. It is one which breaks many purely local graph rewrite algorithms for lambda calculus. (By purely local I also mean that the parsing of the lambda term into a graph does not introduce non-local boxes, croissants, or other devices.)

As a term, it should reduce to the omega combinator. This is a quine in lambda calculus and it is also a chemlambda quine. You can play with it here.

I was curious whether chemlambda can reduce this term correctly. At first sight it does not. In this page use the “change” button until you have the message “random choices” and put the rewrites weights slider in the middle, between “grow” and “slim”. These are the default settings, so at first you don’t have to change anything.

Click the “start” button. You’ll see that after a while the graph (generated by the lambda to chemlambda parser) reduces to only one yellow node (a FOE) and the FROUT (free out) node of the root (of the lambda term). So chemlambda seems incapable of reducing this term correctly.

You can play with the (graph of the) term, reducing it by hand, i.e. by hovering with the mouse over the nodes to trigger rewrites. If you do this carefully then you’ll manage to reduce the graph to one which is identical with the graph of the omega combinator, except that it has FOE nodes instead of FO nodes. By using the mol>lambda button you can see that such a graph does represent the omega combinator.

Now, whether you succeeded in this task or not, just reload (by using the lambda>mol button), click “change” to get “older is first”, and move the rewrites weights slider to “GROW”. This is the standard setting to check whether the graph is a quine. Push start. What happens?

The reduction continues forever. This is a quine!

You’ll see that mol>lambda gives nothing. This means that the decoration with lambda terms does not reach the root node. With this quine we are outside lambda calculus; it is different from the quine given by the omega combinator.

The two quines have the same gene. One of the quines (the graph of omega) never dies. The other quine is mortal: it dies by reducing eventually (in the “random choices” case) to a FOE node.


Chemlambda, lambda calculus and interaction combinators experiment notes ready

The notes “Artificial chemistry experiments with chemlambda, lambda calculus, interaction combinators”, which are the access point for the library of experiments, have appeared as arXiv:2003.14332 [cs.AI].

For those interested, I recommend reading section 1, How not to read these notes, and section 2, How you can contribute.

Next, on one side I’ll continue to modify the draft version according to needs, but on the other, main side, I’ll be cheeky and take a chance on going directly to an alternative linear logic essay.

Asymmetrical Interaction Combinators rewrites

I put asymmetrical IC rewrites in chemistry.js, which look almost like the beta rewrite:

// action modified from "GAMMA-GAMMA" with 1 pair Arrow-Arrow added, asymmetric
{left:"GAMMA",right:"GAMMA",action:"GAMMA-GAMMA-arrow1", named:"GAMMA-GAMMA", t1:"Arrow",t2:"Arrow", kind:"BETA"},

// action modified from "DELTA-DELTA" with 1 pair Arrow-Arrow added, asymmetric
{left:"DELTA",right:"DELTA",action:"DELTA-DELTA-arrow1", named:"DELTA-DELTA", t1:"Arrow",t2:"Arrow", kind:"BETA"},

The previous rewrites, now commented out, were symmetrical.

The asymmetry comes from the Arrow elements, which can be inserted in two different ways, depending on the two possible identifications of a GAMMA-GAMMA (or DELTA-DELTA) pattern.

IC graphs are not oriented, but all nodes have a distinguished numbering of their ports. The asymmetric rewrites lead to the curious case of theoretically self-conflicting rewrites.

But this asymmetry is harmless, as you can verify by playing with IC quines.

12 pages to start a macbethian tale

UPDATE: From now on follow the evolution of the draft at this link.


As of today, these are the first 12 pages [see update!] of the description of the programs from the chemlambda landing page.

So I needed 12 pages just to start. And they could still be improved. For example, everything could be written in category theory style. Resulting in… more than 12 pages; have you seen DPO, sections 2.2-2.4? I could go off on a tangent in this direction: wouldn’t it be nice to write everything in terms of, say, “anharmonic groupoids”? Well, I could and maybe I should. Or in graph theory style…

It is amazing that just to start… And who will read these pages? If I want them to be used by anybody interested in the subject, for example geometers and logicians and chemists, or just any programmer…

Well, I’ll persist, but what do you think about these first pages?

I don’t think now that this is only my problem. An article is almost “a tale told by an idiot, full of sound and fury, signifying nothing”. Well, it is a story of the research, told in a very specialized language, full of unmentioned background, signifying nothing to most of the people who would maybe need it, but they can’t know.

There has to be a better way. I think that the chemlambda page is better.

Though, any way which is not the macbethian academic legacy is frowned upon. These days we see that a great majority of these institutions, which persist because they exist, do nothing. They serve nothing other than the preservation of the pyramid.

Anyway, I wanted to name the last section “Made of money” 🙂 I don’t know if the joke is relevant, it just sounds funny, given the previous coin fashion.

Molecules laboratory and news about the 10-node quine

There is now a virtual lab where you can input molecules by hand and play with them, in addition to choosing them from the menu.

When you load the lab page you see that the text area for input molecules is already loaded with something.

You may of course delete what is written there and just build your molecule: write the molecule line by line and hit “input” to see it. You may manipulate it by triggering rewrites with the mouse hover. Then hit “update” to see the new molecule you get.

For example, suppose you delete what’s written in the text area and you start fresh by typing

A 1 2 3^

then you hit “input”. What do you get?

You add another node, say you continue by

A 1 2 3^FI 3 4 5^

hit input, what do you get?

Now you may like to connect two free edges, say the 5 with the 1: you add text to get

A 1 2 3^FI 3 4 5^Arrow 5 1^

hit input, what do you get?

Use mouse hover over the “Arrow” node (white one), what do you get?

hit “update”, what do you get?

And so on.
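The snippets above suggest the mol notation: records separated by “^”, each record a node type followed by its port labels, and a free edge is a port label which occurs only once. A hedged sketch of a parser along these lines (inferred from the examples, not taken from the lab’s actual scripts):

```javascript
// Sketch of a mol-string parser, inferred from examples like
// "A 1 2 3^FI 3 4 5^Arrow 5 1^": records separated by "^",
// each record a node type followed by its port labels.

function parseMol(text) {
  return text
    .split("^")
    .map(r => r.trim())
    .filter(r => r.length > 0)
    .map(r => {
      const [node, ...ports] = r.split(/\s+/);
      return { node, ports };
    });
}

// Free edges are the port labels which occur exactly once.
function freeEdges(mol) {
  const count = new Map();
  for (const { ports } of mol) {
    for (const p of ports) count.set(p, (count.get(p) || 0) + 1);
  }
  return [...count.keys()].filter(p => count.get(p) === 1);
}

const mol = parseMol("A 1 2 3^FI 3 4 5^Arrow 5 1^");
// three records; ports 2 and 4 remain free
```

This matches the hand-built example above: after connecting 5 to 1 through the Arrow node, only the ports 2 and 4 are left free.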

Now, let’s return to the molecule already present in the text area. There are in fact two molecules; they are the children of a 10-node quine.

In a previous post I announced that the 10-node quine can duplicate. I put up an animation movie which shows that. But the lab showed me I was wrong.

Exercise: in the lab choose the 10-node quine from the menu.

With mouse hover try to modify the molecule until you obtain two separated molecules.

If you succeed, then prepare to check if they are quines, by:

  • click on the “change” button to see the “older is first” message. This means that the rewrites will be done based on the age of nodes and links.
  • move the rewrites weights slider to “GROW”
  • hit “start”

If the two molecules don’t die and they don’t grow indefinitely, then you get two quines from one.

Hit stop from time to time to see what you have. Hit update to have the molecules code in the text area.

That is what I did to obtain the two children of the 10-node quine. They look very much alike. They are both 10-node quines, they have almost identical links, but they are not the same!

One of them is the original 10-node quine; the other is a quine which differs only in some links.

For the moment I have not succeeded in duplicating this second quine by hand.

Maybe you can find another way in which the 10-node quine can duplicate?





Chemlambda page 99%

Hey, I need criticism, bugs finding, suggestions for improvements, as concerns the legacy chemlambda.

Thank you, be well!


The collection now includes, whenever appropriate, the list of posts where a mol is used. This is a variant of search by mol files. It is dynamic, like the whole collection, in the sense that whenever a post is added or retired, everything works without further effort.

Another problem is to connect the other pages with the collection in the same way. The difficulty is that the collection is in one place (because it is big) and the other pages are in other places. I’ll have to think about that.

What is left is to write a text … something I tried to avoid. But in order to “publish” based on this work, this is needed.

Anyway, for those interested, it is in front of you and usable 🙂 For those who wait for the new parts, announced, I’m here to talk, and I am preparing stuff for this. The story is big, as you see, even only for the stuff which I already call “legacy”: there are lots and lots of details which I have to do (which I like) and hoops to jump through (which I don’t like) for the sake of… a system which is basically dying.

Another two updates:

1. Right now I’m commenting the code, so the text is/will be in the programs. Let’s see where this goes.

2. I have the impression that I am arriving at a somewhat overengineered result. What I would need to talk about is, for example, the awesomeness of the Heisenberg picture of mechanics as seen in the framework of emergent algebras plus hamiltonians with dissipation. But, you see, the problem is, as Robert Hermann wrote:

“… and I am supposed to sit back and wait for Professor Whosits to tell me when he thinks problems are “mature”…
I sent the papers he mentions to very few people … I am also interested to note that he did look at them, since there is considerable overlap in methodology with a recent paper by one of his students, with no mention of my papers in his bibliography …
any money spent by NSF on a Mathematics Research Institute would be down the proverbial rat hole – it would only serve to raise Professor Whosits’ salary and make him ever more arrogant. ”

🙂 So I try to make my position safe from any attack from Whosits of the academic world.

Which is time lost for making new stuff…


Biological immortality and probabilities of chemical rewrites

This post is a continuation of the post Random choices vs older first. There is an apparent paradox involving the probability to die and the probability of a chemical rewrite.

An organism is biologically immortal if its probability to die does not depend on its age. Another probability which matters for organisms made of chemical compounds is the probability of a chemical rewrite, say relevant for the organism metabolism.

Let’s suppose that there is a mechanism for such a chemical rewrite. For example, for each rewrite there is an enzyme which triggers it, as we supposed in the case of the chemical concrete machine. We can imagine two hypothetical situations.

Situation 1. The organism is chemically stable, so in the neighbourhood of a rewrite pattern there is a chance, constant in time, of the presence of a rewrite enzyme. The probability of the chemical rewrite would then be independent of time. We would expect, for example, that biologically immortal organisms are in this situation.

Situation 2. There is a more complex mechanism for chemical rewrites, where there is a way to make the probability of a chemical rewrite depend on time. As an example, suppose that the presence of the rewrite pattern triggers the production of rewrite enzymes. This would make the probability of the rewrite bigger for older rewrite patterns. But if the probability of older rewrite patterns being transformed is bigger than that of newer rewrite patterns, then the overall probability to die (no more rewrites available) would be time dependent. It seems that such organisms are not likely to be biologically immortal.

The paradox is that it may be the other way around. Biologically immortal organisms may be in situation 2 and mortal organisms in situation 1.
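The distinction between the two situations can be checked with a toy hazard model: a constant per-step death probability is memoryless (the chance of dying does not depend on age, which is the sense of biological immortality used here), while an age-dependent one is not. A sketch, with assumed hazard functions:

```javascript
// Toy hazard model: an organism dies at each step with probability
// hazard(age). A constant hazard means the chance of dying does not
// depend on age (Situation 1); an age-dependent hazard models
// Situation 2. The particular hazard functions below are assumptions.

function survivalCurve(hazard, steps, trials, rand = Math.random) {
  // fraction of organisms still alive after each step
  const alive = new Array(steps).fill(0);
  for (let t = 0; t < trials; t++) {
    let age = 0;
    while (age < steps && rand() >= hazard(age)) {
      alive[age] += 1;
      age += 1;
    }
  }
  return alive.map(a => a / trials);
}

const constantHazard = age => 0.05;                   // Situation 1
const agingHazard = age => Math.min(1, 0.01 * age);   // Situation 2 (assumed form)
```

With constantHazard the survival curve decays geometrically, the signature of a memoryless death process; with agingHazard the per-step risk grows with age, so old organisms die faster than young ones.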

This is illustrated by chemlambda quines. The page How to test a quine allows you to change the probability of the rewrites with respect to time.

By definition, a quine graph is one with a periodic evolution under the greedy algorithm of rewrites, which performs as many non-conflicting rewrites as possible in a single step. The definition can be refined by adding that in case of conflicting rewrites the algorithm will choose a rewrite which “grows” the graph, by increasing the number of nodes. What is interesting is how such a quine graph behaves under the random rewrites algorithm.

A graph reduced with this greedy algorithm evolves as if it is at the extreme of Situation 2, where older rewrite patterns are always executed first (maybe excepting a class of rewrites which are always executed first, like the “COMB” rewrites). This observation leads us to redefine a quine graph as a graph which is biologically immortal in Situation 2!

You can check for yourself that the various graphs (from chemlambda or Interaction Combinators) on that page are indeed quine graphs. Do it like this: pick a graph from the menu. Move the “rewrites weights slider” to the “grow” side. Click on the “change” button to get the message “older is first”. Click “start”. If you want to reload, then click “reload”.

A quine graph is an example of an (artificial life) organism which is immortal if the rewrite probabilities depend on the age of the rewrite pattern.

Under the random rewrite algorithm, i.e. Situation 1, such a graph quine may die or it may reproduce. They are indeed alive.


Random choices vs older first

There is now a page to check if a graph is a quine. This is done by a clarification of the notion of a quine graph.

The initial definition for a quine graph is: one which has a periodic evolution under the greedy algorithm of rewrites. The greedy algorithm performs at each step the maximal number of non-conflicting rewrites.

Mind that for some graphs, at one moment there might be more than one maximal collection of non-conflicting rewrites. Therefore the greedy algorithm is not deterministic, but almost.
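One step of the greedy algorithm can be sketched as a greedy maximal independent set of patterns; scanning the patterns in a different order may produce a different maximal collection, which is exactly the near-determinism mentioned above (a sketch, not the page’s actual code):

```javascript
// Sketch: one step of the greedy algorithm. Patterns conflict when
// they share a node; a greedy scan collects a maximal (not necessarily
// maximum) collection of non-conflicting patterns.

function greedyStep(patterns) {
  const used = new Set();  // nodes already claimed by a chosen pattern
  const chosen = [];
  for (const p of patterns) {
    if (p.nodes.every(n => !used.has(n))) {
      chosen.push(p);
      p.nodes.forEach(n => used.add(n));
    }
  }
  return chosen; // maximal: no remaining pattern can be added
}
```

For example, with three patterns over nodes {a,b}, {b,c}, {d,e}, a scan in that order keeps the first and third; scanning {b,c} first would keep the second and third instead: two different maximal collections from the same graph.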

In the js simulations one graph rewrite is performed at each step, so how can we modify this to be able to check whether a graph is a quine?

The answer is to introduce an age for the nodes and links of the graph. There is a global age counter which counts the number of steps. Each new node and each new edge receives an “age” field which records the birth age of that node or link.

The age of a rewrite pattern is then the maximum of the ages of its nodes and links.

The “random choices” mode leaves the initial reduction algorithm as is. That means the rewrites are executed randomly, with the exception of “COMB” rewrites, which are always executed first.

The “older is first” mode always executes the rewrites on the oldest rewrite patterns (with the exception of “COMB” rewrites, which are executed first, when available). The effect is that the algorithm sequentially reduces maximal collections of non-conflicting graph rewrites!

Therefore the definition of a quine graph should be modified to: a graph which has bounded (number of nodes and links) evolution under the “older is first” algorithm.
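The age bookkeeping described above can be sketched as follows (the names are hypothetical, not those of the page’s actual script):

```javascript
// Sketch of the age bookkeeping: a global age counter, a birth-age
// stamp on every node and link, and a pattern age computed as the
// maximum of the birth ages of its parts.

let globalAge = 0;

function makePart(type) {
  return { type, age: globalAge }; // record the birth age
}

// pattern age = maximum of the ages of its nodes and links
function patternAge(pattern) {
  return Math.max(...pattern.parts.map(x => x.age));
}

function chooseRewrite(patterns, mode, rand = Math.random) {
  globalAge += 1; // one step of the global counter
  if (patterns.length === 0) return null; // dead: no rewrites left
  if (mode === "older is first") {
    // the oldest pattern has the smallest birth stamp
    return patterns.reduce((a, b) => (patternAge(b) < patternAge(a) ? b : a));
  }
  // "random choices": uniform pick
  return patterns[Math.floor(rand() * patterns.length)];
}
```

Repeatedly calling chooseRewrite in the “older is first” mode works through the oldest patterns first, which is what makes the algorithm reduce maximal collections of non-conflicting rewrites sequentially.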

The stabilising effect of “older is first” is however intriguing. Just look at the 9_quine: with “older is first” it never dies, as expected, but under “random choices” it does.

Similarly, use the new page to look at the behaviour of the 10_nodes quine. It either dies immediately or it never dies. This is understandable from the definition, because this quine has either a maximal collection of rewrites which kills it or another maximal collection of rewrites which transforms it into a graph which has periodic evolution from that point on.

But what is intriguing is the suggestion that there should be a tunable parameter, between “random choices” and “older is first”, namely a parameter making the probability of a rewrite depend on the age of the pattern (i.e. older patterns are more likely to be rewritten than newer ones). At one extreme, older patterns are always executed first. At the other extreme, there is the same probability no matter the age of the pattern.

Having probabilities strongly dependent on the age of the pattern stabilizes the evolution of a quine graph: it either dies quickly or it lives forever. Probably it does not reproduce.

On the other side, a weaker dependence on age implies a more natural evolution: the quine may now die or reproduce.

The decorator: a glimpse into the way chemlambda does lambda calculus

The lambda2chemlambda parser has now a decorator. Previously there was a button (function) which translates (parses) a lambda term into a mol. The reverse button (function) is the decorator, which takes a mol and turns it into a lambda term and some relations.

Let’s explain.

The lambda calculus to mol function and the mol to lambda calculus function are not inverses of each other. This means that if we start from a mol file, translate it into a lambda term, then translate the lambda term back into a mol, we don’t always get the same mol. (See for example this.)

Example 1: In this page we first have the lambda term PRED ((POW 3) 4), which is turned into a mol. If we use the mol>lambda button then we get the same term, up to renaming of variables. You can check this by copy-pasting the lambda term from the second textarea into the first and then using the lambda>mol button. You can see down the page (below the animation window) the mol file. It is the same.

Now push “start” and let chemlambda reduce the mol file. It will stop eventually. Move the gravity slider to MIN and rescale with mouse wheel to see the structure of the graph. It is like a sort of a surface.

Push the mol>lambda button. You get a very nice term which is a Church number, as expected.

Copy-paste this term into the first textarea and convert it into a mol by using lambda>mol. Look at the graph, this time it is a long, closed string, like a Church number should look in chemlambda. But it is not the same graph, right?

The mol>lambda translation produces a decoration of the half-edges of the graph. The propagation of this decoration into the graph has two parts:

  • propagation of the decoration along the edges. Whenever an edge has already the source and targed decorated, we get a relation between these decorations (they are “=”)
  • propagation of the decoration through the nodes. Here the nodes A, L, and FO are decorated according to the operations application, lambda and fanout. The FOE nodes are decorated like fanouts (so a FO and a FOE node are decorated the same! something is lost in translation). There is no clear way to decorate a FI (fanin) node. I choose to decorate the output of a FI with a new variable and to introduce two relations: first port decoration = output decoration and second port decoration = output decoration.

The initial decoration is with variables, for the ports of FRIN nodes, for the 2nd ports of the L nodes and for the output ports of the FI nodes.

We read the decoration of the graph as the decoration of the FROUT node(s). Mols from lambda terms have only one FROUT.

The graph is well decorated if the FROUT node is decorated and there are no relations.
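The decoration procedure described above can be sketched in a few lines. This is my own minimal illustration, not the actual js decorator: I use a made-up representation where a mol is a list of (node type, port edge names), shared edge names make the propagation along edges implicit, and I assume the port order (input, bound variable out, term out) for L nodes, which may differ from the real mol convention.

```python
# Hypothetical sketch of the decorator: propagate decorations through
# A, L, FO/FOE nodes; FI outputs get fresh variables plus two relations.
from itertools import count

def decorate(mol):
    fresh = (f"v{i}" for i in count())
    deco, relations = {}, []
    # initial decoration: FRIN ports, 2nd ports of L nodes, outputs of FI nodes
    for typ, ports in mol:
        if typ == "FRIN":
            deco[ports[0]] = next(fresh)
        elif typ == "L":
            deco[ports[1]] = next(fresh)
        elif typ == "FI":
            v = next(fresh)
            deco[ports[2]] = v
            relations += [(ports[0], v), (ports[1], v)]
    changed = True
    while changed:                       # propagate until nothing new appears
        changed = False
        for typ, ports in mol:
            if typ == "A" and ports[0] in deco and ports[1] in deco and ports[2] not in deco:
                deco[ports[2]] = f"({deco[ports[0]]} {deco[ports[1]]})"
                changed = True
            elif typ == "L" and ports[0] in deco and ports[2] not in deco:
                deco[ports[2]] = f"(\\{deco[ports[1]]}.{deco[ports[0]]})"
                changed = True
            elif typ in ("FO", "FOE") and ports[0] in deco:
                for p in ports[1:]:      # FO and FOE decorated the same
                    if p not in deco:
                        deco[p] = deco[ports[0]]
                        changed = True
    # read the term at the FROUT node(s)
    roots = [deco.get(ports[0]) for typ, ports in mol if typ == "FROUT"]
    return roots, [(deco.get(a, a), v) for a, v in relations]

# the identity \x.x: the body edge and the variable edge coincide
mol = [("L", ["a", "a", "r"]), ("FROUT", ["r"])]
print(decorate(mol))
```

The graph is well decorated exactly when the FROUT port ends up in `deco` and the list of relations is empty.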

Example 2: Let’s see how the omega combinator reduces. First click the mol>lambda button to see that we get the same term back, the omega.

Now use the “step” button, which does one reduction step, randomly. In the second textarea you see the decoration of the FROUT node and the relations.

What happens? Some steps make sense in lambda calculus, but some others don’t. In these relations you can see how chemlambda “thinks” about lambda calculus.

If you use the “step” button to reduce the mol, then you’ll see that you obtain the omega combinator back, after several steps.



How does a zipper logic zipper zip? Bonus: what is Ackermann goo?

Zipper logic is an alternative to chemlambda, where two patterns of nodes, called half-zippers, appear.

It may be easier to mimic from a molecular point of view.

Anyway, if you want a clear image of how it works, there are two ways to play with zippers.

1. Go to this page and execute a zip move. Is that a zipper or not?

2. Go to the lambda2chemlambda page and type this lambda term

(\h.\g.\f.\e.\d.\c.\b.\a.z) A B C D E F G H

Then reduce it. [There is a difference: a, b, …, h do not occur in the body z, so the parser adds a termination node to each of them; when you reduce it the zipper will zip and then disappear.]
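In plain lambda calculus the zipping corresponds to eight nested beta reductions, one per half-zipper pair. A toy normal-order reducer (my own minimal sketch, not one of the chemlambda tools; capture-avoidance is skipped because every bound variable here is unused) shows the whole term collapsing to z:

```python
# Terms: str = free variable, ("lam", v, body), ("app", f, a).

def subst(t, v, s):
    if isinstance(t, str):
        return s if t == v else t
    if t[0] == "lam":
        return t if t[1] == v else ("lam", t[1], subst(t[2], v, s))
    return ("app", subst(t[1], v, s), subst(t[2], v, s))

def step(t):
    """One leftmost-outermost beta step, or None if t is normal."""
    if isinstance(t, str):
        return None
    if t[0] == "app":
        f, a = t[1], t[2]
        if isinstance(f, tuple) and f[0] == "lam":
            return subst(f[2], f[1], a)          # the beta rewrite
        r = step(f)
        if r is not None:
            return ("app", r, a)
        r = step(a)
        return None if r is None else ("app", f, r)
    r = step(t[2])
    return None if r is None else ("lam", t[1], r)

def normalize(t, limit=100):
    for _ in range(limit):
        n = step(t)
        if n is None:
            return t
        t = n
    return t

# (\h.\g.\f.\e.\d.\c.\b.\a.z) A B C D E F G H
body = "z"
for v in "abcdefgh":
    body = ("lam", v, body)
t = body
for arg in "ABCDEFGH":
    t = ("app", t, arg)
print(normalize(t))  # each beta step zips one pair
```

Eight steps, one per abstraction/application pair, and only z remains, just like the zipper zipping and then disappearing.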

You can see here the half-zippers



which are the inspiration of the zippers from zipper logic.

In chemlambda you can also make FI-zippers and FOE-zippers; I used this for permutations.

BONUS: I made a comment at HN which received the funny reply “Thanks for the nightmares! 🙂“, so let me recall, by way of this comment, what an Ackermann goo is:

A scenario more interesting than boundless self-replication is Ackermann goo [0], [1]. Grey goo starts with a molecular machine able to replicate itself. You get exponentially more copies, hence goo. Imagine that we could build molecules like programs which execute themselves via chemical interactions with the environment. Then, for example, a Y combinator machine would appear as a linearly growing string [2]. No danger here. Take Ackermann(4,4) now. This is vastly more complex than a goo made of lots of small dumb copies.

[0] https://chemlambda.github.io/collection.html#58

[1] https://chemlambda.github.io/collection.html#59

[2] https://chemlambda.github.io/collection.html#259


Robert Hermann on peer review

The gossip blog “Not even wrong”, not a friend of Open Science, has an update to the post Robert Hermann 1931-2020. Following the update to an older post, the reader is led to some very relevant quotes from Robert Hermann on peer review.

For those who are not aware, Robert Hermann was far ahead of his time not only in his understanding of the geometrical structure of topics in modern physics, but also in his efforts concerning research sharing.

I reproduce the quotes here, copy-pasted from the sources in the linked comments.

Before that, some very short answers to potential questions you may have:

  • I don’t think badly of American mathematics or physics; on the contrary, the point is that if bad things happen in that strong research community, then the same or worse is to be expected in other communities. I believe these opinions of Hermann apply everywhere in the research community, today.
  • Peer review is better than no peer review, but it is worse than validation.
  • Peer reviews are opinions, with good and bad sides. They are not part of the scientific method.
  • By comparison, the author who makes all the work available (i.e. Open Science) opens the way to the reader to independently validate the said work. This is the real mechanism of the scientific method.

The quotes, from the sources:

[source] … consider these quotes from two letters he published in his 1979 book “Cartanian Geometry, Nonlinear Waves, and Control Theory: Part B”:
“… I am not the only one who has been viciously cut down because I tried to break out of the rigid shell and narrow grooves of American mathematics. … My proposal was to continue my … work with … Frank Estabrook and Hugo Wahlquist of the Jet Propulsion Laboratory. … I most deeply resent the arrogance of the referee #3 toward their work … typical … arrogance of Referee #3 is his blather about the “prematureness” of our work … Now, we are working in a field – nonlinear waves – which is moving extremely rapidly and which has the potential for the most important applications, ranging from … Josephson junction to … fusion … and I am supposed to sit back and wait for Professor Whosits to tell me when he thinks problems are “mature”…
I sent the papers he mentions to very few people … I am also interested to note that he did look at them, since there is considerable overlap in methodology with a recent paper by one of his students, with no mention of my papers in his bibliography …
any money spent by NSF on a Mathematics Research Institute would be down the proverbial rat hole – it would only serve to raise Professor Whosits’ salary and make him ever more arrogant. It would do more good to throw the money off the Empire State Building: at least there is a chance it would be picked up and used creatively by a poor, unemployed mathematician …
This issue transcends my own personal situation …
Most perversely, the peer review system … works as a sort of Gallup poll to veto efforts by determined individuals … As budgets have tightened, the specialists fight more and more fiercely to keep what little money is available for their own interests. Thus, people with a generalist bent are driven out …”.

[source] … Hermann said in letters published in his 1979 book “Cartanian Geometry, Nonlinear Waves, and Control Theory: Part B”:
“… In 1975 … I had essentially quit my academic job at Rutgers (so I could do my research full time), and my main support came from Ames Research Center (NASA) for my work on control theory. I was also starting a publishing company, Math Sci Press, writing books for it to hold out the hope that, some day, I would get off this treadmill of endless grant proposals. (Unfortunately, it is still [March 1979] at best barely breaking even.) …
Ever since I lost my ONR grant in 1970, thanks to Senator Mansfield, I have been trying to persuade NSF … that my work on the differential geometric foundations of engineering and physics is worthy of their support … I see my colleagues who stay within the disciplinary “clubs” receiving support much more readily … Thanks to Freedom of Information, I finally see what the great minds of my peers object to, and I see nothing but vague hearsay, bitchiness, and plain incompetence in reviewing … specialized closed shops that blatantly discriminate against the sort of … work that I do.”


Parser gives fun arrow names

The modifications are not yet released, but the lambda2chemlambda parser can be made to give fun arrow names. For example, the term

((\g.((\x.(g (x x))) (\x.(g (x x))))) (\x.x))

which is the Y combinator applied to id, becomes the mol

FROUT [^L [((\g. [((\g [((\^L [((\g.((\x. [((\g.((\x [((\g.((\^FO [((\g [((\g.((\x.(g [((\g.((\x.(g*^A [((\g.((\x.(g [((\g.((\x.(g@( [((\g.((\x.(g@^FO [((\g.((\x [((\g.((\x.(g@(x [((\g.((\x.(g@(x*^A [((\g.((\x.(g@(x [((\g.((\x.(g@(x@x [((\g.((\x.(g@(x@^Arrow [((\g.((\x.(g@(x* [((\g.((\x.(g@(x@x^Arrow [((\g.((\x.(g@(x@ [((\g.((\x.(g@(^Arrow [((\g.((\x.(g@ [((\g.((\x.(^Arrow [((\g.((\x.( [((\g.((\x.^Arrow [((\g.((\ [((\g.((^A [((\g.(( [((\g.((\x.(g@(x@x)))@( [((\g.((\x.(g@(x@x)))@^L [((\g.((\x.(g@(x@x)))@(\x. [((\g.((\x.(g@(x@x)))@(\x [((\g.((\x.(g@(x@x)))@(\^Arrow [((\g.((\x.(g* [((\g.((\x.(g@(x@x)))@(\x.(g^A [((\g.((\x.(g@(x@x)))@(\x.(g [((\g.((\x.(g@(x@x)))@(\x.(g@( [((\g.((\x.(g@(x@x)))@(\x.(g@^FO [((\g.((\x.(g@(x@x)))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x [((\g.((\x.(g@(x@x)))@(\x.(g@(x*^A [((\g.((\x.(g@(x@x)))@(\x.(g@(x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x* [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@ [((\g.((\x.(g@(x@x)))@(\x.(g@(^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@ [((\g.((\x.(g@(x@x)))@(\x.(^Arrow [((\g.((\x.(g@(x@x)))@(\x.( [((\g.((\x.(g@(x@x)))@(\x.^Arrow [((\g.((\x.(g@(x@x)))@(\ [((\g.((\x.(g@(x@x)))@(^Arrow [((\g.((\x.(g@(x@x)))@ [((\g.(^Arrow [((\g.( [((\g.^Arrow [((\ [((^A [(( [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@( [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@^L [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x. [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.x^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\ [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@ [(^Arrow [( [
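Judging from the mol above, each fun arrow name is just the prefix of the input consumed at the moment the parser emits the corresponding port. Here is a toy illustration of that naming idea; this is my own guess at the mechanism, not the actual parser code, and the crude rule below (a backslash marks an L node, a space marks an A node) captures only the naming scheme, not the real parsing:

```python
def name_nodes(term):
    """Emit (node_type, arrow_name) pairs where the arrow name is the
    prefix of the input consumed so far. Crude approximation: '\\'
    signals an L (lambda) node, ' ' signals an A (application) node."""
    out = []
    for i, c in enumerate(term):
        if c == "\\":
            out.append(("L", term[:i]))
        elif c == " ":
            out.append(("A", term[:i]))
    return out

print(name_nodes("((\\g.g) x)"))
```

Running it on a tiny term produces names like `((` for the L node and `((\g.g)` for the A node, the same flavor of prefixes as in the mol above.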

Recall that the parser is part of the landing page for all chemlambda projects.

So, if you write in the parser:

(\x.a) b

which is the pattern for a beta rewrite, then the relevant part of the mol (with funny arrow names) and the rewrite will have the form:



Biography of Sci-Hub creator Alexandra Elbakyan

Today I found Alexandra Elbakyan’s biography, written by herself.

This link is to the original (in Russian) and this link is the Google translation into English.

I think this is a very interesting read. You can get a really first hand description of the context and motivations of the creation of Sci-Hub. It is also a glimpse into the mind of a special individual who was born and lived in the middle of nowhere and who changed the world.

Some quotes, which I particularly resonate with:

“What is this misfortune?” I thought “again they see in me not a man, but a programmer”


“It was 2012, and I turned 24. I was a patriot and supported Putin’s policies. And I was also the creator of the Sci-Hub service, which, according to numerous reviews, incredibly helped Russian science.

But no one called and wrote to me like that.
No one invited me to participate in any scientific projects.
Every day I went in a cold, crowded train from Odintsovo, where the HSE hostel was located – to the university and back.”


Especially this I can’t understand. For anyone creative it would be a privilege to participate in a scientific project with Elbakyan.

Beta and dist are emergent, just like Reidemeister 3 and Hamiltonian mechanics

The title says it all 🙂 This is a time tag announcement. The computing with space project is essentially finished. The last piece was discovered today.

I still have to write it all down though. It would be helpful to make me do it quicker by making me give talks or something like that.

It is beautiful.

All goes well!

The “attack” on my institute web site, or whatever that was, seems to have a solution. So now all pages work (Feb 10 2020).

In conclusion, you may use:

I can be found at:

EDIT: some words about the revived collection. There are 264 posts/animations, which is a bit more than 1/2 of the original collection. Now there is the possibility to rerun the simulation in js, because whenever possible there is a mol file attached to the animation, which can be reduced in js. Some numbers now. In verifiedMol.js there are 500 mol files, but some are duplications, in order to manually enhance the automated matching of posts with mols, so say there are about 490 mol files. If they are too big to be used without stalling the js reduction, this is signaled by the message “mol too big” in the post. If there is no mol which matches, this is signaled as “mol unavailable”. Of all 264 posts, 36 of them fall in the “mol too big” category, 46 in the “mol unavailable” one, and there are 6 posts which don’t have a chemlambda simulation inside. So this leaves 264-88=176 posts which have matching mol files to play with. Finally, there are two situations where the mol-post matching is not perfect: (1) when the original simulation uses a mol file which contains nodes of a busy-beaver Turing machine (of the kind explained here), (2) when the original uses a .mola file (a mol with actors). In both cases the js reduction does not know how to handle this.

Pure See, emergent beta, Heisenberg

Some updates, for things to come and plans.

1. Pure See is a relative of lambda calculus, in the sense that it is Turing universal and very simple, but it does not use abstraction, application, let as primitives. It is a programming language built over commutative emergent algebras, i.e. those with the shuffle trick, or equivalently with the algebraic properties of em-convex (but mind that em-convex still uses the lambda and application operations; these are not needed).

I plan to make a parser for Pure See very soon.

2. This means that Pure See is as commutative as lambda calculus. However, the general theory that I have in mind is non-commutative. And emergent, in the sense of emergent algebras.

Before going full non-commutative, one has to realize the beta rewrite as emergent. This is true, in the same way as associativity is emergent in the equational theory of emergent algebras, or the way the Reidemeister 3 rewrite is realized from R1 and R2 (and a passage to the limit). The fact that beta is emergent is what makes Pure See work and answers the question: do emergent algebras compute? Yes, they do, because in the most uninteresting situation, the commutative one, we can implement lambda calculus with commutative emergent algebras.

3. The first non-commutative case is the Heisenberg group, described as a non-commutative emergent algebra. I have had the description for a long time. The shuffle trick becomes something else. This means that the beta rewrite and the DIST rewrites change into something more interesting. The whole formalism actually becomes something else.

I thought that the general non-commutative case is in principle far more complex than the Heisenberg case. It was also unsatisfying that I had no explanation for the reason why Heisenberg groups appear in physics. What’s special about them?

Now I know, they are logically unavoidable (again in the frame of emergent algebras).

So I still play with this new point of view and I wonder what to do next.

The wise thing would be to carefully explain, in a legacy way, all this body of work. My initial plan was to base these explanations on a backbone of openly communicated programs and demos, so that the article versions would be a skin of the whole description. Who wants bedtime stories has the article. Who wants more has the programs. Who wants all thinks about all this.

With the DDOS, or whatever it is, it becomes harder to use independent ways of sharing.

Or should I jump directly to the non-commutative case?

Or has somebody really started to make molecular computers? If so, that would be, in the short term, the most interesting thing.


DDOS attack or huge number of hits from US and China

UPDATE: Feb 10 2020: seems to work. The situation lasted between Jan 19 and Feb 10 2020.

These are the two explanations I received about the bad behaviour of the site where I have my professional page. It started around Sunday, Jan 19 2020. (Mentioned here.)

I don’t know which is right: either of them, both or none. This however blocks access to this copy of the chemlambda collection, which I put online here on Saturday, Jan 11 2020.

In case you want to access the collection, there are the following possibilities:

  • the original site, which sits in a place not under the control of a corp.
  • a copy of the site, with smaller pictures (I am limited by 0.5GB limit), at github, which may be blocked in your country (??)
  • you can take the original simulations which were used to make the animations from figshare. For the comments and the internal links, take them from one of the available places
  • the landing page for all chemlambda projects is on github too…
  • or maybe there is a kind soul which has access to these sites and can host the whole stuff in a non-corp. place which is accessible to everybody
  • or you find a way to notify me that you’re interested, or willing to help, and we see what we can do.


Anyway, wait for pure see, that will be a sight 🙂

All chemlambda projects landing page

I put a version of the collection of animations on github. I made a user named “chemlambda” and now there is a landing page for all chemlambda projects (bare minimum).

I’ll add more and I’ll structure that page more, but with the occasion of making the collection available, making such a site was only natural.

Please let me know if there are more (than the excellent ones I know about, like -hask, -py or -editor).



3 days since the server is too busy, so…

UPDATE: a version of the collection is on github.


the collection needs a better place. Alternatively, I could temporarily use github (by making the animations smaller, I can cram the collection into 480MB). Or, better, with a replacement of the animations by the simulations themselves. As you can see, these simulations occupy 1GB, but they can be mined to extract the right parameters (gravity, force strength, radii and colors, mol source), which can then just be reused in the js.

Anybody willing? I need to explain what pure see is about.

Also, use this working link to my homepage.

Google+ salvaged collection of animations (III): online again!

UPDATE: Chemlambda collection of animations is the version of the collection hosted on github. The original site is under very heavy traffic (in Jan 2020).

UPDATE: For example the 2 neurons interacting animation can be remade online to look like this:


First use the mouse wheel to rescale and the mouse to translate. Notice the gravity slider position. This is an animation screencast from the real thing, which takes 8 min to unfold. But in this way you see what is happening beyond the original animation.

Btw, what is such a neuron? It is simply a (vectorial) linear function, which is applied to itself, written in lambda calculus. These two neurons are two linear functions, with some inputs and outputs connected.

Soon the links (internal and external) will be fixed [done] and soon after that there will be a more complete experience of the larger chemlambda universe. (And then the path is open for pure see.)


In Oct 2018 I deleted the G+ chemlambda collection of animations, before G+ went offline. Now a big part of it is online, at this link. For many of the animations you can now do the reduction of the associated molecule live.

The association between posts, animations and source mol files is by best fit.

There are limitations explained in the last post.

There are still internal links to repair and there has to be a way to integrate all in one experience, to pay my dues.

I put on imgur this photo with instructions, easy to share:

Screenshot from 2020-01-12 19:34:17

Use the mouse wheel to zoom, the mouse to move, the gravity slider to expand.


The salvaged collection of animations (II)

UPDATE: Chemlambda collection of animations is the version of the collection hosted on github. The original site is under very heavy traffic (in Jan 2020).

UPDATE: much better now, although I seriously consider jumping directly to pure see. However it is very rewarding to pass over blocks.


(Continues the first post.) I forgot how much the first awk chemlambda scripts were honed, and how much the constants of the animations produced were further picked so as to illustrate a point of view in a visually interesting way. The bad part of the animations first produced is that they are big html files, sometimes taking very long to execute.

The all-in-one js solution built by ishanpm, then modified and enhanced by me, works well and fast for graphs with up to about 1000 nodes. The physics is fixed; there are only two controls: the gravity slider, which allows you to expand/contract the graphs, and the rewrites slider, which changes the probabilities of rewrites which increase/decrease the number of nodes. Although there is randomness (initially in the ishanpm js solution there was none), it is a weak and not very physical one (considering the idea that the rewrites are caused by enzymes). It is funny that the randomness is not taken seriously, see for example the short programs of formality.

After I revived the collection of animations from G+ (I kept about 300 of them), I still had to associate the animations with the mol files used (many of them actually not in the available mol library) and to use the js chemlambda version (i.e. this one) with the associated mol files. In this way the user would have the possibility to redo the animations.

It turns out it does not work like this. The result is almost always of much lesser quality than the animation. However, the sources of the animations (obtained from the awk scripts) are available here. But as I said at the beginning of the post, they are hard to play (fast enough for the goldfish attention); actually this was the initial reason for producing animations, because the first demos, even chosen to be rather short, were still too long…

So this is more of a work of art, which has to be carefully restored. I have to extract the useful info from the old simulations and embed it into a full js solution. Coming back to randomness: in the original version there are random cascades of rewrites, not random rewrites one at a time like in the new js version… and they extinguish the randomly available pockets of enzymes, according to some exponential laws… and so on. That is why the animations look more impressive than the actual fast solution, at least for big graphs.

It is true that the js tools from the quine graphs repository have many advantages: interaction combinators are embedded, there is a lambda calculus to chemlambda parser… With these tools I discovered that the 10 nodes quine does reproduce, that the ouroboros is mortal, that there are many small quines (in interaction combinators too), etc.

And it turns out that I forgot that many interesting mols and other stuff were left unsaid or are not publicly available. My paranoid self in action.

In conclusion, probably I’ll make available some 300 commented gifs from the collection and I’ll pass to the scientific part. I’d gladly expose the art part somewhere, but there seems to be no place for this art, technically, as there is no place, technically, for the science part as a whole, beyond just words telling stories.

There will be, I’m sure.

The salvaged collection of animations

UPDATE: Chemlambda collection of animations is the version of the collection hosted on github. The original site is under very heavy traffic (in Jan 2020). Small images, about 1/2 of the collection, due to memory limitations. But you can play the simulations in js!

UPDATE: As I progress into integrating more, I think I might sell microSD cards with the full experience. Who knows, in a year from now I might even think about a whole (real or game-like) VR programming medium in a sort of chemlisp crossed with pure see. If anybody is interested, call me.

One more thing: as you shall see, the animations (and the originals) are the result of both a work of science and a work of art. Within the constraints (random evolution, only physical constants and colors modifiable, only beforehand, only cuts allowed), a world of dreams opens.


Now I have a functional (local) version of the chemlambda collection of animations, salvaged from G+. A random slice:


On the short term todo list is:

  • integrate it with the quine graphs and with the lambda stuff.
  • add text to the chemlambda for the people and integrate with the rest.
  • to release the quine graphs article I still need a decorator, a deterministic reducer and a quine discoverer, all of them pretty standard. Make a chemlisp repl, perhaps? It would need only a small rewrite of the parser…
  • to release the hapax article I need to add a visual loop, also to rewrite some of the functions because I already need them in the quine discoverer.

… and then my dues will be finally paid and I can attack pure see and stochastic SBEN with full serenity.

Oh wait, I still have to make a big intro to em-convex, release the second part, describe the category CONICAL and related work in sub-riemannian geometry, explain the solution of the computation power of emergent algebras … and then my dues will be paid and … 🙂

Open access in 2019: still bad for the career

Have you seen this: https://newsroom.publishers.org/researchers-and-publishers-oppose-immediate-free-distribution-of-peer-reviewed-journal-articles

“The American publishing industry invests billions of dollars financing, organizing, and executing the world’s leading peer-review process in order to ensure the quality, reliability, and integrity of the scientific record,” said Maria A. Pallante, President & CEO of the Association of American Publishers. “The result is a public-private partnership that advances America’s position as the global leader in research, innovation, and scientific discovery. If the proposed policy goes into effect, not only would it wipe out a significant sector of our economy, it would also cost the federal government billions of dollars, undermine our nation’s scientific research and innovation, and significantly weaken America’s trade position. Nationalizing this essential function—that our private, non-profit scientific societies and commercial publishers do exceedingly well—is a costly, ill-advised path.”

Yes, well, this is true! It is bad for publishers, like Elsevier, and it is bad for some learned societies which signed this letter, like the ACM.

But it would be a small step towards a more normal, 21st century style of communication among researchers. Because researchers no longer need scientific publishers of this kind.

What is more important? That a useless industry loses money, or that researchers could discuss normally, without the mediation of this parasite from an older age?

Obviously, researchers have careers, which depend on the quantification of their scientific production. The quantification is made according to rules dictated by academic management. The same management which decides to buy from the publishers something the researchers already have (access).

So, no matter how evil the publishers may be, management is worse. Because suppose I make a social media app which asks 1$ for each word one types into it. Would you buy it, in case you want to exchange messages with your colleagues? No, obviously. No matter how evil I am for making this app, I would have no clients. But suppose now that your boss decides that the main criterion of career advancement is the number of words you typed into this app. Would you buy it now? Perhaps.

Why, tell me why the boss would decide to make such a decision? There has to be a reason!

Who is the most evil? I or the boss?

By coincidence, the same day I learned about the letter against open access, I also read Scott Aaronson’s post about the all-important problem of the name “quantum supremacy”.

The post starts with a good career news:

“Yay! I’m now a Fellow of the ACM. […] I will seek to use this awesome responsibility to steer the ACM along the path of good rather than evil.”

Then Scott spends more than 3100 words discussing the “supremacy” word. Very important subject. People in the media are concerned about this.

First Robert Rand’s comment, then mine, asked about Scott’s opinion, as a new member of the ACM, concerning the open access letter.

The answer has about 100 words, the gist being:

“Anyone who knows the ACM better than I do: what would be some effective ways to register one’s opposition to this?”

A possible answer to my question concerning bosses is: OA is still bad for the career, in 2019.


Use the lambda to chemlambda parser to see when the translation doesn’t work

I use mainly the parser page; other pages will be mentioned in the text.

So chemlambda does not solve the problem of finding a purely local conversion of lambda terms to graphs which can always be further reduced by a purely local random algorithm. This is one of the reasons I insist both on going outside lambda calculus and on looking at possible applications in real chemistry, where some molecules (programs) do reduce predictably and the span of the cascade of reactions (reductions) is much larger than one can achieve via a massive, brutal, try-everything strategy on a supercomputer.

Let’s see: choose

(\a.a a)(\x.((\b.b b)(\y.y x)))

it should reduce to the omega combinator, but read the comment too. I saw this lambda term, with a similar behaviour, in [arXiv:1701.04691], section 4.

Another example took me by surprise. Now you can choose “omega from S,I combinators”, i.e. the term

(\S.\I.S I I (S I I)) (\x.\y.\z.(x z) (y z)) \x.x

It works well, but I previously used a related term, actually a mol file, which corresponds to the term where I replace I by S K K in S I I (S I I), i.e. the term

S (S K K) (S K K) (S (S K K) (S K K))

To see the reduction of this term (mol file) go to this page and choose “omega from S,K combinators”. You can also see how indeed S K K reduces to I.
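The claim that S K K behaves like I can be checked with a toy combinator reducer. This is a personal sketch, independent of the chemlambda pages, using nested 2-tuples for application:

```python
# SKI-style terms: "S", "K", or 2-tuples (f, a) for application.
# Leftmost-outermost reduction of the head redex.

def rebuild(head, rest):
    for a in rest:
        head = (head, a)
    return head

def step(t):
    if not isinstance(t, tuple):
        return None
    # unwind the application spine: t = head a1 a2 ... an
    spine, args = t, []
    while isinstance(spine, tuple):
        args.append(spine[1])
        spine = spine[0]
    args.reverse()
    if spine == "K" and len(args) >= 2:          # K x y -> x
        return rebuild(args[0], args[2:])
    if spine == "S" and len(args) >= 3:          # S x y z -> x z (y z)
        x, y, z = args[0], args[1], args[2]
        return rebuild(((x, z), (y, z)), args[3:])
    l, r = t                                     # else reduce inside
    s = step(l)
    if s is not None:
        return (s, r)
    s = step(r)
    return None if s is None else (l, s)

def normalize(t, limit=50):
    while limit:
        n = step(t)
        if n is None:
            return t
        t, limit = n, limit - 1
    return t

# S K K x reduces to x, so S K K acts as the identity I
skk_x = ((("S", "K"), "K"), "x")
print(normalize(skk_x))
```

Two steps: S K K x → K x (K x) → x. (Do not try to `normalize` the full omega-like term above without a step limit: it has no normal form, which is exactly the point.)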

But initially in the parser page menu I had  the term

(\S.\K.S (S K K) (S K K) (S (S K K) (S K K))) (\x.\y.\z.(x z) (y z)) (\x.(\y.x))

It should reduce well but it does not. The reason is close to the reason why the first lambda term does not reduce well.

Now, some bright side of it. Look at this page to see that the ouroboros quine is mortal. I believed it was obviously immortal until recently. Now I have started to believe that immortal quines in chemlambda are rare. Yes, there are candidates like (the graph obtained from) omega, or why not (try with the parser) 4 omega

(\f.(\x.(f(f (f (f x)))))) ((\x.x x) (\x.x x))

and there are quines like the “spark_243501” (shown in the menu of this page) with a small range of behaviours. On the contrary, all quines in IC are immortal.

Lambda calculus to chemlambda parser (2) and more slides

This post has two goals: (1) to explain more about the lambda to chemlambda parser and (2) to talk about slides of presentations which are connected one with the other across different fields of research.

(1) There are several incremental improvements to the pages from the quine graphs repository. All pages, including the parser one, have two sliders, each giving you control about some parameters.

The “gravity” slider is kind of obvious. Recall that you can use your mouse (or pinching gestures) to zoom in or out of the graph you see. With the gravity slider you control gravity. This allows you to see the edges of the graph better, for example by moving the gravity slider to the minimum and then zooming out. Or, on the contrary, if you have a graph which is too spread out, you can increase gravity, which has the effect of a more compact-looking graph.

The “rewrites weights slider” has as extrema the mysterious words “grow” and “slim”. It works like this. The rewrites (excepting COMB, which are done preferentially anyway) are grouped into those which increase the number of nodes (“grow”) and the other ones, which decrease the number of nodes (“slim”).

At each step, the algorithm tries to pick a rewrite at random. If there is a COMB rewrite to pick, it is done. Else, the algorithm tries to pick at random one “grow” and one “slim” rewrite. If only one of these is available, i.e. if there is a “grow” but no “slim” rewrite, then that rewrite is done. Else, if there is a choice between two randomly chosen “grow” and “slim” rewrites, we flip a coin to choose between them. The coin is biased towards “grow” or “slim” with the rewrites weights slider.
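The selection step described above can be sketched like this (function and variable names are mine, not those of the actual quine graphs scripts):

```javascript
// Pick the next rewrite: COMB first, else a biased choice between
// a random "grow" and a random "slim" candidate.
// `bias` in [0,1]: values near 1 favor "grow", near 0 favor "slim"
// (the slider value).
function pickRewrite(comb, grow, slim, bias) {
  const pick = a => a[Math.floor(Math.random() * a.length)];
  if (comb.length > 0) return pick(comb);   // COMB done preferentially
  if (grow.length === 0 && slim.length === 0) return null;
  if (slim.length === 0) return pick(grow); // only "grow" available
  if (grow.length === 0) return pick(slim); // only "slim" available
  // flip a biased coin between the two candidates
  return Math.random() < bias ? pick(grow) : pick(slim);
}
```

Moving the slider only changes `bias`, so the same graph can be driven towards node growth or node elimination.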

This is interesting to use, for example with the graphs which come from lambda terms. Many times, but not always, we are interested in reducing the number of nodes as fast as possible. A strategy would be to move the slider to “slim”.

In the case of quines, or quine fights, it is interesting to see how they behave under “grow” or “slim” regime.

Now let’s pass to the parser. It works well now; you can write lambda terms in a human way, but mind that “xy” will be seen as a variable, not as the application of “x” to “y”. Application is “x y”. Otherwise, the parser understands correctly terms like

(\x.\y.\z.z y x) (\x.x x)(\x. x x)\x.x
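A one-line lexer in this style (a hypothetical sketch, not the actual parser code) makes the whitespace convention explicit:

```javascript
// Separate the punctuation items, then split on whitespace:
// "xy" stays one variable token, "x y" becomes two tokens (application).
function lex(s) {
  return s.replace(/([().\\])/g, ' $1 ').trim().split(/\s+/);
}

console.log(lex('xy'));    // [ 'xy' ]      -- a single variable
console.log(lex('x y'));   // [ 'x', 'y' ]  -- x applied to y
console.log(lex('\\x.x')); // [ '\\', 'x', '.', 'x' ]
```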

Then I followed the suggestion of my son Matei to immediately do the COMB rewrites, thus eliminating the Arrow nodes given by the parser.

About the parser itself. It is not especially short, for several reasons. One reason is that it is made as a machine with 3 legs, moving along the string given by the lexer, just like the typical 3-valent node. That is why it will be interesting to see it in action, visually. Another reason is that the parser first builds the graph without fanout FO and termination T nodes, then adds the FO and T nodes. Finally, the lambda term is not prepared in advance by any global means (excepting the check for balanced parentheses). For example, no de Bruijn indices.

Another reason is that it allows one to understand what the edges of the (mol) graph are, or more precisely what the port variables (edge variables) correspond to. The observation is that the edges are in correspondence with the positions of the items (lparen, rparen, operation, variable) in the string. We need at most N edge names at this stage, where N is the length of the string. The second stage, which adds the FO and T nodes, needs at most N new edge names, in practice much fewer: the number of duplicates of variables.

This answers the question: how can we efficiently choose edge names? We could use as edge name the piece of the string up to the item, and we can double this number by using an extra special character. Or, if we want to be secretive, now that we know how to constructively choose names, we can try to use and hide this procedure.
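One way to realize such a naming scheme (a hypothetical sketch; the actual parser may do it differently) is to name each edge by the position of its item in the token stream, and to mark second-stage names with an extra special character:

```javascript
// Stage 1: one edge name per item position -- at most N names
// for a token stream of length N.
function edgeNames(tokens) {
  return tokens.map((t, i) => ({ item: t, edge: 'e' + i }));
}

// Stage 2 (adding FO and T nodes): reuse positions, doubled by an
// extra special character, so at most 2N names are ever needed.
function stage2Name(i) {
  return 'e' + i + "'";
}

console.log(edgeNames(['(', '\\', 'x', '.', 'x', ')']));
```

Names chosen this way are guaranteed fresh, with no global bookkeeping.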

Up to now there is no “decorator”, i.e. the inverse procedure which obtains a lambda term from a graph, when that is possible. This is almost trivial and will be done.

I close this subject here by mentioning that my motivation was not to write a parser from lambda to chemlambda, but to learn how to make a parser for a programming language in the making. You’ll see, and hopefully you’ll enjoy 🙂

(2) Slides, slides, slides. I have not considered slides very interesting as a means of communication before. But hey, slides are somewhere on the route to an interactive book, article, etc.

So I added to my page links to 3 related presentations, which, together with a 4th available and popular (?!) on this blog, give a more rounded image of what I try to achieve.

These are:

  • popular slides of a presentation about hamiltonian systems with dissipation, in the form baptized “symplectic Brezis-Ekeland-Nayroles”.  Read them in conjunction with arXiv:1902.04598, see further why
  • (Artificial physics for artificial chemistry)   is a presentation which, first, explains what chemlambda is in the context of artificial chemistries, then proceeds with using a stochastic formulation of hamiltonian systems with dissipation as an artificial physics for this artificial chemistry. An example about billiard ball computers is given. Sure, there is an article to be written about the details, but it is nevertheless interesting to infer how this is done.
  • (A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science)  are the most technically evolved slides, presenting the geometrical roots of chemlambda and related efforts. There are many things to pick from there, like: what is the geometrical problem, how is it related to emergent algebras, what is computation, knots,  why standard frames in categorical logic can’t help (but perhaps they can if they start thinking about it), who was the first programmer in chemlambda, live pages where you can play with the parser, closing with an announcement that indeed anharmonic lambda (in the imperfect form of kali, or kaleidoscope) solves the initial problem after 10 years of work. Another article will be most satisfactory, but you see, people rarely really read articles on subjects they are not familiar with. These slides may help.
  • and for a general audience my old (Chemlambda for the people)  slides, which you may appreciate more and you may think about applications of chemlambda in the real world. But again, what is the real world, else than a hamiltonian system with dissipation? And who does the computation?



Lambda calculus to chemlambda parser

To play with at this page.  There are many things to say, but I will come back later with details about my first parser and why it is like this.

UPDATE: After I put the parser page online, it messed with the other pages, but now everything is all right.

UPDATE: I’ll demo this at a conference on Dec 4th, at IMAR, Bucharest.

Here are the slides.

The title is “A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science“.

So if you are in Bucharest on Dec 4th, at 13h, come to the talk. Here is how to arrive there.

I already dream about a version which is purely “chemical”, with 3-legged parser spiders reading from the DNA text string and creating the molecules.

Will do, but long todo list.

Quine graphs (5), anharmonic logic and a flute

Several things:

  • added a new control to the quine graphs pages (all can be accessed from here). It is called the “rewrites weights slider”: you can either favor the DIST rewrites, which add nodes to the graph, or the β rewrites, which take nodes away from the graph (for chemlambda these are the L-A rewrite, but also the termination rewrites and FI-FOE). This changes, sometimes radically, the result of the algorithm. It depends what you want. In the case of reductions of lambda terms, you may want to privilege β rewrites, but as usual this is not generally true, see the last example. In the case of the fight arena for quines, you can favor one species of quines over another by your choice.
  • of course in the background I continue with kali, or anharmonic lambda calculus. The new thing is: why not conceive a programming language which compiles to kali? It works fine, you can say things like

“from a see b as c” or

“let a mark b as c” or

“is a as b” or

“c := let x  = b in c”

and many others! Soon, I hope at the beginning of December, it will be available.

  • It helps to do new things, so I finished today my first flute. Claudia, my wife, is a music professor and her instrument of choice is the flute. From a physics point of view a flute is a tube with holes placed in the right positions; how hard would it be to make one? After I made it, it is usable, but I started to learn a lot about how unnatural the modern flute is and how much you can play with all the variables. As a mathematician I oscillate between “is it a known thing to simulate a flute numerically?” and “I shall concentrate on the craft and play with various techniques”. My first flute was a craft-only effort, but now, when I know more, I am hooked!

OK, so that’s the news, I like to make things!


A small ouroboros has a short life (Quine graphs 4)

As I said many times, there is a lot of background to the hundreds of chemlambda mol files and associated simulations. Now that I took the Quine graphs repository seriously, I have to select rather clear and short strings of facts associated to the mol files, so that I can tell clear and short enough stories.

So it came out of the depth of this pile of stuff I have that the ouroboros is not an immortal quine. It can die, and its life is of the order exp(length), where length is the length of the double track it contains.  The shortest ouroboros appears here (look in the MENU).

Otherwise, I am reorganizing (and continue to do so) the js scripts, so that they become easy to understand. Now there is a chemistry, nodes, etc., so there will be a convergence with the modules of hapax.

Quine graphs (3), ouroboros, hapax and going public

Several news:

I decided that progressively I’m going to go public, with a combination of arXiv, Github and Zenodo (or Figshare), and publication. But there is a lot of stuff I have to publish and that is why this will happen progressively. Which means it will be nice to watch, because it is interesting, for me at least, to answer the question:

What the … does a researcher when publishing? What is this for? Why?

Seriously, the questions are not at all directed against classical publication, nor are they biased against OA. When you publish serially, like a researcher, you often tell again and again a story which evolves in time. To make a comparison, it is like a sequence of frames in a movie.

Only it is not as simple. It is not quite like a sequence of frames; it is like a sequence of pictures, each one with its repeating tags, again and again.

Not at all compressed. And not at all like an evolving repository of programs which get better with time.

Quine graphs (2)

UPDATE: This page now has comments for each graph, as does its sister page. Now I can start to mass produce 🙂

There are several technical details to add before I arrive to a final version, but now there are, as usual, two sister pages:

  • (the one I want to write about today) here
  • and the one at the quinegraph repository here (link changed, mentioned in the last post)

The first page, i.e. the newer one, has the algorithm involving the COMB rewrites and Arrow nodes, even for Interaction Combinators, which is part of the original chemlambda programs.  It may be funny to see the Arrows and COMB rewrites in action in the case of IC.

This part is needed because of the way a graph quine is defined. The Arrow and COMB rewrites allow one to make purely local parallel rewrites. For the case of Interaction Combinators, where the graphs are not oriented, a hack can be done in order to use Arrows, which consists in using a pair of them, tip to tip, instead of only one. (Not the first time I think that IC is bosonic and chemlambda is fermionic 🙂 )

That being said, what more is needed?

  • to fully implement the greedy algorithm, because only the sequential random one is in the js version (let’s recall again that the js-only version was first done by ishanpm, then modified by me according to (many) needs. It is a marvel, even if it needs more tweaks, in particular from my point of view because by using it I discovered that the 10 quine can reproduce itself). Needed because of the definition of a quine.
  • to put a graph matching function. There is one in the more ambitious project hapax. (It does not have pretty pictures, but I’ll modify it after, maybe that is why it does not attract?) Useful for the verification of a quine.
  • depending on time, implement a sort of arena where quines compete. Needed for a part of the article (or maybe a supplement?) where I explain some things mentioned in the deleted g+ chemlambda collection (adaptation is another semantic illusion)
  • finally I need to arrive at a not too primitive form of html presentation; these days people are used to such things, and the academic who just wants to show a proof of principle will be ignored without one. Needed because it can be done and is good for my education.

All in all, ironically, my initial awk program (4364 sloc) does that and more (Turing machines, to come in a js version, as well as Formality Core) in so many lines, because perhaps they were needed? I say ironically because that program is written without the use of functions, so perhaps it would be about 1000 sloc with functions. Compare with the js version which has the same number of lines, written by ishanpm. Compare with the Haskell version written by synergistics. Compare with the python version by 4lhc.

Most of these lines are needed to destroy the information added to the system by the semantic assumptions of those who wrote the programming languages. Yes.

I close here by saying that even if I don’t believe in semantics, as it is understood since Aristotle (but not by Plato), I do believe in the existence of a different, more beautiful replacement, which can be glimpsed in the way nature needs randomness for creating structure.


Quine graphs (1)

Update: here is a page which will be used with the article. Please let me know about any suggestions, improvements, bugs, requests. The demos will support the article and they will be grouped in families, based on subject or on common phenomena. In this article, up to now, only chemlambda and Interaction Combinators are touched, not kali or hapax.


While hapax and anharmonic lambda calculus enter the final stage, it is time I prepare for publication the quines exploration project. I intend to make an article (first to arXiv) with a paired github repository and some github.io pages where the real things happen. For the search of new quines I might go back to twitter and make a dedicated account. Let’s see how all this will work together.

For the moment I collect here the posts from this chorasimilarity blog, or other links  which are related to the  project. (Well there is a collection of google+ posts which are going to be useful, I have to see what I do with these).

So here it is, in reverse time order:

A very useful tool was the full js chemlambda editor written by ishanpm. I modified it and tuned it in many places and I shall continue, or I’ll write a new one, but definitely I learned stuff by using it, thanks!

I might also explore quines in other graph rewrite systems,

although I shall not touch the relations with knot theory, in this article. Only look for quines, as example.

If you have questions, suggestions, or if you want to contribute to the subject, let me know.


6 months since my first javascript only

… program, this one: How time flows: Gutenberg time vs Internet time. Before that, I used js only for the latest stage, written (clumsily, I admit) by other programs. Since then I wrote hapax and I modified other scripts to fit my needs, mainly, but this corrected a gap in my education 🙂

Oh, btw, if anybody is interested to see/interact on this talk I’d like to propose: [adapted from a pdf (sigh) for my institution management, though they are in the process of reversing to the pre-internet era and they managed to nuke all mail addresses @imar.ro, a domain which had been rock solid for at least 20 years; that’s why I post it here]

A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science
Graph rewrite systems are used in many research domains; two among many examples are Reidemeister moves in knot theory and Interaction Combinators in computer science. However, the use of graph rewrite systems is often domain dependent. Indeed, for the knot theory example we may use the Reidemeister moves in order to prove that the Kauffman bracket is a knot invariant, which means that it does not change after the graph is modified by any rewrite. In the other example, Interaction Combinators are interesting because they are Turing universal: any computation can be done with IC rewrite rules, and the rewrites are seen as the computational steps which modify the graphs in a significant way.

In this talk I want to explain, for a general audience, the occurrence of and relations among several important graph rewrite systems. I shall start with lambda calculus and the Church-Turing thesis, then I shall describe Lafont’s Interaction Combinators [1]. After that I shall talk about graphic lambda calculus [2] and about joint work with Louis Kauffman [3] on relations with knot theory. Finally I explain how I, as a mathematician, arrived at studying applications of graph rewrite systems in computer science, starting from emergent algebras [4], proposed in relation with sub-riemannian geometry, and ending with chemlambda [5], hapax (demo page [6], presentation slides [7]) and em-convex [8] with the associated graph rewrite system [9] (“kaleidoscope”, for short).

During the talk I shall use programs which are based on graph rewrites, which are free to download and play with from public repositories.

[1] Y. Lafont, Interaction Combinators, Information and Computation 137, 1, (1997), p. 69-101

[2] M. Buliga, Graphic lambda calculus. Complex Systems 22, 4 (2013), p. 311-360

[3] M. Buliga, L.H. Kauffman, Chemlambda, Universality and Self-Multiplication, ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems (2014), p. 490-497
[4] M. Buliga, Emergent algebras, arXiv:0907.1520

[5] M. Buliga, Chemlambda, GitHub repository (2017)
[6] M. Buliga, Hapax, (2019) demo page http://imar.ro/~mbuliga/hapax.html,
Github repository https://github.com/mbuliga/hapax

[7] M. Buliga, Artificial physics of artificial chemistries, slides (2019)

[8] M. Buliga, The em-convex rewrite system, arXiv:1807.02058

[9] M. Buliga, Anharmonic lambda calculus, or kali (2019),
demo page https://mbuliga.github.io/kali24.html


I’d like to make this much more funny than it looks by using these js scripts. Also “kaleidoscope” is tongue-in-cheek, but that’s something only we know. Anyway kali is on the way to be finished, simplified and documented. And somehow different. For a short while, encouraged by these js scripts and similar attempts, I tried to believe that maybe, just maybe, there is a purely local way to do untyped lambda, right around the corner. But it seems there isn’t, although it was fun to try again to search for it. But then what to do? Maybe to be honest with the subject and say that indeed a purely local system, geometry inspired, exists, it is Turing universal, but it is not lambda calculus (although it can be guided by humans into being one, so that’s not the problem)? Maybe going back to my initial goal, which was to understand space computationally, which I do now? Yeah, I know that lambda calculus is fascinating, even more if untyped,  but em is so much better!


Experimental generation of anharmonic lambda calculus (II)

UPDATE: There is still another version, less exotic, at github. Both are working versions of kali.  For those with mathematical inclinations there are explanations in the tool rhs.js. This will come to a conclusion with the explanations about FO and with the tools to choose optimal collections of rewrites.


There is a working version of kali which I posted today. Enjoy and wonder how can it (sometimes!) reduce lambda terms even if it has nothing to do with lambda calculus in the background. The js script is interesting to read, or better the script from the page which generates DIST rewrites (which you can add in your copy of the main script and play with it, it is enough to add it to the list of rewrites).

It is comical how it fails sometimes. Space is symmetrical and logic is constrained.

Also I think I’ll pursue the idea to train the (choice of rewrites, probabilities) for certain tasks, as evoked here.

Experimental generation of anharmonic lambda calculus (I)

UPDATE: This is from yesterday 15.10.2019. It gives the possible RHS of DIST rewrites in a more precise way, compatible with the 6 nodes: A, FI, D (port 2 “in”) and L, FO, FOE (port 2 “out”) (each node has port 1 “in” and port 3 “out”), which moreover can be decorated with generic dilations satisfying em-convex. It turns out that there are exactly two possibilities for each rewrite. However, there are fewer than 3^36 rewrite systems which are potentially interesting, because many of these rewrites inhibit others.

For example all the rewrites used in the working version of kali are there (with the exception of those involving the node FO, there is a separate treatment for this node; also I ignore for the moment the rewrites involving T and I take for granted the 3 beta-like rewrites, which are not generated by the rhs.js script because they are not DIST rewrites). The version I have today shortens the list a lot and soon enough it will be merged into a more explorable version of kali.


How many rewrites are possible in anharmonic lambda and how do they relate one with the other? Is there a way to discover experimentally the best collection of rewrites? I think it is and that’s what I’m doing right now.

Let me explain.

Suppose we start with the anharmonic group, isomorphic with S(3), or why not with S(4), with an eye on the relation it has with projective geometry. But basically we want to associate to every element of the group (call it G) a trivalent node with named ports “1”, “2” and “3”, such that we can decorate the edges of a graph made of such nodes by “terms” which are somehow relevant with respect to the geometric nature of the group G (either S(3) as the anharmonic group, or S(4) as it appears in projective geometry). For the moment, how we do that, why we are doing it, or even what it has to do with lambda calculus are irrelevant questions.

There are two big possibilities further.

Either we work with graphs with undirected edges, but we restrict the patterns which trigger rewrites to pairs of nodes connected via the ports “3”, i.e. pairs of nodes linked by an edge which connects the port “3” of one node with the port “3” of the other. For example this is the case in Interaction Combinators. The advantage is that there will never be conflicting rewrites.

Or we work with graphs with directed edges, we attach orientations to ports (say port “1” is always “in”, i.e. the orientation points to the node, port “3” is always “out” and port “2” may be “in” or “out”, depending on the node type) and we ask that any edge connects a port which is “in” with a port which is “out”. Then we look for patterns which trigger rewrites which are made of a pair of nodes linked by an edge connecting a port “3” with a port “1”. This is like in chemlambda. The situation here is the opposite: we allow conflicting rewrites.

So in either case we think about rewrites which have as LHS (left hand side) a pair of nodes, as explained, and as RHS (right hand side) either no node (actually two edges), like in the case of the beta rewrite, or 4 nodes. The pattern of 4 nodes should be like in Interaction Combinators (say like GAMMA-DELTA rewrites) or like the DIST rewrites in chemlambda.

If I take the second case, namely graphs with oriented edges, as explained, how many different rewrites are possible? More precisely, denote a node with a “+” if port “2” is “in” and with a “-” if port “2” is “out”. Then, how many rewrites from 2 nodes to 4 nodes, in the DIST pattern, are possible?

Doing it by hand is dangerous, so we better do it with a program. We are looking for patterns of 4 nodes arranged like in the DIST rewrites, each node being a “+” or a “-” node, with oriented edges, so that the 4 half edges (or external ports) have the same orientation as the corresponding ones of the LHS pattern made of 2 nodes (“+” or “-”). How many are they?

There are 4 cases, depending on the LHS nodes, which we denote by “+,+”, “-,-“, “+,-” and “-,+”.

I was surprised to learn from the programs that there are 368 = 16×23 possible RHS patterns, divided into 80 = 16×5, 80 = 16×5, 112 = 16×7 and 96 = 16×6 patterns.

While I can understand the 16× part, the 5, 5, 7, 6 are strange to me. The 16× appears from the fact that for a “+” node we can permute the “1”, “2” ports without changing the edge orientations, and similarly for a “-” node we can permute the “2”, “3” ports with the same (lack of) effect.
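A quick arithmetic check of these counts, which also shows where the total comes from:

```javascript
// RHS pattern counts per LHS case, each a multiple of 16
const counts = { '+,+': 16 * 5, '-,-': 16 * 5, '+,-': 16 * 7, '-,+': 16 * 6 };
const total = Object.values(counts).reduce((a, b) => a + b, 0);

console.log(total);             // 368
console.log(total === 16 * 23); // true
```

So 80 + 80 + 112 + 96 = 368 = 16 × 23.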

Now let’s get further. We look now at the nodes which are decorated with elements of the group G, or say the group G acts somehow on the set of node types which has as many elements as G. We look for rewrites from LHS to RHS such that the decoration of the external ports does not change during the rewrites. This imposes strong restrictions on the choice of nodes (and it means something geometrically as well).

Not any possible rewrite can be accepted though. For example we reject rewrites with the property that the RHS contains a copy of the LHS. So each rewrite has associated a collection of rewrites which are inhibited by it. This will place restrictions on the collections of rewrites which we may choose for our calculus.

We reject also rewrites whose RHS contain a copy of the LHS of a rewrite we like, like the beta rewrite.

We reject rewrites which have the RHS made such that the external ports will all be “2”, because such a pattern will last forever during further rewrites.

Still when we look at what is left we see that we have about |G|X|G| LHS patterns  and much more RHS choices, so there will still be many possible calculi.

We may look for symmetries, related to the action of G on the node types and we may ask: which is the most (or a most) symmetric collection and which will be a least symmetric (but maximal in number)?

Or we may say that the human mind is feeble and let the collections compete, by reducing a family of graphs. Take lambda calculus and some tests and let the possible collections compete. Take emergent algebras and some tests and do the same. Or even look for collections more fit for certain chosen tests from lambda calculus or others.

Who will win?

Anharmonic lambda calculus (II)

Some news after the announcement of kali, or anharmonic lambda calculus. I use again two pages, which are updated in tandem:


At the moment I write this post the github.io version is more recent. I think the termination rewrites are better than before.

There is still some work on the exact choice of rewrites, among the many possible which are compatible with the underlying geometric structure. But you can see by looking at kali24.js that there is some connection between the nodes and the anharmonic group.

All this will be explained when I’ll arrive to the most satisfying version.

I added to the lambda calculus examples some more, non lambda calculus ones. Among them the much discussed 10-node quine and also the most amazing molecule I have discovered until now. It appears in the menu as “10 nodes sometimes becomes quine from [graph A-L-FI-FOE 540213]”; please do reload it often and let it unfold. For an archived video see this one. It is a graph which basically shatters the notion that a quine is enough, conceptually, to describe life. More simply put, it can evolve in so many ways, among them in a chemlambda quine way, but not uniquely. Amazing.

You can see it also on the menu of the find a quine page. There the graphs look more compact and you can see more of the overall structure, but less the detailed linking of nodes.

Coming back to kali24, the chunk which I need to explain first is what it has to do with em-convex. That is an enriched lambda calculus which describes my work on emergent algebras. It ends with the proof that in the presence of an axiom called “convex”, we end up with usual vector spaces over the terms of type N (in the paper), and that the terms of type N also have an em calculus themselves, which is a way of saying that we recover on them a structure which is like the Riemann sphere.

What is not proved are two things: (1) is there a graphical rewrite system which can reproduce the proofs of em-convex, under the algorithm of rewrites used with chemlambda? (2) can we dispense with the lambda calculus part (the abstraction and application operations) and redo everything only with the pure em calculus?

Back to earth now, just for fun: don’t you think that a geometrical model of lambda calculus on the Riemann sphere would be nice?

Kali: anharmonic lambda calculus

You can play with some examples of lambda terms (SKK, Y combinator, Omega combinator, Ackermann(2,2), some duplications of terms, lists, Church numbers multiplications). It is important to try several times, because the reduction algorithm uses randomness in an essential way! This justifies the “reload” button, the “start” button which does the reduction for you (randomly), and the “step” button which chooses a random reduction step and shows it to you. Or you may even use the mouse to reduce the graphs.

It may look kind of like the other chemlambda reductions, but a bit different too, because the nodes are only apparently the usual ones (lambdas, applications, fanins and fanouts); in reality they are dilations, or homotheties if you like, in a linear space.

I mean literally, that’s what they are.

That is why the name: anharmonic lambda calculus. I show you lambda terms because you are interested into those, but as well I could show you emergent (actually em-convex) reductions which have apparently nothing to do with lambda calculus.

But they are the same.

Here is my usual example, Ackermann(2,2); you’ll notice that there are more colors than before:


The reason is that what you look at is called “kali24”, which for the moment uses 7 trivalent nodes, out of 24 possible from projective space considerations.

I will fiddle with it; probably I’ll make a full 24-node version (of which lambda calculus alone would use only a part). There is still work to do, but I write all the time about the connections with geometry, and what you look at does something very weird, with geometry.

Details will come. Relevant links:

  • kali24, the last version
  • kali, the initial version with 6 nodes, which almost works
  • em-convex, the dilations enhanced lambda calculus which can be also done with kali
  • and perhaps you would enjoy the pages to play and learn.

One more thing: when all the fiddling ends, the next goal is to go to the first interesting noncommutative example, the Heisenberg group. Its geometry, as a noncommutative linear space (in the sense of emergent algebras, because in the usual sense it is not a linear space), is different but worthy of investigation. The same treatment can be applied to it and it would be interesting to see what kind of lambda calculus is implied, in particular. As this approach is a machine for producing calculi, I have no preference towards the outcome; what can it be? Probably not quite a known variant of lambda, quantum or noncommutative, because the generalization does not come from a traditional treatment [update: which generalizes from a too particular example].

The life of a 10-node quine, short video

I’m working on an article on the 10-nodes quine (see here and here previous posts) and I have to prepare some figures, well rather js simulations of relevant phenomena.

I thought I’ll made a screencast for this work in progress and put it in the internet archive:


UPDATE: I found, by playing, an even more (apparently) asymmetrical variant of chemlambda, but I also found a bug in the js version, which, fortunately, does not manifest in any of the cases from the menu of this page. For the moment it is not completely clear to me whether the more (apparently) asymmetrical variant behaves so much better when I reduce lambda terms because of the bug. I think not. I will tell after I fix the thing. It is something unexpected, not in my current program, but exciting, at least because either way is beneficial: I found a bug or I found a bounty.

Cryptocurrency for life (2)

Continues from (part 1). Back home and almost healed, I read Anand Giridharadas' crusade, where he has a very reasonable point:

“But then I had the following thought.

Why are the people not connected to Epstein leaving this orbit, while people connected to Epstein remain?

Shouldn’t it be the other way around?”

To have a direct confirmation of these self-protected circles of power is interesting. Rich donors and academia are some of the players. I'm directly interested in this from the point of view of somebody who has been trying to do Open Science for a long time. To paraphrase Anand:

Why are the people who do not obey the old practices of academic publication leaving this orbit, while people connected with the useless legacy publishers remain?

Shouldn’t it be the other way round?


The same academic managers are on very friendly terms with publishers which offer nothing to the scientific community. The honest effort of Open Access has become a caricature, where it is entirely normal to baptize the_author_pays_for_publication as the way to do Open Access.

OK, so what does this have to do with the subject of this post? Simple: if the cryptocurrency communities do want to explore new social models, then research (of biological life as decentralized computing, as I suggest) should be a part of it. You can't turn to the old fatigued elites, because they already gave what they could to MS or others alike. They haven't had new ideas in a very long time. Hot air with old boys' support.

But now comes my point: would these cryptocurrency efforts support a new research structure? Why not? There are very clever people there who understand the importance.

But maybe they are in bed with the circle of power. Just maybe.

The following are beliefs only (what proof can you ask for?). For reasons along the lines explained previously, I have been very skeptical for years about anything ethereum-based, but I am really amazed by btc. Well, but who really knows?

Doesn't the cryptocurrency community (or the parts of it which are not in bed with the enemy) want to make a point in research?




Cryptocurrency for life

Biological life is a billions years old experiment. The latest social experiments, capitalism and communism, are much more recent. Cryptocurrencies experiments are a really new response to the failures of those social experiments.

We don’t really understand biological life starting from it’s computational principles. As well, we don’t understand in depth decentralized computation which is at the basis of many cryptocurrencies experiments.

My point is that we are trying to solve the same problem, so that we shall be able to evolve socially at a human time scale. Not in hundreds of thousands of years, but in decades.

Therefore it would only be natural if the active people in the cryptocurrency realm dedicated significant financial support to the problem of life.

Perturbed quine experiment

I use as a tool the page find a quine. You saw in the post 10-node quine can duplicate that in rare situations the simple 10-node quine can indeed reproduce spontaneously. You can see for yourself: choose "original 10_nodes quine" from the menu, hit "start", and if it dies hit "reload" and "start" until you witness a reproduction.

In this post I want to show you another phenomenon, which is logical after you see it, but perhaps surprising at first sight.

Recall that a chemlambda quine is a graph which has a periodic evolution under the greedy algorithm of rewrites. It is of course interesting what happens under the random reduction algorithm. Maybe this definition should be changed?

What if we modify the definition by saying that a quine is a graph which can evolve into a graph which is itself a quine according to the original definition? Then, OK, but it may be the case that a graph, even one of a quine according to the original definition, has the following weird property: it can evolve into two different quines. Namely, there are two different reduction paths which each lead to a quine.
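The original definition (periodic evolution under the deterministic greedy algorithm) can be checked mechanically. Here is a minimal Python sketch; the graph encoding and the `step` function are hypothetical stand-ins for a real chemlambda reducer, used only to illustrate the periodicity test:

```python
# Minimal sketch: detect whether a deterministic (greedy) reduction is
# periodic, i.e. whether the graph is a quine in the original sense.
# `state` must be hashable; `step` applies one greedy reduction pass.

def is_periodic(state, step, max_steps=1000):
    """Return the period if `state` revisits a previous state, else None."""
    seen = {state: 0}
    for t in range(1, max_steps + 1):
        state = step(state)
        if state in seen:
            return t - seen[state]
        seen[state] = t
    return None

# Toy example: a 3-cycle stands in for a periodic greedy evolution.
toy_step = {"a": "b", "b": "c", "c": "a"}.get
print(is_periodic("a", toy_step))  # → 3
```

A random reduction algorithm would need a different notion, which is exactly the point of the modified definition above.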

Here is a proof that this is happening for the quine which you can load from the menu as “10 nodes quine 2 = [A-L-FI-FO 245013]”.

First, what does "= [A-L-FI-FO 245013]" mean? If you look in the menu, there is the option "new random 4 nodes graph A-L-FI-FO" which generates one of the 720 graphs with 4 nodes A, L, FI, FO. Then there are some examples of such graphs which evolve into quines, my favorite being the remarkable "quine from [graph A-L-Fi-FO 243501]".

Then “10 nodes quine 2 = [A-L-FI-FO 245013]” means that this particular 10 nodes graph evolves into the same quine as the one which is obtained from [graph A-L-Fi-FO 245013]. So there’s one quine. (For the notation [A-L-FI-FO 245013],  notice that 245013 is a permutation of 012345 which describes in which way the nodes A, L, FI, FO are connected. You can see how is done, and especially why there are 720 such graph, if you look in the molLib.js at the lines after

case random_egg_A_L_FI_FO
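As a quick sanity check of the counting (a sketch, not the molLib.js code itself): each candidate graph is labeled by a permutation of 012345, and there are 6! = 720 of them.

```python
# Each 4-node graph (nodes A, L, FI, FO) gets a code which is a
# permutation of the digits 012345; enumerating them gives 6! = 720.
from itertools import permutations

codes = ["".join(p) for p in permutations("012345")]
print(len(codes))          # → 720
print("245013" in codes)   # → True
print("243501" in codes)   # → True (the "remarkable" quine egg)
```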


The second quine can be obtained by a perturbation. Indeed choose “10 nodes quine 2 = [A-L-FI-FO 245013]” from the menu and start to perturb it with the mouse:


Then press “start”


That’s a different quine. Indeed, most of the time you get the first one which is small:


You can see it simply by doing "reload" and "start".

Only rarely you’ll see the bigger quine.

Other surprises await you, and me too.




Lafont’ quine online

UPDATE: Now you can search for Interaction Combinators quines among those generated from random 4-node graphs (2 GAMMA, 2 DELTA). They are all immortal, because there are no conflicts in IC. They seem to be not so rare; I quickly discovered 6. Also, because the IC rewrites have so many symmetries, they are less varied than chemlambda quines generated from 4-node graphs.


I discovered that ishanpm's js version of chemlambda does not give a darn about arrow orientations. So I added Interaction Combinators rewrites and nodes, and now you can see Lafont's quine online 🙂



So after you choose "Lafont's quine" from the menu, hit "load"; then you can either hover with the mouse to trigger rewrites, move the point of view and scale the graph with the mouse, use "step" to make 1 rewrite step, or use "start" and "stop" to let the program do it.

It is interesting that this is possible, and it is done by cleverly exploiting the mol file notation. Indeed, in this js version Ishan baptizes the ports

“left” , “out”, “right”

for any of the 3-valent nodes, and the information about node types (L, A, FI, FO, FOE) and the ports through which they are connected is sufficient to unambiguously identify the chemlambda rewrites.

That means the script does not care if you put an incorrect mol file in molLib.js (see here for the correct mol notation in chemlambda and hapax). It is sufficient that it executes the chemlambda rewrites correctly.

However, this leaves some room. I added node types GAMMA and DELTA, which are 3-valent, with the ports "left", "out", "right" of type "0" (that means "in", but who cares if this is never used later?). I use the node "T" from chemlambda as the node "EPSILON" from IC.

So the node with notation:

GAMMA 1 2 3

represents a gamma node in IC, with principal port 1 and the other ports 2 and 3. The same goes for DELTA.

A GAMMA-GAMMA rewrite is therefore, in mol notation terms, a transformation of the pattern:

GAMMA 1 2 3

GAMMA 1 4 5

into a pattern

Arrow 3 4

Arrow 2 5

What’s funny is that Ishan does not use Arrow rewrites either, because he executes at one time step only one rewrite, at random, so that means that instead a GAMMA-GAMMA rewrite as written before transforms into an empty pattern and a gluing of the remaining halves of the edges 2, 3, 4, 5 such that 3=4 and 2=5.

And so on.

So you have for the first time Lafont's quine online. And don't forget about the other quines discovered in chemlambda just by randomly shuffling the sources and targets of the 10-node chemlambda quine.


10-node quine can duplicate

Only recently did I become aware that sometimes the 10-node quine duplicates. Here is a screenshot, at real speed, of one such duplication.


You can see for yourself by playing find-the-quine, either here or here.

Pick from the menu "10_nodes quine" or "original 10 nodes quine" (depending on where you play the game), then use "start". When the quine dies, just hit the "reload" button. From time to time you'll see that it duplicates. Most of the time this quine is short-lived; sometimes it lives long enough to make you want to hit "reload" before it dies.



Find the quine: who ordered this?

I put a version of the Find the Quine on github. You may help (and have fun) to find new chemlambda quines.

The page lets you generate random 10-node graphs (molecules), which are variants of a graph called "10_quine". There are more than 9 billion different variants, so the space of all possibilities is vast.

Up to now, 15 new quine candidates have been found. You can see them on that page too.

New phenomena were discovered, to the point that now I believe that chemlambda quines are a dot in a sea of “who ordered this?”.

Who ordered this? Just look at "new quine? 3", a.k.a. "10 nodes quine 5". It displays an amazing range of outcomes. One of them is that it dies fast, but others appear rather frequently. The graph blooms not into a living creature, but more like into a whole ecology.

You can see several interesting pieces there.

There is a “growing blue tip” which keeps the graph alive.

There are “red spiders” who try to reach for the growing blue tip and eat it. But the red spiders sometimes return to the rest of the graph and rearrange it. They live and die.

There is a “blue wave” which helps the growing blue tip by fattening it.

There is a “bones structure” which appears while the red spiders try to eat the growing blue tip. It looks like the bones structure is dead, except that sometimes the red spiders travel back and modify the bones into new structures.

And there are also graphs which clearly are not quines, but they are extremely sensitive to the order of rewrites. See for example “!quine, sensitive 1”. There seems to be a boundary between the small realm of quines and the rest of the graphs. “new quine? 3” is on one side of that boundary and “!quine, sensitive 1” is on the other side.

So, play “Find the Quine” and mail me if you find something interesting!

On top of the page there is a link to my pages to play and learn. Mind that the versions of find the quine there and at github are slightly different, because I update them all the time and so they are not synchronized. I use github in order to have a copy just in case. In some places I can update the github pages, in other places I can update my homepage…



What is a chemlambda quine?

UPDATE 4: See Interaction combinators and Chemlambda quines.

UPDATE 3: I made a landing page for my pages to play and learn.

UPDATE 2: And now there is Fractalize!

UPDATE: The most recent addition to the material mentioned in the post is Find a Quine, which lets you generate random 10-node graphs (there are 9 billion of them) and search for new quines. They are rare, but today I found 3 (two of them are shown as examples). If you find one, mail me the code (instructions on the page).


The ease of use of the recently written chemlambda.js makes it easier to share past ideas (from the chemlambda collection) as well as new ones.

Here is some material and some new thoughts. Before this, just recall that the *new* work is in hapax. See what chemlambda has to do with hapax, especially towards the end.

A video tutorial about how to use the rest of new demos.

The story of the first chemlambda quine, deduced from the predecessor of a Church number. Especially funny is that this time you are not watching an animation, it happens in front of you 🙂

More quines and random eggs, if you want to go further into the subject of chemlambda quines. The eggs are 4-node graphs (there are 720 of them). They manifest an amazing variety of behaviour. I think the most interesting point is that there are quines, and there are also graphs which have a reversible evolution without being quines. Indeed, in chemlambda a quine is one which has a periodic evolution (thus is reversible) under the greedy algorithm of rewrites. But there is also the reversible but not quine case, where you can reverse the evolution of a graph by picking a sequence of rewrites.

Finally, if you want to look also at famous animations, you have the feed the quine. This contains some quines but also some other graphs which featured in the chemlambda collection.

Most of all, come back to see more, because I’m going to update and update…

Feed the quine!

The chemlambda.js version of chemlambda-v2 made by ishanpm makes it possible to understand how quines work in chemlambda. See this previous post about chemlambda.js.

So, now you may feed the quine.


UPDATE: Ishan made a github repository for his chemlambda.js. I suggest you follow his work if you are a chemlambda fan, there is big promise there.  The github page is live here. See if you can contribute to one of the issues at his repository, or to one of the issues at the chemlambda repository.

For me this chemlambda.js is pure gold, for many reasons: it does all the computation/visualization in one place, it may give a way to recover the work lost from the chemlambda collection, and it is a work of art. Not to mention that it will certainly be connected with hapax.

If you wonder why I don't update the chemlambda repo readme with his important contribution: I refrain from touching that repository. Probably a new one which forks all the contributions in one place is better. This autumn.

I’ve also made a second page “feed the quine“, it has presently more examples  (not anymore) and it is compatible es5. Both pages change very often, so at any point one may be more advanced than the other.


Play with quines in the chemlambda editor

ishanpm made a wonderful chemlambda editor prototype and I enjoyed playing with the 9_quine. It looks like this (screencast, real speed)



You can do the same, or other stuff! You need a mol file to input, for example I took the mol file of the 9_quine from here.  You can pick from lots of them (not guaranteed all of them work yet in the editor, especially those with “bb” in the name), from the chemlambda collection of molecules.

It is much, much better than the animation made by hand


I look forward to a convergence with hapax. It would be nice to make a game-like "feed the quine" where you have the "tokens" which make the reactions happen and you feed them with the mouse to the quine, etc.

UPDATE: this time I used bigpred-train and let it reduce automatically, which gives this psychedelic


Chemlambda and hapax

I wrote an expository text about chemlambda and hapax (and interaction combinators). You can see clearly there how hapax works differently and find, as well, a clear exposition of several conventions used, about the type of graphs and the differences in the treatment of rewrites.

Chemlambda and hapax

Please let me know if you have any comments.

What is the purpose of the project Hapax?

“hapax” means “only once” in ancient Greek. You may have seen it in the form hapax legomenon, quote: ” a word that occurs only once within a context, either in the written record of an entire language, in the works of an author, or in a single text”.

After a bit of research I found the (I hope) correct form that I use for this project:


It reads “hapax cheon” and it means, again in ancient Greek, “poured only once”.

Why this? Because, according to this wiki link, “the Greek word χυμεία khumeia originally meant “pouring together””.

The motivation of the project hapax comes from the realization that we have only explored a tiny drop in the sea of possibilities. As an example, just look at lambda calculus, one of the two pillars of computation. Historically there are many reasons to consider lambda calculus something made in heaven, or a platonic ideal.

But there are 14400 = 5! × 5! alternatives to the iconic beta rewrite alone. Is the original beta special or not?
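The count quoted above is just arithmetic: two independent choices of a permutation of 5 elements.

```python
# 14400 alternatives to the beta rewrite: 5! choices on each side.
from math import factorial

print(factorial(5) * factorial(5))  # → 14400
```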

By analogy with the world of CA, about a third of cellular automata are Turing universal. My guess is that a significant fraction of the alternatives to the beta rewrite are as useful as the original beta.

When we look at lambda calculus from this point of view, we discover that all the possible alternatives, not only of beta, but of the whole graph rewriting formalism, say in the form of chemlambda, count a huge number, like 10^30 in the most conservative estimates.

Same for interaction combinators. Same for knot theory. Same for differential calculus (here I use em).

I started to collect small graph rewrite systems which can be described with the same formalism.

The formalism is based on a formulation which uses exclusively permutations (for the “chemistry”  and Hamiltonian mechanics side) and a principle of dissipation which accounts for the probabilistic side.

The goal of the project hapax is to build custom worlds (physics and chemistry)

“poured only once”

which can be used to do universal computation in a truly private way. Because once the rules of computation are private, the whole process of computation becomes incomprehensible.

Or is it so? Maybe yes, maybe not. How can we know, without trying?

That is why I started to make the hapax stuff.

For the moment there is not much, only demos like this one, but the rest will pass from paper to programs; then we'll play.


Chemlambda collection samples

UPDATE: Chemlambda collection of animations is the version of the collection hosted on github. The original site is under very heavy traffic (in Jan 2020). Small images, about 1/2 of the collection, due to memory limitations. But you can play the simulations in js!

I started a page of chemlambda collection samples. I would rather put videos like this one (I have hours of them), which I used in this presentation.

However soon there will be a playable chemlambda thing 🙂


Hapax chemlambda

Chemistry is a game with a pair of dice.

You roll two dice and act. The dice are permutohedra.

Which leads one to ask what is so special about certain chemistries (artificial or real). The conjecture is that (probabilistically speaking) a sizeable proportion of them are special.

For example, we can evade lambda calculus by choosing one of the 14400 rewrites (β with random right patterns).

Hapax chemlambda!

How time flows: Gutenberg time vs Internet time

Based on  a HN comment, I made a page which proposes the hypothesis:

(Δ t-historic) = (Δ t-today)^(log 5/ log 2)


where the Δ t-historic is the time in decades from the invention of the printing press and Δ t-today is the time in decades from the opening of the ARPANET.

A collection of interesting correspondences is given, as well as some predictions, if this hypothesis is to be taken seriously.

The page has a small JS script for a calculator from t-historic to t-today, so you can easily find new correspondences if you like the game. Please let me know in case you do.
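For readers without the page at hand, a Python equivalent of the calculator is easy to sketch. The anchor years here (printing press ~1440, ARPANET 1969) are my assumptions for illustration; the page may parametrize them differently:

```python
# Sketch of the Gutenberg/Internet time correspondence:
# (delta t-historic) = (delta t-today)^(log 5 / log 2), in decades.
# Anchor years are assumptions: printing press ~1440, ARPANET 1969.
from math import log

PRESS_YEAR, ARPANET_YEAR = 1440, 1969
EXPONENT = log(5) / log(2)  # ≈ 2.32

def historic_year(today_year):
    """Map a year of the Internet era to a year of the Gutenberg era."""
    dt_today = (today_year - ARPANET_YEAR) / 10.0  # decades since ARPANET
    dt_historic = dt_today ** EXPONENT             # decades since the press
    return PRESS_YEAR + 10.0 * dt_historic

print(round(historic_year(2019)))  # 5 Internet decades after ARPANET
```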

UPDATE: There is now a very amusing python3 script by 4lhc, at this gist. It lets you enter a year, recent or old, and then it proposes two events, one from the old time and one from the recent time. I played with it on my computer and it's just cute!

[I had to install the wikipedia module and then the correct command is

“python3 The_Gutenberg_Internet_analogy.py”

wait a short moment and get the pair of events!]



Stick-and-ring graphs (I)

Until now the thread on small graph rewrite systems (last post here) has been about rewrites on a family of graphs which I call "unoriented stick-and-ring graphs". The page on small graph rewrite systems contains several formalisms; among them IC2, SH2 and system X are on unoriented stick-and-ring graphs, and chemlambda strings is with oriented edges. Emergent algebras and Interaction Combinators are with oriented nodes. Pseudoknots are stick-and-ring graphs with oriented nodes and edges.

In this post I want to make clear what unoriented stick-and-ring graphs are, with the help of some drawings.

Practically, an unoriented stick-and-ring graph is a graph with colored nodes of valence 1, 2 or 3, which admits edges with both ends on the same node. We imagine that the nodes have 1, 2 or 3 ports and any edge between two nodes joins a port of one with a port of the other. Additionally, we accept loops with no nodes, and moreover any 3-valent node has a marked port.


If we split each 3-valent node into two half-nodes, one of them with the marked port, the other with the remaining two ports, then we are left with a collection of disjoint connected graphs made of 1-valent or 2-valent nodes.


These graphs can be either sticks, i.e. they have 2 ends which are 1-valent nodes, or they can be rings, i.e. they are made entirely of 2-valent nodes.
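Once the 3-valent nodes are split, the stick/ring classification is a simple valence count per connected component. Here is a hedged Python sketch; the adjacency-list encoding is a hypothetical convenience, not a fixed format from the posts:

```python
# After splitting, only 1- and 2-valent nodes remain, so each connected
# component is a "stick" (exactly two 1-valent ends) or a "ring" (all
# nodes 2-valent).

def classify_components(adj):
    """adj: dict node -> list of neighbours (valence 1 or 2 only)."""
    seen, kinds = set(), []
    for start in adj:
        if start in seen:
            continue
        # collect the connected component by a simple traversal
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(adj[n])
        seen |= comp
        ends = sum(1 for n in comp if len(adj[n]) == 1)
        kinds.append("stick" if ends == 2 else "ring")
    return kinds

# one stick a-b-c and one ring d-e-f
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"],
       "d": ["e", "f"], "e": ["d", "f"], "f": ["d", "e"]}
print(sorted(classify_components(adj)))  # → ['ring', 'stick']
```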


It follows that we can recover our initial graph by gluing the stick ends onto other sticks or rings. We use dotted lines for gluing in the next figure.


A drawing of an unoriented stick-and-ring graph is an embedding of the graph in the plane. Only the combinatorial information matters. Here is another depiction of the same graph.



Fold rewrite, dynamic DNA material and visual DSD

As happened with chemlambda programs, I decided it is shorter to take a look myself at possible physical realizations of chemlambda than to wait for other people, whether uninterested or, really, very interested.

Let me recall a banner I used two years ago


It turns out that I know exactly how to do this. I contacted Andrew Phillips, in charge of Microsoft's Visual DSD, with the message:

Dear Andrew,

I am interested in using Visual DSD to implement several graph-rewriting formalisms with strand graphs: Lafont Interaction Combinators, knots, spin braids and links rewrite systems, my chemlambda and emergent algebra formalisms.

AFAIK this has not been tried. Is this true? I suggest this in my project chemlambda but I don’t have the chemical expertise.

About me: geometer working with graph rewrite systems, homepage: http://imar.ro/~mbuliga/index.html or

Some links (thank you for a short reply acknowledging reception of the message):

– github project: https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md
– page with more links: http://imar.ro/~mbuliga/chemlambda-v2.html
– arXiv version of my Molecular computers article https://arxiv.org/abs/1811.04960

Emergent algebras:
– em-convex https://arxiv.org/abs/1807.02058


I still wait for an answer, even if Microsoft's Washington Azure and Google Europe immediately loaded the pages I suggested in the mail.

Previously, I was notified by somebody [if you want to be acknowledged then send me a message and I'll update this] about Hamada and Luo, Dynamic DNA material with emergent locomotion behavior powered by artificial metabolism, and I sent them the following message:

Dear Professors Hamada and Luo,

I was notified about your excellent article Dynamic DNA material with emergent locomotion behavior powered by artificial metabolism, by colleagues familiar with my artificial chemistry chemlambda.

This message is to make you aware of it. I am a mathematician working with artificial chemistries and I look for ways to implement them in real chemistry. The shortest description of chemlambda is: an artificial chemistry where the chemical reactions are akin to a Turing complete family of graph rewrites.

If such a way is possible then molecular computers would be not far away.

Here is a list of references about chemlambda:

– GitHub repository with the scripts https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md
– page which collects most of the resources http://imar.ro/~mbuliga/chemlambda-v2.html

Thank you for letting me know if this has any relation to your interests. For my part I would be very thrilled if so.

Best regards,
Marius Buliga

Again, it seems that these biology/chemistry people have problems with replying to mathematicians, but all this makes me more happy, because soon I'll probably release instructions about how everybody could make molecular computers along the lines of Molecular computers.

I’ll let you know if there are future “inspiration” work. Unrelated to chemlambda, there are several academic works which shamelessly borrow from my open work without acknowledgements, I’ll let you know about these and I’ll react in more formal ways. I hope though this will not be the case with chemlambda, however, this happened before twice at least.  (I say nothing about enzymes/catalysts, category theory and cryptocurrencies… for the moment.)

Finally, here is a realization of the lambda calculus beta rewrite via a FOLD rewrite


which shares a relation with the ZIP rewrite from Zipper Logic. It seems I was close to reality; now, though, I got it exactly 🙂 .

Let’s talk soon!




Small graph rewrite systems (5)

Here are some more tentative descriptions of system X and a play with the trefoil knot. This post comes after the intermezzo and continues the series on small graph rewrite systems.

Recall that system X is a proposal to decompose a crossing into two trivalent nodes, which transforms a knot diagram into an unoriented stick-and-ring graph.


The rewrites are the following, written both with the conventions from the stick-and-ring graphs and also with the more usual conventions which resemble the slide equivalence or spin braids mentioned in the intermezzo.

The first rewrite is GL (glue), which is a Reidemeister 1 rewrite in only one direction.


The second rewrite is RD2, which is a Reidemeister 2 rewrite in one direction.


There is a DIST rewrite, the kind you encounter in interaction combinators or in chemlambda.


And finally there are two SH rewrites, the patterns as in chemlambda or appearing in the semantics of interaction combinators.



One Reidemeister 3 rewrite emerges from these, as explained in the following figure (taken from the system X page).


Let’s play with the trefoil knot now. The conversion to stick-and rings


is practically the Gauss code. But when we apply some sequences of rewrites


we obtain more complex graphs, where

  • either we can reverse some pairs of half-crossings into crossings, thus we obtain knotted Gauss codes (?!)
  • or we remark that we get fast out of the Gauss codes graphs…

thus we get a sort of recursive Gauss code.

Finally, remark that any knot diagram has a ring in it. Recall that lambda terms translated to chemlambda don't have rings.

An example of “Official EU Agencies Falsely Report More Than 550 Archive.org URLs as Terrorist Content”

Today I read Official EU Agencies Falsely Report More Than 550 Archive.org URLs as Terrorist Content.  Two comments on this.

1. It happened to me in Feb 2019. I archived one of my stories from the chemical sneakernet universe. The original story is posted on telegra.ph. Here is the message which appeared when I checked the archived link:


What? I contacted archive.org and got an answer from the webmaster, pretty fast. The problem was with telegra.ph, not with my link in particular. Now the archived link is available.

After I sent the message to archive.org but before I received the answer, I searched for a way to contact EU IRU, to ask what the problem might be. I was unable to identify any such way. However, there was a way to send a message to EU officials, who might redirect my message to whom it may concern. It worked, but it took longer than the time needed by the archive webmaster to respond and unblock the link. I have not been contacted since.

2. As you see in the post from archive.org, EU IRU was not the institution which sent the blocking orders. But never mind: how can one try to block arXiv articles? This reminded me of a very recent story: Google Scholar lost my Molecular computers arXiv article. As the article is on the same subject as the story from point 1, I wonder if by any (mis)chance Google Scholar received a blocking order.

System X, semantic pain and disturbing news to some

This is a temporary post. Soon some news will come, disturbing for some. This is just to entertain you with System X, a small graph rewrite system proposed as a replacement for slide equivalence. Here is some prose I wrote while trying to understand 3 tiny graphic beta rewrites. This qualifies as semantic pain, but it was a very good exercise because it gives ideas (to those prone to have them, as opposed to those who lack personal ideas and take them without acknowledgement).

Small graph rewrite systems (4)

This post follows Problems with slide equivalence. A solution is to replace slide equivalence with System X.

This requires changing the decomposition of a crossing, like this:


I let you discover system X (or I will update later), but here I want to show you that the Reidemeister 3 rewrite looks like this:


There is now a page dedicated to small graph rewrite systems and stick-and-ring graphs.


Google Scholar lost my molecular computers

Today I noticed that my Molecular computers article arXiv:1811.04960 is replaced by Google Scholar with the unrelated article  Defining Big Data Analytics Benchmarks for Next Generation Supercomputers, arXiv:1811.02287. I’m not an author of that article.

Screenshot from 2019-04-07 21:52:00


Is a cosmic ray the cause?

Google search can still find it, but Google Scholar gives the wrong result.

UPDATE: I added the article by hand, but the link to the source (i.e. the arXiv article) is not present. How can they lose arXiv articles? Or more precisely, arXiv e-prints; nowhere does arXiv use the name "preprint arXiv". Maybe Google Scholar merged with legacy publishers, who knows, these days…

Do you experience errors in Google Scholar?

Problems with slide equivalence

UPDATE: System X is a solution.


After the Intermezzo, in this post I’ll concentrate on the slide equivalence for unoriented (virtual) links, as defined in L.H. Kauffman, Knots and Physics, World Scientific 1991, p. 336.



Later on we shall propose a small graph rewrite system which is different from this, but we first need to understand that there are some problems with slide equivalence.

Kauffman rule I’ is half a definition, half a rewrite rule. He gives two decompositions of a crossing into two 3-valent nodes. The rewrite is that we can pass from one decomposition to the other.

Problem 1. How many types of 3-valent nodes are used? My guess is just one.


Problem 2. Is the rule II’ needed at all? Why not use instead the rule III’, with the price of a loop:


Problem 3. Is the rule I’ too strong? Maybe, look at the following configuration made of two crossings.


Neighboring crossings disappear.

We don’t even need two neighboring crossings. In the next figure I took the left pattern from the rule IV’, first part. It is also a pattern where the rules I’, then III’ apply.


The result is very different from the application of IV’.

The same happens for the right pattern of the rule IV’, first part.


We can use again I’ and III’ to obtain a very different configuration than expected.


Conclusion.  Under a “dumb” algorithm of rewrite application, the slide equivalence rewrites behave differently than expected. By “dumb” I mean my favorite algorithms, like greedy deterministic or random.

Used with intelligence, the slide equivalence rewrites have interesting computational aspects, but where would the “intelligent” algorithm come from? Kauffman brains are rare.



Intermezzo (small graph rewrite systems)

Between the last post on small graph rewrite systems and a new one to follow, here are some other, real world examples of such systems.

Where is this from? Answer: M. Khovanov,   New geometrical constructions in low-dimensional topology, preprint 1990, fig. 20


Where is this from? Answer: L.H. Kauffman, Knots and Physics, World Scientific 1991, p. 336.


How can we put this in order?

Small graph rewrite systems (3)

Previous posts on the same subject are (1) (2). Related is this post.

In this post I update the small rewrite system SH-2.1 to SH-2.2.  If you look at SH-2.1, it has 3 rewrites: SH, GL and RM.

None of these rewrites allow two “sticks” to merge or one stick to transform into a ring.

Compare with the interaction combinators inspired IC-2.1, with the rewrites DIST, RW1 and RW2. True, that system is too reactive, but it has one rewrite, namely RW2, which allows two sticks to merge.

A rewrite which has this property (sticks merge) is essential for computational purposes. The most famous such rewrite is the BETA rewrite of lambda calculus, or the \gamma \gamma and \delta \delta rewrites from interaction combinators:


(figure from Lafont article).

In the oriented sticks and rings version of chemlambda we have the rewrites BETA (or A-L) and FI-FOE, with the same property.



We shall modify therefore one of the rewrites from SH-2.1.

The SH-2.2 system

We keep the rewrites SH and GL from the SH-2.1 system:




and we replace the rewrite RM with the new rewrite R2:


The new rewrite R2 needs a ring!

Let’s show that SH-2.2 is better than SH-2.1. All we need is to be able to perform the rewrite RM from SH-2.1 within SH-2.2. Here it is.


Mind that the ring from the upper right graph is not the same as the ring from the bottom graph. Indeed, in the rewrite R2 the ring from the bottom is consumed, and a new ring appears from the merging of the ends of the stick with two blue nodes which sits on top of the other stick with two yellow ends in the bottom graph.

Compared with the original RM rewrite


we have an extra ring at the left and at the right of the rewrite RM, as it appears in SH-2.2. In other words, the ring plays the role of an enzyme.






Data, channels, contracts, rewrites and coins

We can think about all these in terms of dimension.

Data are  like points, 0 dimensional.

Channels (of communication) are 1-dimensional. They are the fundamental bricks of IT. They are not enough, though.

Contracts are 2-dimensional. A contract is an algorithm over channels.

As an example, here’s how we can turn a graph rewrite system in the  sticks and rings version  into a contract-like one.

For the (oriented) sticks and rings version of chemlambda see this. Two small rewrite systems involving non-oriented sticks are described here  and there.

The closest algorithm relevant for this post is the one from needs. I shall mention in this post names from there, like “succ”, “ccus”, “gamma” and “ammag”.

We can describe a sticks and rings graph with two invertible functions. Each node of a trivalent graph is decomposed into two half-nodes. Indeed, cut a trivalent node in half: you get a half-node with one half-edge (this is a type I half-node) and another half-node with two half-edges (this is a type II half-node). Each stick has two ends which are type I half-nodes: it is a list which starts and ends with type I half-nodes and otherwise contains only type II half-nodes. A ring is a circular list made of type II half-nodes.

Succ and its inverse ccus is the function (permutation) which, for any half-node from a stick or ring, gives the successor half-node, if any (or the predecessor half-node, in the case of ccus).

Gamma and its inverse ammag is the function (permutation) which pairs each type I half-node with its type II other half-node.
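As a toy illustration, here is a sketch in Python of this two-permutation description. The half-node names and the little example stick are invented for illustration; only the names succ, ccus, gamma, ammag come from the needs algorithm mentioned above.

```python
# A sticks-and-rings graph stored as two invertible maps over half-nodes.
# Half-node names are invented: type I half-nodes sit at stick ends,
# type II half-nodes in the interior.

# One stick: type I ends "a1", "d1"; type II interior half-nodes "b2", "c2".
succ = {"a1": "b2", "b2": "c2", "c2": "d1"}   # successor along the stick
ccus = {v: k for k, v in succ.items()}        # its inverse: the predecessor

# gamma pairs each type I half-node with the type II half of the same
# trivalent node (sitting on some other stick); treated as an involution,
# so ammag coincides with gamma in this sketch.
gamma = {"a1": "x2", "x2": "a1", "d1": "y2", "y2": "d1"}
ammag = gamma

assert all(ccus[succ[h]] == h for h in succ)      # succ and ccus are inverse
assert all(gamma[gamma[h]] == h for h in gamma)   # the pairing is involutive
```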

The new thing here is that we shall think about half-nodes as names of channels in an asynchronous pi-calculus.

In the needs algorithm the rewrites (which are  conservative in the number of half-nodes) are small permutations of half-nodes. The gamma-ammag function is passive, in the sense that it never changes.   The rewrites are  random.

In the version I   propose here   each half-node is a unidirectional channel which can be used to communicate other channels names and some data. In the case of the graph rewrite systems we discuss here the other data is the color of the (half-)node.

In the case of chemlambda strings we  need a bit for the half-node type and 2 bits for the colors. As a (half) tongue-in-cheek I used DNA or RNA like encoding, like in this screen casting, to turn a mol file into the sticks and rings version.


In the version proposed here we replace the algorithm of random application of a rewrite with an algorithm which involves only communication via the channels associated with the half-nodes.

Here is the description of an SH rewrite


Each stick is the property of somebody (A, B, C, D, …), say an actor. A stick is a list of channel names. So the data (0-dim) are the channel names, organized in lists which are managed by actors.

Actors can send messages through channels and also they can edit their list.

A rewrite, here the SH, is a contract, i.e. an algorithm which describes how to achieve the rewrite via communication and editing of lists, such that each actor can locally execute its part.

But how can we  read  such a contract? In many ways, because the formalism is so general. For example: B and D exchange C in the place A, witnessed by the notary node e1.

Then what can be the pairs  yellow-blue and blue-blue? Recall that originally the SH rewrite is


Well, coins? Or a coin (yellow-blue) and gas (blue-blue)? Or receipts? Imagination is the limit and all of it can be made in practice. If you are interested in making it real, contact me.


Small graph rewrite systems (2)

I continue from the last post on small graph rewrite systems. Let’s see some more details about the SH-2.1 system.

The last post ends with a realization of the DIST rewrite in SH-2.1. We can do better than that, by showing that the DIST rewrite is reversible:




As you see, the two pairs yellow-blue and blue-blue play the role  of enzymes.

Another two interesting reactions are the following:


So the “yellow ends” duplicate, with a pair blue-blue as an enzyme.



The supplementary “blue ends” prune themselves, in a sort  of duality  with the “yellow ends”. This time there is a yellow-blue pair enzyme.


Small graph rewrite systems (I)

What happens if we use the sticks and rings description of  oriented fatgraphs, like in this post, but we drop the orientation? Moreover, what if we use as few colors as  possible and as few rewrites as possible?

For the sticks and rings version of chemlambda see this.

If we have oriented edges then the sticks and rings image is equivalent to the usual oriented trivalent fatgraph. But if we drop the edge orientation something interesting happens. The trivalent nodes become invertible. Indeed, take for example, with the notations from chemlambda, a node as seen in a mol file:

A 1 2 3

It means that we have a node “A” (i.e. application) with a left.in port named “1”, a right.in port named “2” and an out port named “3”. To get a more precise idea, think about “1” as a term “T_1”, about “2” as a term “T_2” and about “3” as the term “T_1 T_2”.

In the sticks and rings version there is an edge which connects “1” and “3”, which is perhaps part  of a stick (which has two ends) or a ring (which has none). “1” and “3” appear as marks on that stick (or ring) and the stick (or ring) has an orientation so that the successor of “1” is “3”.

Another stick, which ends with the mark “2” and the node “A”, is glued between the marks “1” and “2”.

For a node like

FO 1 2 3

(i.e. a fanout) the oriented stick passes from “1” to “3” but this time the second stick starts with the node “FO” and the mark “2”.

Now, if we drop the stick orientations, we can no longer discern between, say, “A 1 2 3” and “A 3 2 1”. As an expression which depends on the port “2”, we can go from “1” to “3” as easily as we go from “3” to “1”, so it looks invertible.
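To fix ideas, here is a toy Python sketch (all names invented, not actual chemlambda code) which reads mol nodes like “A 1 2 3” into the two kinds of links described above, ignoring the orientation of the hanging stick:

```python
# Read mol nodes like "A 1 2 3" into two kinds of links: the stick passing
# through the node (left.in -> out) and the second stick hanging at the
# right.in port. Toy code with invented names; the hanging stick's own
# orientation (the A vs FO difference) is ignored, as orientation is dropped.
def sticks_links(mol_lines):
    along = {}   # successor along the through-stick: left.in port -> out port
    hangs = {}   # right.in port -> name of the node where a second stick ends
    for line in mol_lines:
        node, left, right, out = line.split()
        along[left] = out
        hangs[right] = node
    return along, hangs

along, hangs = sticks_links(["A 1 2 3", "FO 4 5 6"])
print(along)  # {'1': '3', '4': '6'}
print(hangs)  # {'2': 'A', '5': 'FO'}
```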

As a first try, let’s see how a non-oriented sticks and rings version of Lafont interaction combinators looks. We need only two colors, to discern between the \gamma and \delta combinators. We shall not use the combinators with only one port.

The IC-2.1 system

The \gamma \delta  rewrite will be like a DIST rewrite from chemlambda, only unoriented.



Then, the \gamma \gamma looks like this


and finally the \delta \delta may be seen like this


As you see all rewrites are made conservative in the number of nodes, by the addition of supplementary 2-nodes sticks, call them “pairs” from now.


Now we have some problems:

  • the RHS of the DIST rewrite contains the pattern from the LHS if we add another yellow-blue pair. That is bad, because it means we can continue the same rewrite indefinitely if we have enough yellow-blue pairs at our disposal
  • practically almost any sticks and rings graph is extremely reactive, because any combination of nodes colors which are neighbours on a stick or ring will trigger a rewrite. Question: which are the graphs which are fully reduced?
  • if we look back to Lafont interaction combinators, then we see that our system has more rewrites than the original. Indeed, that is because the non-oriented sticks and rings image is ambiguous, not equivalent to the interaction combinators. This explains the abundance of patterns for reduction.


The SH-2.1 system

Let’s try another rewrite system, non-oriented sticks and rings and two colors. We’ll take the shuffle trick rewrite as basic, this time:


Then we add a “glue” rewrite


and a “remove” rewrite



Now we are in the realm of emergent algebras, with the yellow node as a generic dilation and the blue node as a fanout (more about this later). We can do lots of funny things with this small system, for example we can do a DIST:





There is a remarkable behaviour here. Look at the pair blue-blue, you have it at the left of the “simulated” DIST and at the right of it.

The pair blue-blue behaves like an enzyme!

[Continues with this post.]

9-quine string animation

I use the chemlambda strings version to show  how the 9-quine works. [What is a quine in chemlambda? See here.]





The 9-quine is the smallest quine in chemlambda which does not have a termination node. There exist smaller quines if the termination node is admitted, for example the chemlambda equivalent of a quine from Interaction Combinators which appears in Lafont’s foundational article.

As you see this version is conservative and there are no enzymes.

I shall come back with a post which explains why and how the 9-quine dies. It is of course due to the conflicts in chemlambda, see the examples from the page on chemlambda v2.

What I do according to ADS search

ArXiv links to the Astrophysics Data System, which got a new fancy look. It may be a bit heavy; as a supporter of the wonderful arXiv I would rather applaud if they allowed me to put articles with animations inside, even if only animated gifs. But it is nevertheless interesting.

So if I go to my arXiv articles, choose an article and then click on the NASA ADS link in the right panel, I get this page. Funny that they don’t use the Journal Reference from arXiv to decide which article is “refereed”, i.e. peer reviewed, even if peer review is less than validation.

I am very pleased though   about the visual representation of what I do, as seen from the arXiv articles.


This is the image which tells how many articles I have on certain keywords, as well as links between keywords which are proportional to the number of articles which fit a pair of keywords.

TBH this is the first time a neutral bibliometric system  shows an accurate image of my work.

The darker blue sector, which has no words on it is related to variational methods in fracture, Mumford-Shah and convexity articles.

The same picture, but according to the downloads in the last 90 days, is this one.


This is also very satisfying because the hamiltonian/information/… has a big future. For the moment it looks unrelated to the other sectors, but wait for the kaleidos project 🙂

The em-convex rewrite system, where I guess I found the equivalent of the Church numbers for space, is in the dilatation structures/…/selfsimilar sector. In my opinion, important subject.

Lambda calculus inspires experiments with chemlambda

In the salvaged collection of google+ animations (see how  the collection was deleted ) I told several stories about how lambda calculus can bring inspiration for experiments with chemlambda. I select for this post a sequence of such experiments. For previous related posts here see this tag and this post.

Let’s go directly to the visuals. (UPDATE: or see the story of the ouroboros here)

Already in chemlambda v1 I remarked the interesting behaviour of the graph (or molecule) which is obtained from the lambda term of the predecessor applied to a Church number.  With the deterministic greedy algorithm of reductions, after the first stages of reduction there is a repeating pattern of  reduction, (almost) up to the end. The predecessor applied to the Church number molecule looks almost like a closed loop made of pairs A-FO (because that’s how a Church number appears in chemlambda), except a small region which contains the graph of the predecessor, or what it becomes after few rewrites.

In chemlambda v2 we have two kinds of fanouts: FO and FOE. The end result of the reduction of the same molecule, under the same algorithm, is different: where in chemlambda v1 we had FO nodes (at the end of the reduction), now we have FOE nodes. Otherwise it is the same phenomenon.
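For reference, the lambda calculus side can be checked directly. Below is the standard Church-numeral predecessor, written as ordinary Python lambdas; this is plain lambda calculus, not chemlambda:

```python
# Church numerals as ordinary Python lambdas (plain lambda calculus).
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Kleene's predecessor trick: iterate (a, b) -> (b, b + 1) starting from
# (0, 0), then take the first component.
pair = lambda a: lambda b: lambda g: g(a)(b)
fst = lambda p: p(lambda a: lambda b: a)
snd = lambda p: p(lambda a: lambda b: b)
pred = lambda n: fst(n(lambda p: pair(snd(p))(succ(snd(p))))(pair(zero)(zero)))

to_int = lambda n: n(lambda k: k + 1)(0)   # decode a Church numeral
three = succ(succ(succ(zero)))
print(to_int(pred(three)))  # prints 2
```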

Here it is, with black and white visuals


Made by recording of this live (js) demo.

1. What happens if we start not from the initial graph, but from the graph obtained after a small number of rewrites? We just have to cut the “out” root of the initial graph, and some nodes from its neighbourhood, and glue back, so that we obtain a repeating pattern walking on a circular train track.

Here it is, this time with the random reduction algorithm:


I previously called this graph an “ouroboros”. Or a walker.

2. That is interesting: it looks like a creature (it can keep its “presence”) which walks in a single direction in a 1-dimensional world. What could be the mechanism?

Penrose comes to mind, so in the next animation I also use a short demonstration from a movie by Penrose.



3. Let’s go back to the lambda calculus side and recall that the algorithm for the translation of a lambda term to a chemlambda molecule is the same as the one from GLC, i.e. the one from Section 3 here. There is a freedom in this algorithm, namely that trees of FO nodes can be rewired as we wish. From one side this is normal for GLC and chemlambda v1, which have the CO-COMM and CO-ASSOC rewrites


In chemlambda v2 we don’t have these rewrites at all, which means that in principle two different molecules, obtained from the same lambda term, which differ only by the rewiring of the FO nodes, may reduce differently.

In our case it would be interesting to see if the same is true for the FOE nodes as well. For example, remark that the closed loop, excepting the walker, is made by a tree of FOE nodes, a very simple one. What happens if we perturb this tree, say by permuting some of its leaves, i.e. by rewiring the connections between FOE and A nodes?


The “creature” survives and now it walks in a world which is no longer 1 dimensional.

Let’s play more: two permutations, this time let’s not glue the ends of the loop:


It looks like a signal transduction from the first glob to the second. Can we make it more visible, say by making the old nodes invisible and the new ones visible? Also, let’s fade the links by making them very large and almost transparent.


Signal transduction! (Recall that we don’t have a proof that two molecules obtained from the same lambda term, but with rewired FO trees, reduce to the same molecule; actually this is false! and true only for a class of lambda terms. The math of this is both fascinating and somehow useless, unless we either use chemlambda in practice or we build chemlambda-like molecular computers.)

4.  Another way to rewire the tree of FOE nodes is to transform it into another tree with the same leaves.



5. Wait: if we understand how exactly this works, then we realize that we don’t really need this topology; it should also work for topologies like generalized Petersen graphs, for example for a dodecahedron.



This is a walker creature which walks in a dodecahedral “world”.

6. Can the creature eat? If we put something on its track, let’s see if it eats it and if it modifies the track, while keeping its shape.


So the creature seems to have a metabolism.

We can use this for remodeling the world of the creature. Look what happens after many passes of the creature:



7. What if we combine the “worlds” of two creatures, identical otherwise? Will they survive the encounter, will they interact, or will they pass one through the other like solitons?



Well, they survive. Why?

8. What happens if we shorten the track of the walker as much as possible? We obtain a graph with the following property: after one (or a given finite number of) steps of the greedy deterministic algorithm we obtain an isomorphic graph. A quine! A chemlambda quine.

At first it looks like we obtained a 28-node quine. After some analysis we see that we can reduce it to a 20-node quine. A 20-quine.

Here is the first observation of the 20-quine under the random algorithm


According to this train of thought, a chemlambda quine is a graph which has a periodic evolution under the greedy deterministic algorithm, with the priority list of rewrites set so that DIST rewrites (which add nodes) have greater priority than beta and FI-FOE rewrites (which subtract nodes), and which does not have termination nodes (because they lead to some trivial quines).
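The greedy deterministic algorithm with a priority list can be sketched generically. The toy string-rewriting rules below are placeholders standing in for the chemlambda graph rewrites, keeping only the “adds nodes before subtracts nodes” priority:

```python
def greedy_reduce(state, rules, max_steps=100):
    """Greedy deterministic reduction: at each step apply the first rule,
    in priority order, whose pattern matches; stop when none applies."""
    trace = [state]
    for _ in range(max_steps):
        for name, match, apply_ in rules:
            if match(state):
                state = apply_(state)
                trace.append(state)
                break
        else:
            break  # no rule applies: fully reduced
    return trace

# Toy string rules: "D" stands in for a node-adding rewrite (like DIST),
# "bb" for a node-subtracting one (like beta); DIST has higher priority.
rules = [
    ("DIST", lambda s: "D" in s, lambda s: s.replace("D", "bb", 1)),
    ("BETA", lambda s: "bb" in s, lambda s: s.replace("bb", "", 1)),
]
print(greedy_reduce("aDa", rules))  # ['aDa', 'abba', 'aa']
```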

These quines are interesting under the random reduction algorithm, which transforms them into mortal living creatures with a metabolism.


So this is an example of how lambda calculus can inspire chemlambda experiments, as well as interesting mathematical questions.

Google translate helps the scholarly poor

Do you know what “scholarly poor” means? I saw this formulation some time ago and it made me ask: am I scholarly poor?

You find this expression in the writings of those who praise Gold Open Access, or in the articles which try to understand the Sci-Hub phenomenon.

Recall that Gold OA means practically that authors pay to publish from funds they receive for research. It’s all in the language: Green OA is not for publication, no sir! Green OA is for archiving. Gold OA is for publication and it may incur costs, you see, which may be covered by the authors. (The readers can no longer be forced to pay, so who’s left?) And the authors pay, not from their pockets, because they are not crazy rich to create and moreover to pay thousands of $ to publish their article. They pay from the funds they receive for research, because their bosses, the academic managers, ask them to. These academic managers just love the publishers, be they the traditional ones or this new modern Gold OA blend. They don’t like Green OA; there’s no money involved, pooh! no value.

Sci-Hub made available practically any scientific article, therefore there is no longer any difference between an article published gratis, but behind a paywall, and an article published for 2000$ and free to read. Both are as easily accessible. IANAL but this is the reality of the world we are living in.

This reality upsets the Gold OA proponents, so they use this expression “scholarly poor” to denote those scholars which don’t have institutional access to the paywalled articles. Because Gold OA proponents love academic managers who are not poor, they ignore the reality that the researchers, in poor or rich (crazy?) academic institutions, all of them would rather read either from Green OA (like arXiv) or from Sci-Hub or from their colleagues who put online their work.

In itself, to name a researcher “scholarly poor” is distasteful.

But Google comes to the rescue! When I first saw this expression I was curious how it translates to French, for example, another language I understand.



Thank you Google Translate! And HAHA. And so poetical!

I checked again, today, when I decided to write this post. I recorded myself using the translate:



Yes, OK, a bit more bland, less poetical, but more  comical for the public at large.

So right, though!





A project in chemical computing and Lafont universality

The post Universality of interaction combinators and chemical reactions ends with the idea that Lafont universality notion, for interaction systems, may be the right one for chemical computing.

These days are strange, everyone comes with some call from one of my old projects. (About new ones soon, I have so many things.) Today is even more special because there were two such calls. One of them was from what I wrote in the A project in chemical computing page from April 2015. It ends with:

    If you examine what happens in this chemical computation, then you realise that it is in fact a means towards self-building of chemical or geometrical structure at the molecular level. The chemlambda computations are not done by numbers, or bits, but by structure processing. Or this structure processing is the real goal!
     Universal structure processing!

There is even this video about an Ackermann function molecular computer I forgot about.

The idea is that the creation of a real molecule to compute Ackermann(2,2) would be the coolest thing ever made in chemical computing. If that is possible, then an Ackermann goo made from Ackermann(4,4) would be possible as well:


In Graphic lambda calculus and chemlambda (III) I comment again on Lafont:

    • Lafont universality property of interaction combinators means, in this pseudo-chemical sense, that the equivalent molecular computer based on interaction combinators reactions (though not the translations) works for implementing a big enough class of reactions which are Turing universal in particular (Lafont shows concretely that he can implement Turing machines).


In the series about Lafont interaction combinators and chemlambda (1) (2) (3), as well as in the paper version of the article Molecular computers, an effort is made to reconnect chemlambda research with much older work by Lafont. [UPDATE: I retrieved this, I forgot about it, it’s mostly chemlambda v1  to chemlambda v2, see also this post ]

The numberphile microbe and the busy beaver

This is another weirdly named, but contentful post, after this one. During an attempt to launch myself into video explanations, I made a post on the numberphile microbe.

The numberphile microbe is the chemlambda version of a multiplication of two Church numbers, in this case 5X5=25. I called the creature evolving in the video a “numberphile microbe” because it really consumes copies of the number 5, metabolizes them and eventually produces 25. In a very careful way, though, which inspired the following description (but you have to see the video from that post):

“The numberphile microbe loves Church numbers. His strategy is this: never one without the other. When he finds one Church number he looks around for the second one. Then he chains the first to the second and only after that he starts to slowly munch the head of the first. Meanwhile the second Church number watches the hapless first Church number entering, atom by atom, in the numberphile mouth.

Only the last Church number survives, in the form of the numberphile’s tail.”

The mol file used is times_only.mol. Yes, all right, it is the mol version of the AST of a lambda term.
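That lambda term can be checked with standard Church numerals (ordinary lambda calculus in Python, not the mol/chemlambda encoding):

```python
# Church numerals and multiplication: 5 X 5 reduces to 25.
def church(k):
    return lambda f: lambda x: x if k == 0 else f(church(k - 1)(f)(x))

mult = lambda m: lambda n: lambda f: m(n(f))   # composition of iterators
to_int = lambda n: n(lambda i: i + 1)(0)       # decode a Church numeral

five = church(5)
print(to_int(mult(five)(five)))  # prints 25
```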

You can see the numberphile also in this animation, together with a busy beaver Turing machine (the chemlambda version explained here):



In the first half of the animation you see the “numberphile” at the left and the busy beaver as a reddish loop at the right.

What happens is that the lambda term 5X5 reduces to 25 while at the same time the busy beaver machine works too. Meanwhile, the Church number 25 in the making already makes the small loop replicate and grow bigger and bigger, eventually 25 times bigger.

So that explains the title.

The mol file used is times_only_bb.mol. Open it and see how it is different from the first.

You can see a simulation (js) of Church number applied to a busy beaver here.

And the most important is: during the making of this short movie, no human director was present to stage the act.

Blockchain categoricitis 2, or life as an investor and a category theory fan

… or the unreasonable effectiveness of category theory in blockchain investments.

A year ago I wrote the post Blockchain categoricitis and now I see my prediction happening.

Categoricitis is the name of a disease which infects the predisposed fans of category theory, those who are not armed with powerful mathematical antibodies. Show them some diagrams from the height of your academic tower, tell them you have answers for real problems, and they will believe.

Case in point: RChain. See Boom, bust and blockchain: RChain Cooperative’s cryptocurrency dreams dissolve into controversy.

Update: Epilogue? (28.02.2020)

Yes, just another cryptocurrency story… Wait a moment, this one is different, because it is backed by strong mathematical authority! You’ll practically see all the actors from the GeekWire story mentioned in the posts linked further.


Guestpost at John Baez blog: RChain (archived)

“Programmers, venture capitalists, blockchain enthusiasts, experts in software, finance, and mathematics: myriad perspectives from around the globe came to join in the dawn of a new internet. Let’s just say, it’s a lot to take in. This project is the real deal – the idea is revolutionary […]”

RChain is light years ahead of the industry. Why? It is upholding the principle of correct by construction with the depth and rigor of mathematics.”


Another one, in the same place: Pyrofex (archived). This is not a bombastic guestpost, it’s authored by Baez.

Mike Stay is applying category theory to computation at a new startup called Pyrofex. And this startup has now entered a deal with RChain.”

Incidentally (but which fan reads everything?) in the same post Baez is candid about computation and category theory.

“When I first started, I thought the basic story would be obvious: people must be making up categories where the morphisms describe processes of computation.

But I soon learned I was wrong: […] the morphisms were equivalence classes of things going between data types—and this equivalence relation completely washed out the difference, between, say, a program that actually computes 237 × 419 and a program that just prints out 99303, which happens to be the answer to that problem.

In other words, the actual process of computation was not visible in the category-theoretic framework.” [boldfaced by me]

(then he goes on to say that 2-categories are needed in fact, etc.)

In Applied Category Theory at NIST (archived) we read:

“The workshop aims to bring together two distinct groups. First, category theorists interested in pursuing applications outside of the usual mathematical fields. Second, domain experts and research managers from industry, government, science and engineering who have in mind potential domain applications for categorical methods.”

and we see an animation from the post  “Correct-by-construction Casper | A Visualization for the Future of Blockchain Consensus“.


I never trusted these ideas. I had interactions with some of the actors in this story (example) (another example), basically around distributed GLC. Between 2013 and 2015, instead of writing programs, the fans of GLC practically killed the distributed GLC project because it was all the time presented in misleading terms of agents and processes, despite my dislike. Which made me write chemlambda, so eventually that was good.

[hype] GLC and chemlambda are sort of ideal Lisp machines which you can cut in half and they still work. But you have to renounce semantics for that, which makes this description very different from the actual Lisp machines. [/hype]



Let’s make an invisible conference

An invisible conference is a small community of interacting scholars who assemble suddenly in a public place, acquire knowledge through experimental investigation, then quickly disperse.

UPDATE: I made a page which will be  updated with details, in time.

I made up this description by copy-paste from wikipedia flash mob and invisible college.

The definition is not complete yet, there has to be included something from a key signing party.

Before the conference.       [TBA]

During the conference.     [TBA]

After the conference.     [TBA]


In case you want to meet and talk seriously, what if we organize an invisible conference? I have to think more about the place: say Bucharest, Romania? Or maybe this summer at a nice camp by the sea (you need a tent)? Another idea?

Express your interest at chora2019@protonmail.com.

I put this contact also on my alternative homepage.

The “invisible” word points to the idea of an invisible college, mentioned in another post.

This post will be updated many times probably, so bookmark it because it will be less visible when other, newer posts will appear.


An extension of hamiltonian mechanics

This is an introduction to the ideas of the article arXiv:1902.04598

UPDATE: If you think about a billiard-ball computer, the computer is in the expression of the information gap. The model applies also to chemlambda: molecules have a hamiltonian as well, and the graph rewrites, aka chemical reactions, have a description in the information gap. That’s part of the kaleidos project 🙂


Hamiltonian mechanics is the mechanism of the world. Indeed, the very simple equations (here the dot means a time derivative)
\dot{q} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial q}

govern everything. Just choose an expression for the function H, called hamiltonian, and then solve these equations to find the evolution in time of the system.
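For instance, with the harmonic oscillator hamiltonian H = (p^2 + q^2)/2 these equations can be integrated numerically. The example and the symplectic Euler scheme below are my choices for illustration, not from the article:

```python
import math

# Hamilton's equations for H(q, p) = (p**2 + q**2) / 2:
#   dq/dt = dH/dp = p,   dp/dt = -dH/dq = -q.
# Integrated with the symplectic Euler scheme; the exact orbit is a circle
# in phase space.
def evolve(q, p, dt, steps):
    for _ in range(steps):
        p -= q * dt   # dp/dt = -dH/dq, using the current q
        q += p * dt   # dq/dt =  dH/dp, using the updated p
    return q, p

dt = 1e-3
q, p = evolve(1.0, 0.0, dt, int(2 * math.pi / dt))  # roughly one full period
print(q, p)  # close to the initial state (1.0, 0.0)
```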

Quantum mechanics is in a very precise sense the same thing. The equations are the same, only the formalism is different. There is a hamiltonian which gives the evolution of the quantum system…

Well, until measurement, which is an addition to the beautiful formalism. So we can say that hamiltonian mechanics, in the quantum version, and the measurement algorithm are, together, the basis of the quantum world.

Going back to classical mechanics, the same happens. Hamiltonian mechanics can be used as is in astronomy, or when we model the behavior of a robotic arm, or other purely mechanical system. However, in real life there are behaviors which go beyond this. Among them: viscosity, plasticity, friction, damage, unilateral contact…

There is always, in almost all applications of mechanics, this extra ingredient: the system does not only have a hamiltonian, there are other quantities which govern it and which make, most of the time, the system to behave irreversibly.

Practically every  object, machine or construction made by humans needs knowledge beyond hamiltonian mechanics. Or beyond quantum mechanics. This is the realm of applied mathematics, of differential equations, of numerical simulations.

In this classical mechanics for the real world we need the hamiltonian and we also need to explain in which way the object or material we study is different from all the other objects or materials. This one is viscous, plastic, elasto-plastic, elasto-visco-plastic, there is damage, you name it; these differences are studied and they add to hamiltonian mechanics.

They should add, but in practice they don’t. Instead, what happens is that the researchers interested in such studies choose to renounce the beautiful hamiltonian mechanics formalism, go back to Newton, and add their knowledge about irreversible behaviours there.

(There is another aspect to be considered if you think about mechanical computers. They are mostly nice thought experiments, very powerful idea generators. Take for example a billiard-ball computer. It can’t be described by hamiltonian mechanics alone, because of the unilateral contact of the balls with the billiard table and of the balls with one another. So we can study it, but we have to add to the hamiltonian mechanics formalism.)

From all this  we see that it may be interesting to study if there is any information content of the deviation from hamiltonian mechanics.

We can measure this deviation by a gap vector \eta, defined (with z = (q,p) and J the standard symplectic matrix) by

\eta = \dot{z} - J \nabla H(z)

and we need new equations for the gap vector \eta. Simple then: suppose we have the other ingredient we need, a likelihood function \pi \in [0,1], and add the requirement that

\pi(z, \dot{z}, \eta) = \max_{v, w} \pi(z, v, w)

where z = z(t) = (q(t), p(t)). That is, we ask that if the system is in the state z then the velocity \dot{z} and the gap vector \eta maximize the likelihood \pi.
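To make this concrete, here is a toy sketch (mine, not from the article) which assumes the gap vector is \eta = \dot{z} - J \nabla H(z), the difference between the actual velocity and the hamiltonian vector field; a damped oscillator then has a gap equal to exactly its friction term:

```python
import numpy as np

def gap_vector(z, zdot, grad_H):
    """eta = zdot - J grad H(z): the deviation from hamiltonian evolution.
    z = (q, p); J is the standard symplectic matrix."""
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return zdot - J @ grad_H(z)

# Damped harmonic oscillator: qdot = p, pdot = -q - c*p, with H = (q^2 + p^2)/2
c = 0.3
z = np.array([1.0, 0.5])                   # state (q, p)
zdot = np.array([z[1], -z[0] - c * z[1]])  # actual (non-hamiltonian) velocity
eta = gap_vector(z, zdot, lambda z: z)     # grad H(z) = (q, p)
print(eta)  # only the momentum component is nonzero: the friction term -c*p
```

For a purely hamiltonian system the gap is identically zero; everything irreversible lives in \eta.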

Still too general: how can we choose the likelihood? We may impose the condition

\max_{v} \pi(z, v, w) \in \{0, 1\}, \qquad \max_{w} \pi(z, v, w) \in \{0, 1\}

that is, we can suppose that the algorithm max gives a categorical answer when applied to the 2nd or to the 3rd argument of the likelihood.

(It’s Nature’s business to embody the algorithm max…)

We then define the information content associated to the likelihood as

I(z, v, w) = - \log \pi(z, v, w)

So now we have a principle of minimal information content of the deviation from hamiltonian evolution: along the evolution, minimize

I(z, \dot{z}, \eta)

In arXiv:1902.04598 I explain how this extension of hamiltonian mechanics works wonderfully with viscosity, plasticity, damage and unilateral contact.

[see also this]

Scientific publishers take their money from the academic managers, blame them too

Wonderful thread  at HN: https://news.ycombinator.com/item?id=19114786

Starting with “All this is an excellent ad for sci-hub, which avoids most of the serious drawbacks of publishers like Elsevier. It was interesting how that was relegated to a veiled comment at the end, “or finding access in other channels”. But basically if the mainstream publishers can’t meet the need, we do need other channels, and right now sci-hub is the only one that actually works at scale.”

Then the discussion goes to “Blame the academic administrators who demand publications in top tier journals – the same ones who charge a ton for access.”

Or “in market terms the clients (researchers) manifest a strong preference for other products than those offered by the publishers. Why do they still exist? It does not make any sense, except if we recognize also that the market is perturbed.”

Enjoy the thread! It shows that people think better than, take your pick: pirates who fight only for media corporations’ rights, gold OA diggers who ask for more money than legacy publishers, etc…

UPDATE: for those who don’t know me, I’m for OA and Open Science. I do what I support. I am not for legacy publishers. I don’t believe in the artificial distinction between green OA, which is said to be for archiving, and gold OA which is said to be for publishing. I’m for arXiv and other really needed services for research communication.

My first programs, long ago: Mumford-Shah and fracture

A long time ago, in 1995-1997, I dreamed about really fast and visual results in image segmentation with the then-new Mumford-Shah functional, and in fracture. It was my first programming experience. I used Fortran, bash and all kinds of tools available in linux.

There is still this trace of my page back then, here at the Wayback Machine. (I was away until 2006.) The present day web page is this.

Here is the image segmentation by the M-S functional of a bw picture of a Van Gogh painting.



And here is a typical result of  fracture propagation (although I remember having hundreds of frames available…)



The article is here.

What’s new around Open Access and Open Science? [updated]

In the last year I was not very much interested into Open Access and Open Science. There are several reasons, I shall explain them. But before: what’s new?

My reasons were that:

  • I’m a supporter of OA, but not under the banner of gold OA. You know that I have a very bad impression about the whole BOAI thing, which introduced the false distinction between gold, which is publication, and green, which is archival. They succeeded in delaying the adoption of what researchers need (i.e. inventions basically older than the BOAI, like arXiv) and the recognition that the whole academic publication system works actively against researchers’ interests. Academic managers are the first to blame for this, because they don’t have the excuse that they work for a private entity which has to make money at any price. Publishers are greedy, OK, but who gives them the money?
  • Practically, for the working researcher, we can now publish in any place, no matter how closed or anachronistically managed, because we can find anything on Sci-Hub, if we want. So there is no reason to fight for more OA than this. Except for those who make money from gold OA…
  • I was very wrong with my efforts and attempts to use corporate social media for scientific communication.
  • But still, I believe strongly in the superiority of validation over peer review. Open Science is the future.

I was also interested in the implications for OA and OS of the new EU Copyright Directive. I expressed my concern that again it seems that nobody cares about the needs of researchers (as opposed to publishers and corporations in general) and I asked some questions which interest me and which nobody else seems to ask: will the new EU Copyright Directive affect arXiv or Figshare? The problem I see is related to automatic filters, or to real ways researchers may use these repositories. See for example here for a discussion. In Sept 2018 I filed requests for answers to arXiv and to Figshare. For me at least the answers will be very interesting and I hope they are as bland as possible, in the sense that there is nothing to worry about.

So from my side, that’s about all, not much. I feel like except the gold OA money sucking there’s nothing new happening. Please tell me I’m very wrong and also what can I do with my research output, in 2019.

UPDATE: I submitted two days ago a comment at Julia Reda’s post Article 13 is back on – and it got worse, not better, about the implications for the research article repositories, the big ones, I mean, the ones which are used millions of times by many researchers. I waited patiently, either for the appearance of the comment or for a reaction. Any reaction. For me this is a clear answer: pirates fight for the freedom of the corporation to share, in its walled garden, the product of a publisher. The rest is immaterial to them. They are pirates, not explorers.

UPDATE 2: This draft of Article 13 contains the following definition: “‘online content sharing service provider’ means a provider of an information society service whose main or one of the main purposes is to store and give the public access to a large amount of copyright protected works or other protected subject-matter uploaded by its users which it organises and promotes for profit-making purposes. Providers of services such as not-for profit online encyclopedias, not-for profit educational and scientific repositories, open source software developing and sharing platforms, electronic communication service providers as defined in Directive 2018/1972 establishing the European Communications Code, online marketplaces and business-to business cloud services and cloud services which allow users to upload content for their own use shall not be considered online content sharing service providers within the meaning of this Directive.”

If this is part of the final version of Article 13 then there is nothing to worry as concerns arXiv, for example.

Maybe a separate push should be on upload filters and their legal side (who is responsible for the output of such an algorithm? surely not the algorithm!), perhaps by asking for complete, reproducible, transparent information about them: the source code and all the dependencies’ source code, and reproducible behavior.


Graphic lambda calculus and chemlambda (IV)

This post continues with chemlambda v2. For the last post in the series see here.

Instead of putting even more material here, I thought it is saner to make a clear page with all details about the nodes and rewrites of chemlambda v2. Down the page there are examples of conflicts.

Not included in that page is the extension of chemlambda v2 with nodes for Turing machines. The scripts have them, in the particular case of a busy beaver machine. You can find this extension explained in the article Turing machines, chemlambda style.
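A busy beaver machine like the one in those scripts can be simulated in a few lines; this is my own plain sketch of a classical Turing machine runner (the `run_turing` helper and the 2-state busy beaver table are illustrative, not taken from the chemlambda scripts):

```python
def run_turing(delta, state='A', halt='H', max_steps=1000):
    """Run a Turing machine given by delta[(state, symbol)] = (write, move, next).
    The tape is a sparse dict of cells, blank symbol 0."""
    tape, pos, steps = {}, 0, 0
    while state != halt and steps < max_steps:
        write, move, state = delta[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        steps += 1
    return sum(tape.values()), steps  # (number of ones written, steps taken)

# 2-state busy beaver: writes 4 ones and halts after 6 steps
bb2 = {('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
       ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H')}
print(run_turing(bb2))  # (4, 6)
```

The point of the chemlambda extension is that the same machine is expressed with dedicated graph nodes and rewrites, not with a global tape and head as above.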

Turing machines appear here differently from the translation technique of Lafont (discussed here; see also these (1), (2) for other relations between interaction combinators and chemlambda). Recall that he proves that interaction combinators are Turing universal by:

  • first proving a different kind of universality among interaction nets, to me much more interesting than Turing universality, because purely graph related
  • then proving that any Turing machine can be turned into an interaction net rewrite system.

In this extension of chemlambda v2 the nodes for Turing machines are not translated from chemlambda, i.e. they are not given as chemlambda graphs. However, what’s interesting is that the chemlambda and Turing realm can work harmoniously together, even if based on different nodes.

An example is given in the Chemlambda for the people slides, with the name Virus structure with Turing machines, builds itself


but mind that the source link is no longer available, since I deleted the chemlambda g+ collection. The loops you see are busy beaver Turing Machines, the structure from the middle is pure chemlambda.



The shuffle trick in Lafont’s Interaction Combinators

For the shuffle trick see The illustrated shuffle trick… In a way, it’s also in Lafont’s Interaction Combinators article, in the semantics part.


It’s in the left part of Figure 14.

In chemlambda the pattern involves one FO node and two FOE nodes. In this pattern there is first a FO-FOE rewrite and then a FI-FOE one. After these rewrites we see that we now have a FOE instead of the FO node and two FO nodes instead of the previous two FOE nodes. Also there is a swap of ports, like in the figure.

You can see it all in the linked post, an animation is this:


For previous posts about Lafont’s paper and its relations with chemlambda see:

If the nodes FO and FOE were dilations of arbitrary coefficients a and b, in an emergent algebra, then the equivalent rewrite is possible if and only if we are in a vector space. (Hint: it implies linearity, which implies we are in a conical group, therefore we can use the particular form of dilations in the general shuffle trick and we obtain the commutativity of the group operation. The only commutative conical groups are vector spaces.)

In particular the em-convex axiom implies the shuffle trick, via theorem 8.9 from arXiv:1807.02058  . So the shuffle trick is a sign of commutativity. Hence chemlambda alone is still not general enough for my purposes.

You may find interesting the post Groups are numbers (1). Together with the em-convex article, it may indeed be deduced that [the use of] one-parameter groups [in Gleason-Yamabe and Montgomery-Zippin] is analogous to the Church encoding of naturals. One-parameter groups are numbers. The em-convex axiom could be weakened to the statement that 2 is invertible and we would still obtain theorem 8.9. So that’s when the vector space structure appears in the solution of the Hilbert 5th problem. But if you are in a general group with dilations, where only the “em” part of the em-convex rewrite system applies (together with some torsor rewrites, because it’s a group), then you can’t invert 2, or generally any other number than 1, so you get only a structure of conical group at the infinitesimal level. However naturals exist even there, but they are not related to one-parameter groups.

Category theory is not a theory, here’s why: [updated]

Category theory does not make predictions.

This is a black and white formulation, so there certainly are exceptions. Feel free to contradict.


UPDATE: As I’m watching Gromov on probability, symmetry, linearity, the first part:

I can’t stop noticing several things:

  • he repeatedly says “we don’t compute”, “we don’t make computations”
  • he rightly says that the classical mathematical notation hides the real thing behind it, like for example by using numbers, sets, enumerations (of states for ex.)
  • and he clearly thinks that category theory is a more evolved language than the classical one.

Yes, my opinion is that indeed the category theory language is more evolved than classical. But there is an even more evolved stage: computation theory made geometrical (or more symmetric, without the need for states, enumerations, etc).

Category theory is a kind of trap for those mathematicians who want to say that something is computable, or that something is, or should be, an algorithm, but who don’t know how to say it correctly. Correctly means without the burden of external, unnatural baggage, like enumeration, naming, evaluations, etc. So they resort to the category theory language, because it allows them to abstract over sets, enumerations, etc.

There is not yet a fully geometrical version of computation theory.

What Gromov wants is to express himself in that ideal computation theory, but instead he only has category theory language to use.

Gromov computes and then he says this is not a computation.

Grothendieck, when he soaks the nut in the water, lets the water compute. He just builds a computer and lets it run. He reports the results; that’s what the classical mathematical language permits.

That’s the problem with category theory: it does not properly compute, it just reports the results of computation.


As concerns the real way humans use category theory…

Mathematicians use category theory as a tool, or as a notation, or as a thought discipline, or as an explanation style. Definitely useful for the informed researcher! Or a life purpose for a few minds.

All hype for the fans of mathematics, computer science or other sciences. To them, category theory gives the false impression of understanding. Deep inside, the fan of science (who does not want to, has no time to, or cannot understand anything of the subject) feels that all creative insights are based on a small repertoire of (apparently) simple tricks. Something that the fan can do, something which looks science-y, without the effort.

Then, there are the programmers, wonderful clever people who practice a new science and long for recognition from the classics 🙂 Category theory seems modular enough for them. A tool for abstraction, too, something they are trained in.  And — why don’t you recognize? — with that eternal polish of mathematics, but without the effort.

This is exploited cynically by good  public communicators with a creativity problem.  The recipe is: explain. Take an older, difficult creation, wash it with category theory and present it as new.

Graphic lambda calculus and chemlambda(III)

This post introduces chemlambda v2. I continue from the last post, which describes the fact that chemlambda v1, even though it has only local rewrites, does not work well with the dumbest possible reduction algorithms.

Nature has to work with the dumbest algorithms, or else we live in a fairy tale.

Chemlambda v2 is an artificial chemistry, in the following sense:

  • it is a graph rewrite system over oriented fatgraphs made of a finite number of nodes, from the list: 5 types of 3-valent nodes, A (application), L (lambda abstraction), FO (fanout), FI (fanin), FOE (external fanout), 1 type of 2-valent node, Arrow, and 3 types of 1-valent nodes, FRIN (free in), FROUT (free out), T (termination). Compared to chemlambda v1, there is a new node, the FOE. The nodes, not the rewrites, are described in this early explanation called Welcome to the soup. (Mind that the gallery of examples available at the end of that explanation mixes chemlambda v1 and chemlambda v2 examples. I updated the links so that they no longer point to this very early gallery of examples. However, if you like it, here it is.)
  • the rewrites CO-COMM and CO-ASSOC of chemlambda v1 are not available; instead there are several new DIST rewrites: FO-FOE, L-FOE, A-FOE, FI-FO, and a new beta-like rewrite, FI-FOE. As in chemlambda v1, the patterns of the rewrites fit with the interaction combinator rewrites if we forget the orientation of the edges, but the 3-valent nodes don’t have a principal port, so they don’t form interaction nets. Moreover, there are conflicts among the rewrites, i.e. there are configurations of 3 nodes such that one node belongs to two pairs of nodes which each admit a rewrite. The order of application of rewrites may matter for such conflicts.
  • there is an algorithm of application of rewrites, which is either the deterministic greedy algorithm with a list of priority of rewrites (for example beta rewrites have priority over DIST rewrites, whenever there is a conflict), or the random application algorithm.
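The two algorithms in the last bullet can be sketched generically. Everything below is a hypothetical stand-in (the graph encoding, `find_matches`, the toy rewrites); only the control flow, greedy with a priority list versus random choice, follows the description above:

```python
import random

def reduce_step(graph, find_matches, apply_rewrite, priority=None, rng=random):
    """One step of rewrite application.
    find_matches(graph) -> list of (rewrite_name, site); sites may conflict.
    With a priority list this is the deterministic greedy choice,
    otherwise a match is picked at random."""
    matches = find_matches(graph)
    if not matches:
        return graph, False
    if priority is not None:
        # deterministic greedy: lowest index in the priority list wins
        matches.sort(key=lambda m: priority.index(m[0]))
        chosen = matches[0]
    else:
        chosen = rng.choice(matches)   # the random application algorithm
    return apply_rewrite(graph, chosen), True

# Toy use: the "graph" is a list of tokens, one fake rewrite per token kind
find = lambda g: [(t, i) for i, t in enumerate(g) if t in ('BETA', 'DIST')]
apply_rw = lambda g, m: g[:m[1]] + g[m[1]+1:]   # a rewrite just consumes its site
g, changed = reduce_step(['DIST', 'BETA'], find, apply_rw, priority=['BETA', 'DIST'])
print(g, changed)  # ['DIST'] True: the BETA site was chosen first, as in the text
```

With `priority=None` the same function plays the role of the random application algorithm; the priority list only matters when matches conflict.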


Sources for chemlambda v2:


The goal of chemlambda v2: to explore the possibility of molecular computers in this artificial chemistry.

This needs explanations. Indeed, does the system work with the simplest random algorithm? We are not interested in semantics, because it is, or relies on, global notions. We are not (very) interested in reduction strategies for lambda terms, because they are not as simple as the dumbest algorithms we use here. Likewise for readback, etc.

So, does chemlambda v2 work enough for making molecular computers?  Pure untyped lambda calculus reduction problems are an inspiration. If the system works for the particular case of graphs related to lambda terms then this is a bonus for this project.

As you see, instead of searching for an algorithm which could implement, in a decentralized way say, a lambda calculus reduction strategy, we ask whether a particular system reduces (graphs related to) terms with one algorithm from the fixed class of dumbest ones.

That is why the universality in the sense of Lafont is fascinating. In this post I argued that Lafont universality property of interaction combinators means, in this pseudo-chemical sense, that the equivalent molecular computer based on interaction combinators reactions (though not the translations) works for implementing a big enough class of reactions which are Turing universal in particular (Lafont  shows concretely that he can implement Turing machines).

(continues with the part IV)

computing with space | open notebook