Asemantic computing draft

here. As this is a draft, it probably has parts in need of rewriting. Or of criticism. Some will certainly dislike it 🙂

Also archived.

UPDATE: a related subject, not touched in the draft, is composability bloat. Composability of computations is often presented as desirable, and it is a feature of an easy-to-program system. However, in Nature there are no examples of composable computations. It is always about the context. In programming, even if at small scale composability is desirable, at large scale it produces bloated computations.

Composability should not be enforced at the fundamental level; instead it should be a welcome side effect of a polite manner of treating the participants in a distributed computation.

Unrelated: you may ask why I use this. Because it is pure freedom 🙂 Previous uses: (internet of smells) (archived), (home remodeling) (archived), the chemical sneakernet stories.

2021, a spiral around the sink

Shall we stay home until 2022?

2020 was like a ship which navigates through a strong current. There was no way to get out, the only direction was forward. The only choice was speed. New ways to work, new things to do. More of them.

And then, in 2021 we seem to understand that we are in a vortex. We advance only to go back, to go around.

It looked like in 2021 we would find the escape. Instead, it is not even more of the same, it is going around in circles.

In the Western world, with the exception of Israel in my opinion, we are stalled, spiralling around the sink. Last to have vaccines, we can’t produce enough of them. We are masters of propaganda instead.

Why not be kept in the spiral one more year? In 2022 there will be other elections. If the pandemic ends, then the problems we had before will appear in their true stature. They are huge. We are done! There is no future in the race for more inequality, for unethical laws (like copyright), for more power to corporations, for more objects and no discussions. We are past reality, we deny reality. In the West there is no future for the old ways. All this will blow up when the pressure of the pandemic goes down.

So let’s go very, very slowly with those vaccines. Let’s be afraid of the new mutations, which exist and are a real reason for concern. But let’s favor them by not being stern enough.

Let’s keep the lid on the pot, maybe the pressure will not build. Maybe we shall use the bad examples, the deluded persons’ extreme behaviours, as the reason to cancel more, to censor more.

It won’t work like that, I’m sure, but the people in power have the same kind of short-term thinking which rotted our societies. They will try, and I am afraid that 2021 will be spent in a spiral, around the sink.

A statement of Indian academics on the ongoing petition to shut down Sci-Hub in India

If you care about Open Access then support our academic fellows from India, who urge the Delhi High Court to rule against the big corporate publishers’ petition to block Sci-Hub in India.

Don’t believe me. Read their statement and tell me if they have a point. An ethical point.

Read also the recent Interview With Sci-Hub’s Alexandra Elbakyan on the Delhi HC Case.

Previously here: Breakthrough Science Society statement: Make knowledge accessible to all. No to banning Sci-Hub and LibGen

[The following Statement is reproduced from the source Sci-Hub Case: Academics Urge Court To Rule Against ‘Extortionate Practices’ and put here as is, for the benefit of the readers of this blog]

The ongoing attempt by an international conglomerate of highly profitable publishers to shut down Sci-Hub and LibGen is an assault on the ability of scholars and students to access knowledge.

We urge the Delhi high court to consider that Sci-Hub and Libgen have thrown open the world of knowledge and helped to fire the imagination of students in India. Universities in the global south have much fewer resources than their counterparts in the north. Sci-Hub and Libgen have played a vital part in enabling Indian universities to keep up with cutting edge research the world over. Open access to scholarly knowledge points the way to the future.

We also urge the courts to recognise that scholarly publications are the result of research that is not funded by private publishers. Moreover, crucial components of the publishing process – peer review and editing – are performed for free by scholars on the understanding that they are helping to further the cause of rigorous knowledge production.

Yet, publication houses charge as much as $30-$50 per article and $2,000 to $35,000 per journal title. Based on such exorbitant pricing, big academic publishers make large profits. In 2019, for example, Elsevier’s operating profits were $1.3 billion and Wiley’s were $338 million. Elsevier’s profit margins amount to an eye-watering 35-40%.

Given this, leading US and European universities are currently refusing to subscribe to Elsevier for their extortionate practices. Increasingly scholars, government funders, and large foundations have felt that these conglomerates are holding back scientific progress.

Websites like Sci-Hub and LibGen, on the other hand, widen access and scientific progress. About 66% of respondents of a survey at top-tier Indian universities said that they are highly dependent on Sci-Hub. During the pandemic, this has risen to 77%. A 2016 analysis found that Indian scholars downloaded 3.4 million papers over a six-month period from Sci-Hub. If these were downloaded legally, it would have cost $100-125 million. This is more than half of what all research institutes in India cumulatively spend on subscriptions to paywalled scholarly literature.

To find out more about the information cited in the petition, visit:


As participants in the global community of scholarship, we urge the publishers to withdraw the lawsuit and the court to stand against the extortionate practices of publishing companies who are profiting off the unpaid labor of the global scholarly community and impeding the free-flow of knowledge and vital new discoveries.

Parenthesis hell, nonlocality of parsers: semantics delusions

A binary tree can be seen as a term built from an alphabet Leaf of leaves and the rules:

  • a in Leaf is a tree
  • if A, B are trees then (A.B) is a tree

When given a family of binary operations, a term built from these operations is a tree (in the sense just explained) with colored nodes (just replace the “.” with the name of the operation as a color).

When we use this definition of a tree as a term, we need a parser to transform the term into a tree as a trivalent graph. Any parser we use has to make repeated nonlocal passes through the term, in order to match the parentheses. Here nonlocal means not a priori bounded.
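As a minimal sketch of what such a parser does (my own illustration, not a tool from this project), here is a recursive parser for these tree terms in Python. To close each parenthesis it must consume an a priori unbounded stretch of the input, which is the nonlocality in question:

```python
# Sketch: parsing a tree term such as "((a.b).c)" into a nested
# structure. Closing each "(" requires scanning an a-priori unbounded
# span of the term -- the nonlocal passes discussed above.

def parse(term):
    """Parse a tree term into nested tuples; leaves are single letters."""
    pos = 0

    def node():
        nonlocal pos
        if term[pos] == "(":
            pos += 1            # consume "("
            left = node()
            assert term[pos] == "."
            pos += 1            # consume "."
            right = node()
            assert term[pos] == ")"
            pos += 1            # consume ")"
            return (left, right)
        leaf = term[pos]        # a leaf from the alphabet
        pos += 1
        return leaf

    return node()

print(parse("((a.b).c)"))   # (('a', 'b'), 'c')
```

The recursion depth, and the distance between a "(" and its matching ")", both grow with the term, which is why no fixed-size local window suffices.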

Now, this nonlocality makes the tree term unfit for decentralized computing. If we used the tree graph instead, this nonlocality barrier would not exist.

Then the question seems to be: how to encode the tree graph as a term? Any edge-naming scheme would induce another problem: potentially, somewhere else, somebody uses the same name for another edge of the graph. A randomness solution (use a big enough random number as a name) would make this naming clash improbable. Another solution would be to treat these names as money, as suggested in the hapax project.
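A minimal sketch of the randomness solution (the 128-bit size and the tuple encoding are my choices for illustration):

```python
# Sketch of the randomness solution: name each edge of the tree graph
# with a large random number, so that a clash between independently
# chosen names is astronomically improbable.
import secrets

def fresh_edge():
    return secrets.randbits(128)   # 128-bit random name for one edge

# The tree (a.b) as a trivalent node: three edges meeting at one node.
left, right, out = fresh_edge(), fresh_edge(), fresh_edge()
node = ("node", left, right, out)  # no parentheses, no global parsing needed

# By the birthday bound, the clash probability among n names is roughly
# n^2 / 2^129, negligible for any realistic n.
print(left != right and right != out and left != out)   # True
```

Two parties generating names independently need no coordination at all, which is exactly what a decentralized encoding requires.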

Nature has an alternative for the parenthesis hell: chemistry.

But computers don’t have this alternative, unless we build new ones where magical but physical trivalent nodes are used instead of bits.

An advantage of the use of tree terms and parentheses is semantics, in the sense that any tree term has a meaning, coming from a local algorithm which decorates a term from the decorations of its subterms.

It seems that we live as if there is no alternative to terms, because of this.

But this is a delusion: semantics matters only to humans, because it allows us to reason about these terms. Our computers don’t need semantics to function; we historically used semantics to (build computers and to) program on them. But we, humans, are lousy at keeping track of parentheses. Witness the superhuman, and therefore unused or unpopular, capacities of functional languages.

The concrete, immediate reason for my rant is that I write (write! not program) an article in LaTeX where I use reasonings involving trees of depth 4. On paper, with my pen, I can easily draw these trees and the reasoning is fluid and natural, but in LaTeX? I could use subscripts and superscripts (not legible enough!), or I could use tikz to draw the trees (not readable by a human in the .tex file), or I could embed pictures of trees (not the same as the tree graphs themselves). Or I could use the tree terms, unreadable by humans (and in particular by interested readers).

UPDATE: after some fiddling, I hope this music-like notation (first line) is bearable in LaTeX-produced documents, compared say with the third line. What do you think?

Alternatively, and probably in the future when I have a parser for Pure See, I’ll just write programs in that language instead of proofs.

But you see, it’s so easy to make sense with pen and paper, and practically impossible with the present computers and languages. Even in Pure See we write essentially terms, and under the hood there is a nonlocal parser.

Sometime in the future, when we shall program in chemistry, we shall no longer have this semantic delusion.

Another speculation is that if our ancestors were squirrels, then probably we would write naturally on trees.

Space fabrics in the chemlambda collection

Although the chemlambda project has a different use of graphs compared with the Wolfram Physics project, we can produce space fabrics which look like some of the notable universes in the WP project.

Here are some of them (source: the chemlambda collection):

Maps are intrinsic, charts are model theoretic

Even in the commutative world of Pure See, and using that language, maps are intrinsic and charts are model theoretic.

With only the 6 instructions available, like

from a see b as c;

we can build maps of finite terms. A finite term is like a place which can be attained. We build finite terms with only

  • variables a, b, c, …
  • if (from A see B as C) and A, B are finite then C is finite,
  • if (from A see B as D; from A see C as E; from D as E see F) and A, B, C are finite then F is finite (called the difference from C to B, based at A).
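To make the inductive definition above concrete, here is a toy check in Python (the tuple encoding is my own illustration, not actual Pure See syntax):

```python
# Illustration only: a toy check of the inductive definition of finite
# terms, with the Pure See instructions modelled as tuples. The
# encoding is mine, not part of Pure See.

def is_finite(term, finite_vars):
    """term is a variable name, ('see', A, B) for the C in
    'from A see B as C', or ('diff', A, B, C) for the difference
    from C to B based at A."""
    if isinstance(term, str):                      # a variable a, b, c, ...
        return term in finite_vars
    op = term[0]
    if op == "see":                                # C finite when A, B finite
        _, a, b = term
        return is_finite(a, finite_vars) and is_finite(b, finite_vars)
    if op == "diff":                               # F finite when A, B, C finite
        _, a, b, c = term
        return all(is_finite(t, finite_vars) for t in (a, b, c))
    return False

print(is_finite(("diff", "a", "b", ("see", "a", "c")), {"a", "b", "c"}))  # True
```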

If it weren’t weird, in Pure See we could define finite terms as a sort of wrong lambda terms, built from abstraction operation

see B from A as λA.B

and from application operation

as B from A see BA


Then the finite terms would be:

  • either variables a,b,c,…
  • or λA.B with A, B finite,
  • or λA.(BC) with A, B, C finite.

There would be syntactic equality of terms (not only finite ones), up to rewrites corresponding to R1 and R2 (in this Pure See version), which I don’t elaborate on. I would not use emergent rewrites, nor SHUFFLE.

Then there is the interesting equivalence of finite terms, defined as A = B if the term C defined by (from A as B see C) is finite. Or, in the wrong lambda terms notation, if AB is finite (up to the R1, R2 rewrites).

This equivalence of finite terms would deserve the name “approximate equality of places”.

A map is then made from the atomic difference: it associates to a finite term a “proof of attainment”, namely to a finite F it associates

map(F) = (A,B,C)

such that

(from A see B as D; from A see C as E; from D as E see F)

Therefore to a finite term F is associated the information that “based at A, you find F as the difference between C and B”.

Think about it. A map is a collection of instructions: from here, take this road to go there. While instructions are very useful to move around, you certainly cannot know if two different strings of instructions lead you to the same place.

As when we steer a ship on the open sea, we sometimes need to fix our position.

That is why we use charts.

A chart is a model of Pure See (or of something more general). Models of Pure See are linear spaces, and a chart associates points in the linear space to variables, and to

from a see b as c

the fact that c is the homothety, or dilation, based at a, applied to b (of a scale parameter which is generic or fixed, depending on the model). All Pure See instructions correspond to dilations in the way explained in the first link.

With a chart available, we can say that A = B model theoretically if

chart(A) = chart(B) + O(ε)

Mind that the syntactic A = B implies the model theoretic A = B, but not the other way around outside finite terms as defined, because the syntactic A = B implies the stronger

chart(A) = chart(B) + ε O(ε)

where ε is the scale parameter. In the Pure See world finite terms are polynomial in ε, therefore syntactic A = B is equivalent with model theoretic A = B. But just consider an externally given function from finite terms to finite terms; then

f(A) = f(λA.B) syntactically for any finite A,B

is equivalent with f derivable.

Transcript of “Zipper logic revisited” talk

This is the transcript of the talk from September 2020, which I gave at the Quantum Topology Seminar organized by Louis Kauffman.

There is a video of the talk and a github repository with photos of slides.

My problem is whether we can compute with tangles and the R moves. I am going to tell you where this problem comes from, from my point of view, why it is different from the usual way of using tangles in computation, and then I’m going to propose an improvement of the thing called zipper logic, namely a way to do universal computation using tangles and zippers.

The problem is to COMPUTE with Reidemeister moves. The problem is to use a graph rewrite system which contains the tangle diagrams and as graph rewrites the R moves.

Can we do universal computation with this?

Where does this problem come from?

This is not the usual way to do computation with tangles. The usual way is that we have a circuit which we represent as a tangle, a knot diagram, where the crossings and maybe caps and cups are operators which satisfy the algebraic equivalent of the R moves. The circuit represents a computation. When we have two circuits, then we can say that they represent equivalent computations when we can turn one into another by using R moves.

In a quantum computation we have preparation (cups) and detectors (caps) (Louis).

R moves transform a computation into another. Example with teleportation.

R moves don’t compute, they are used to prove that 2 computations are equivalent. This is not what I’m after.

The source of interest: emergent algebras. An e.a. is a combination of algebraic and topological information…

[See as a better source for a short description of emergent algebras]

We can represent graphically the axioms. This is the configuration which gives the approximate sum. It says that as epsilon goes to 0 you obtain in the limit some gate which has 3 inputs and 3 outputs.

We say that the sum is an emergent object, it emerges from the limit. We can also define in the same way differentiability.

We can define not only emergent objects, but also emergent properties.

Here you see an example: we use the R2 rewrites and passage to the limit to prove that the addition is commutative.

The moral is: you pass to the limit here and there, and this becomes, in the limit, the associativity of the operation.

Another example: if you define a new crossing (a relative crossing), then you can pass to the limit and prove things about it, based on the e.a. axioms. Moreover you can prove that, for the relative crossings, R3 emerges from R1, R2 and a passage to the limit.

With e.a. we can do differential calculus. We use only R1, R2 and a passage to the limit. It is a differential calculus which is not commutative.

There are interesting classes of e.a.:

  • linear e.a. correspond to calculus on conical groups (Carnot groups, explanations)
  • commutative e.a. which satisfy SHUFFLE (calculus in topological vector spaces). In this class you can do any computation (Pure See)

What I want to know is: can you do universal computation in the linear case? This corresponds to the initial problem.

What does universal computation mean? There are many definitions, but among them 3 ways to define what computation means stand out. They are equivalent in a precise sense.

Lambda calculus is a term rewrite system (follows definition of lambda calculus).

Turing machine is an automaton (follows definition of TM).

Lafont’s interaction combinators is a graph rewrite system, where you use graphs with two types of 3-valent nodes and one type of 1-valent node. Explanation of rewrites. These are port graphs.

Lafont proves that the grs is universal because he can implement a TM, so it has to be universal. There is a lot of work on implementing LC in a grs, but the reality is that this is extremely difficult: there are solutions, but they are not what we want. You can transform a lambda term into a graph, reduce it with the grs of Lafont, say, and then decorate the edges of the graph so that you can retrieve the result of the computation. But these decorations are non-local. (Explanation of local)

We have 3 definitions of what computation means, by 3 different models, which are equivalent only if you add supplementary hypotheses. For me IC is the most important one, but we don’t know yet how to compute with IC.

Let me reformulate the problem of whether we can compute with R moves in this way.

Notation discussion. We can associate a knot quandle to a knot diagram, simply by naming the arcs; then we get a presentation of a quandle. The presentation of a quandle is invariant with respect to the permutation of relations or the renaming of the arcs. There is a problem: for example, when an arc passes over in two crossings, we have a hidden fanout. The solution is to use a different notation, with FIN (fanin) and FO (fanout) nodes. This turns the presentation into a mol file.
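As a sketch of the naming step (my own encoding, not an actual mol file): each crossing contributes one relation to the quandle presentation, and counting how often an arc is used as an over-arc exposes the hidden fanout.

```python
# Sketch: name the arcs of a knot diagram, then read off a quandle
# presentation, one relation per crossing. An arc used as over-arc in
# two crossings appears twice on the right-hand sides -- the hidden
# fanout mentioned above, made explicit by FIN/FO nodes.
from collections import Counter

# Trefoil with arcs x, y, z. At a crossing, the under-arc u passes under
# the over-arc o and exits as w, contributing the relation w = u * o.
crossings = [("x", "y", "z"),   # z = x * y
             ("y", "z", "x"),   # x = y * z
             ("z", "x", "y")]   # y = z * x

relations = ["%s = %s * %s" % (w, u, o) for (u, o, w) in crossings]
print(relations)

# Count over-arc uses: a count > 1 would signal a hidden fanout.
overuse = Counter(o for (_, o, _) in crossings)
print(overuse)   # for the trefoil, each arc is an over-arc exactly once
```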

Can we compute with that?

Theorem: If there is a computable parser from lc to knot diagrams, such that there is a term sent to a diagram of the unknot, then all lambda terms are sent to the unknot.

We can compute with knot diagrams, but in a stupid way: if we use diagrams as a notational device. Example with the knot TM (flattened unknot, you may add the variations with the head near the tape, or the SKI calculus, to argue that it can be done in a local way. Develop this.)

Conclusion: you need a little something more, by reconnecting the arcs.

Let’s pass to ZSS

For this I introduce zippers. The idea is that a zipper is a thing made of two non-identical parts, two half-zippers.

We can use zippers like this: we have a tangle 1, a zipper, then a tangle 2. Explanation of the zip rewrite.
This move transforms zippers into nothing.

The move smash creates zippers. Explanation of smash.

Then we have a slip move.

Here is the new proposal for ZL. The initial proposal used tangles with zippers, but there were also fanins and fanout nodes.

The new proposal is that we have 4 kinds of 1-valent nodes, the teeth of the zippers, then we have 4 kinds of half-zippers, and then we have two kinds of crossings.

Explanation of the new proposal.

Theorem: the ZSS system turns out to be able to implement dirIC, so it is universal.

Explanation of dirIC:

Chorasimilarity 10 years anniversary

I missed that: the chorasimilarity (name and) blog had its 10 year anniversary on Jan 2nd: link to first post.

UPDATE: and this is the 801st post. There are more than 800 posts written, but I tend to trash the personal posts after a while. What remains appears to be read many years after the writing date. Maybe this happens because there is value in these posts, or by the time circularity phenomenon, which says that because of strong intuition, I start to explain something from the conclusion (far in the future) to the beginning (in the near past). As the future of a past post is in the past of a future post, you get time circularity.

As concerns the advancement and sometimes curious time circularities, read the recent Logic in the infinitesimal place.

I folded and got my first locked smartphone

I want to keep it in a pristine, normie-standard state. It makes me say yikes several times every time I install something.

What to put on it? I put telegram, where you can find me as xorasimilarity.

I also put revolut, what else?

Warning 🙂 from personal experience, every time I was a very late adopter of anything, it crashed in a short time. So locked smartphones, beware!

UPDATE: I know I am comical, but for the first time I listened to music on spotify (or from a smartphone, more generally). I mean, listened to some music I know well and respect. Never in my life, including the dark years of living under a dictatorial regime, never ever did I listen before to such a crappy pamperized collection of crap. Linux and free software preserved me from such a shame, to be a protected slave. Many probably don’t know that even their ears are protected from wrongsound, not only from wrongthought, by the mighty corporations who, when you ask them for lobster and champagne, give you a mcdonalds menu. The same who make you, rebel, click on “no, thanks”. What are the thanks for? OMG, in the world which is not free, things are already far worse than I thought.

Example of Google and Duckduckgo censoring

Now everybody talks about how the corporate media companies practice overt censorship, be it Google, Facebook, Twitter or the delicious Gamestop and r/WallStreetBets banning story at Discord. Continued by Robinhood limiting the purchasing of stock (archived). What’s that about? Read the Open Letter (archived).

UPDATE: And now Google deleted over 100000 negative reviews of Robinhood. Disgusting.

The internet has existed for 30 years. The world’s greatest fortunes formed in relation to the net. Still, the world’s organization is not ready for it.

More than ever it is clear, as Carlin said, that it’s a small club and you ain’t in it.

As many others, I became aware of this some years ago, especially in relation to Google and Twitter (I never was a real FB user), and in relation to the fact that these companies can never be trusted to handle well the precious scientific information. Although I had, for a time, entertained the illusion that such corps could afford to engage in the preservation of the tiny (bitwise) amount of scientific information, for the sake of humanity.

When I criticised these companies in the past, it was always from this point of view (which I consider very important in the long term). These behemoths not only censor individuals, say by controlling the info bubble an individual has access to. They do much more damaging work: they classify people according to random criteria and then they censor the visibility across these arbitrary boundaries. (So they build infinite walls, not only designed bubbles.) Suppose for example that a Somebody invents new bike models, but Somebody sometimes posts his criticism of a big corp. The effect is not only that Somebody’s criticism is censored, but also that Somebody’s bike models are very hard to find by anybody on the other side of the wall.

What about decentralized solutions to the centralization of power problem? Tough luck, not one of the corporations will show them to everybody.

OK, enough generalities. I noticed in October 2020 that Google search results re chorasimilarity blog changed abruptly. I looked at the (fake) alternative Duckduckgo and noticed the same happened. I then updated my About page with advice on how to check this.

Today I archived a google search page which shows that Google has my recent posts, but they only show them if you search much harder; otherwise almost everything from chorasimilarity disappeared.

Incidentally, this archived search page is almost all about critiques of Google. If you go in there then you’ll find much older examples of Google censoring.

I’m not an important person, so why would you care? By now, it is more and more obvious why everybody should care.

Objective un-reality: quantum automata and decentralized computing

UPDATE: Questions? Just ask.

In his book The Cellular Automaton Interpretation of Quantum Mechanics, Gerard ’t Hooft proposes a class of toy models of physics which are based on cellular automata and which are somewhere in between classical and quantum mechanics.

The idea is that for a synchronous reversible cellular automaton we may arrange that the update of the state of the automaton is obtained from a permutation of the previous state. As a permutation is a particular example of a unitary transformation, we can deduce formally a Hamiltonian from the chain of permutations which makes up the evolution of the automaton, with the property that it makes sense also for continuous time variables.
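The permutation-to-Hamiltonian step can be checked numerically. The following toy example (mine, not from the book) takes a cyclic update of 4 states, builds the permutation matrix, and extracts a formal Hamiltonian from it:

```python
# A minimal numerical check of the idea above: one update step of a
# reversible automaton with finitely many states is a permutation, and
# the permutation matrix P is unitary, so a formal Hamiltonian H with
# P = exp(-iH) can be extracted.
import numpy as np

n = 4
perm = [1, 2, 3, 0]                      # state k evolves to state perm[k]
P = np.zeros((n, n))
for k, j in enumerate(perm):
    P[j, k] = 1.0                        # column k: a single 1 in row perm[k]

print(np.allclose(P.T @ P, np.eye(n)))   # True: P is unitary (orthogonal)

# Formal Hamiltonian H = i log P, computed in the eigenbasis of P; the
# eigenvalues of P lie on the unit circle, so H has real eigenvalues.
evals, evecs = np.linalg.eig(P.astype(complex))
H = evecs @ np.diag(1j * np.log(evals)) @ np.linalg.inv(evecs)
print(np.allclose(H, H.conj().T))        # True: H is Hermitian
```

With H in hand, exp(-itH) makes sense for any continuous t, interpolating the discrete evolution, which is the point of the construction.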

This idea is new for me, therefore I learn more about it as time passes. Therefore the choice of the most relevant references is more a reflection of my learning than an authoritative account of the state of this research field.

Previously I tried to study the work of Pablo Arrighi on quantum cellular automata. See An Overview of Quantum Cellular Automata, which is (to my limited knowledge) a development of ’t Hooft’s proposal.

Suppose that I want to know what an asynchronous graph rewriting automaton looks like from this quantum point of view. Then I could take inspiration from Arrighi, Martiel, Quantum Causal Graph Dynamics, or I could take inspiration from a particular graph rewriting (from knot theory) as described by Kauffman, Lomonaco, Quantizing Knots and Beyond, or other articles about “mosaic quantum knots” such as Quantum Knots and Knotted Zeros.

Be it in ’t Hooft’s cogwheel models, Arrighi’s quantum cellular automata or Kauffman and Lomonaco’s mosaic quantum knots, there is always a global unitary (for example permutation) transformation. This is problematic because the same rewrite type, which acts locally, has to be associated with lots of global unitaries, depending on where the rewrite is done.

It does not look physical to me. Or not geometrical, if you like.

Why do we need a global unitary transformation?

And then, there is the more subtle time serialization, which first makes us declare that we always consider a “single thread” time evolution, and later makes us impose as a constraint that the Hamiltonian or other constructs have to lead to the same end result for any equivalent serialization.

I notice that exactly the same problem occurs when we speak about decentralized computing.

Which makes me believe that there is something cultural going on here.

In quantum mechanics we need to know what is real and in decentralized computing we need some form of truth to save us from race conditions.

It is then, perhaps, useful to be clear about what is real and what is objective. I insisted on this historical solution, with roots in Ancient Greece and in Nordic cultures as well.

Reality is a discussion, or a trial, which happens in a public place, like a thinkstead. Reality is a trial which uses evidence. The past agreements are the evidence and they are objective.

In quantum mechanics the measurement is a trial and the evidence after the trial is prescribed by the Born rule. That we know.

In quantum automata and in decentralized computing we only have a limited form of reality, the one-to-one exchanges of messages and the problem is that we don’t have a clear way to obtain evidence. Here evidence is the global state of the system, which might not exist. For decentralized computing the trial rules, the rules of discussions, are too simple to cover all the needs of a real decentralized computing.

In all approaches we lack the thinkstead. We either pass to the existence of a global state and of a global serialized transformation, or we require some form of a proof of objectivity, like in the blockchain technologies.

In my Artificial physics for artificial chemistry talk, I show how one can transform chemlambda into a ’t Hooft-like automaton, see also Chemlambda and hapax. Depending on the pattern detected, there is a unitary and local transformation (actually a pair of permutations). The evolution of the system is then not constrained to be described by a global unitary.

It lacks a thinkstead as well, because it has instead a mechanism based on tokens, like in the project hapax, or “made of money”. We can imagine a rewrite as a sort of Feynman diagram, where a photon (the token), indistinguishable from others, interacts with the system, which changes (locally) the state and perhaps expels another particle (token). The thinkstead is the diagram here, in a sense. In terms of decentralized computing there is also an interpretation possible, where we attribute property to tokens and local states; and because the input token becomes part of the state (after the rewrite), everything is made of money (tokens). We need here at least 6 one-to-one communications to implement this, therefore a thinkstead begins to become visible.

We can conclude by saying that in the classical approaches we have models of objective un-reality. There is a cultural aversion to discussion. There is evidence, therefore we may speak about the objective, but the evidence does not come from a discussion. Objective, but not real. A sign of the times.

Logic in the infinitesimal place

This is the most extreme form of time circularity I witnessed. As I contemplate the page where I put together the anatomy of an infinitesimal emergent beta rewrite, I am amazed to see how edges turn into differences and how the beta rewrite is nothing but a difference which makes two differences.

Or, just look at this almost 10 years old post: Entering “chora”, the infinitesimal place. It was written as a support of the article Computing with space: a tangle formalism of chora and difference.

This whole blog (and its name) turns around the effort to understand computing with space.

Recently I finally understood that the limitation to tangle diagrams is (only a bit) misleading, and I succeeded in proving that (at least the multiplicative part of linear) logic is, in a precise sense, emergent from the geometric, algebraic or analytic content of emergent algebras, in the particular commutative case. In the general case this emergence can be justified only infinitesimally, i.e. in the same sense as the Reidemeister 3 rewrite emerges infinitesimally from the other two (an old subject here).

Just put everything inside a chora and repeat the emergence of the beta rewrite. It works great; it involves some curvature-correcting terms, which is as expected.

But in the end, after the passage to the limit, everything converges and, surprise! arrows are differences (in the emergent algebra sense).

What to make of all this? An interactive version of the theory, of course!

And, by the time circularity phenomenon manifest in this blog, this is clearly reminiscent of this quote from Bateson:

Or we may spell the matter out and say that at every step, as a difference is transformed and propagated along its pathways, the embodiment of the difference before the step is a “territory” of which the embodiment after the step is a “map.” The map-territory relation obtains at every step.

which I took just now from the old “entering chora” post. Read also the supplementary material (section 9) in the “computing with space” article, for more.

So, while I am still amazed about the prescience of old, valuable thinkers, I am even more amazed because: (1) the rigorous math I do works, (2) how did they find out, without the math?

I hope that this year 2021 will be serene enough so that I will be permitted to write a clear, step by step account of the past research and a similarly clear description about how you get to the point where logic in the infinitesimal place is physics.

Fun with the non-commutative geometric series (II)

I was doing a lot of computations of particular examples lately, because I want to understand better the quantities (curvature and co-linearity, see this) which have to be added in the non-commutative version of pure see emergent rewrites. These examples are fun by themselves and useful to do. Here is one computation which may help in understanding a previous post.

[Today is also the 1 year anniversary of the posting of the salvaged chemlambda collection of animations, see this post from a year ago for more fun.]

In the post Fun with the non-commutative geometric series I wrote a proof of the convergence of an a priori non-commutative version of the geometric series. Here “non-commutative series” means the limit of a sequence of finite sums, where each sum is done with respect to a non-commutative addition operation. Part of the fun is that the proof is new even in the commutative case. But what about non-commutative cases, can we compute some examples?

One non-commutative case would be the Heisenberg group, but at closer inspection we see that we can reduce it to a commutative series computation. (We say, unpublished yet, that it has “commutative emergent numbers“, and there is a reason why the H group is like this, to be explained.)

Here is a true non-commutative example; I don’t know if the expression of the convergence of the geometric series is straightforward or not in this particular case. Please tell me if you have alternative proofs of the final result. Here it is.

We are in the group N of real n \times n upper triangular matrices, with 1’s on the diagonal. This is a subgroup of the real linear group.

For any scalar e \in (0,+\infty) we define a diagonal matrix

\mathbf{e}_{ij} = e^{i} \delta_{ij}

(here \delta_{ij} = 1 if i=j, otherwise \delta_{ij} = 0; please don’t confuse this with the dilation, also denoted by the letter \delta, which is introduced in the following).

Conjugation with diagonal matrices is an automorphism of N, in particular for any e \in (0,+\infty) and any \mathbf{x} \in N we have

\mathbf{e}^{-1} \mathbf{x} \mathbf{e} \in N

This allows us to define the dilation:

\delta^{\mathbf{x}}_{e} \mathbf{y} = \mathbf{x} \mathbf{e}^{-1} \mathbf{x}^{-1} \mathbf{y} \mathbf{e}

Prove that:

(a) this defines an emergent algebra over N

(b) that this is a linear emergent algebra.

We can define now the geometric series with respect to the “addition” operation, which is simply the matrix multiplication in the group N (here I is the identity matrix, the neutral element of the group):

\sum_{k=0}^{\infty} \delta_{e^{k}}^{I} \mathbf{x}

and by the previous post, it does converge if e \in (0,1).

Let’s compute the finite sums. First notice that

\delta^{\mathbf{x}}_{e} \mathbf{y} = [\mathbf{x}, \mathbf{e}^{-1}]  \mathbf{e}^{-1} \mathbf{y} \mathbf{e}

where the square bracket denotes the commutator.

We then have:

\sum_{k=0}^{m} \delta_{e^{k}}^{I} \mathbf{x} = \mathbf{x}\mathbf{e}^{-1} \mathbf{x} \mathbf{e} \mathbf{e}^{-2} \mathbf{x} \mathbf{e}^{2} ... \mathbf{e}^{-m} \mathbf{x} \mathbf{e}^{m}

or equivalently:

\sum_{k=0}^{m} \delta_{e^{k}}^{I} \mathbf{x} = (\mathbf{x} \mathbf{e}^{-1})^{m} \mathbf{x} \mathbf{e}^{m}

All in all, our convergence of the geometric series result says that:

if e \in (0,1) and

[\mathbf{y}, \mathbf{e}^{-1}] = \mathbf{x}

then

\mathbf{y} = \lim_{m \rightarrow \infty}  (\mathbf{x} \mathbf{e}^{-1})^{m} \mathbf{x} \mathbf{e}^{m}.

Remark that for n=2, i.e. for the case of 2 \times 2 upper triangular matrices with 1’s on the diagonal, the convergence result is the classical geometric series convergence. Compute by hand the partial sums in this case to convince yourself that indeed the usual geometric (partial) sum appears in the (1,2) position in the matrices.
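This partial-sum formula is also easy to check numerically. Here is a small sketch (my own code, not from the referenced article) in Python/numpy for the case n = 2, verifying that the (1,2) entry of (\mathbf{x} \mathbf{e}^{-1})^{m} \mathbf{x} \mathbf{e}^{m} converges to the classical geometric sum a/(1-e):

```python
import numpy as np

def partial_sum(x, E, m):
    """The m-th partial sum (x E^{-1})^m x E^m of the non-commutative
    geometric series, with base point the identity matrix."""
    E_inv = np.linalg.inv(E)
    return np.linalg.matrix_power(x @ E_inv, m) @ x @ np.linalg.matrix_power(E, m)

a, e = 0.7, 0.5                        # the (1,2) entry of x, and the scalar e in (0,1)
x = np.array([[1.0, a], [0.0, 1.0]])   # an element of the group N, for n = 2
E = np.diag([e**1, e**2])              # the diagonal matrix e_ij = e^i delta_ij

S = partial_sum(x, E, 60)
# the (1,2) entry is the usual geometric partial sum a(1 + e + ... + e^60),
# close to a/(1-e); the matrix stays in N (1's on the diagonal)
print(S[0, 1], a / (1 - e))
```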

Enemies of knowledge: Twitter suspends Sci-hub account

On Hacker News there is a discussion of the Torrentfreak article Sci-Hub Founder Criticises Sudden Twitter Ban Over “Counterfeit” Content. I learned from this article that Twitter suspended the Sci-Hub account.

Twitter rules are the rules of a corporation. They don’t have any moral precedence over human rights. In a previous post I collected links and text about

The Universal Declaration of Human Rights, adopted in 1948, formulates in the Article 19 the freedom of opinion and expression:“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

Now I find that the proposal

Any provider of user created content is under the same constraints concerning free speech as the state is by the First Amendment.

is actually weak. Truth is that corporate media performs a cultural genocide at a global scale, after they started by providing us a better medium of communication.

It was a trap: promise better communication medium, replace the commons by a private version, then apply commercial rules.

I am not a lawyer, but on moral grounds they tricked us into believing we are using a common place of discussion. They got rich and they turned into a reality making machine.

And now they just tried to erase the biggest collection of free scientific works. They side with the Elsevier and others’ lawsuit in India (see also my previous post Breakthrough Science Society statement: Make knowledge accessible to all. No to banning Sci-Hub and LibGen).

Corporate media is now the enemy of the research people.

Corporate media is the enemy of knowledge.

There is no excuse, no way to deflect that.

It is very sad not only for Indian people, it is also extremely sad for their American fellow researchers. And for the image Twitter projects to the world, in their names, by trying to burn the Library of Alexandra.

Breakthrough Science Society statement: Make knowledge accessible to all. No to banning Sci-Hub and LibGen

UPDATE: Read An Interview With Sci-Hub’s Alexandra Elbakyan on the Delhi HC Case (or archived version).

The Indian Breakthrough Science Society made an online petition to protest against, quote:

“We are shocked to learn that three academic publishers — Elsevier, Wiley, and the American Chemical Society (ACS) — have filed a suit in the Delhi High Court on December 21, 2020, seeking a ban on the websites Sci-Hub and LibGen which have made academic research-related information freely available to all. Academic research cannot flourish without the free flow of information between those who produce it and those who seek it, and we strongly oppose the contention of the lawsuit.”

Read the whole petition and consider signing it, if you think that this helps your fellow researchers and the whole society.

UPDATE: Thanks to my friend S for this link: Delhi High Court agrees to hear scientists, organisations in piracy suit by Elsevier and others against Sci-Hub, LibGen. You can see the list of the intervening scientists and their opinion about the copyright in relation to science.

After the IoT comes Gaia, version 2021

I found, via Scott Aaronson’s post My vaccine crackpottery: a confession, the very interesting Reverse Engineering the source code of the BioNTech/Pfizer SARS-CoV-2 Vaccine from

In that post there is yet another funny link: DNA seen through the eyes of a coder. Generally, the whole post is very well written, with lots of other links to explore further.

While reading the post I had a curious sensation: I’ve seen something like this before 🙂 Of course, it reminded me of one of the chorasimilarity what if? posts: After the IoT comes Gaia.

Long story. After the IoT comes Gaia contains the seed of the story which later became The Internet of Smells, which in turn was used as the skeleton of the failed TED presentation Chemlambda for the people, whose slides came into my mind by looking at the Reverse Engineering… post.

UPDATE: Here’s a detail with Eve’s sniffer ring (from an unreleased photo; the ring was made by me)

And here’s a hypothetical image (unreleased code used) which was used in the post Data, channels, contracts, rewrites and coins


There is no causal connection between the imagined world of what if? and chemlambda, and the real world where real biochemists build the real vaccine.

It is troubling though… I may say I was right and that Internet of Smells is still in the (near, perhaps) future.

I noticed the resemblance in a previous post from last October: Pharma meets the Internet of Things (2020 update), but that post is less precise than this one as concerns the tracking of the sources.

Sometimes mathematicians are efficient armchair biochemists, or so they wish they were 🙂

The sources in the chemlambda for the people presentation are different; whatever pointed to the chemlambda collection of animations should now go to the saved collection on github. For example this animation, which has a javascript simulation available for you to try.

Fun with the non-commutative geometric series

The geometric series is

\sum_{n=0}^{\infty} \varepsilon^{n}  = \frac{1}{1-\varepsilon}

for any \varepsilon \in (0,1).

With the notation for dilations:

\delta^{x}_{\varepsilon} y = x + \varepsilon (-x+y)

we can rephrase the geometric series as an existence result. Namely that the equation:

\delta^{S}_{\varepsilon} 0 = x

has the solution

S = \sum_{n=0}^{\infty} \left( \delta^{0}_{\varepsilon}\right)^{n} x

for any \varepsilon \in (0,1).
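Before going non-commutative, here is a quick numeric check (a snippet of my own, not from the article) of this reformulation, with the usual dilations of the real line: the geometric sum S indeed solves \delta^{S}_{\varepsilon} 0 = x.

```python
def dil(x, eps, y):
    """The dilation delta^x_eps y = x + eps(-x + y)."""
    return x + eps * (-x + y)

eps, x = 0.3, 2.0
# S = sum over n of (delta^0_eps)^n x = sum of eps^n * x, the geometric series
S = sum(eps**n * x for n in range(200))

# delta^S_eps 0 = S + eps(-S) = (1 - eps) S, which should equal x
print(dil(S, eps, 0.0), x)
```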

The non-commutative version of this fun result is given in proposition 8.4 from (journal) (pdf) (arxiv) – Infinitesimal affine geometry of metric spaces endowed with a dilatation structure, Houston Journal of Mathematics, 36, 1 (2010), 91-136

In that article a non-commutative affine geometry is proposed, where usual affine spaces (over a vector space) are replaced with their non-commutative versions over a conical group. The work uses dilation structures, which are metric versions of emergent algebras.

I shall explain in a few words what the result is in the new frame. We have, instead of the usual dilation, a more general one, from which all the other algebraic structure is deduced, sometimes by a passage to the limit.

One deduced, or “emergent” we say, mathematical object is a non-commutative replacement of the usual addition operation in vector spaces. This is a non-commutative group operation.

Instead of multiplication by scalars we have the application of dilations. Moreover, we have distributivity of dilations over the addition operation. This is an emergent property (i.e. we can prove it by passage to the limit) from a more basic one, namely that dilations are left-distributive (denoted LIN, see the notes A problem concerning emergent algebras).

In a few words, these are like vector spaces, but with a non-commutative addition of vectors. Therefore the geometric series, as seen in the reformulation with dilations, becomes a non-commutative series, which converges just as the commutative version does.

But why?

In the mentioned article the proof uses the existence of a metric on the (non-commutative affine) space. We can do a simple proof without it.

Of course, the condition \varepsilon \in (0,1) will be reformulated as \varepsilon^{n} \rightarrow 0 as n \rightarrow \infty.

Pick a base point e, which will play the role of 0. Then we want to solve the equation in S

\delta^{S}_{\varepsilon} e = x

for a given, arbitrary x and for an \varepsilon with the property that \varepsilon^{n} \rightarrow 0 as n \rightarrow \infty.

We want to prove that

(*) S = \sum_{n=0}^{\infty} \left( \delta^{e}_{\varepsilon}\right)^{n} x

where the sum is with respect to the non-commutative addition operation based at e. More precisely this operation is “emergent”:

\Sigma^{e} (v, w)  = \lim_{\varepsilon \rightarrow 0} \Sigma^{e}_{\varepsilon}(v,w)

where the approximate sum is

\Sigma^{e}_{\varepsilon}(v,w) = \delta^{e}_{\varepsilon^{-1}} \delta^{\delta^{e}_{\varepsilon} v}_{\varepsilon} w

Recall that the dilations satisfy the LIN property:

(LIN) \delta^{e}_{\varepsilon} \delta^{x}_{\mu} y = \delta^{\delta^{e}_{\varepsilon} x}_{\mu} \delta^{e}_{\varepsilon} y

The conclusion (*) can be reformulated as: define S_{0} = x and

S_{n+1} = \Sigma^{e}(x, \delta^{e}_{\varepsilon} S_{n})

Then S =\lim_{n \rightarrow \infty} S_{n}.

But this is simple, due to the identities coming from LIN, which I invite you to prove 🙂

The first identity uses the fact that, once we have defined the addition from dilations and a passage to the limit, we can prove that dilations themselves express via addition. This gives the first identity:

\Sigma^{e}_{\varepsilon}(v,w) = \Sigma^{e}(\delta^{v}_{\varepsilon} e, w)

The second identity is easier: just use LIN, there is no passage to the limit involved.

\Sigma^{e}_{\varepsilon}(v, \delta^{e}_{\varepsilon} w)  = \delta^{v}_{\varepsilon} w

With these identities, the recurrence relation of the non-commutative geometric series becomes:

S_{n+1} = \Sigma^{e}(x, \delta^{e}_{\varepsilon} S_{n}) = \Sigma^{e}_{\varepsilon}(S, \delta^{e}_{\varepsilon} S_{n}) = \delta^{S}_{\varepsilon} S_{n}

so that

S_{n} = \left( \delta^{S}_{\varepsilon} \right)^{n} S_{0}

and the proof ends by recalling that \varepsilon^{n} \rightarrow 0 as n \rightarrow \infty, which implies

\lim_{n \rightarrow \infty} S_{n} = \lim_{n \rightarrow \infty} \delta^{S}_{\varepsilon^{n}} S_{0} = \lim_{\varepsilon \rightarrow 0} \delta^{S}_{\varepsilon} S_{0} = S

So, secretly (or infinitely recursive-ly) the limit candidate S attracts the finite terms S_{n} of the geometric series.
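The recurrence can be watched at work in the simplest, commutative model (a sketch of my own; the functions below are the usual vector-space instances of the dilation and of the emergent addition). With base point e = 0 we get \Sigma^{0}(v,w) = v + w and \delta^{0}_{\varepsilon} S = \varepsilon S, so the recurrence is the classic fixed-point iteration S_{n+1} = x + \varepsilon S_{n}.

```python
def dil(base, eps, y):
    """delta^base_eps y = base + eps(-base + y)."""
    return base + eps * (-base + y)

def sigma(base, v, w):
    """The emergent addition Sigma^e(v, w); in the commutative
    model it is exactly v + w - e."""
    return v + w - base

x, eps, e = 2.0, 0.3, 0.0
S = x                                 # S_0 = x
for _ in range(100):
    S = sigma(e, x, dil(e, eps, S))   # S_{n+1} = Sigma^e(x, delta^e_eps S_n)

# the iteration converges to the classical geometric sum x/(1 - eps)
print(S, x / (1 - eps))
```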

KamiOS, a more AI version of MicrobiomeOS?

Keiichi Matsuda posted on Twitter a very interesting article about KamiOS. I quote from the article:

“Today, the big tech companies are represented on this earth by their supposedly all-powerful prophets (Siri, Alexa, Cortana, Google Assistant). Joining one of these almighty ecosystems requires sacrifice, and blind faith. You must agree to the terms and conditions, the arcane privacy policy. You submit your most intimate searches, friendships, memories.

From then you must pray that your god is a benevolent one. The big tech companies are monotheistic belief systems, each competing to be the One True God.

KamiOS is different. It is based in pagan animism, where there are many thousands of gods, who live in everything . You will form tight and productive relationships with some. But if a malevolent spirit living in your refrigerator proves untrustworthy or inefficient, you can banish it and replace it with another. Some gods serve corporate interests, some can be purchased from developers, others you might train yourself. Over time, you will choose a trusted circle of gods, who you may also allow to communicate with and manage one another.”

I like this proposal a lot, especially because in the tweet it is presented as a form of spatial computing.

It also reminds me of the older MicrobiomeOS proposal. There is also this post at chorasimilarity, with the same name. Quote:

“The programs making the operating system of your computer are made up of around ten million code lines, but your future computing device may harbour a hundred million artificial life molecular beings. For every code line in your ancient windows OS, there will be 100 virtual bacterial ones. This is your unique MicrobiomeOS and it has a huge impact on your social life and even your ability to interact with the Internet of Things. The way you use your computer, in turn, affect them. Everything, from the places we visit to the way we use the Internet for our decentralized computations influences the species of bacteria that take up residence in our individual microbiome OS.”

Mind that my project moved to a new official page.

So, where I propose asemantic molecular computing, Keiichi wants an artificial animist kind of computing. A lesser, but more human form of AI.

Logarithm: the last piece

In mol notation, logarithm is a (graph) term which turns the pattern

FO e d b, A d a c

into the pattern

FO a d b, inv d e c

with inv emergent. Depending on the family of graphs considered, this can be embodied in lambda style (turning Church numbers into emergent numbers), or as a rewrite in itself, or even as a heuristic for the correct translation from one formalism to another.
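Purely as an illustration, here is a toy matcher for this pattern in Python; the tuple encoding of mol lines and the matching strategy are my own assumptions, not the chemlambda implementation:

```python
# Toy sketch of the logarithm rewrite on mol-notation graphs:
#   FO e d b, A d a c   -->   FO a d b, inv d e c
# A mol graph is encoded here as a list of (node, port1, port2, port3) tuples.

def log_rewrite(mol):
    """Apply the logarithm rewrite to the first matching pattern, if any."""
    for i, (n1, e, d, b) in enumerate(mol):
        if n1 != "FO":
            continue
        for j, (n2, d2, a, c) in enumerate(mol):
            if n2 == "A" and d2 == d and i != j:
                out = list(mol)
                out[i] = ("FO", a, d, b)   # FO a d b
                out[j] = ("inv", d, e, c)  # inv d e c, with inv emergent
                return out
    return mol  # no match: graph unchanged

before = [("FO", "e", "d", "b"), ("A", "d", "a", "c")]
after = log_rewrite(before)
print(after)  # [('FO', 'a', 'd', 'b'), ('inv', 'd', 'e', 'c')]
```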

This is the last piece, which together with dirIC, em-convex, pure see and zzh, gives us now all that is needed for a new, non-commutative, linear logic 🙂 Happy 2021!

Dear Santa, 2021 wishlist: search engine for patterns, asemantic computing, Open Science system driven by researchers’ needs

First of all, good health for all my family and dear friends! Then, in any possible order, I wish that 2021 brings:

  • a search engine for patterns in real chemical reactions. In order to invent molecular computers, or probably more likely, to discover them in real biochemistry, I need to be able to search in databases of chemical reactions (not only in databases of chemical molecules). There are only a handful of very general patterns which appear everywhere in mathematics, which would allow for universal computation with the most simple and real algorithm: random and local. I know that the problem is not trivial, but it is totally possible and it is mostly a problem of scale.
  • a viable proposal for asemantic decentralized computing. There are many such semantic proposals, but they are all doomed to fail. I wish for an asemantic proposal because we have to get rid of centralized computing as soon as possible, hopefully by 2024. If you can’t find such a proposal then maybe a little replacement, a token system, would do for the moment.
  • an Open Science movement driven primarily by researchers’ needs. I think that we, researchers, were left behind because we were too polite. Everybody else has all sorts of needs: publishers have to make a profit, managers have to escape responsibility for their decisions, librarians are afraid for their future. I hope that in 2021 researchers will decide what is most important for them first, because the actual situation is not OK.

Again, about Bizarre wiki page on ISI (and comments about DORA and the Cost of Knowledge)

There is this post from 2013, where I repeat my mantra regarding Open Access: that the stumbling block of Open Access is in the academic realm, more precisely in the management branch: Bizarre wiki page on ISI (and comments about DORA and the Cost of Knowledge).

More than 7 years have passed since then, so I revisited the bizarre page on ISI, where at the time there was this strange claim, which I cited in the previous post (I boldfaced some significant words):

“The database not only provides an objective measure of the academic impact of the papers indexed in it, but also increases their impact by making them more visible and providing them with a quality label. There is some evidence suggesting that appearing in this database can double the number of citations received by a given paper.”

This is BS, right?

I identified now the two edits [1] [2], from 2015, which changed the text into:

“The database provides some measure of the academic impact of the papers indexed in it, and increases their impact by making them more visible and providing them with a quality label. Some anecdotal evidence suggests that appearing in this database can double the number of citations received by a given paper.”

Now this is more likely to be true. In the present, the ISI service has morphed into a more modern one, which is used worldwide. Many researchers are still judged by this service, which provides some measure of the academic impact. Despite DORA, which I invite you to sign!

The anti-chemlambda tag

As a light December posting, I introduce you to the anti-chemlambda tag. These are some public events which are slightly weird, directed against the chemlambda project.

There are far more strange events which are not public; moreover, I believe that the disclosure of this series of B-movie happenings would be used against the project 🙂

Anyway, I just browsed a part of the list of posts and added the “anti-chemlambda” tag to some of them which fit the subject. Enjoy!

2016 Elbakyan’s Open Access talk: Robin Hood and Fair Trade

UPDATE: Alexandra Elbakyan’s recent 2020 talk is a more interesting read than her 2016 talk. Maybe I shall comment on it later.


I took the time to read (via google translate) the text of a talk given by Alexandra Elbakyan on Open Science and Open Access in 2016. The talk (in Russian) is available from Alexandra’s page. (Here is the archived version of the page, just in case.)

The text left me with the impression that in 2016 Elbakyan was not very aware of the importance of Sci-hub, or of why her (and her team’s) solution is so radical, far beyond others.

A short version of her talk would be (with my excuses for any misrepresentation): in the past knowledge was regarded as necessarily secret, until Bacon and later Oldenburg times. From then on the scientific journals system made knowledge available to anyone, but at some point, for capitalistic reasons, an elite confiscated the knowledge again by hiding it behind publication paywalls. BOAI comes to the rescue and the rest we know.

To me this is like watching Robin Hood pretending that he tries to support fair trade. I understand now why Elbakyan seemed surprised in the past that exactly the proponents of OA were among her critics.


Scientific journals were great, at the time, but they are no longer sufficient. The scientific method is not peer-review, but independent validation. The invention of scientific journals was better than private correspondence among scholars. Articles and peer-reviews are no longer sufficient, as Open Science shows.

A special moment in the OA movement was the start of arXiv. Later, BOAI proponents misrepresented this great step forward as “green” OA, which is archival, not publication, like “gold” OA is.

Just as in fair trade, weird lapses in the BOAI style OA definitions allowed publishers to charge researchers large sums of money for their own work.

This is not just another example of capitalistic perversion of the ideal communist research system.

Sci-hub, Alexandra’s creation, shows by example that the whole two decades of the BOAI false OA fight were completely unnecessary.

It is of course not the last word on OA, for several reasons: it is illegal in some ways in some countries, it is illegal because it infringes copyright, and it is useful for reading paywalled articles, not any article.

But it is blindingly obvious that technically Sci-hub is a 2-clicks solution to OA, something no other system achieved, and that it levels the field for legacy journals and gold OA journals, that new capitalistic perversion.

So we have two elegant technical solutions: arXiv as input, Sci-hub as output. Let’s put them together, legally, can’t we?

Plan U comes to mind, but it is not enough, because it relies on the same old, unnecessary format of articles and system of peer review.

We should jump directly to Open Science, based on independent validation means, which is public and has fees for bit hosting no greater than for any other bit.

Nature $10K article processing charges: send the bill to the Gold OA proponents

Bjoern Brembs asked for political endorsement [archived] some years ago. I was among the people who gave him such support, as a scientist. I would like to retire this endorsement, because in my opinion the years which followed did not bring anything beneficial for OA.

Bjoern has two recent posts: Are Nature’s APCs ‘outrageous’ or ‘very attractive’? and High APCs are a feature, not a bug, where he describes and reacts to the latest surprise coming from the gilded Gold Open Access realm. Please read these comical posts.

Fact: Nature demands approx. $10K as article processing charges (APC) per unit of publication.

Reaction: proponents of green and gold OA are now surprised, or they are not really surprised, even if they publicly supported the flawed BOAI definition of OA.

Of course this is no surprise! Authors, send your bills to the proponents of the Gold and Green OA movement.

The proponents of the OA system in the form of green (archival) and gold (publication) Open Access were predictably wrong. Over the years, their actions resulted in advantages for the publishers. You can see that by looking at the outcome of their fight. Isn’t it surprising? Not at all!

Moreover, even if, predictably, the ridicule of their proposals will fade into oblivion, these proposals became part of state policies.

But this is not over.

Notice that there are already similar preparations for Open Science which look to turn the hosting of the scientific bit into a big affair. Again, you will find about the same people who gave us this sad perversion of OA. You can find definitions and policy proposals in OS, with strange lapses concerning the application of DRM, prices for bit hosting, and so on.

So next time send the hosting bills to these propagandists.

More details. I explained several times that the separation of OA into green (archival) and gold (publication) is a move against open access. A whole generation of researchers was betrayed by a coalition of publishers, librarians and academic managers.

This is not unseen. At the end of the 19th century, academic art went through a revolution. As in today’s academic science world, it was not important what the artist [researcher] created, but where it was published. The management of art created a monster: l’art pompier.

Presently, researchers are turned into content creators for publishers. The academic management selects the researchers by using numbers designed for the evaluation of journals.

The metadata (like the name of the journal) is more important than data (content of article).

Research is not this! Research is discovery.

Why should we be surprised, when a publication in Nature helps a lot with the researcher’s evaluation, which affects the distribution of research funds? Nature could ask $100K, could ask any price that you, but mostly your manager, agree to pay.

Why do you force researchers to pay them? This is the real question.

Combinators: A 100-year Celebration

I am watching right now Wolfram’s Combinators: a 100-year celebration livestream. I did not receive the zoom link to participate… Stephen invited me some days ago. See this previous post, which is related.

As I watch the very interesting presentation, there are many things to say (some to disagree with), some said in private conversations, some here.

There are several points where I disagree with Stephen’s image, but the main one seems to be the following (or maybe I don’t understand him well). He seems to say that the universe is a huge graph, which is the fabric of the objective reality, which evolves via graph rewriting. So the graph exists, but it is huge. There exists a state of the universe which is the huge graph.

I believe that Nature consists of similar asynchronous graph rewrite automata. There is no global state of the universe. The rules are the same, though, and a good solution emerges at different scales. For example the same explanation of what space [programs] is [are] appears again in chemistry, where we encounter it in the way life works.

Big claims, but here is an example.

Stephen says that, for example, the universe is a huge combinator. I just say that anything which happens in the universe can be expressed with combinators.

Then, there are several technical places where I disagree.

A term rewrite system with a reduction strategy is a very different thing than a graph rewrite system with a reduction strategy. There is no bijection between them, in the sense that one can’t translate back and forth between the term rewrite and the graph rewrite sides by using only local rules and algorithms.

Another place where I disagree is the importance of confluence. When a rewrite system is confluent, if a term (or graph) can be reduced to a state where no further reduction is possible, then that state is unique.

Confluence can be encountered in lambda calculus (as an example of a term rewrite system) and in interaction combinators (as an example of a graph rewrite system).

Confluence does not say anything about what happens when there is no final state of a term, or graph.

Even in confluent systems, like interaction combinators, one can find examples of graphs which don’t have a final state, and moreover, may have vastly different random evolutions. This is like a nature vs nurture phenomenon, where the same graph (nature) may evolve (nurture) to two different states, say one of them having a periodic evolution and another one having a different periodic evolution.
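Confluence in the terminating case can be illustrated with a toy of my own construction (not from Wolfram's talk): the string rewrite rule "10" -> "01" is terminating and confluent, so any reduction strategy ends at the same normal form, the sorted string.

```python
def normal_form(s, pick):
    """Apply the rule "10" -> "01" until no redex remains;
    `pick` chooses among the available redex positions."""
    while True:
        redexes = [i for i in range(len(s) - 1) if s[i:i+2] == "10"]
        if not redexes:
            return s
        i = pick(redexes)
        s = s[:i] + "01" + s[i+2:]

word = "110100"
leftmost = normal_form(word, min)    # always reduce the leftmost redex
rightmost = normal_form(word, max)   # always reduce the rightmost redex
print(leftmost, rightmost)           # both strategies reach "000111"
```

The point of the paragraph above is precisely what this toy cannot show: confluence says nothing about graphs with no final state at all.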

As concerns the large scale of the universe-graph, which turns out to be related to general relativity, recall that apparently innocuous assumptions (for example what a discrete laplacian is, or what a connection is) beg the question: they contain the conclusion inside, namely that at large scale we get a certain behaviour, or equation. One reason I know this is from sub-riemannian geometry, which offers plenty of examples of this phenomenon.

This is not the place to discuss this. Go to the chemlambda page for more.

If you are interested in what I do right now: I try to understand a problem formulated in these notes, as well as in this post here at chorasimilarity. Even more details in the wonderful group of emergent numbers post.

Back now to the main subject of this post. I shall not even enter in the semantics aspects. As is, any semantics based project will not deliver, just as it happens with all classical ones, like category theory or types hype. Time will tell.

Time tells it already. China and India are very interesting places.

The wonderful group of emergent numbers: em-convex and COLIN

As I examine the proof announced in the post There is no pure commutative structure except the trivial one, I realize that the numbers introduced and studied in em-convex fit in the same frame. Namely, COLIN is equivalent with the (convex) axiom.

To better understand this statement, read the paper

(arxiv) – The em-convex rewrite system,

and these notes for a formulation of COLIN, as well as this post here at chorasimilarity.

Let me give an analogy with lambda calculus, to understand why I am puzzled. In lambda calculus natural numbers appear from nowhere, in the form of Church numbers. Likewise, in emergent algebras, we can define naturals (as in the em-convex paper) in a similar way. The formal parameters of emergent algebras disappear by passage to the limit (or better said, by using the (em) axiom and construction) and so we have emergent naturals. Finally, the (convex) axiom has the same consequences as COLIN in the announced proof. Moreover, by examination of the proof of the COLIN statement (that there are no nontrivial emergent algebras which satisfy COLIN other than the commutative ones), the same object appears as in em-convex.
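For readers less familiar with the lambda calculus side of the analogy, here is how Church numbers "appear from nowhere": they are plain lambda terms, nothing more. A small self-contained illustration in Python (my own snippet, using Python lambdas for lambda terms):

```python
# Church numerals: the number n is the term that applies f to x, n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))  # 3

# Addition and multiplication are also plain terms:
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: m(n(f))
print(to_int(add(three)(three)), to_int(mul(three)(three)))  # 6 9
```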

This object is algebraically a group generated by dilations and the difference operation introduced in em-convex, or graphically the collection of trees with nodes colored with formal parameters from a commutative group and with leaves colored with two colors (I use “e” and “x” in the em-convex paper). There is still more to be said, namely that this object is formally equivalent with (a countable version of) a 1-dimensional vector space. If we allow n+1 colors “e”, “x1”, … , “xn”, instead of the two “e” and “x” then we have (a countable model of) a n-dimensional (non-commutative in general) vector space. If we suppose moreover COLIN, or equivalently (convex), then we get usual vector spaces. If we suppose instead LIN, then we get usual linear (but non-commutative) vector spaces, which appear in the algebraic form of Carnot groups, encountered as (models of) metric tangent spaces in sub-riemannian geometry.

The puzzle is this bootstrap, which uses a formal commutative group to decorate the nodes (or equivalently the R2 rewrite), then a passage to the limit (or the (em) axiom) to get rid of the formal parameters, to finally obtain computable numbers. Once we get them, we see that the group of those could be used as decoration of the nodes (as in the (convex) axiom) and we are back to the beginning. Bootstrap puzzle.

To use combinators for chemical computation

This is a reaction to Wolfram Physics Project: Working Session Tuesday, Dec. 1, 2020 [Combinators], a very interesting session!

I was notified by several people, among them Richard Assar [mail_me_if_you_agree_to_put_your_name_here], that Stephen Wolfram mentions graphic lambda calculus and chemlambda, and indeed this happens when the discussion turns towards chemistry. The time tag for this turn is


I find it fascinating that Stephen says the “to use combinators for chemical computation…” line, because this is part of my dreams too, with chemlambda.

The glc paper is OK, but the chemlambda pages are not the latest versions.

I added a comment which is hidden by youtube, here. You have to click on “sort by”, then “newest first” to see it, perhaps 🙂 well done G!

There is a lot of more updated and recent material, which I’ll list here:

As Wolfram Physics is very interesting, I am glad of this mention. I wish I could talk more; I did this in private and there are many points of interaction, and of difference, between his projects and mine.

UPDATE Dec 15 2020: Stephen Wolfram has the post Combinators: A Centennial View, which is associated with his video presentation. I tried to play with some of his examples in chemlambda or chemSKI. Eventually that triggered me to write a comment which I save here too. Just scratching the surface, but it might be interesting for the readers of this post.

“Very interesting and detailed post. Two remarks:
– if you allow the I combinator as well, then (S I I) (S I I), with length 6, is a combinator which does not terminate, without growing too much. Of course it corresponds to the omega combinator. In chemSKI (a chemlambda style graph rewriting for the SKI calculus) it looks like this (down the page)

Without the I combinator, one may use (S (S K K) (S K K)) (S (S K K) (S K K)), which also, modulo some accumulating debris, has a periodic evolution. You can try it as input in the λ or SKI -> mol window.

More generally, one can define a (graph) quine (relative to a given graph rewriting system) as a graph with a periodic evolution in [your choice of reduction algorithm]. My choice is the algorithm which always selects the rewrite which grows the size of the graph, whenever there is a choice to be made. We may think about such a graph as having a metabolism in the most favorable conditions. It does not die (terminate), nor does it grow indefinitely. Then such a quine graph is interesting to look at during a random reduction algorithm, where it may display lots of different alife behaviours.

Besides the combinator quines mentioned here, I tried to find such quines in interaction combinators and in chemlambda. Indeed, in Lafont’s paper such a quine is given, and one can find many other examples here

where one can search among randomly generated graphs, in chemlambda or IC.

– concerning the choice of reduction algorithms, one can parameterize the space of algorithms where at each step a rewrite is picked at random with a given probability (if there is a choice, otherwise proceed with the rewrite at hand). For example, a 1-dim continuous variation of such algorithms would be to group the rewrites according to whether they grow or slim the graph and attribute equal probability to the rewrites in the same family.

I tried one of your examples in chemSKI, namely S S S (S S) S S K, and indeed, with the slider of rewrites moved to SLIM, this combinator terminates. It is not clear to me if there is a correspondence between your classification of rewrite algorithms and what I do here.”
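The periodic behaviour of (S I I) (S I I) mentioned in the comment can be checked with a minimal SKI reducer (a sketch; the tuple encoding is my own, and the strategy matters: here subterms are reduced first, since a leftmost-outermost strategy makes this term grow instead of cycling):

```python
# Minimal SKI reducer. Atoms are one-element tuples, applications are
# pairs. Reduction order: subterms first, then the head redex.
S, K, I = ('S',), ('K',), ('I',)

def step(t):
    """One reduction step; returns (term, reduced?)."""
    if len(t) != 2:
        return t, False                          # atom: nothing to do
    f, x = t
    f2, r = step(f)
    if r:
        return (f2, x), True
    x2, r = step(x)
    if r:
        return (f, x2), True
    if f == I:                                   # I x -> x
        return x, True
    if len(f) == 2 and f[0] == K:                # K a b -> a
        return f[1], True
    if len(f) == 2 and len(f[0]) == 2 and f[0][0] == S:
        a, b = f[0][1], f[1]                     # S a b x -> (a x)(b x)
        return ((a, x), (b, x)), True
    return t, False

sii = ((S, I), I)
omega = (sii, sii)                               # (S I I)(S I I)
t = omega
for n in range(1, 10):
    t, _ = step(t)
    if t == omega:
        break                                    # returned to itself
```

With this strategy the term comes back to itself after a few steps: one S step produces (I (S I I)) (I (S I I)), and two I steps restore the original, so the evolution is periodic without growth.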

Researcher Behavior Access Controls at a Library Proxy Server are Not Okay

After the reaction to a presentation of an October Scholarly Networks Security Initiative (SNSI) webinar, I learned that Peter Murray tells us that User Behavior Access Controls at a Library Proxy Server are Okay.

They are not OK.

I expressed my opinion about this last invention in my reaction Academics band together with publishers because access to research is a cybercrime. There I argued that academic managers are to be blamed even more than publishers, because they support the publishers to the detriment of the researchers. But this is not all.

Here I want to add that there is a trend, started at least with the inventions of green and gold OA, which were so beneficial to the publishers. Decades of advances were lost because the fight ignored researchers’ needs.

What do they need? Something arXiv-like with a Sci-hub-like interface.

What did researchers get? They have to pay $2000 to publish an article they wrote [added: or even $10000 for an article in Nature]. And moreover, they are suspected of cybercrime. But be calm, because user behaviour access controls at a library proxy server are okay.

The trend is to make deals over the heads of researchers.

Publishers with managers, publishers with librarians, IT departments with librarians, and so on. If you look at how BOAI, the initiator of the gold (into the pockets of publishers) open access style, started, it was with librarians and publishers.

We, researchers, understood that librarians were scared by publishers into believing that their important role would decay. We understood that managers want to apply to us criteria which were designed for journals.

But it is time to understand that researchers have to be at the core of any deal, because without researchers there is no need for librarians, university IT administrators, managers or scientific publishers.

To make deals over researchers’ heads is not Okay.

To be clear: please at least return the respect you received from the researchers. Please stop treating researchers as users who have to be herded to serve the publishers’ needs. This is not your job.

There is no pure commutative structure except the trivial one

I have a sketch of a proof that there is no emergent algebra other than the trivial commutative one among those which satisfy the COLIN relation (a sort of right-self-distributivity); see these notes for a formulation of the problem and this post here at chorasimilarity.

Recall that the LIN relation implies that we have a structure of conical group, which is a kind of non-commutative vector space. The more powerful relation SHUFFLE gives a structure of commutative vector space. The same result can be achieved by using instead the pair of LIN and COLIN relations, which are therefore together equivalent to SHUFFLE.

Geometrically, deviation from LIN is a curvature and deviation from COLIN is the bracket in the non-commutative tangent space. So they look like two independent phenomena.

It is then natural to ask if there are pure commutative structures, i.e. if there are COLIN but not LIN structures.

The answer seems to be “no”. The proof seems to indicate that the reason is that COLIN induces more constraints than LIN. Indeed, even if they are symmetric, in the context where we also have R2 for emergent algebras, COLIN implies a symmetric version of R2, and this is what is needed to conclude the proof. Looking in the mirror, by the same proof LIN brings R2, which is already available, and that is why COLIN is stronger than LIN.

I still might be wrong somewhere in the proof, maybe there still are some exotic pure commutative examples.

Freedom of Speech and Section 230

Section 230 of the Communication Decency Act, 47 U.S. Code § 230, is “one of the most valuable tools for protecting freedom of expression and innovation on the Internet”, as the Electronic Frontier Foundation writes.

The text of Section 230 which is of interest is the following:

“(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;


(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)”

The Electronic Frontier Foundation text explains that:

“The Internet community as a whole objected strongly to the Communications Decency Act, and with EFF’s help, the anti-free speech provisions were struck down by the Supreme Court. But thankfully, CDA 230 remains and in the years since has far outshone the rest of the law.”


“This legal and policy framework has allowed for YouTube and Vimeo users to upload their own videos, Amazon and Yelp to offer countless user reviews, craigslist to host classified ads, and Facebook and Twitter to offer social networking to hundreds of millions of Internet users. Given the sheer size of user-generated websites […] Rather than face potential liability for their users’ actions, most would likely not host any user content at all or would need to protect themselves by being actively engaged in censoring what we say, what we see, and what we do online.”

The Universal Declaration of Human Rights, adopted in 1948, formulates in the Article 19 the freedom of opinion and expression:

“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

These rights (condensed name: “free speech”) appear in many countries’ constitutions.

In the US there is the First Amendment to the United States Constitution:

“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

The right of free speech is, in the US, protected against the government. Article 19 of the Universal Declaration of Human Rights has a more powerful form, not being restricted to protection against governments.

But governments are no longer the most powerful force, the one against which we need protection for our free speech.

Corporations are! More specifically, any private entity which was formed and has grown on the promise of a better public forum.

Section 230, as it is, does not protect free speech as defined in Article 19. It does not protect free speech more than the First Amendment, which protects only against the government. It only protects these rights in an indirect way, namely that without Section 230 corporations “would likely not host any user content at all or would need to protect themselves by being actively engaged in censoring what we say, what we see, and what we do online”, as EFF explains. So, even more, when corporations are actually actively engaged in censoring, with the best intentions even, Section 230 does not protect free speech.

What can be done? The simple thing to add to the wonderful and very useful Section 230 would be a form of article 19 or an extension of the First Amendment text, with the effect that:

Any provider of user created content is under the same constraints concerning free speech as the state is by the First Amendment.

In this way, Section 230 would turn from an unfair advantage for monopolies which control our world view into an explicit protector of freedom of expression and innovation on the Internet.

Plan U, almost 30 years too late, what we need instead

Plan U [site] [article] says that

“Arguably the most effective mechanism for providing free, immediate access to research has been the non-profit preprint server arXiv.”

“If all research funders required their grantees to post their manuscripts first on preprint servers — an approach we refer to as “Plan U” — the widespread desire to provide immediate free access to the world’s scientific output would be achieved with minimal effort and expense.”

Looks like a great idea, only that it is too late. By almost 30 years. A generation of researchers’ careers was lost by this delay. How was this possible? After all this fight for open access?

The fight was fake. BOAI made arXiv “green OA”, i.e. an archiving place. What damage that did.

OK, it seems that now, with Plan U, we finally arrive where we should be. The most powerful point is, in my opinion, that

“because it sidesteps the complexities and uncertainties of attempting to manipulate the economics of a $10B/year industry, Plan U could literally be mandated by funders tomorrow with minimal expense, achieving immediate free access to research and the significant benefits to the academic community and public this entails.”

What could be achieved with an extra $10B/year for research? Right now this is stolen from research, and the academic managers help the publishers achieve that goal. Why?

The problem with Plan U is that in practice it is already achieved. Most new research is available online, free and fast. Most published research is available via Sci-Hub. I don’t know how Sci-Hub does this; the publishers claim that this is cybercrime, but the outcome, i.e. the easy availability of all published research, can’t be in itself a crime.

So what would be a Plan which is realistic and not another generational loss?

If you don’t like Sci-Hub, make one. It is technically possible. If academics don’t know how to do it then they should ask the professionals.

It should be redundant, with scientific results accessible as simply as possible: by DOI or other identifier. It should be resilient in time. It should be decentralized (big problem), because one cannot trust big companies, like Google, which could do it fast and well, to treat scientific data respectfully.

Couple it with Open Science and make sure that we don’t end up with another BOAI-like fake solution: be sure that a scientific bit is just as expensive to host as any other bit.

All this is possible and fast to achieve, only that the ones who have decision power do not want it. But suppose they find time for this.

There is a problem with decentralization. Besides redundancy, there is another reason for it. Just imagine that tomorrow academic managers discover the internet and declare that it counts as publication (or some variant of Plan U). Then every researcher in the world who hunts for promotion points will send a deluge of low quality, minimal-unit-of-research publications there. In a year it will decay.

Instead, there should be many instances of it (coupled with Open Science), which interact with one another by scientific arguments.

Well, dreams. Decentralization is frowned upon. We have an algo-human single point of truth now.

The single point of truth is a necessary thing for a centralized mindframe. Probably this is what awaits us, researchers, in the next 30 years.

Unless good hackers give us fast, usable, no nonsense technical solutions.

Wanted: Left quasigroups which are right distributive but not left distributive

Excuse me, I am left-handed, which makes me prone to confusions about what is left- and what is right-. Here is the link to the question on mathoverflow.

UPDATE: I reformulated the question and put a description of the problem in these notes. See there how left-distributivity is related to curvature and right-distributivity is related to commutators.

UPDATE 2: It seems that these objects are exceedingly rare in the literature. I can’t locate any such example; there are hints that they exist, but I am not sure if I am looking at one, perhaps because I am not familiar with the language of the field of quasigroups.

But: I can’t locate any document where “left-quasigroup” and “right-distributive” point to the same object.

Moreover I don’t know of any infinite example in the literature.


I am looking for as many examples as possible, with a preference for infinite examples, of idempotent left quasigroups which have this peculiar algebraic structure:

a set X with an operation denoted by a dot “.”, such that

(idempotent) x . x = x

(left quasigroup) the equation a . x = y has a unique solution denoted by x = a * y

(right distributive) (x ? y) ! z = (x ! z) ? (y ! z) for any choice of the operations ? and ! among the operations . and *

but not

(left distributive) x ? (y ! z) = (x ? y) ! (x ? z) for any choice of the operations ? and ! among the operations . and *

Remark that idempotent left quasigroups which are left distributive are quandles. What are idempotent left quasigroups which are right distributive but not quandles?
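For a concrete feel of these axioms, here is a brute-force check (my own illustration, not an answer to the question) of the affine operation x . y = -x + 2y over Z_5, with left inverse x * y = 3x + 3y. Being entropic (medial), it satisfies both distributive laws, so it is a quandle and precisely not the wanted object; still, the sketch can serve as a harness for testing small finite candidates:

```python
# Brute-force check of the axioms over Z_5 for the affine operations
# x . y = -x + 2y and its left inverse x * y = 3x + 3y (mod 5).
n = 5
dot  = lambda x, y: (-x + 2 * y) % n
star = lambda x, y: (3 * x + 3 * y) % n
ops = {'.': dot, '*': star}
X = range(n)

assert all(dot(x, x) == x for x in X)                      # idempotent
assert all(dot(a, star(a, y)) == y for a in X for y in X)  # left quasigroup
assert all(star(a, dot(a, y)) == y for a in X for y in X)

def right_dist(q, e):   # (x q y) e z == (x e z) q (y e z)
    return all(e(q(x, y), z) == q(e(x, z), e(y, z))
               for x in X for y in X for z in X)

def left_dist(q, e):    # x q (y e z) == (x q y) e (x q z)
    return all(q(x, e(y, z)) == e(q(x, y), q(x, z))
               for x in X for y in X for z in X)

assert all(right_dist(q, e) for q in ops.values() for e in ops.values())
assert all(left_dist(q, e) for q in ops.values() for e in ops.values())
# both distributivities hold: affine (entropic) examples are quandles,
# so the wanted right- but not left-distributive examples lie elsewhere
```

Swapping `left_dist` for a failing check on a candidate operation table is how such a finite search would detect the wanted structure.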

Academics band together with publishers because access to research is a cybercrime

This is the world we live in. That is what I understand from reading about the Scholarly Networks Security Initiative and its now famous webinar, via Bjorn Brembs’ October post.

I just found this, after the post I wrote yesterday. I had no idea about this collaboration between publishers and academics to put spyware on academic networks for the benefit of publishers.

UPDATE: see also Researcher Behavior Access Controls at a Library Proxy Server are not Okay.

What I find worrying is not that publishers, like Elsevier, Springer Nature or Cambridge University Press, want to protect their business against the Sci-hub threat. This is natural behaviour from a commercial point of view. These businesses (not sure about CUP) see their activity attacked, so they fight back to keep their profit up.

The problem is with the academics. Why do they help the publishers? For whose benefit?

I wrote again and again in the past that it is not enough to criticize the publishers for their bad behaviour. Academic managers are to be blamed because they band together with the publishers. Why does nobody ask them why?

Take Elsevier as an example. I signed the Cost of Knowledge and I try as much as I can not to fold to the pressure of legacy publishers. But Elsevier does not have any direct way to force anybody to use their products.

At the end of the day, it is the academic managers who pressure the researchers in favor of the publishers; it is they who manage the libraries which pay big bucks from public funds to the publishers. Now we see that in universities it is proposed to introduce spyware in favor of the publishers. Whose fault is this?

Now we get to the gist: it is cybercrime. It is indeed, because research is valuable for states, which fund it and benefit from it. When a piece of US research, say, is made available by a Russian site, say Sci-hub, then the advantage of one state in that research direction is lost in favor of all the other states. So everybody can copy and produce derivative works.

Science should be free. Do you remember when disclosure of mathematical knowledge was considered cybercrime?

The gist is not really that the threat to the publishers’ business is now branded as cybercrime. The fact is that publishers and universities are naturally together as part of the state. Powerful states need powerful propaganda, and publishers together with universities are an important part of this. When they are in trouble it is only natural that they resort to another state pillar and together find a powerful name: cybercrime.

Every state does that. Does it mean that states are against science? It is also propaganda to say that one state is against science while another state supports generous efforts to make science free. It’s so complicated, but it’s all propaganda.

It is propaganda which harms every state! It is the most stupid way to proceed today, because today is very different from yesterday. If we make research free to access then we create an environment where better research appears, and faster. Scarcity is a very bad idea. We all know it.

So I don’t buy that publishers and academic managers banded together to fight cybercrime. I believe academics will produce better results without the artificial scarcity created by legacy publishers.

At best, some state bureaucrats proved they are ready to harm their states, by ignorance. At worst, academic managers have non-declared interests in keeping the legacy publishers alive.

Sci-hub provides an easy to use and necessary product to researchers around the world who want to do their research. This is its strength. Now that people have seen that it is technically possible, the rest is spin.

A site of the Sci-hub creator Alexandra Elbakyan

I found this site of Elbakyan, (not only) in my opinion the creator of one of the greatest libraries of science ever, Sci-hub.

Here is the site:

and if it ever stops, then here is the archived version.

I wrote several times about Sci-hub or about Elbakyan here, last time in this post. I found her new site by looking for a replacement of the links which are no longer valid in that post.

Sci-hub is a technical solution (and a solution driven by the contributions of many people) for a problem which should not exist. You may hate Sci-hub, you may think that in some places it is not legal, but the fact is that it is a very easy to use site, which is indeed used by a huge number of people. They are not using it because they are poor, rich or ideologically motivated. They are using the site because it works, responding to a problem they have. I suspect almost all of them are researchers or students, because of the content of the library available.

Ideologically, the existence of this solution shows that Open Access is at best a misleading fight. Indeed, who cares whether an article can be read for free from the publishers (some of them so cynical that they take their money from the article creators) or is hidden behind a paywall? The solution makes the article available anyway.

This solution makes obvious why a hacker is better than an army of nerds. With a negligible percentage of the funding, compared with Google Books, the hacker made a huge gift to researchers. At the same time, Google sits with its fat nerd ass over millions of books, not making them available to anybody, after spending a fortune to scan them.

Yes, there is the problem with copyright. What a shame it is when it comes to scientific works! Scientific knowledge occupies such a small number of bits, compared with the metadata exhaust which is collected and processed, but commercial interest leads to keeping it walled.

Open Science, in the pure sense of content which is scientific, i.e. which can be validated independently, is not touched by this solution. It does not have to be; Elbakyan did enough. I noticed several articles on her site about OS, written in Russian; I shall read (a rough English translation of) them.

It is though remarkable that despite all the very well paid Open Access fight, which birthed the greed monster called Gold OA, true OA was solved by one, or a few, individuals. Likewise, Open Science can be practiced by anybody, individually, without waiting for the whole society of researchers to reform. It is enough that you, researcher, put online as much as possible of your work, so that another researcher, if curious, could reproduce your findings (and contradict your interpretations/opinions).

Pure See expressivity and linear variables

Pure See [draft] in its present form tries to be both a graph rewrite system and a term rewrite system. I managed to split the semantic part from the syntactic part by the introduction of linear variables. (The draft will be updated soon.)

As a programming language, I aim for Pure See to be expressive for a human: at least, any construct should admit a reading which makes sense to a human.

Here are two examples related to lambda calculus. Recall that we have, in lambda calculus, two operations:

  • application, denoted by AB, defined for any two terms A and B
  • abstraction, denoted by λ x. A, defined for any variable x and term A

In Pure See we can write the definition of application as:

as A from B see AB;

or, equivalently:

apply A over B as AB;

Similarly, in Pure See the definition of abstraction is:

see A from x as λ x. A;

Therefore the term (λ x. A)B appears to be

as (see A from x) from B see (λ x. A)B;

which in lambda calculus reduces to A[x=B] and in Pure See reduces to

A as (λ x. A)B;

B as x;
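The lambda calculus side of the example, (λ x. A)B reducing to A[x=B], can be sketched with a toy substitution-based reducer (an illustration of the beta step only, not of Pure See itself; the tuple encoding is my own):

```python
# Toy lambda terms as tuples: ('var', name), ('lam', var, body),
# ('app', fun, arg). Sketch of the beta step (λx.A)B -> A[x=B];
# capture-avoiding renaming is omitted, so this is only safe when
# variable names are distinct, as in the example below.
def subst(term, name, value):
    tag = term[0]
    if tag == 'var':
        return value if term[1] == name else term
    if tag == 'lam':
        _, x, body = term
        return term if x == name else ('lam', x, subst(body, name, value))
    _, f, a = term
    return ('app', subst(f, name, value), subst(a, name, value))

def beta(term):
    """One beta step at the root, if the term is a redex (λx.A)B."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, x, body = term[1]
        return subst(body, x, term[2])
    return term

# (λx. x) B  ->  B, the substitution step the Pure See lines express
redex = ('app', ('lam', 'x', ('var', 'x')), ('var', 'B'))
```

Here `beta(redex)` returns `('var', 'B')`, mirroring the Pure See pair "A as (λ x. A)B; B as x;" for the particular term A = x.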

Chemlambda animated book

I decided it is time to make it. As the name says, it is animated, meaning that it will be full of animations and simulations. This is why it cannot be made into an e-book, not with present technology.

But we may try, right?

For those who followed chemlambda through the years, it will be a self-sufficient place to enjoy the animations, play with the simulations and dream on. Thus a part of it will be based on the 264 animations with comments, most of them with simulations, of the chemlambda collection.

I intend, though, to use the original animations, currently available only from my professional page. This is because the animations alone are 1GB, so I can’t put them on Github.

For those who desire to know the relation between chemlambda and a lambda calculus graphic reducer, there will be a reducer of lambda terms, which is also available here. Notice that you may reduce lambda terms not only with chemlambda, but also with dirIC, a version of directed interaction combinators.

The story of the ouroboros (which is proved to be NOT immortal) makes the transition from a limited but interesting lambda calculus perspective to chemlambda quines.

You will be able to play with quines and discover more of them with a combination of the quine lab and the page on how to test a quine.

There will be an introduction concerning the history of chemlambda, what it is and how it is related to other graph rewriting formalisms. I shall base it on the history reference page, but I shall rather concentrate on chemlambda and dirIC, perhaps also on Interaction Combinators.

For the algorithms used for graph rewriting, and why they are interesting as models of real chemistry, there will be a more extended version of section 6.5 “Local machines”, arXiv:2003.14332 [cs.AI], perhaps combined with the Molecular computers article, with pdf version on (figshare) or arXiv:1811.04960 [cs.ET].

The introduction will adapt the Chemlambda for the people presentation and I will consider if I add the two chemical sneakernet stories (internet of smells) (archived) and   (Home remodeling) (archived).

All in all, it will come to ~350 pages of text with animations (264) and simulations (~200).

Depending on where I share it: if it is put on Zenodo then I may add many hours of raw movies from my personal collection. The simulations used in the animations (i.e. the old ones, not the new, js only ones) are already archived in the Chemlambda collection of simulations (1GB) here:

Buliga, Marius (2017): The Chemlambda collection of simulations. figshare. Dataset.


As you see, this projected book will contain the part of chemlambda which appeared in relation with artificial life and molecular computers.

There will be nothing in the book about the unlikely source of this, namely the computing with space project. There will not be anything about pure see, or about medial emergent algebras, or about emergent graph rewrites. Not even about Zip Slip Smash. This part is ongoing research, very exciting to pursue, independently of the possible applications of the computing with space in chemistry.

Maybe I will mention chemski though, because it is in the same vein as chemlambda.

Where we are now with the computing with space project (nov 2020)

I extract this information from a previous post, to show where we are now with the computing with space project. Please read that post, because it contains more than what I extract and slightly edit here.

While I try to increase the readability, it becomes clear to me that even if most of the composing pieces are available online, it is rather hard for the non-specialist to compile an up to date unitary version. I shall write such a version soon, including the parts which are still only on paper.

So here is the global image, as it is available today.

All in all we have the following formalisms, in a decreasing order of generality:

  • the formalism of emergent algebras, or dilation structures, which can be turned into a graph rewriting formalism over decorated graphs (with decorated nodes and edges) and with graph rewrites which take the decorations into account. Various attempts at this graph rewriting formalism are: Braided spaces with dilations [1], section 4; Computing with space [2], sections 3-6; lambda-scale calculus [3]; the em-convex rewrite system [4]; with a satisfactory final version to appear.
  • less general is the formalism of linear emergent algebras, which can be used with (or is compatible with) a pure graph rewriting formalism over (non-planar) tangle diagrams, because (LIN) in this case is the rewrite (R3). It is not known whether this graph rewriting formalism is Turing complete, or more precisely whether there is some natural correspondence with, say, Interaction Combinators. The purpose of the Zip Slip Smash formalism is to provide this correspondence; there we add to the Reidemeister rewrites some rewiring rewrites in order to achieve this.
  • even less general is the formalism of medial emergent algebras, which turns out to be capable of serving as a semantics for dirIC, thus for the Interaction Combinators.

Therefore, the image is, in increasing order of generality:

multiplicative linear logic (SHUF) < graph rewriting on decorated knot diagrams (LIN) < differential calculus in emergent algebras

The interest would be to understand the implications of these inclusions, but first to internalize that linear logic is, as I said repeatedly, a commutative version of a more powerful formalism. This seems hard to believe, but it is indeed so. A price to pay for passing to a more powerful level is to renounce the false generality of cartesian etc. categories, because this is a generalization from a too particular example. From here we may look to provide versions of linear logic which are not commutative, but are still LINear, therefore at the level of (LIN), not only (SHUF). Another effort would be to bring differential calculus down from the most general level, provided by dilation structures, to the level of (LIN), not only (SHUF) (where it is the usual one, known to anybody), so that we can compare it on common ground with linear logic, in a natural, unforced, less naive way.

There are many other directions to explore, some of them in a more evolved state than others, but with patience we can do great things.

Election result

I was not right with my Election prediction (with proof). I hope that I was right with this other more important prediction 🙂

This post is just a mark in time, that at a certain moment I believed a certain thing.

EDIT 5 (Dec 12th): Let me be clear about why those elections were mentioned here: because they mark an important moment in history when a corporation was stronger than the strongest state. It is not unprecedented; the previous example I am aware of is the East India Company. You may tell me that it is an obvious result and I’ll agree that the underdog does not deserve much attention in this real movie. This is not my country; instead, this is my internet, affected by these elections, again.

EDIT 4 (Nov 20th): After the Most Favored Nation rule, maybe something about Section 230?

EDIT 3 (Nov 12th): After legacy media and the media protected by Section 230 weighed in on it, the scale didn’t move in the expected direction. The problem with ruling by making the map is that when the map is very far from reality, for a very long time, it becomes useless.

EDIT 2 (Nov 5th): It looks like I guessed wrong. My question is then: who will repeal Section 230 😉 ?

EDIT: Today Nov 4th, searching for information…

ha, ha, ha, who’s the boss? who makes the map.

Entropic algebras and the shuffle trick

I just discovered that in the world of quandles the algebraic condition which corresponds to what I call “the shuffle trick” (or the presentation, or the video of this presentation at min. 45:00) has the name “medial” or “entropic”. In this language, what I can prove is that entropic emergent algebras are affine spaces, a structure which I use in Pure See to decorate chemlambda or directed interaction combinators graphs. Recall that the existence of the shuffle trick is what, I argue, shows that (at least the multiplicative part of) linear logic is actually commutative. There is a non-commutative version, which goes beyond the shuffle trick, or algebraically the medial property, but it is so beautiful how things connect.

Looking forward to learning more about this, to see how my “emergent” part blends with the large literature on modes. I wouldn’t be very surprised if there exist semantics of linear logic built from modes.

I have a curious feeling reading, for example, this article. I realize that I need some time to get used to the notations, even as a professional mathematician, but otherwise the article is full of intriguing words, like “distributed computation” and “theoretical biology”… Same dream.

Update. Here is a reasonably short, all-in-one description of the story.

Recall that an emergent algebra is a family of idempotent quasigroup operations over a set X, indexed by a parameter in a commutative group. The parameter is called “the scale”. The operations are called “dilations”.

I shall use in this post the letters a,b,c,… for scale parameters and x,y,z,w,… for the points in X.

I use a sort of reverse Polish (postfix) notation here:

x y a is the dilation operation of x with y, at the scale a

The group operation over the scale parameters is denoted multiplicatively: ab, and the neutral element is 1. The inverse of a scale a is a’.

So a a’ = a’ a = 1.

We also have a (filter) 0, which is not an element of the group of scales, but which is used for statements of the form:

“when a –> 0 the function f(a) –> F uniformly”.

To be able to say “uniformly” we need a uniformity over the set X. For example, when X is a metric space, it will be the uniformity associated with the distance.

The algebraic axioms of emergent algebras are, with this notation:

(R1) x x a = x

(R2) x (x y a) b = x y (ab)

x y 1 = y

and as topological (or analytical) axioms: when a –> 0

x y a –> x uniformly wrt x,y in compact sets

define Delta_a (x,y,z) = x (x y a) (x z a) a’
then Delta_a (x,y,z)–> Delta(x,y,z) uniformly wrt x,y,z in compact sets

This is an emergent algebra.

An emergent algebra is linear if moreover we have this (scaled) distributivity

(LIN) x (y z a) b = (x y b) (x z b) a

An emergent algebra is medial, aka it satisfies the shuffle trick, if

(SHUF) (x u a) (y v a) b = (x y b) (u v b) a
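As a sanity check (my sketch, not from the post): in a real vector space the dilation x y a = x + a(y − x) satisfies all of the axioms above, including (LIN) and (SHUF), which is easy to verify numerically.

```python
# Hypothetical check in the simplest medial example: X = R, with the
# vector space dilation  x y a = x + a*(y - x).
def dil(x, y, a):
    return x + a * (y - x)

x, y, z, u, v = 2.0, -1.5, 4.0, 0.5, -3.0
a, b = 0.3, 0.7
eq = lambda p, q: abs(p - q) < 1e-12

assert eq(dil(x, x, a), x)                            # (R1) x x a = x
assert eq(dil(x, dil(x, y, a), b), dil(x, y, a * b))  # (R2) x (x y a) b = x y (ab)
assert eq(dil(x, y, 1), y)                            # x y 1 = y
# (LIN) x (y z a) b = (x y b) (x z b) a
assert eq(dil(x, dil(y, z, a), b),
          dil(dil(x, y, b), dil(x, z, b), a))
# (SHUF) (x u a) (y v a) b = (x y b) (u v b) a
assert eq(dil(dil(x, u, a), dil(y, v, a), b),
          dil(dil(x, y, b), dil(u, v, b), a))
print("vector space dilations satisfy (R1), (R2), (LIN), (SHUF)")
```

Of course this only illustrates the commutative case; a non-commutative conical group gives an example which is (LIN) but not (SHUF).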

With this we have:

  • (SHUF) implies (LIN), so any medial emergent algebra is linear
  • any linear emergent algebra comes from a conical group: for any
    element x in X there is a group operation * on X, such that x is the neutral element of *, with y’ denoting the inverse of y with respect to the group operation *, and such that
    y z a = y * ( x (y’ * z) a)

In particular, if we add some Lie group hypotheses (like in the solution of Hilbert’s 5th problem), then the conical group is actually a Carnot group.

Among Carnot groups we have the commutative ones, which are equivalent with vector spaces (supposing moreover that the group of scales is the multiplicative group of a field), and we have many non-commutative ones, the simplest example being the 2-step nilpotent Heisenberg group.

  • any medial emergent algebra is linear, by the previous result, but the associated conical group is commutative. Conversely, the emergent
    algebra of a commutative conical group is medial.
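In the commutative case the conical group decomposition can be checked directly (my illustration, assuming X = R with the usual dilations): take the group operation with neutral element x to be y * z = y + z − x, so that the inverse of y is y’ = 2x − y; then the formula y z a = y * ( x (y’ * z) a) reproduces the dilation.

```python
# Sketch (mine): verify the conical group decomposition in X = R.
def dil(x, y, a):
    return x + a * (y - x)

x, y, z, a = 1.0, 3.0, -2.0, 0.4
star = lambda p, q: p + q - x   # commutative group operation with neutral element x
y_inv = 2 * x - y               # inverse of y with respect to star

# y z a = y * ( x (y' * z) a )
assert abs(dil(y, z, a) - star(y, dil(x, star(y_inv, z), a))) < 1e-12
```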

So this is the relevance of the medial property (SHUF) for emergent algebras. Here is the relevance for chemlambda, directed interaction combinators or interaction combinators.

Let’s rewrite the operation

x y a = z

as the following statement:

from[a] x see[a] y as[a] z

If and only if the emergent algebra is medial, there are 5 other medial emergent algebras, obtained from the other 5 permutations of 3 elements, i.e. given by any of the statements obtained from a permutation of (from, see, as).

This is not difficult to check for the medial emergent algebra of a vector space, because it just tells you that from

x y a = z

for a generic scale parameter a > 0

you can find 5 other expressions

y x (1-a) = z

x z (1/a) = y

y z (1/(1-a)) = x

z x (1 – 1/a) = y

z y (1 – 1/(1-a)) = x

which are all valid dilations!
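These 5 expressions can be checked numerically (my sketch, again for the vector space dilation x y a = x + a(y − x)):

```python
# All 5 permuted statements recover valid dilations, as claimed above.
def dil(x, y, a):
    return x + a * (y - x)

x, y, a = 2.0, 7.0, 0.3
z = dil(x, y, a)
eq = lambda p, q: abs(p - q) < 1e-9

assert eq(dil(y, x, 1 - a), z)            # y x (1-a) = z
assert eq(dil(x, z, 1 / a), y)            # x z (1/a) = y
assert eq(dil(y, z, 1 / (1 - a)), x)      # y z (1/(1-a)) = x
assert eq(dil(z, x, 1 - 1 / a), y)        # z x (1 - 1/a) = y
assert eq(dil(z, y, 1 - 1 / (1 - a)), x)  # z y (1 - 1/(1-a)) = x
print("all 5 permuted dilations are valid")
```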

Now, this can be used to decorate the nodes of chemlambda, directed interaction combinators (i.e. what I call dirIC) and by consequence interaction combinators.

The decorations are given in the Pure See draft

The 6 nodes of chemlambda or dirIC are named:

D (dilation)
L (lambda)
A (application)
FI (a sort of fan-in)
FOE (a sort of fan-out)
FOX (another sort of fan-out)

To these, we may add

FO (fan-out) which is decorated with x,x,x
FIN (fan-in) which is decorated with x,x,x

which represent the fact that, by (R1), x x a = x for any scale parameter a.

chemlambda uses L, A, FI, FOE, FO nodes only.

dirIC uses L, A, FI, FOE nodes only.

This decoration of nodes has the following property: take any rewrite of chemlambda or dirIC, be it a beta-like rewrite (i.e. like in the graphical beta reduction) or a DIST rewrite (as those used for duplication); if we decorate the LHS and RHS of the rewrite according to the rules explained, then we can always show that this rewrite is “emergent” (i.e. obtained from some scale parameters –> 0) from a sequence of rewrites involving only (R1), (R2) and (SHUF).

Conversely, if we decorate chemlambda or dirIC graphs according to the rules just mentioned, we can prove (SHUF), in the sense that we can prove that the decoration has to have the (SHUF) property. This is done in chemlambda by the “shuffle trick”, which is a sequence of two rewrites involving 3 nodes.

Therefore (SHUF) is necessary and sufficient in this context.

Going back to the emergent algebras only part, there is more:

  • we can express (LIN) as y z a = x ((x y b) (x z b) a) b’, which is equivalent with
    z = y (x ((x y b) (x z b) a) b’) a’
    If we have a distance, say d, then
    d(z, y (x ((x y b) (x z b) a) b’) a’ )

measures the deviation from having the (LIN) property, for a generic emergent algebra. This is, I argue, related to curvature!

Likewise, if we have (LIN), then we can define a measure of the deviation from having (SHUF). This, if you compute it, is equal to the Lie bracket in the conical group!
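A quick numeric check (mine, in the flat case X = R with the usual dilations, where the deviation from (LIN) must vanish): here b’ = 1/b and a’ = 1/a, since the group of scales is multiplicative.

```python
# In the flat, commutative case the curvature-like deviation is zero.
def dil(x, y, a):
    return x + a * (y - x)

x, y, z, a, b = 1.5, -2.0, 4.0, 0.3, 0.7
# compare z with  y (x ((x y b) (x z b) a) b') a'
back = dil(y, dil(x, dil(dil(x, y, b), dil(x, z, b), a), 1 / b), 1 / a)
assert abs(back - z) < 1e-12   # zero deviation: flat space, no curvature
```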

Conclusion. All in all we have the following formalisms, in somewhat decreasing order of generality:

  • the formalism of emergent algebras, which can be turned into a graph rewriting formalism over decorated graphs (with nodes and edges which are decorated) and with graph rewrites which take into account the decorations.
  • less general is the formalism of linear emergent algebras, which can be used for (or is compatible with) a pure graph rewriting formalism over (non-planar) diagrams of tangles, because (LIN) in this case is the rewrite (R3). It is not known whether this graph rewriting formalism is Turing complete, though, or more precisely whether there is some natural correspondence with Interaction Combinators, say. The purpose of the Zip Slip Smash formalism is to provide this correspondence, by adding to the Reidemeister rewrites some rewiring rewrites.
  • even less general is the formalism of medial emergent algebras, which turns out to be capable of serving as a semantics for dirIC, thus for the Interaction Combinators.

Therefore, the picture is now, in increasing order of generality:

multiplicative linear logic < knot theory < differential calculus in emergent algebras

The interest would be to understand the implications of these inclusions, for example to provide versions of linear logic which are not commutative but still LINear, therefore at the level of (LIN), not only (SHUF). Also, to bring the differential calculus down from the most general level to the level of (LIN) but not (SHUF) (where it is the usual one known to everybody), so that we can compare it on common ground with linear logic, in a natural, unforced, less naive way.

The antitrust case against Google predicted by the Gutenberg analogy

The last update of the page The Gutenberg-internet analogy is from May 2019, as can be checked by looking at the history of the commits here. Based on the analogy, I made a daring prediction, namely that

East India Stock Dividend Redemption Act (1873) – – – Google declines (2020)

The Gutenberg-internet analogy function predicted that.

Almost all of 2020 passed and there was no sign of this in sight. But now there is a big antitrust case against Google; here is the pdf of the case, taken from the NYT source.

Pharma meets the Internet of Things (2020 update)

After this year we are much closer to the title of the original post from 2017, Pharma meets the Internet of things. More recent information can be found on the official chemlambda page. There are now explained the relations between chemlambda, interaction combinators and lambda calculus, there are parsers and decorators, there are many new experiments and saved old experiments.

If you look for fast but accessible information, look at the presentation Chemlambda for the people. If you prefer a more literary, fictional presentation, just as fast, read Internet of smells. I would write it a bit differently today; for example the sniffer ring would detect a positive case of viral infection, etc.

But if you are interested in the more general project of computing with space (which spawned this unlikely application to chemistry), then go directly and read Pure See.

I am available to give explanations or presentations. I am just as interested in the unlikely direction, because it is now mostly a matter of search.

Election prediction (Again, with proof)


Will post proof after it 🙂

UPDATE: I turned the post back to draft, but I archived it before. Actually, there is no reason to keep it hidden, so here it is again.

UPDATE 2: I tried to attach the file prediction.txt as proof, but I can’t. So here is a screenshot of the proof.

UPDATE 3 (Nov 5th): It looks like I guessed wrong. My question is then: who will repeal a section 230 😉 ?

UPDATE 4 (Dec 12th): I was not right, so for the moment Section 230 will modify the internet further.


After the election, I’ll provide a file.txt with the property that if you type

openssl dgst -sha1 file.txt

you’ll get

SHA1(file.txt)= b1981f06189217acc1611125b7660eb07d9a0954
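The commitment scheme above can be sketched in a few lines (my illustration; the file content below is a stand-in, not the real prediction.txt):

```python
import hashlib

# Commit: publish only the SHA-1 digest of the prediction now.
prediction = b"stand-in prediction text\n"   # hypothetical content
digest = hashlib.sha1(prediction).hexdigest()
print(f"SHA1(file.txt)= {digest}")

# Reveal and verify: later, anyone with the file recomputes the digest
# and compares it with the published one.
assert hashlib.sha1(prediction).hexdigest() == digest
```

(Note that SHA-1 collisions have been demonstrated since 2017, so a fresh commitment today would better use a stronger hash such as SHA-256.)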

The result will probably bring no big surprise, though. I felt the need to make such a post after reading through comments at the interesting Scott Aaronson post here.

My first (recognized) paper with code

I found it, it is here, and if you go to the arXiv link and look down the page you’ll see this nice (screenshot):

Wow, it is starting to work!

Actually my first paper with code is this one, from 2015 [update: there is the arXiv version from 2018, arXiv:1811.04960, with code added, namely the chemlambda-gui repository]. Mind that if your browser silently changes http to https then the original won’t show you the simulations which are embedded. That is because, if you look at the source on github of the (so-called) paper, you’ll see that I use iframes to embed simulations into the “paper”, themselves called over “http”, therefore blocked by https. Oh sigh, please use http for this or take it from the source, etc.

Usually these papers with code are considered in relation with machine learning or other CS, biology or other applied sciences, but code is a form of proof.

At any moment working code trumps hearsay from anonymous peers 🙂 Recall: transparency is superior to trust.

Episodes I and II

The talk Zipper logic revisited

is the third episode, jokingly, like the first Star Wars movie. Episodes I and II would be as described further.

Title: Curvature, commutativity and numbers in emergent algebras

Abstract: In the frame of emergent algebras we define measures of curvature and of infinitesimal commutativity, and we describe intrinsically one-parameter groups as numbers, analogously with the Church numerals for the naturals. We explain how these measures relate to the usual ones (for example the relation of curvature with Schild’s ladder and with the differentiability of infinitesimal left translations, or the relation between the measure of commutativity and the differentiability of infinitesimal right translations; we explain how we can replace parts of the solution of Hilbert’s 5th problem, which relies heavily on the use of one-parameter groups, with a more intrinsic, from the point of view of emergent algebras, use of numbers). We explain why several generalizations (like for example cartesian closed categories, or symmetric monoidal categories) do not apply to categories associated to emergent algebras.


Title: Emergent algebras as denotational semantics for chemlambda and interaction combinators

Abstract: Sketched in the Pure See language draft [3], there is a denotational semantics for chemlambda (and by extension for lambda calculus) and for Interaction Combinators (via directed IC, in the form of dirIC), which uses emergent algebras which are commutative, in the sense that they satisfy the SHUFFLE. We explain this semantics, and more precisely how the graph rewrites (or term rewrites) indeed emerge from SHUFFLE and passages to the limit, in the style of emergent algebras.

[3] Pure See,

Derivative as not a lambda term

In chemlambda, or its variant dirIC, reductions are all local, so for lambda terms (translated into mol graphs) we might have the phenomenon that a bound variable escapes out of its scope. Then it becomes a derivative!

Space, logic and differential calculus are all semantics. The underlying reality (which we can discuss, because it is real, not objective) is at the level of graph rewrites.

With the space semantics for lambda calculus, chemlambda or dirIC, as described in Pure See, there is a derivative term which can be written as:

D = λ F.λy.(F (λx.y) (F x))

which does not make sense in lambda calculus, because the variable x appears once in λx.y and then outside the abstraction, as F x.

Mind that we want the two appearances of x to be named with the same x!

For a term which represents a function f:


we have (here “=” means reduction from left to right)

D (λu.f) = λy.((λu.f) (λx.y) ((λu.f) x)) = λy.(f[u=λx.y] f[u=x])


D (λu.f) A = f[u=λx.A] f[u=x]

With the Pure See semantics for application and abstraction, this is exactly a finite difference, as defined in dilation structures, with a formal parameter z, and the derivative of the function u –> f[u] appears as z goes to 0 (i.e. Definition 7.8, Section 7.3 of arXiv:1206.3093). In terms of decorated tangles, as explained in the Zipper logic revisited talk, the finite difference looks like this (image available from the zip slip smash repository)

OK, so what’s the point? That D is not a correct term in lambda calculus, but it is correct in chemlambda or dirIC. It makes perfect sense to work with it.


ZSS is the name of the new formalism, which blends zipper logic with dirIC and the Reidemeister rewrites. There is a working github repository which will be continuously updated, so that is the place to look for more details in the future.

The name comes from the 3 kinds of rewrites:




The search for a name for this formalism started with this post. I think ZSS is concise and amusing as zip-slip-smash!

Zipper logic revisited: Tangles with zippers

Today, in a short while, I give a talk in the Quantum Topology Seminar, organized by Louis Kauffman; see this announcement.

Zipper logic revisited

Thursday, September 24, 2020, 16:00 Chicago time

The recording of the talk and some supplementary material at this link.

The zoom link will be sent about 5 min before the talk, therefore if you are interested please mail me beforehand.

I’ll add more materials to this post as soon as they are available. For the moment I only have things like this

or like this

but, as previously, there will be some programs to play with later.

A computability question about quandle presentations

Question 1: Is there a computable function lambda2quandle which, given a lambda term T, returns lambda2quandle(T) which is a finite quandle presentation, such that for any beta reduction of T to another term T’, there is a finite number of Reidemeister moves which turn lambda2quandle(T) into lambda2quandle(T’) (up to renaming of the generators)?

I suspect that there is no such function. Here is a sketch of an answer:

Suppose that lambda2quandle transforms lambda terms into knot quandle presentations. Suppose also that there is a lambda term which is transformed into a knot presentation of an unknot.

Let A be the set of lambda terms which are turned into knot presentations of an unknot and let B the set of the other lambda terms.

By the hypothesis these sets are closed under beta reduction.

By Haken, there is a computable invariant of knots which equals 0 for unknots and 1 for all other knots.

By Scott–Curry there is no computable function f such that f x = 0 if x is in A and f x = 1 if x is in B.

It follows (since Scott–Curry applies when both sets are nonempty) that B is empty, i.e. lambda2quandle transforms any lambda term into a knot presentation of the unknot.

[if anybody can post this question on mathoverflow then maybe somebody knows something interesting]

But there may be one, so here is my second question:

Question 2: Same as Question 1, with the small change: … such that there exists a natural number N such that for any beta reduction of T to another term T’, there is a finite number smaller than N of Reidemeister moves which turn lambda2quandle(T) into lambda2quandle(T’) (up to renaming of the generators)?

Zipper logic with tangle diagrams

I took up again the subject of Zipper Logic [arXiv] [figshare] because it opens a problem I became aware of only recently.

UPDATE: Solved by ZSS.

I am interested in graph rewrite systems (GRS), run with the most simple algorithms, which use graphs in the family of tangle diagrams. To my knowledge there is no such GRS which is proved to be Turing universal. Pay attention: a GRS which acts on tangle diagrams and which is universal. There are many attempts to use tangle diagrams as a sort of notation for computing, or decorations upon such diagrams, or whole physical theories which have an associated tangle diagrams formalism. My goal is simpler: I look for a graph rewrite system over tangle diagrams (and perhaps some enhancements) whose rewrites contain the Reidemeister rewrites and some new rewrites, and which is universal for computation, in the sense that any lambda calculus (or combinatory logic) term can be parsed into a graph, which can then be reduced by using the GRS, so that the resulting graph can be decorated in such a way as to give the result of the computation as a lambda term (or a combinatory logic term). Therefore I want the computation to happen via graph rewrites. I don’t want to encode computations (i.e. chains of reductions) into a tangle diagram, I don’t want to use the Reidemeister moves to prove that two such chains of reduction are equivalent. No. I want to use graph rewrites for reductions of terms parsed into tangle diagrams and back (in the sense explained, that of decoration of tangle diagrams).

This observation of the distinction between the use of graph rewrites, when they compute and when they don’t compute, was formulated for the first time as NTC vs TC in Topology does Not Compute vs Topology Computes.

Chemlambda, dirIC, Lafont’s Interaction Combinators, or chemSKI are GRS which are universal. All of them happen to be of a special blend: they all admit as semantics emergent algebras which are commutative. (I know that, except for some specialists, this is an empty phrase; take it as if I say that there is a hidden commutativity in all these formalisms.)

Tangle diagrams with the Reidemeister rewrites admit as a semantics emergent algebras which are only linear, not commutative. This is a larger class of emergent algebras, which may be interesting for quantum stuff. Logic?

So it would be interesting to see if universal computations in the sense explained can be achieved in this linear setting, without using the hidden commutativity.

I believe not, for some reasons, and I believe that in a twisted way it is possible, for other reasons, both too long to be shared in a post.

So I am curious and I took again the Zipper Logic formalism, which I try to reformulate exclusively in tangle diagrams (with zippers) and without trivalent fanouts.

Up to now it appears that I am able to either have, in this linear setting, something akin to the beta rewrite or something akin to duplication, but not both all the time.

I will share the result soon, when it stabilizes 🙂

AMA: decentralized computing with term rewrite systems or with graph rewrite systems?

In the last days there came into discussion, several times, the making of a decentralized, distributed computer by using lambda calculus (or any term rewrite system), vs. making one based on a graph rewrite system, like the pure interaction combinators of Lafont, or chemlambda.

AMA about why, from first principles, it is impossible to make one based on term rewrite systems, but it is possible to make one based on graph rewrite systems.

Whenever you see a proposal for a decentralized distributed computing system based on lambda calculus or anything which can be described as a term rewrite system, know that the proposal is false, or else it is not local in time or in space.

The problem of using graph rewrite systems instead is that, even if such a system is possible, we don’t quite know how to use it.



If you look for chemSKI in the program described here, you will not see it. That is because it is a new idea, which will not modify the main direction, towards Pure See.

It just gives you an example of my motto: can do anything 🙂 Now, my question for you is: what are your plans, besides waiting to see how this crisis ends?

chemSKI, described in the previous post, is a purely local artificial chemistry for the SKI combinator calculus.

Now the chemSKI page is modified to admit a variant, called chemSKI+lambda, which allows one to freely mix chemSKI with chemlambda. The modification is simple: because in chemSKI the node S plays a double role (combinator S and fanout), when combined with chemlambda the node S does not behave well as a fanout and has to be replaced with a chemlambda fanout node (i.e. FO or FOE).

In chemSKI+lambda we replace S with FOE where needed and we also make the nodes I, K, S react correctly with the chemlambda FO node. This node appears in the translation made by the parser from lambda + SKI to mol.

All in all, we now have a lambda calculus enhanced with the new constants S, K and I, which themselves reduce according to the SKI calculus. A term in the lambda + SKI calculus can be parsed into a chemSKI+lambda graph.

We can graphically reduce such a graph either with chemSKI or with chemSKI+lambda. Whenever we are outside of pure chemSKI, the new chemistry chemSKI+lambda reduces the graph correctly.

You can see this by using the decorator “mol>chemSKI+lambda” which shows you the term corresponding to the graph (if any).

chemSKI & chemλ

There is a new page available at the chemlambda page:

chemSKI & chemlambda

The artificial chemistry chemSKI is to SKI combinator calculus as chemlambda is to lambda calculus.

You now have a parser from SKI combinators to chemSKI, which is actually a parser from lambda calculus to chemlambda as well, where the characters “S”, “K” and “I” have the special status of being used for the S, K, I combinators.

There is a decorator as well (the inverse of the parser; note that it decorates the whole graph with terms and term equalities, but it gives you the decoration of all FROUT, i.e. free out half-edges), which can be used to understand what the graphical computation may mean when projected to the more limited space of terms.

There is still work to do in several directions, like:

  • a conservative rewrites version using tokens (as in hapax)
  • a parser from lambda to SKI calculus, to allow comparisons of reductions of lambda terms when translated to chemlambda or to chemSKI
  • for those interested in such things, chemSKI is an answer to a problem mentioned on the page; what’s missing is a more “classical” chemical reaction network treatment
  • addition of a random rewrite which builds chemSKI graphs, again using tokens which may glue themselves to other graphs.

Should I make a “mock a mockingbird” or “dissect a mockingbird” page with the programs available, chemSKI version? At first I thought it would be nice, but the graphical reduction is so varied and smooth that the “birds” phenomena look almost trivial. I don’t know, maybe…

Well, enjoy. Read the readme.

chemSKI was announced in this post.




UPDATE:  SII(SII) in chemSKI. This corresponds to Omega in chemlambda. It uses:

  • two trivalent nodes S and A
  • two univalent nodes I and K

and the usual free-in FRIN, free-out FROUT and Arrow nodes from chemlambda. Hence the search for a name with the letters A, S, K, I. But now, chemSKI it is.

The node S plays the double role of the S combinator and a fanout. The node K is the K combinator and a termination. I is the I combinator. A is application.

You can see the rewrites in

Let’s use the mol notation for the graph, i.e. S and A are 3-valent nodes with 3 ports, I and K have one port. Each half edge of the graph (i.e. each node port) is decorated with a name, and the graph appears as the list of nodes and the decorations of their ports.

For example

I a

A a b c

means that (the unique port of) a node I is connected to the port 1 of the node A. It corresponds to the expression

b = I c

in SKI combinators calculus. This should reduce to

b = Ic     –>  b = c

which we express as a graph rewrite by

[I a, A a b c]  –> [Arrow c b]

A conservative version of the rewrite would be

[I a, A a b c] + [Arrow d d] –>  [Arrow c b] + [I a, A d a d]

where [Arrow d d] and [I a, A d a d] are “tokens”, small graphs from a small family, which are the building blocks of the graphs (“made of money”)  and they are consumed and exchanged during graph rewrites.

We still need to apply the Arrow elimination rewrite (i.e. COMB in chemlambda), which produces back an [Arrow d d] token and glues c with b.
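The I rewrite and the Arrow elimination can be sketched on mol graphs directly (a hypothetical minimal implementation, mine, not the chemSKI code; a node is a tuple of its name followed by its port names):

```python
# Minimal mol-graph rewriter sketch: apply [I a, A a b c] -> [Arrow c b],
# then eliminate the Arrow by gluing the two half-edges.
def rewrite_I(graph):
    """Apply the rewrite [I a, A a b c] -> [Arrow c b] once, if possible."""
    for i_node in [n for n in graph if n[0] == "I"]:
        a = i_node[1]
        for a_node in [n for n in graph if n[0] == "A" and n[1] == a]:
            _, _, b, c = a_node
            new = [n for n in graph if n not in (i_node, a_node)]
            new.append(("Arrow", c, b))
            return new
    return graph

def comb(graph):
    """Arrow elimination (COMB): [Arrow c b] glues half-edge b to half-edge c."""
    arrows = [n for n in graph if n[0] == "Arrow"]
    rest = [n for n in graph if n[0] != "Arrow"]
    for _, c, b in arrows:
        # give both half-edges the same name, so they form one edge
        rest = [(n[0],) + tuple(c if p == b else p for p in n[1:]) for n in rest]
    return rest

# b = I c, with a free out on b and a free in on c
g = [("I", "a"), ("A", "a", "b", "c"), ("FROUT", "b"), ("FRIN", "c")]
g = rewrite_I(g)   # now [("FROUT","b"), ("FRIN","c"), ("Arrow","c","b")]
g = comb(g)        # now [("FROUT","c"), ("FRIN","c")]: b glued to c, i.e. b = c
print(g)
```

Running it on b = I c leaves the free out connected to the free in, which is the expected b = c.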

Now let’s give quickly the correspondent of

c = K a b –> c = a

It is:

[K x, A x a z, A z b c]    + [I d, I d]  –>   [K x, I d, A d b x, I z, A z a c]

(which then needs the I a = a rewrite twice and two COMB rewrites to finally arrive at the destination).

The corresponding of

d = S a b c –>  d = (a c) (b c)

is the following rewrite which does not use any token:

[S x x u, A u a v, A v b w, A w c d]  –> [S c u v, A a u w, A b v x, A w x d]

That is, [S x x u] plays the role of the S combinator but [S c u v] plays the role of a fanout which will further duplicate c.

But what’s in the name of a node? The rewrites are what matters. Therefore we need to add the termination-like rewrites for the node K and the fanout-like rewrites for the node S.

And that’s all.


I find chemSKI a cute name. The alternatives are not quite as good: Skia means shadow (ghost) in Greek; Saki I like, as the pen name of H.H. Munro, but the source is unclear and not meaningful for my subject; Kais has some potential, but not much.

Anyway, chemSKI’s birth is on Aug 12 2020. You will be able to play with the baby soon (see the update! it is a working version though, which means that it will evolve with a parser, etc.)

I guess I am TOC, not TOP

I found a thread by @prathyvsh which contains many things to explore. I arrived there because of the same author’s Formal systems in Biology list, mentioned in a previous post. From this source I started to read more and I found this awesome What we talk about when we talk about computation by pressron.

After reading that post I felt like a vagrant who finds a place to settle. On this occasion I learned that I am TOC, not TOP. Moreover, even if untyped lambda calculus may be seen as an example of TOP, most of my commentaries about the differences between chemlambda and lambda calculus can be condensed into: chemlambda is a TOC version of lambda calculus.

And Pure See will rule all 🙂

2020, 7 years after the 7 years forecast

2020 is far from over, but 7 years ago I wrote the post Seven years forecast. Among the seven predictions, with indulgence, only the 3rd, 4th, 5th and maybe the 6th happened. I had no idea that 2020 would be the year of the pandemic and that probably the next years, 2020 to 2024, will be much more interesting and extreme than the last 7.

Because the situation we are in accelerates some processes which we thought possible only in our dreams, and at the same time slows other processes which are not essential.

Professors don’t teach online even if they have the means, because they don’t have to (by law) and because they can’t adapt. I saw this with my two kids. Shame on those professors.

The media stalls because they can’t profit enough online. Shame on them; let’s ignore those producers of shit and just open the phones. The real artists, the real creators, are generous enough.

Lots of people learned by personal example how bad the current copyright system is in the online medium, during the explosion of personal creativity forced, or created, by the constraints. Remember!

All the discussions about open access, all the new scams where politicians and academic managers pass to gold OA, all those conventions between big publishers and universities turn out to be obvious ways to throw away money. Sci-hub solved the problem radically, and lately all these hard-to-use systems of access via universities are not used at all.

We don’t need all the shit we are told we need all the time. It was hard to stay at home, it is still hard, but how important do these artificial needs look now? Remember.

Many of the people whose work is needed can work from a computer, no matter which computer. Relax, psychopaths, I’m sure you’ll find a way to extract a bonus from this too. Or maybe not, this time?

Why 2024? Because according to the Gutenberg-Internet analogy, 2024 is a singularity year which marks the end of an era.

We’ll see this in 4 years 🙂

ArXiv, Figshare, Github: some load comparisons

I am a big fan of arXiv and I wrote here several times about this, like for example

or, for a less positive experience (the 3rd I had with arXiv, all related to math.LO, among the 64 articles I have on arXiv in the groups physics, cs, math, and q-bio):

Things happened, though, in the last years, with arXiv as with all the web, as you know well. Opinions, like mine, that arXiv became the NASA of Open Access, are opinions, and facts are facts.

Here’s what my console says when I download one of the versions of arXiv:2007.10288, available as html (recommended) or pdf from 4 sources hosted by arXiv, figshare and github.


3 requests
9.20 MB / 9.20 MB transferred
Finish: 10.04 s
DOMContentLoaded: 1.17 s
load: 10.05 s


2 requests
4.57 MB / 4.56 MB transferred
Finish: 774 ms
DOMContentLoaded: 565 ms
load: 776 ms


32 requests
9.56 MB / 6.17 MB transferred
Finish: 3.16 s
DOMContentLoaded: 689 ms
load: 1.24 s


60 requests
4.45 MB / 4.47 MB transferred
Finish: 1.20 s
DOMContentLoaded: 209 ms
load: 939 ms


As for more general pages, look:


27 requests
511.92 KB / 416.98 KB transferred
Finish: 4.18 s
DOMContentLoaded: 1.31 s
load: 1.76 s


122 requests
1.02 MB / 407.98 KB transferred
Finish: 4.61 s


26 requests
860.18 KB / 850.89 KB transferred
Finish: 3.50 s
DOMContentLoaded: 975 ms
load: 2.05 s


4 requests
31.27 KB / 26.17 KB transferred
Finish: 347 ms
DOMContentLoaded: 222 ms
load: 268 ms






Cancel the bottleneck

It looks to me that cancel culture is starting to produce really new phenomena. I don’t judge them as good or bad, I don’t judge them at all. I had a bad opinion about cancel culture but now I start to be interested. The previously held bad opinion came from the fact that, obviously, cancel culture was a sort of corporate-driven unrest. A way to divide. Each of the examples I knew about pointed to real, serious problems, mostly of American society. But the way it worked was to use a real problem to distract the discussions from a bigger real problem which I generically call “the bottleneck”.

Now this seems to change, for the simple reason that no matter how the spinners spin public opinion, there is more variety in the public. In the people, who realize that they are not the “public”, passive watchers and yes-men of the corporate media. The people think.

The bottleneck is the real problem. In any social organization there is, somewhere, a place where the various things which happen are selected and then used. This is the bottleneck, and those who control the bottleneck control the society.

For example those who make (and update) the map control the territory.

Politicians are only a visible aspect of the bottleneck. Media, as seen by “the public”, offers the output of the bottleneck and censors the rest. You see the reporter but you don’t care about the producer. You don’t have access to the private channel of discussions between those who have the power.

The bottleneck is the way to have power.

Think about much less visible persons, whom you rarely think about. If you care about opinions held by academics, then what do you think about academic managers? Maybe a professor from a university makes some bad remarks which hurt a whole category of people. But what about the academic manager who decides which publisher gets the university’s money for the library? Which materials should be made accessible to the students? Which professor should be promoted?

I am a fan of open access and I wrote many times about it. Buried in my posts, I found this piece from 2015: Say NO to politicians, OA included. For those who are interested in OA, the idea is the following. It is true that publishers (of scientific content) are bad because they limit access to knowledge produced by others. Everybody knows, or should know, this. But the really bad people are the academic managers who still buy from the bad publishers.

This is the kind of bottleneck which, I see and understand now, had low visibility and huge power.

One of my previous arguments against cancel culture was the following. Clearly women in programming are a minority and the reason for this is social. On the other hand, it looked to me that this matters only for women from California, simply because Elbakyan, a woman programmer from elsewhere, is considered a bad pirate. Everybody (whom I know about) supports the women programmers and simultaneously becomes a lawyer when it comes to Sci-Hub, perhaps the greatest collection of scientific material ever.

But now I see that people start to get close to the bottleneck. They start to understand that there exist real people who make real decisions, harmful for most, who hide behind the bottleneck.

That is why I am willing to reconsider the idea that cancel culture is just corporate-driven unrest. It may be the manifestation of much more profound phenomena, out of the control of the bottleneck.

History of chemlambda status: on hold, then arXiv:2007.10288

UPDATE 3: The history appeared today as


Comments: arXiv admin note: text overlap with arXiv:2003.14332

There is no text overlap; see the correspondence with the moderators for their reason to add this comment, namely that a moderator thought that this history should be a part of arXiv:2003.14332

The history article is full of links though and starts with:

“The chemlambda project context and relevant previous work are explained in Section 4 (About this project) of [4] arXiv:2003.14332. See also, for more mathematical background, the presentation [6] Emergent rewrites in knot theory and logic. Here I report about the modifications of the various formalisms which are related to chemlambda. The online version of this article, which will be kept up to date, is Graph rewrites, from emergent algebras to chemlambda. See also the site of all chemlambda projects.”

UPDATE 2: I added in the quinegraphs repository a folder “reviews” which will collect all reviews or correspondence related to the article versions derived from the work in the repository. (I had thought for a long time that this is good practice, but I was too lazy to do it before.) Here you can see the exchange with arXiv moderation. Up to now, 19.07.2020, the submission appears as “on hold” for an article submitted on July 7:

Screenshot from 2020-07-14 17:11:19

You may like to read the exchange because of the description of what I intend to do in the near future. I wrote the answer with the due politeness to arXiv, even if this is an editorial demand which is applied only to this particular case. Read further to find the last concrete example where the existence of the article would be useful and citeable.

UPDATE: I am asked to bundle this with arXiv:2003.14332. I explained that this is not OK because there are several other pieces missing, besides the fact that nobody would be interested (or competent) in a monolithic article which explains a (truly) linear logic version as a particular commutative case of a formalism discovered in sub-riemannian geometry.

Instead I prefer a modular approach. This piece explains different choices of graph rewrites and their connection, as I witnessed in discussions a lot of confusion about what exactly chemlambda is, due to the fact that this project started about 10 years ago. Maybe the latest example: I just found the interesting arXiv:2003.07916, which cites chemlambda by pointing to the article joint with Kauffman, Chemlambda, universality and self-multiplication. This is misleading because, as explained in this history of chemlambda, the version of chemlambda which is used in all the simulations is chemlambda v2, and moreover chemlambda v1 is introduced in the Chemical concrete machine article [arXiv:1309.6914]. For the people curious about what chemlambda really is, without having to understand it from reading the programs, this history of chemlambda would be a clear reference.

The story made me understand (again) that we need redundancy, because every source will eventually manifest corporate or commercial behaviour, mannerism, degradation of service (or lack of understanding of their historical role, as even today arXiv remains at best silent about why they let themselves — and their authors — be misrepresented at the moment of the BOAI inception).

All these big words because I put the pdf in figshare and in github: better 3 corporate sites than one.


Starting from today, July 9, the status of the arXiv version (also in figshare and in github)

Graph rewrites, from graphic lambda calculus, to chemlambda, to directed interaction combinators

of the online version

Graph rewrites, from emergent algebras to chemlambda

is “on hold”:

Screenshot from 2020-07-09 09:13:06

I don’t understand what part of the content can be controversial in any way. Truth on hold.

Among my articles on arXiv this is only the third time this has happened. Until now, in all cases there was only a delay of a few days or weeks. This made me start using figshare for alternative archiving. One should keep several sources of the same documents in long term archives, to be sure.

It is however significant in my eyes that the other two cases were (with arXiv and figshare sources)

Chemical concrete machine  (arXiv) (figshare)

Zipper logic (arXiv) (figshare)

Mind that on arXiv you can see the submission date but not the date when the article appears.

As you know, I consider arXiv a great thing; this is not a criticism of arXiv. I just don’t understand what is happening and who is upset, and why, so consistently.

Pure See working draft

is now available at this link. It is a working draft, meaning that: the exposition will change, there are (many) missing parts, and there are not enough explanations which link to previous work.

I could finish it in a few days, but I think I shall write at a slower pace, which will allow me to weed it better.

In the end it should be at least as usable as lambda calculus (well, that’s not very much compared with languages which are widespread), but better in some respects. If you are a geometer you may like it from a point of view different from lambda calculus.

If you are interested, please use this working draft as main source of information.

Maps of spaces into other spaces and non-euclidean analysis

I posted to figshare the presentation Non-euclidean analysis of dilation structures (which is also hosted on this blog). It explains how the process of making maps of spaces into other spaces leads to a more general analysis (and differential calculus) than the usual one. It was the starting point in the direction of “computing with space”, which will soon reach a conclusion with the “pure see” construction.

A graphical history of chemlambda

Update: There is now arXiv:2007.10288, with the story told in the post History of chemlambda status: on hold, then arXiv:2007.10288

I made a page


which contains all the graph rewrites which were considered in various versions of chemlambda and glc. I should have done this a long time ago and I have no doubt it will be useful.

It complements

Alife properties of directed interaction combinators vs. chemlambda.  Marius Buliga (2020),

which is an interactive version, providing validation means, of the article arXiv:2005.06060


Quarantine garden

During these two months I spent a lot of time in a garden. A year ago there was nothing there. I worked the few patches of earth, threw the debris and started to plant stuff. This year the efforts begin to show.


There are now roses and jasmine, a small grapevine,


and a lot of ivy, in the downtown of a big city


I also made some garden drawings




Lots of things waiting to grow.







16 animations which supplement arXiv:2005.06060

Here is a list of 16 animations (which you can play with!) which supplement well the article

Artificial life properties of directed interaction combinators vs. chemlambda

and its associated page of experiments.


From the page of experiments you can travel to other directions, too!

Alife properties of directed interaction combinators vs. chemlambda

UPDATE: see arXiv:2005.06060.

The chemlambda project has a new page:

Alife properties of directed interaction combinators vs. chemlambda

This page allows experiments with graph quines under two artificial chemistries: directed interaction combinators and chemlambda. The main conclusion of these experiments is that graph rewrite systems which allow conflicting rewrites are better than those which don’t, as concerns their artificial life properties. This is in contradiction with the search for good graph rewrite systems for decentralized computing, where non-conflicting graph rewrite systems are historically preferred. Therefore we propose conflicting graph rewrite systems and a very simple rewrite algorithm as a model for molecular computers.

Here we compare two graph rewrite systems. The first is chemlambda. The second is Directed Interaction Combinators, which has a double origin. In Lafont’s Interaction Combinators a directed version is also described. We used this proposal to provide a parsing from IC to chemlambda,



which works well if essentially one chemlambda rewrite is modified. Indeed, we replace the chemlambda A-FOE rewrite by a FI-A rewrite (which is among Asperti’s BOHM machine graph rewrites). Also some termination rewrites have to be modified.

For the purposes of this work, the main difference between these two graph rewrite systems is that chemlambda has conflicting rewrite patterns and Directed IC does not have such patterns. The cause of this difference is that some chemlambda nodes have two active ports, while all Directed IC nodes (like the original IC nodes) have only one active port.
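As a small illustration of what “conflicting” means here, consider rewrite patterns simply as pairs of node ids: two matched patterns conflict when they share a node, which is possible exactly when a node has two active ports. This is a sketch with made-up data shapes, not the project’s code:

```javascript
// A pattern is an array of node ids matched by some rewrite.
// Two patterns conflict when they share a node.
function conflicts(patterns) {
  const seen = {};   // node id -> index of the last pattern using it
  const out = [];
  patterns.forEach((pat, i) => {
    for (const id of pat) {
      if (seen[id] !== undefined) out.push([seen[id], i]);
      seen[id] = i;
    }
  });
  return out;
}

// Node n2 plays the role of a node with two active ports:
// it is matched both against n1 and against n3.
console.log(conflicts([["n1", "n2"], ["n2", "n3"]])); // one conflict: patterns 0 and 1
```

In a Directed IC style system, where every node has a single active port, no node can appear in two matched patterns at once, so `conflicts` would always return an empty list.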


Internet of Smells, remember?

Now there is a time of work, very productive. Until next article, let’s remember the Internet of Smells short story from 2017.

Wouldn’t it be nice to have something like this now? Maybe one day we’ll have it and it will not be only for the rich.

The closest equivalent to a graphical LISP repl I have is this.

There is another short story in this series, Home remodeling, which is a Proust pastiche.

More web-talks, more discussions, a new open revolution?

An unexpected gain of the quarantines is that more people realize that we can collaborate more with the available web tools.

As a small example, together with Ionut Tutu we made the site imaronline to help our colleagues share and learn about others’ online activities. We are happy that now the site has bloomed and we hope for more.

By the way, I would be happy to explain more about chemlambda or pure see or metric geometry or … or … discuss with you, via a web-conference. Contact me if you want this.

I look around and I see many people who start to seriously consider online communications for work. I think this will change the way we see work, as open science and open access changed the way we think about science communication. Only in this case the number of people pushing for a change will be much bigger!

There are lots of unexpected advantages of the transition from paper scientific journals to online journals, obvious only in retrospect. One of them, for example, is that in online communication old constraints no longer apply. In the case of research articles, the length of the article is no longer a constraint. Moreover, the content of the article is no longer limited to what can be printed on paper.

Of course, these limitations (length, content type) are still used for no other reason than that they are familiar. Or for reasons which hide some dark patterns. As an example, consider the case of bibliographies, or references, which in their majority still ignore the possibilities. References are often not given as links to available sources, and they often ignore the sources which are not under the control of parasite publishers.

In the same way, these new web talks will evolve, I believe, into new ways of interactive collaboration.

The human presence in a discussion is a very strong motivator. An online video or chat discussion cannot have an arbitrary length, true. It is also, past a point, a shallower exchange than a written one, in the sense that many details in the minds of the participants do not pass to the other participants. Such a discussion should instead be alternated with written, more rigorous ones. Working material should be made available and follow-ups should be considered.

I know people complain that these online activities are not used more. For example, parents would like more from the teachers of their kids. There are many situations where the work is half stunted, although possible via web-talks. We have to remember this when the quarantines are over. Next time try not to ignore the “foolish” ones who propose new ways of communication for professional purposes (like research, teaching, etc). Now that we need more teachers, researchers, etc., competent with these new means of communication, we need to accept that the previous general indifference, or even hostility, towards those who tried was wrong.


Zoom in between

Work progresses well and soon there will come continuations of arXiv:2003.14332, but now there is a time in between. What to do, other than have some talks and look at some links?

Re talk I’ll finally install zoom (I know) and if you want to organize an AMA or if you want to talk about computation, hamiltonians or bipotentials, let me know.

Re links, two which I think are funny:

  • at the gossip blog Not even wrong there is a new post about Mochizuki. The good part of the post is the links to articles. Fesenko is a realist. The post is aggressively against Mochizuki, but there is no knowledge to back up the tone. Among the comments, though, there are 2 or 3 from people who are competent, therefore they are interesting to read. [update: more interesting comments appeared.]
  • I didn’t know about this when I wrote a post about Alexandra Elbakyan back in February, but I learned about news concerning Alexandra’s fight against the bullying she suffered during her studies. Shortly: after she appealed to the ethics commission, during the audience she spat on them. Not something I approve of as behaviour, but the pressure on her is probably so huge that she made a public relations mistake. Let us not forget that tens or hundreds of thousands of academic researchers, among them most of the academic managers, agree with the parasites who suck the blood of research for short term commercial reasons. Shame on them! Maybe this pirate gesture (she is a pirate, right? according to parasites) awakens some of them to a realisation of the gravity of their complicity.


lambdalife.html, a local version

lambdalife.html is the locally hosted version of . There you can find the same pages, except for the animation collection, which has bigger animations. You can also download the source scripts which fuel the pages.

So, for those who wait for the final parts concerning anharmonic lambda or the alternative linear logic, this is the same base to build on. The new ones will come.

Here is why I hosted it locally as well (1) and here is why I insist on making clear the basis before going further (2).

(1) Because of so many weird happenings around this project, I checked these days to see if my gmail messages which contain related stuff arrive (by using two channels). They don’t, or they arrive half a day later, after I used the second channel. So basically, in general, I have 0 trust in any corporate channel. If you think I overdramatize then how do you explain this or this? Sorry that I can’t speak about private exchanges.

The alternative to filtering is open science.

(2) During these years when I tried to absorb previous knowledge in the field, I was very rarely helped by any of the professionals in the field. I am a professional too, but in other fields, like geometry or convex analysis. As far as I am aware, in my general mathematical areas, which are huge compared with this new scientific field, open, fair collaboration is the rule. Ask me something and I’ll answer you to the best of my ability. It is true that when it comes to walking on another’s lawn, things are not nice in mathematics as well. But with time, we are confident, we shall be better, more human, less ape.

OK, so why tf did nobody among hundreds of (very good in their field) specialists mention Bawden, for example? Instead, all sorts of questions, down to “I don’t know what I am looking at”. When you have the programs and you know your field? I know that in CS the wheel is reinvented all the time, but probably the wheel is proprietary most of the time, because this science is so young compared with mathematics. People don’t yet behave as scientists; they need to educate themselves in these matters.

So next time you wonder why tf I don’t proceed further with explanations, please ask these categorical or linear logic snake-oil sellers if they understand what they are looking at.

(Don’t get me started with linear logic! WTF is linear in your version? Nothing. Any mathematician can embed anything into a Banach space, come on! Mathematicians to the rescue soon. Oh, but wait, nobody gives a fuck about this subject, except the practitioners and the programmers who want to look intelligent. They should, despite this select public, because this is a subject at the core of mathematics. As everybody knows and doesn’t like, mathematics is in everything and the best way to produce new science.)

Nature vs nurture in lambda calculus terms

This is one of the experiments among those accessible from arXiv:2003.14332, or from figshare as well.

Btw, if you want to know how you can contribute, read especially section 2.

Imagine that the lambda term

(\a.a a)(\x.((\b.b b)(\y.y x)))

is the gene of a future organism. How will it grow? Does only nature (the term, the gene) matter, or does nurture (random evolution) matter too?

Maybe you are familiar with this term; I saw it the first time in arXiv:1701.04691. It is one which messes up many purely local graph rewrite algorithms for lambda calculus. (By purely local I also mean that the parsing of the lambda term into a graph does not introduce non-local boxes, croissants, or other devices.)

As a term, it should reduce to the omega combinator. This is a quine in lambda calculus and it is also a chemlambda quine. You can play with it here.
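The quine claim can be checked with a toy normal-order reducer, written here with a naive encoding and naive substitution (safe in this case, because only closed terms get substituted); this is a sketch, not the site’s lambda>mol machinery:

```javascript
// Toy lambda terms: {v:name} variable, {x:name, b:body} abstraction, {f, a} application.
const V = n => ({ v: n });
const L = (x, b) => ({ x, b });
const A = (f, a) => ({ f, a });

// Naive substitution, no capture avoidance (enough here: only closed terms are substituted).
function subst(t, name, s) {
  if (t.v) return t.v === name ? s : t;
  if (t.x) return t.x === name ? t : L(t.x, subst(t.b, name, s));
  return A(subst(t.f, name, s), subst(t.a, name, s));
}

// One leftmost-outermost (normal order) beta step, or null if t is normal.
function step(t) {
  if (t.v) return null;
  if (t.x) { const b = step(t.b); return b && L(t.x, b); }
  if (t.f.x) return subst(t.f.b, t.f.x, t.a); // beta redex at the head
  const f = step(t.f);
  if (f) return A(f, t.a);
  const a = step(t.a);
  return a && A(t.f, a);
}

const show = t => t.v ? t.v
  : t.x ? "\\" + t.x + "." + show(t.b)
  : "(" + show(t.f) + ")(" + show(t.a) + ")";

// T = \x.((\b.b b)(\y.y x)); the full term is (\a.a a) T
const T = L("x", A(L("b", A(V("b"), V("b"))), L("y", A(V("y"), V("x")))));
let t = step(A(L("a", A(V("a"), V("a"))), T)); // first step gives T T
const start = show(t);
for (let i = 0; i < 4; i++) t = step(t);       // four more normal-order steps
console.log(show(t) === start);                // true: the reduction cycles, a quine
```

Under leftmost-outermost reduction the term returns to T T every 4 steps, which is the “quine in lambda calculus” behaviour; reducing instead inside T collapses it to \x.x x, giving the omega combinator.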

I was curious if chemlambda can reduce this term correctly. At first sight it does not. In this page use the “change” button until you have the message “random choices” and put the rewrites weights slider in the middle between “grow” and “slim”. These are the default settings, so at first you don’t have to do anything.

Click the “start” button. You’ll see that after a while the graph (generated by the lambda to chemlambda parser) reduces to only one yellow node (a FOE) and the FROUT (free out) node of the root (of the lambda term). So chemlambda seems incapable of reducing this term correctly.

You can play with the (graph of the) term, reducing it by hand, i.e. by hovering with the mouse over the nodes to trigger rewrites. If you do this carefully then you’ll manage to reduce the graph to one which is identical with the graph of the omega combinator, except that it has FOE nodes instead of FO nodes. By using the mol>lambda button you can see that such a graph does represent the omega combinator.

Now, either you succeeded in this task, or not. If not, just reload (by using the lambda>mol button), click “change” to get “older first”, and move the rewrites weights slider to “GROW”. This is the standard setting to check if the graph is a quine. Push start. What happens?

The reduction continues forever. This is a quine!

You’ll see that mol>lambda gives nothing. This means that the decoration with lambda terms does not reach the root node. We are outside lambda calculus with this quine, which is different from the quine given by the omega combinator.

The two quines have the same gene. One of the quines (the graph of omega) never dies. The other quine is mortal: it dies by reducing eventually (in the “random choices” case) to a FOE node.


Chemlambda, lambda calculus and interaction combinators experiment notes ready

As arXiv:2003.14332 [cs.AI] appeared the notes “Artificial chemistry experiments with chemlambda, lambda calculus, interaction combinators”, which are the access point for the library of experiments.

For those interested, I recommend reading section 1, How not to read these notes, and section 2, How you can contribute.

Next, on one side I’ll continue to modify the draft version according to needs, but on the other, main side, I’ll be cheeky and take a chance to go directly to an alternative linear logic essay.

Asymmetrical Interaction Combinators rewrites

I put asymmetrical IC rewrites in chemistry.js, which look almost like the beta rewrite:

// action modified from "GAMMA-GAMMA" with 1 pair Arrow-Arrow added, asymmetric
{left:"GAMMA",right:"GAMMA",action:"GAMMA-GAMMA-arrow1", named:"GAMMA-GAMMA", t1:"Arrow",t2:"Arrow", kind:"BETA"},

// action modified from "DELTA-DELTA" with 1 pair Arrow-Arrow added, asymmetric
{left:"DELTA",right:"DELTA",action:"DELTA-DELTA-arrow1", named:"DELTA-DELTA", t1:"Arrow",t2:"Arrow", kind:"BETA"},

The previous rewrites, now commented out, were symmetrical.

The asymmetry comes from the Arrow elements, which can be inserted in two different ways, depending on the two possible identifications of a GAMMA-GAMMA (or DELTA-DELTA) pattern.

IC graphs are not oriented, but all nodes have a distinguished numbering of ports. The asymmetric rewrites lead to the curious case of theoretically self-conflicting rewrites.

But this asymmetry is harmless, as you can verify by playing with IC quines.

Molecules laboratory and news about the 10-node quine

There is now a virtual lab where you can input molecules by hand and play with them, in addition to choosing them from the menu.

When you load the lab page you see that the text area of input molecules is already loaded with something.

You may of course delete what is written there and just build your molecule: write the molecule line by line and hit “input” to see it. You may manipulate it by triggering rewrites with the mouse hover. Then hit “update” to see the new molecule you get.

For example, suppose you delete what’s written in the text area and you start fresh by typing

A 1 2 3^

then you hit “input”. What do you get?

You add another node, say you continue by

A 1 2 3^FI 3 4 5^

hit input, what do you get?

Now you may like to connect two free edges, say the 5 with the 1: you add text to get

A 1 2 3^FI 3 4 5^Arrow 5 1^

hit input, what do you get?

Use mouse hover over the “Arrow” node (white one), what do you get?

hit “update”, what do you get?

And so on.
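From these examples one can guess the shape of the mol notation: nodes separated by “^”, each node being a type name followed by its port edge names. Here is a tiny parser along those lines; it is a guess at the format inferred from the examples above, not the lab’s actual code:

```javascript
// Parse a mol string like "A 1 2 3^FI 3 4 5^Arrow 5 1^" into nodes.
function parseMol(text) {
  return text.split("^")
    .map(s => s.trim())
    .filter(s => s.length > 0)
    .map(s => {
      const [type, ...ports] = s.split(/\s+/); // node type, then its port edge names
      return { type, ports };
    });
}

// A free edge is an edge name that appears on exactly one port.
function freeEdges(nodes) {
  const count = {};
  for (const n of nodes)
    for (const p of n.ports) count[p] = (count[p] || 0) + 1;
  return Object.keys(count).filter(p => count[p] === 1);
}

const mol = parseMol("A 1 2 3^FI 3 4 5^Arrow 5 1^");
console.log(mol.length);     // 3 nodes: A, FI, Arrow
console.log(freeEdges(mol)); // edges 2 and 4 remain free; 1, 3, 5 are internal
```

This also explains the Arrow step above: adding `Arrow 5 1^` turns the two free edges 5 and 1 into a connection, leaving only 2 and 4 free.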

Now, let’s return to the molecule already present in the text area. There are in fact two molecules; they are the children of a 10-node quine.

In a previous post I announced that the 10-node quine can duplicate. I put up an animation movie which shows that. But the lab showed me I was wrong.

Exercise: in the lab choose the 10-node quine from the menu.

With mouse hover try to modify the molecule until you obtain two separated molecules.

If you succeed, then prepare to check if they are quines, by:

  • click on the “change” button to see “older is first” message. This means that the rewrites will be done based on the age of nodes and links.
  • move the rewrites weights slider to “GROW”
  • hit “start”

If the two molecules don’t die and they don’t grow indefinitely, then you get two quines from one.

Hit stop from time to time to see what you have. Hit update to have the molecules code in the text area.

That is what I did to obtain the two children of the 10-node quine. They look so much alike. They are both 10-node quines, they have almost identical links, but they are not the same!

One of them is the original 10-node quine; the other is a quine which differs only in some links.

For the moment I have not succeeded in duplicating this second quine by hand.

Maybe you can find another way the 10-node quine can duplicate?





Chemlambda page 99%

Hey, I need criticism, bug finding, and suggestions for improvements, as concerns the legacy chemlambda.

Thank you, be well!

In the collection there is now added, whenever appropriate, the list of posts where a mol is used. This is a variant of search by mol files. It is dynamical, like all the collection, in the sense that whenever a post is added or retired, everything works without any other effort needed.

Another problem is to connect the other pages with the collection in the same way. The difficulty is that the collection is in one place (because it is big) and the other pages are in other places. I’ll have to think about that.

What is left is to write a text … something I tried to avoid. But in order to “publish” based on this work, this is needed.

Anyway, for the interested, it is in front of you and usable 🙂 For those who wait for the new parts announced, I’m here to talk, and I am preparing stuff for this. The story is big, as you see, only for stuff which I already call “legacy”; there are lots and lots of details which I have to do (which I like) and hoops to jump through (which I don’t like) for the sake of … a system which is basically dying.

Another two updates:

1. Right now I’m commenting the code, so the text is/will be in the programs. Let’s see where this goes.

2. I have the impression that I am arriving at a somewhat overengineered result. What I would need to talk about is, for example, the awesomeness of the Heisenberg picture of mechanics as seen in the emergent algebras plus hamiltonians with dissipation framework. But, you see, the problem is, as Robert Hermann wrote:

“… and I am supposed to sit back and wait for Professor Whosits to tell me when he thinks problems are “mature”…
I sent the papers he mentions to very few people … I am also interested to note that he did look at them, since there is considerable overlap in methodology with a recent paper by one of his students, with no mention of my papers in his bibliography …
any money spent by NSF on a Mathematics Research Institute would be down the proverbial rat hole – it would only serve to raise Professor Whosits’ salary and make him ever more arrogant. ”

🙂 So I try to make my position safe from any attack from the Whosits of the academic world.

Which is time lost for making new stuff…


Biological immortality and probabilities of chemical rewrites

This post is a continuation of the post Random choices vs older first. There is an apparent paradox involving the probability to die and the probability of a chemical rewrite.

An organism is biologically immortal if its probability to die does not depend on its age. Another probability which matters for organisms made of chemical compounds is the probability of a chemical rewrite, say one relevant for the organism’s metabolism.

Let’s suppose that there is a mechanism for such a chemical rewrite. For example, for each rewrite there is an enzyme which triggers it, as we supposed in the case of the chemical concrete machine. We can imagine two hypothetical situations.

Situation 1. The organism is chemically stable, so in the neighbourhood of a rewrite pattern there is a chance, constant in time, of the presence of a rewrite enzyme. The probability of the chemical rewrite would then be independent of time. We would expect, for example, that biologically immortal organisms are in this situation.

Situation 2. There is a more complex mechanism for chemical rewrites, where there is a way to make the probability of a chemical rewrite depend on time. As an example, suppose that the presence of the rewrite pattern triggers the production of rewrite enzymes. This would make the probability of the rewrite bigger for older rewrite patterns. But if the probability of older rewrite patterns being transformed is bigger than that of newer rewrite patterns, that would imply that the general probability to die (no more rewrites available) would be time dependent. It seems that such organisms are not likely to be biologically immortal.
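A numeric toy, not a chemlambda simulation, makes the contrast concrete: a constant per-step hazard gives an age-independent risk of death, as in Situation 1, while an age-dependent hazard does not, as in Situation 2:

```javascript
// hazard(t) = probability to die at step t, given that the organism is alive.
// The survival curve is the probability of still being alive at each step.
function survivalCurve(hazard, steps) {
  let alive = 1;
  const out = [];
  for (let t = 0; t < steps; t++) {
    out.push(alive);
    alive *= 1 - hazard(t);
  }
  return out;
}

const constantHazard = t => 0.1;               // Situation 1: memoryless
const agingHazard = t => Math.min(1, 0.02 * t); // Situation 2-like: risk grows with age

console.log(constantHazard(0) === constantHazard(10)); // true: same risk at any age
console.log(agingHazard(10) > agingHazard(0));         // true: older means riskier
```

With the constant hazard the conditional probability to die in the next step never changes, which is exactly the definition of biological immortality used above.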

The paradox is that it may be the other way around. Biologically immortal organisms may be in situation 2 and mortal organisms in situation 1.

This is illustrated by chemlambda quines. The page How to test a quine allows one to change the probability of the rewrites with respect to time.

By definition a quine graph is one with a periodic evolution under the greedy algorithm of rewrites, which performs as many non-conflicting rewrites as possible in a single step. The definition can be refined by adding that in case of conflicting rewrites the algorithm will choose a rewrite which “grows” the graph, by increasing the number of nodes. What is interesting is how such a quine graph behaves under the random rewrites algorithm.

A graph reduced with this greedy algorithm evolves as if it is in the extreme of Situation 2, where older rewrite patterns are always executed first (maybe excepting a class of rewrites which are always executed first, like the “COMB” rewrites). This observation leads us to redefine a quine graph as a graph which is biologically immortal in Situation 2!

You can check for yourself that the various graphs (from chemlambda or Interaction combinators) on that page are indeed quine graphs. Do it like this: pick a graph from the menu. Move the “rewrites weights slider” to the “grow side”. Click on the “change” button until you see the message “older is first”. Click “start”. If you want to reload, click “reload”.

A quine graph is an example of an (artificial life) organism which is immortal if the rewrite probabilities depend on the age of the rewrite pattern.

Under the random rewrite algorithm, i.e. Situation 1, such a quine graph may die or it may reproduce. These graphs are indeed alive.


Random choices vs older first

There is now a page to check if a graph is a quine. This is done by a clarification of the notion of a quine graph.

The initial definition for a quine graph is: one which has a periodic evolution under the greedy algorithm of rewrites. The greedy algorithm performs at each step the maximal number of non-conflicting rewrites.

Mind that for some graphs, at some moment, there might be more than one maximal collection of non-conflicting rewrites. Therefore the greedy algorithm is not deterministic, but almost.
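One greedy step can be sketched like this (illustrative js, not the actual implementation); the dependence on the order of the candidates is exactly the residual non-determinism:

```javascript
// One greedy step (sketch): from a list of candidate rewrites, each claiming
// a set of node ids, keep as many non-conflicting ones as possible.
function greedyStep(candidates) {
  const used = new Set();
  const chosen = [];
  for (const rw of candidates) {
    // a rewrite fires only if none of its nodes are already claimed
    if (rw.nodes.every(n => !used.has(n))) {
      chosen.push(rw);
      rw.nodes.forEach(n => used.add(n));
    }
  }
  return chosen; // a maximal non-conflicting collection
}

// Two conflicting candidates share node 2; only one of them can fire.
greedyStep([
  { name: "beta", nodes: [1, 2] },
  { name: "DIST", nodes: [2, 3] },
  { name: "COMB", nodes: [4, 5] },
]); // -> beta and COMB
```

Reordering the candidates so that DIST comes first yields a different, equally maximal collection, which is the "almost deterministic" part.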

In the js simulations one graph rewrite is performed at each step, so how can we modify this in order to check if a graph is a quine?

The answer is to introduce an age for the nodes and links of the graph. There is a global age counter which counts the number of steps. Each new node and each new link receives an “age” field which keeps the birth age of the said node or link.

The age of a rewrite pattern is then the maximum of the ages of its nodes and links.
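The bookkeeping can be sketched as follows (names are illustrative, not the actual code):

```javascript
// Sketch of the age bookkeeping: a global step counter stamps every new
// node and link with its birth step.
let ageCounter = 0; // incremented once per reduction step

function newNode(kind) { return { kind, age: ageCounter }; }
function newLink(from, to) { return { from, to, age: ageCounter }; }

// The age of a rewrite pattern is the maximum birth stamp of its pieces:
// the pattern exists only once its newest piece exists.
function patternAge(pattern) {
  return Math.max(
    ...pattern.nodes.map(n => n.age),
    ...pattern.links.map(l => l.age)
  );
}

// Example: a pattern whose newest piece was born at step 2.
ageCounter = 0; const a = newNode("A");
ageCounter = 2; const b = newNode("L");
patternAge({ nodes: [a, b], links: [newLink(a, b)] }); // 2
```

With this convention a smaller `patternAge` means an older pattern, so "older is first" picks the pattern with the minimal value.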

The “random choices” option leaves the initial reduction algorithm as is. That means the rewrites are executed randomly, with the exception of “COMB” rewrites, which are always executed first.

The “older is first” option always executes the rewrites on the oldest rewrite patterns (with the exception of “COMB” rewrites, which are executed first, when available). The effect is that the algorithm sequentially reduces maximal collections of non-conflicting graph rewrites!
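A sketch of this selection rule, assuming each candidate rewrite carries the birth step of its pattern (smaller stamp = older); this is an illustration, not the page's code:

```javascript
// "Older is first" (sketch): COMB rewrites take precedence when available;
// otherwise pick the candidate whose pattern was born earliest.
function pickRewrite(candidates) {
  const combs = candidates.filter(r => r.kind === "COMB");
  const pool = combs.length > 0 ? combs : candidates;
  return pool.reduce((oldest, r) => (r.age < oldest.age ? r : oldest));
}

pickRewrite([{ kind: "beta", age: 3 }, { kind: "DIST", age: 0 }]); // the DIST (older)
pickRewrite([{ kind: "beta", age: 0 }, { kind: "COMB", age: 5 }]); // the COMB
```

Repeatedly picking the oldest pattern exhausts one maximal collection of non-conflicting rewrites before touching the newer patterns those rewrites create, which is why this behaves like the greedy algorithm run sequentially.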

Therefore the definition of a quine graph should be modified to: a graph which has a bounded (in the number of nodes and links) evolution under the “older is first” algorithm.

The stabilising effect of “older is first” is, however, intriguing. Just look at the 9_quine: with “older is first” it never dies, as expected, but under “random choices” it does.

Similarly, use the new page to look at the behaviour of the 10_nodes quine. It either dies immediately or it never dies. This is understandable from the definition, because this quine has either a maximal collection of rewrites which kills it or another maximal collection of rewrites which transforms it into a graph which has a periodic evolution from that point on.

But what is intriguing is the suggestion that there should be a tunable parameter, between “random choices” and “older is first”, namely a parameter which makes the probability of a rewrite grow with the age of the pattern (i.e. older patterns are more likely to be rewritten than newer ones). At one extreme, older patterns are always executed first. At the other extreme, the probability is the same no matter the age of the pattern.

Having probabilities strongly dependent on the age of the pattern stabilizes the evolution of a quine graph: it either dies quickly or it lives forever. Probably it does not reproduce.

On the other hand, a weaker dependence on age implies a more natural evolution: the quine may now die or reproduce.
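One possible way to interpolate between the two regimes (an assumption of mine, not the slider actually implemented) is to weight each candidate rewrite exponentially in the age of its pattern:

```javascript
// Weight each candidate by exp(alpha * age), where age = now - birth step.
// alpha = 0 gives uniform "random choices"; a large alpha makes the oldest
// pattern dominate, approaching "older is first".
function weightedPick(candidates, now, alpha, rand = Math.random) {
  const weights = candidates.map(r => Math.exp(alpha * (now - r.age)));
  const total = weights.reduce((s, w) => s + w, 0);
  let x = rand() * total;
  for (let i = 0; i < candidates.length; i++) {
    x -= weights[i];
    if (x <= 0) return candidates[i];
  }
  return candidates[candidates.length - 1]; // guard against rounding
}
```

For example, with `alpha = 5` a pattern born at step 0 outweighs one born at step 9 by a factor of roughly e^45, so the older pattern is almost surely picked; with `alpha = 0` both are equally likely.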

The decorator: a glimpse into the way chemlambda does lambda calculus

The lambda2chemlambda parser has now a decorator. Previously there was a button (function) which translates (parses) a lambda term into a mol. The reverse button (function) is the decorator, which takes a mol and turns it into a lambda term and some relations.

Let’s explain.

The lambda calculus to mol function and the mol to lambda calculus function are not inverses of each other. This means that if we start from a mol file, translate it into a lambda term, then translate the lambda term back into mol, we don’t always get the same mol. (See for example this.)

Example 1: In this page we first have the lambda term PRED ((POW 3) 4), which is turned into a mol. If we use the mol>lambda button then we get the same term, up to renaming of variables. You can check this by copy-pasting the lambda term from the second textarea into the first and then using the lambda>mol button. You can see down the page (below the animation window) the mol file. It is the same.

Now push “start” and let chemlambda reduce the mol file. It will stop eventually. Move the gravity slider to MIN and rescale with the mouse wheel to see the structure of the graph. It is like a sort of surface.

Push the mol>lambda button. You get a very nice term which is a Church number, as expected.

Copy-paste this term into the first textarea and convert it into a mol by using lambda>mol. Look at the graph, this time it is a long, closed string, like a Church number should look in chemlambda. But it is not the same graph, right?

The mol>lambda translation produces a decoration of the half-edges of the graph. The propagation of this decoration into the graph has two parts:

  • propagation of the decoration along the edges. Whenever an edge already has both its source and target decorated, we get a relation between these decorations (they are “=”)
  • propagation of the decoration through the nodes. Here the nodes A, L, and FO are decorated according to the operations application, lambda and fanout. The FOE nodes are decorated like fanouts (so a FO and a FOE node are decorated the same! something is lost in translation). There is no clear way to decorate a FI (fanin) node. I chose to decorate the output of a FI with a new variable and to introduce two relations: first port decoration = output decoration and second port decoration = output decoration.

The initial decoration is with variables, for the ports of FRIN nodes, for the 2nd ports of the L nodes and for the output ports of the FI nodes.

We read the decoration of the graph as the decoration of the FROUT node(s). Mols from lambda terms have only one FROUT.

The graph is well decorated if the FROUT node is decorated and there are no relations.
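The node rules listed above can be condensed into a toy decorator fragment (illustrative js with hypothetical names, not the parser's code):

```javascript
// Toy fragment of the decoration rules through the nodes.
let fresh = 0;
const freshVar = () => "x" + (fresh++);

// deco maps half-edge ids to term strings; relations collects equalities.
function decorateNode(node, deco, relations) {
  switch (node.kind) {
    case "A":   // application node: out = (left right)
      deco[node.out] = `(${deco[node.left]} ${deco[node.right]})`;
      break;
    case "L":   // lambda node: bound variable sits on the 2nd port
      deco[node.out] = `\\${deco[node.bound]}.${deco[node.body]}`;
      break;
    case "FO":  // fanout: both outputs copy the input's decoration
    case "FOE": // FOE decorated like FO -- something is lost in translation
      deco[node.out1] = deco[node.in];
      deco[node.out2] = deco[node.in];
      break;
    case "FI":  // fanin: fresh variable on the output, plus two relations
      deco[node.out] = freshVar();
      relations.push([deco[node.in1], deco[node.out]]);
      relations.push([deco[node.in2], deco[node.out]]);
      break;
  }
}
```

Decorating an A node whose ports carry `f` and `x` yields `(f x)`; decorating a FI node yields a fresh variable plus the two relations, which is exactly what keeps the graph from being "well decorated".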

Example 2: Let’s see how the omega combinator reduces. First click the mol>lambda button to see that we get the same term back, the omega.

Now use the “step” button, which does one reduction step, randomly. In the second textarea you see the decoration of the FROUT node and the relations.

What happens? Some steps make sense in lambda calculus, but some others don’t. In these relations you can see how chemlambda “thinks” about lambda calculus.

If you use the “step” button to reduce the mol, then you’ll see that you obtain back the omega combinator, after several steps.



How does a zipper logic zipper zip? Bonus: what is Ackermann goo?

Zipper logic is an alternative to chemlambda, where two patterns of nodes, called half-zippers, appear.

It may be easier to mimic, from a molecular point of view.

Anyway, if you want a clear image of how it works, there are two ways to play with zippers.

1. Go to this page and execute a zip move. Is that a zipper or not?

2. Go to the lambda2chemlambda page and type this lambda term

(\h.\g.\f.\e.\d.\c.\b.\a.z) A B C D E F G H

Then reduce it. [There is a difference, because a, b, …, h do not occur in the body, so the parser adds a termination node to each of them; when you reduce it, the zipper will zip and then disappear.]

You can see here the half-zippers



which are the inspiration of the zippers from zipper logic.

In chemlambda you can make FI-zippers and FOE-zippers as well; I used this for permutations.

BONUS: I made a comment at HN which received the funny reply “Thanks for the nightmares! 🙂“, so let me recall, by way of this comment, what an Ackermann goo is:

A scenario more interesting than boundless self-replication is Ackermann goo [0], [1]. Grey goo starts with a molecular machine able to replicate itself. You get exponentially more copies, hence goo. Imagine that we could build molecules like programs which execute themselves via chemical interactions with the environment. Then, for example, a Y combinator machine would appear as a linearly growing string [2]. No danger here. Take Ackermann(4,4) now. This is vastly more complex than a goo made of lots of small dumb copies.





Robert Hermann on peer review

The gossip blog “Not even wrong”, not a friend of Open Science, has an update of the post Robert Hermann 1931-2020. Following the update to an older post, the reader is led to some very relevant quotes from Robert Hermann on peer review.

For those who are not aware, Robert Hermann was far ahead of his time not only in the understanding of the geometrical structure of topics in modern physics, but also in his efforts concerning research sharing.

I reproduce the quotes here,  copy-pasted from the sources in the linked comments.

Before that, some very short answers to potential questions you may have:

  • I don’t think badly of American mathematics or physics; on the contrary, the point is that if bad things happen in that strong research community, then the same or worse is to be expected in other communities. I believe these opinions of Hermann apply everywhere in the research community, today.
  • Peer review is better than no peer review, but it is worse than validation.
  • Peer reviews are opinions, with good and bad sides. They are not part of the scientific method.
  • By comparison, the author who makes all the work available (i.e. Open Science) opens the way to the reader to independently validate the said work. This is the real mechanism of the scientific method.

The quotes, from the sources:

[source] … consider these quotes from two letters he published in his 1979 book “Cartanian Geometry, Nonlinear Waves, and Control Theory: Part B”:
“… I am not the only one who has been viciously cut down because I tried to break out of the rigid shell and narrow grooves of American mathematics. … My proposal was to continue my … work with … Frank Estabrook and Hugo Wahlquist of the Jet Propulsion Laboratory. … I most deeply resent the arrogance of the referee #3 toward their work … typical … arrogance of Referee #3 is his blather about the “prematureness” of our work … Now, we are working in a field – nonlinear waves – which is moving extremely rapidly and which has the potential for the most important applications, ranging from … Josephson junction to … fusion … and I am supposed to sit back and wait for Professor Whosits to tell me when he thinks problems are “mature”…
I sent the papers he mentions to very few people … I am also interested to note that he did look at them, since there is considerable overlap in methodology with a recent paper by one of his students, with no mention of my papers in his bibliography …
any money spent by NSF on a Mathematics Research Institute would be down the proverbial rat hole – it would only serve to raise Professor Whosits’ salary and make him ever more arrogant. It would do more good to throw the money off the Empire State Building: at least there is a chance it would be picked up and used creatively by a poor, unemployed mathematician …
This issue transcends my own personal situation …
Most perversely, the peer review system … works as a sort of Gallup poll to veto efforts by determined individuals … As budgets have tightened, the specialists fight more and more fiercely to keep what little money is available for their own interests. Thus, people with a generalist bent are driven out …”.

[source] … Hermann said in letters published in his 1979 book “Cartanian Geometry, Nonlinear Waves, and Control Theory: Part B”:
“… In 1975 … I had essentially quit my academic job at Rutgers (so I could do my research full time), and my main support came from Ames Research Center (NASA) for my work on control theory. I was also starting a publishing company, Math Sci Press, writing books for it to hold out the hope that, some day, I would get off this treadmill of endless grant proposals. (Unfortunately, it is still [March 1979] at best barely breaking even.) …
Ever since I lost my ONR grant in 1970, thanks to Senator Mansfield, I have been trying to persuade NSF … that my work on the differential geometric foundations of engineering and physics is worthy of their support … I see my colleagues who stay within the disciplinary “clubs” receiving support much more readily … Thanks to Freedom of Information, I finally see what the great minds of my peers object to, and I see nothing but vague hearsay, bitchiness, and plain incompetence in reviewing … specialized closed shops that blatantly discriminate against the sort of … work that I do.”


Parser gives fun arrow names

The modifications are not yet released, but the lambda2chemlambda parser can be made to give fun arrow names. For example, the term

((\g.((\x.(g (x x))) (\x.(g (x x))))) (\x.x))

which is the Y combinator applied to id, becomes the mol

FROUT [^L [((\g. [((\g [((\^L [((\g.((\x. [((\g.((\x [((\g.((\^FO [((\g [((\g.((\x.(g [((\g.((\x.(g*^A [((\g.((\x.(g [((\g.((\x.(g@( [((\g.((\x.(g@^FO [((\g.((\x [((\g.((\x.(g@(x [((\g.((\x.(g@(x*^A [((\g.((\x.(g@(x [((\g.((\x.(g@(x@x [((\g.((\x.(g@(x@^Arrow [((\g.((\x.(g@(x* [((\g.((\x.(g@(x@x^Arrow [((\g.((\x.(g@(x@ [((\g.((\x.(g@(^Arrow [((\g.((\x.(g@ [((\g.((\x.(^Arrow [((\g.((\x.( [((\g.((\x.^Arrow [((\g.((\ [((\g.((^A [((\g.(( [((\g.((\x.(g@(x@x)))@( [((\g.((\x.(g@(x@x)))@^L [((\g.((\x.(g@(x@x)))@(\x. [((\g.((\x.(g@(x@x)))@(\x [((\g.((\x.(g@(x@x)))@(\^Arrow [((\g.((\x.(g* [((\g.((\x.(g@(x@x)))@(\x.(g^A [((\g.((\x.(g@(x@x)))@(\x.(g [((\g.((\x.(g@(x@x)))@(\x.(g@( [((\g.((\x.(g@(x@x)))@(\x.(g@^FO [((\g.((\x.(g@(x@x)))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x [((\g.((\x.(g@(x@x)))@(\x.(g@(x*^A [((\g.((\x.(g@(x@x)))@(\x.(g@(x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x* [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@ [((\g.((\x.(g@(x@x)))@(\x.(g@(^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@ [((\g.((\x.(g@(x@x)))@(\x.(^Arrow [((\g.((\x.(g@(x@x)))@(\x.( [((\g.((\x.(g@(x@x)))@(\x.^Arrow [((\g.((\x.(g@(x@x)))@(\ [((\g.((\x.(g@(x@x)))@(^Arrow [((\g.((\x.(g@(x@x)))@ [((\g.(^Arrow [((\g.( [((\g.^Arrow [((\ [((^A [(( [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@( [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@^L [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x. [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.x^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\ [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@ [(^Arrow [( [

Recall that the parser is part of the landing page for all chemlambda projects.

So, if you write in the parser:

(\x.a) b

which is the pattern for a beta rewrite, then the relevant part of the mol (with funny arrow names) and the rewrite will have the form:



Biography of Sci-Hub creator Alexandra Elbakyan

I found today Alexandra Elbakyan’s biography, written by herself.

This link is to the original (in Russian) and this link is to the Google Translate version in English.

UPDATE: Links no longer available but there is now this page and the archived version.

I think this is a very interesting read. You can get a really first hand description of the context and motivations of the creation of Sci-Hub. It is also a glimpse into the mind of a special individual who was born and lived in a middle of nowhere and who changed the world.

Some quotes, which I particularly resonate with:

“What is this misfortune?” I thought “again they see in me not a man, but a programmer”

“It was 2012, and I turned 24. I was a patriot and supported Putin’s policies. And I was also the creator of the Sci-Hub service, which, according to numerous reviews, incredibly helped Russian science.

But no one called and wrote to me like that.
No one invited me to participate in any scientific projects.
Every day I went in a cold, crowded train from Odintsovo, where the HSE hostel was located – to the university and back.”

This especially I can’t understand. For anyone creative it would be a privilege to participate in a scientific project with Elbakyan.

Beta and dist are emergent, just like Reidemeister 3 and Hamiltonian mechanics

The title says it all 🙂 This is a time-tag announcement. The computing with space project is essentially finished. The last piece was discovered today.

I still have to write it all down though. It would help me do it quicker if I were made to give talks or something like that.

It is beautiful.

UPDATE: See how in the Pure See description (a working draft at the moment of this update) and also look at the slides of the presentation “Emergent rewrites in knot theory and logic”.

All goes well!

The “attack” on my institute’s website, or whatever that was, seems to have a solution. So now all pages work (Feb 10 2020).

In conclusion, you may use:

I can be found at:

EDIT: some words about the revived collection. There are 264 posts/animations, which is a bit more than 1/2 of the original collection. Now there is the possibility to rerun the simulation in js, because whenever possible there is a mol file attached to the animation, which can be reduced in js. Some numbers now. In verifiedMol.js there are 500 mol files, but some are duplicates, in order to manually enhance the automated matching of posts with mols, so say there are about 490 mol files. If they are too big to be used without stalling the js reduction, this is signaled by the message “mol too big” in the post. If there is no mol which matches, this is signaled as “mol unavailable”. Of all 264 posts, 36 of them fall in the “mol too big” category, 46 in the “mol unavailable” one, and there are 6 posts which don’t have a chemlambda simulation inside. So this leaves 264-88=176 posts which have matching mol files to play with. Finally, there are two situations where the mol-post matching is not perfect: (1) when the original simulation uses a mol file which contains nodes of a busy-beaver Turing machine (of the kind explained here), (2) when the original uses a .mola file (a mol with actors). In both cases the js reduction does not know how to handle this.

Pure See, emergent beta, Heisenberg

Some updates, for things to come and plans.

1. Pure See (there is now a working draft) is a relative of lambda calculus, in the sense that it is Turing universal and very simple, but it does not use abstraction, application or let as primitives. It is a programming language built over commutative emergent algebras, i.e. those with the shuffle trick, or equivalently with the algebraic properties of em-convex (but mind that em-convex still uses the lambda and application operations; these are not needed).

I plan to make a parser for Pure See very soon.

2. This means that Pure See is as commutative as lambda calculus. However, the general theory that I have in mind is non-commutative. And emergent, in the sense of emergent algebras.

Before going fully non-commutative, one has to realize the beta rewrite as emergent. This is true, in the same way as associativity is emergent in the equational theory of emergent algebras, or the way the Reidemeister 3 rewrite is realized from R1 and R2 (and a passage to the limit). The fact that beta is emergent is what makes Pure See work, and it answers the question: do emergent algebras compute? Yes, they do, because in the most uninteresting situation, the commutative one, we can implement lambda calculus with commutative emergent algebras.

3. The first non-commutative case is the Heisenberg group, described as a non-commutative emergent algebra. I have had the description for a long time. The shuffle trick becomes something else. This means that the beta rewrite and the DIST rewrites change into something more interesting. The whole formalism actually becomes something else.

I thought that the general non-commutative case is in principle far more complex than the Heisenberg case. It was also unsatisfying that I had no explanation for the reason why Heisenberg groups appear in physics. What’s special about them?

Now I know, they are logically unavoidable (again in the frame of emergent algebras).

So I still play with this new point of view and I wonder what to do next.

The wise thing would be to carefully explain, in a legacy way, all this body of work. My initial plan was to base these explanations on a backbone of openly communicated programs and demos, so that the article versions would be a skin of the whole description. Whoever wants to read bedtime stories has the article. Whoever wants more has the programs. Whoever wants all thinks about all this.

With the DDOS, or whatever it is, it becomes harder to use independent ways of sharing.

Or should I jump directly to the non-commutative case?

Or has somebody really started to make molecular computers? If so, it would be, in the short term, the most interesting thing.


DDOS attack or huge number of hits from US and China

UPDATE: Feb 10 2020, it seems to work. The situation lasted from Jan 19 to Feb 10 2020.

These are the two explanations I received about the bad behaviour of the site where I have my professional page. It started around Sunday, Jan 19 2020. (Mentioned here.)

I don’t know which is right: any of them, both or none. This, however, blocks access to this copy of the chemlambda collection, which I put online here on Saturday, Jan 11 2020.

In case you want to access the collection, there are the following possibilities:

  • the original site, which sits in a place not under the control of a corp.
  • a copy of the site, with smaller pictures (I am limited to 0.5GB), at github, which may be blocked in your country (??)
  • you can take the original simulations which were used to make the animations from figshare. For the comments and the internal links, take them from one of the available places
  • the landing page for all chemlambda projects is on github too…
  • or maybe there is a kind soul which has access to these sites and can host the whole stuff in a non-corp. place which is accessible to everybody
  • or you find a way to notify me that you’re interested, or willing to help, and we see what we can do.


Anyway, wait for pure see, that will be a sight 🙂

All chemlambda projects landing page

I put a version of the collection of animations on github. I made a user named “chemlambda” and now there is a landing page for all chemlambda projects (bare minimum).

I’ll add more and I’ll structure that page more, but with the occasion of making the collection available, making such a site was only natural.

Please let me know if there are more (than the excellent ones I know about, like -hask, -py or -editor).



3 days since the server became too busy, so…

UPDATE: a version of the collection is on github.


The collection needs a better place. Alternatively, I could temporarily use github (by making the animations smaller, I can cram the collection into 480MB). Or, better, with a replacement of the animations by the simulations themselves. As you see, these simulations occupy 1GB, but they can be mined in order to extract the right parameters (gravity, force strength, radii and colors, mol source) and then just reuse them in the js.

Anybody willing? I need to explain what pure see is about.

Also, use this working link to my homepage.

Google+ salvaged collection of animations (III): online again!

UPDATE: Chemlambda collection of animations is the version of the collection hosted on github. The original site is under very heavy traffic (in Jan 2020).

UPDATE: For example the 2 neurons interacting animation can be remade online to look like this:


First use the mouse wheel to rescale and the mouse to translate. Notice the gravity slider position. This is an animation screencast from the real thing, which takes 8 min to unfold. But in this way you see what is happening beyond the original animation.

Btw, what is such a neuron? It is simply a (vectorial) linear function, which is applied to itself, written in lambda calculus. These two neurons are two linear functions, with some inputs and outputs connected.

Soon the links will be fixed (internal and external) [done] and soon after that there will be a more complete experience of the larger chemlambda universe. (and then the path is open for pure see)


In Oct 2018  I deleted the G+ chemlambda collection of animations, before G+ went offline. Now, a big part of it is online, at this link. For many of the animations you can do now live the reduction of the associated molecule.

The association between posts, animations and source mol files is by best fit.

There are limitations explained in the last post.

There are still internal links to repair and there has to be a way to integrate all in one experience, to pay my dues.

I put on imgur this photo with instructions, easy to share:

Screenshot from 2020-01-12 19:34:17

Use wheel mouse to zoom, mouse to move, gravity slider to expand.

The salvaged collection of animations (II)

UPDATE: Chemlambda collection of animations is the version of the collection hosted on github. The original site is under very heavy traffic (in Jan 2020).

UPDATE: much better now, although I seriously consider jumping directly to pure see. However, it is very rewarding to get past blocks.


(Continues the first post.) I forgot how much the first awk chemlambda scripts were honed, and how much the constants of the animations produced were further tuned so as to illustrate a point of view in a visually interesting way. The bad part of the animations first produced is that they are big html files, sometimes taking very long to execute.

The all-in-one js solution built by ishanpm, then modified and enhanced by me, works well and fast for graphs with up to approximately 1000 nodes. The physics is fixed; there are only two controls: the gravity slider, which allows you to expand/contract the graphs, and the rewrites slider, which changes the probabilities of rewrites that increase/decrease the number of nodes. Although there is randomness (initially, in the ishanpm js solution, there was none), it is a weak and not very physical one (considering the idea that the rewrites are caused by enzymes). It is funny that the randomness is not taken seriously; see for example the short programs of formality.

After I revived the collection of animations from G+ (I kept about 300 of them), I still had to associate the animations with the mol files used (many of them actually not available in the mol library) and to use the js chemlambda version (i.e. this one) with the associated mol files. In this way the user would have the possibility to redo the animations.

It turns out it does not work like this. The result is almost always of much lower quality than the animation. However, the sources of the animations (obtained from the awk scripts) are available here. But as I said at the beginning of the post, they are hard to play (fast enough for the goldfish attention span); actually this was the initial reason for producing animations, because the first demos, even chosen to be rather short, were still too long…

So this is more of a work of art, which has to be carefully restored. I have to extract the useful info from the old simulations and embed it into a full js solution. Coming back to randomness: in the original version there are random cascades of rewrites, not random rewrites one at a time like in the new js version… and they extinguish the randomly available pockets of enzymes, according to some exponential laws… and so on. That is why the animations look more impressive than the actual fast solution, at least for big graphs.

It is true that the js tools from the quine graphs repository have many advantages: interaction combinators are embedded, there is a lambda calculus to chemlambda parser… With these tools I discovered that the 10 nodes quine does reproduce, that the ouroboros is mortal, that there are many small quines (in interaction combinators too), etc.

And it turns out that I forgot that many interesting mols and other stuff were left unsaid or are not publicly available. My paranoid self in action.

In conclusion, probably I’ll make available some 300 commented gifs from the collection and I’ll pass to the scientific part. I’d gladly expose the art part somewhere, but there seems to be no place for this art, technically, as there is no place, technically, for the science part, as a whole, beyond just words telling stories.

There will be, I’m sure.

The salvaged collection of animations

UPDATE: Chemlambda collection of animations is the version of the collection hosted on github. The original site is under very heavy traffic (in Jan 2020). Small images, about a 1/2 of the collection, due to memory limitations. But you can play the simulations in js!

UPDATE: As I progress into integrating more, I think I might sell microSD cards with the full experience. Who knows, in a year from now I might even think about a whole (real or game like) VR programming medium in a sort of chemlisp crossed with pure see. If anybody interested call me.

One more thing: as you shall see, the animations (and the originals) are the result of both a work of science and a work of art. Within the constraints (random evolution, only the physical constants and colors allowed to be modified, only beforehand, only cuts allowed) a world of dreams opens.


Now I have a functional (local) version of the chemlambda collection of animations, salvaged from G+. A random slice:


On the short term todo list is:

  • integrate it with the quine graphs and with the lambda stuff.
  • add text to the chemlambda for the people and integrate with the rest.
  • to release the quine graphs article I still need a decorator, a deterministic reducer and a quine discoverer, all of them pretty standard. Make a chemlisp repl, perhaps? It would need only a small rewrite of the parser…
  • to release the hapax article I need to add a visual loop, also to rewrite some of the functions because I already need them in the quine discoverer.

… and then my dues will be finally paid and I can attack pure see and stochastic SBEN with full serenity.

Oh wait, I still have to make a big intro to em-convex, release the second part, describe the category CONICAL and related work in sub-riemannian geometry,  explain the solution of the computation power of emergent algebras … and then my dues will be paid and … 🙂

Open access in 2019: still bad for the career

Have you seen this:

“The American publishing industry invests billions of dollars financing, organizing, and executing the world’s leading peer-review process in order to ensure the quality, reliability, and integrity of the scientific record,” said Maria A. Pallante, President & CEO of the Association of American Publishers. “The result is a public-private partnership that advances America’s position as the global leader in research, innovation, and scientific discovery. If the proposed policy goes into effect, not only would it wipe out a significant sector of our economy, it would also cost the federal government billions of dollars, undermine our nation’s scientific research and innovation, and significantly weaken America’s trade position. Nationalizing this essential function—that our private, non-profit scientific societies and commercial publishers do exceedingly well—is a costly, ill-advised path.”

Yes, well, this is true! It is bad for publishers, like Elsevier; it is bad for some learned societies which signed this letter, like the ACM.

But it would be a small step towards a more normal, 21st-century style of communication among researchers. Because researchers no longer need scientific publishers of this kind.

What is more important? That a useless industry loses money, or that researchers can discuss normally, without the mediation of this parasite from an older age?

Obviously, researchers have careers, which depend on the quantification of their scientific production. The quantification is made according to rules dictated by academic management. The same management who decides to buy from the publishers something the researchers already have (access).

So, no matter how evil the publishers may be, management is worse. Because suppose I make a social media app which asks 1$ for each word one types into it. Would you buy it, in case you want to exchange messages with your colleagues? No, obviously. No matter how evil I am by making this app, I would have no clients. But suppose now that your boss decides that the main criterion of career advancement is the number of words you typed into this app. Would you buy it, now? Perhaps.

Why, tell me why the boss would decide to make such a decision? There has to be a reason!

Who is the most evil? I or the boss?

By coincidence, the same day I learned about the letter against open access, I also read Scott Aaronson’s post about the utmost important problem of the name “quantum supremacy”.

The post starts with good career news:

“Yay! I’m now a Fellow of the ACM. […] I will seek to use this awesome responsibility to steer the ACM along the path of good rather than evil.”

Then Scott spends more than 3100 words discussing the “supremacy” word. Very important subject. People in the media are concerned about this.

First Robert Rand’s comment, then mine, asked about Scott’s opinion, as a new member of the ACM, concerning the open access letter.

The answer has about 100 words, the gist being:

“Anyone who knows the ACM better than I do: what would be some effective ways to register one’s opposition to this?”

A possible answer to my question concerning bosses is: OA is still bad for the career, in 2019.


What about 2020 projects?

This is the 2nd year in which I update predictive posts for the activity in the year to come; the last one is from Dec 2018, updated today.

[this place left blank, to be filled at the right time]

Dabble in pure see and anharmonic lambda as a solution for the emergent algebras problem, rigorously, for 2020.

UPDATE (dec. 2020): Many things done, more to come. Pure see in particular.

Use the lambda to chemlambda parser to see when the translation doesn’t work

I use the parser page mainly and other pages will be mentioned in the text.

So chemlambda does not solve the problem of finding a purely local conversion of lambda terms to graphs which can then always be reduced by a purely local random algorithm. This is one of the reasons I insist both on going outside lambda calculus and on looking at possible applications in real chemistry, where some molecules (programs) do reduce predictably and the span of the cascade of reactions (reductions) is much larger than one can achieve via a massive, brutal, try-everything strategy on a supercomputer.

Let’s see: choose

(\a.a a)(\x.((\b.b b)(\y.y x)))

it should reduce to the omega combinator, but read the comment too. I saw this lambda term, with a similar behaviour, in [arXiv:1701.04691], section 4.

Another example took me by surprise. Now you can choose “omega from S,I combinators”, i.e. the term

(\S.\I.S I I (S I I)) (\x.\y.\z.(x z) (y z)) \x.x

It works well, but I previously used a related term, actually a mol file, which corresponds to the term where I replace I by S K K in S I I (S I I), i.e. the term

S (S K K) (S K K) (S (S K K) (S K K))

To see the reduction of this term (mol file) go to this page and choose “omega from S,K combinators”. You can also see how indeed S K K reduces to I.
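To convince yourself symbolically that S K K reduces to I, here is a minimal SKI reducer — my own illustration in plain Python, not the chemlambda graph reduction or the mol files from the pages above. Terms are the strings ‘S’, ‘K’, ‘I’, variable names, or pairs (f, x) meaning the application of f to x.

```python
def step(t):
    """One leftmost (normal order) reduction step; returns (term, changed)."""
    if not isinstance(t, tuple):
        return t, False
    f, x = t
    if f == 'I':                                     # I x -> x
        return x, True
    if isinstance(f, tuple) and f[0] == 'K':         # K a b -> a
        return f[1], True
    if (isinstance(f, tuple) and isinstance(f[0], tuple)
            and f[0][0] == 'S'):                     # S a b c -> a c (b c)
        a, b = f[0][1], f[1]
        return ((a, x), (b, x)), True
    nf, changed = step(f)                            # otherwise recurse
    if changed:
        return (nf, x), True
    nx, changed = step(x)
    return (f, nx), changed

def normalize(t, limit=1000):
    """Reduce until no rule applies (or the step limit is reached)."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            return t
    return t

# S K K x reduces to x, i.e. S K K acts like the identity I:
print(normalize(((('S', 'K'), 'K'), 'x')))  # prints: x
```

The step limit matters for terms like omega, which never reach a normal form — the same mortality question the quines raise in graph form.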

But initially in the parser page menu I had  the term

(\S.\K.S (S K K) (S K K) (S (S K K) (S K K))) (\x.\y.\z.(x z) (y z)) (\x.(\y.x))

It should reduce well but it does not. The reason is close to the reason the first lambda term does not reduce well.

Now for a bright side of it. Look at this page to see that the ouroboros quine is mortal. I believed it was obviously immortal until recently. Now I have started to believe that immortal quines in chemlambda are rare. Yes, there are candidates, like (the graph obtained from) omega, or why not (try with the parser) 4 omega

(\f.(\x.(f(f (f (f x)))))) ((\x.x x) (\x.x x))

and there are quines like “spark_243501” (shown in the menu of this page) with a small range of behaviours. On the contrary, all quines in IC are immortal.

Lambda calculus to chemlambda parser (2) and more slides

This post has two goals: (1) to explain more about the lambda to chemlambda parser and (2) to talk about slides of presentations which are connected with one another across different fields of research.

(1) There are several incremental improvements to the pages from the quine graphs repository. All pages, including the parser one, have two sliders, each giving you control about some parameters.

The “gravity” slider is kind of obvious. Recall that you can use your mouse (or pinching gestures) to zoom in or out on the graph you see. With the gravity slider you control gravity. This allows you to see the edges of the graph better, for example by moving the gravity slider to the minimum and then zooming out. Or, on the contrary, if you have a graph which is too spread out, you can increase gravity, which will have as effect a more compact-looking graph.

The “rewrites weights slider” has as extrema the mysterious words “grow” and “slim”. It works like this. The rewrites (excepting COMB, which are done preferentially anyway) are grouped into those which increase the number of nodes (“grow”) and the other ones, which decrease the number of nodes (“slim”).

At each step, the algorithm tries to pick a rewrite at random. If there is a COMB rewrite to pick, then it is done. Else, the algorithm will try to pick at random one “grow” and one “slim” rewrite. If only one of these is available, i.e. if there is a “grow” but no “slim” rewrite, then that rewrite is done. Else, if there is a choice between two randomly chosen “grow” and “slim” rewrites, we flip a coin to choose among them. The coin is biased towards “grow” or “slim” with the rewrites weights slider.
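The selection loop just described can be sketched like this (a sketch with assumed names; the actual js code behind the pages surely differs):

```python
import random

def pick_rewrite(comb, grow, slim, slim_bias, rng=random):
    """comb, grow, slim: lists of currently available rewrites.
    slim_bias in [0, 1] is the slider position: 0 always favors the
    "grow" candidate, 1 always favors the "slim" one when both exist."""
    if comb:                          # COMB rewrites are done preferentially
        return rng.choice(comb)
    g = rng.choice(grow) if grow else None
    s = rng.choice(slim) if slim else None
    if g is None:
        return s                      # may be None: nothing available
    if s is None:
        return g
    # both kinds available: flip the biased coin between the two candidates
    return s if rng.random() < slim_bias else g
```

For example, `pick_rewrite([], ['DIST'], ['BETA'], 1.0)` always returns the “slim” candidate, matching the slider pushed all the way to “slim”.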

This is interesting to use, for example with the graphs which come from lambda terms. Many times, but not always, we are interested in reducing the number of nodes as fast as possible. A strategy would be to move the slider to “slim”.

In the case of quines, or quine fights, it is interesting to see how they behave under “grow” or “slim” regime.

Now let’s pass to the parser. It works well now; you can write lambda terms in a human way, but mind that “xy” will be seen as a variable, not as the application of “x” to “y”. Application is “x y”. Otherwise, the parser correctly understands terms like

(\x.\y.\z.z y x) (\x.x x)(\x. x x)\x.x

Then I followed the suggestion of my son Matei to immediately do the COMB rewrites, thus eliminating the Arrow nodes given by the parser.

About the parser itself. It is not especially short, for several reasons. One reason is that it is made as a machine with 3 legs, moving along the string given by the lexer, just like the typical 3-valent node. That is why it will be interesting to see it in action, visually. Another reason is that the parser first builds the graph without fanout (FO) and termination (T) nodes, then adds the FO and T nodes. Finally, the lambda term is not prepared in advance by any global means (excepting the check for balanced parentheses). For example, no de Bruijn indices.

Another reason is that it allows one to understand what the edges of the (mol) graph are, or more precisely what the port variables (edge variables) correspond to. The observation is that the edges are in correspondence with the positions of the items (lparen, rparen, operation, variable) in the string. We need at most N edge names at this stage, where N is the length of the string. Finally, the second stage, which adds the FO and T nodes, needs at most N new edge names — in practice much fewer: the number of duplicates of variables.

This answers the question: how can we efficiently choose edge names? We could use as edge name the piece of the string up to the item, and we can double this number of names by using an extra special character. Or, if we want to be secretive, now that we know how to constructively choose names, we can try to use and hide this procedure.
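The positional naming scheme can be sketched like this (my illustration only; the actual parser code is different):

```python
def tokenize(s):
    """Tiny lexer keeping the position of each item in the string:
    parens, lambda '\\', dot, and alphanumeric variable names."""
    tokens, i = [], 0
    while i < len(s):
        c = s[i]
        if c in '()\\.':
            tokens.append((c, i))
            i += 1
        elif c.isspace():
            i += 1
        else:
            j = i
            while j < len(s) and s[j].isalnum():
                j += 1
            tokens.append((s[i:j], i))
            i = j
    return tokens

def edge_name(pos, n, second_pass=False):
    """First pass: names 0..n-1 taken from token positions; the second
    pass (adding FO and T nodes) offsets names by n, so at most 2n names
    overall, with no collision between the two passes."""
    return pos + n if second_pass else pos
```

For a string of length N, every token position is below N, so the two passes together never need more than 2N distinct edge names — the bound claimed above.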

Up to now there is no “decorator”, i.e. the inverse procedure, which obtains a lambda term from a graph, when that is possible. This is almost trivial and will be done.

I close here this subject, by mentioning that my motivation was not to write a parser from lambda to chemlambda, but to learn how to make a parser from a programming language in the making. You’ll see and hopefully you’ll enjoy 🙂

(2) Slides, slides, slides. I have not considered slides a very interesting means of communication before. But hey, slides are somewhere on the route to an interactive book, article, etc.

So I added to my page links to 3 related presentations, which, together with a 4th available and popular (?!) on this blog, give a more rounded image of what I try to achieve.

These are:

  • popular slides of a presentation about hamiltonian systems with dissipation, in the form baptized “symplectic Brezis-Ekeland-Nayroles”. Read them in conjunction with arXiv:1902.04598; see further why
  • (Artificial physics for artificial chemistry)   is a presentation which, first, explains what chemlambda is in the context of artificial chemistries, then proceeds with using a stochastic formulation of hamiltonian systems with dissipation as an artificial physics for this artificial chemistry. An example about billiard ball computers is given. Sure, there is an article to be written about the details, but it is nevertheless interesting to infer how this is done.
  • (A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science)  are the most technically evolved slides, presenting the geometrical roots of chemlambda and related efforts. There are many things to pick from there, like: what is the geometrical problem, how is it related to emergent algebras, what is computation, knots, why standard frames in categorical logic can’t help (but perhaps they can if they start thinking about it), who was the first programmer in chemlambda, live pages where you can play with the parser, closing with an announcement that indeed anharmonic lambda (in the imperfect form of kali, or kaleidoscope) solves the initial problem after 10 years of work. Another article will be most satisfactory, but you see, people rarely really read articles on subjects they are not familiar with. These slides may help.
  • and for a general audience my old (Chemlambda for the people)  slides, which you may appreciate more and you may think about applications of chemlambda in the real world. But again, what is the real world, else than a hamiltonian system with dissipation? And who does the computation?



Lambda calculus to chemlambda parser

Play with it at this page. There are many things to say, but I will come back later with details about my first parser and why it is like this.

UPDATE: After I put the parser page online, it messed with the other pages, but now everything is all right.

UPDATE: I’ll demo this at a conference on Dec 4th, at IMAR, Bucharest.

Here are the slides.

The title is “A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science“.

So if you are in Bucharest on Dec 4th, at 13h, come to talk. How to arrive there.

I already dream about a version which is purely “chemical”, with 3-legged parser spiders reading from the DNA text string and creating the molecules.

Will do, but long todo list.

Quine graphs (5), anharmonic logic and a flute

UPDATE: all available at

Several things:

  • added a new control to the quine graphs pages (all can be accessed from here). It is called the “rewrites weights slider”: you can either favor the DIST rewrites, which add nodes to the graph, or the β rewrites, which take nodes away from the graph (for chemlambda these are the L-A rewrite, but also the termination rewrites and FI-FOE). This changes, sometimes radically, the result of the algorithm. It depends on what you want. In the case of reductions of lambda terms, you may want to privilege β rewrites, but as usual this is not generally true; see the last example. In the case of the fight arena for quines, you can favor one species of quines over another by your choice.
  • of course in the background I continue with kali, or anharmonic lambda calculus. The new thing is: why not conceive a programming language which compiles to kali? It works fine; you can say things like

“from a see b as c” or

“let a mark b as c” or

“is a as b” or

“c := let x  = b in c”

and many others! Soon, hopefully at the beginning of December, it will be available.

  • It helps to do new things, so today I finished my first flute. Claudia, my wife, is a music professor and her instrument of choice is the flute. From a physics point of view a flute is a tube with holes placed in the right positions; how hard would it be to make one? After I made it, it is usable, but I started to learn a lot about how unnatural the modern flute is and how much you can play with all the variables. As a mathematician I oscillate between “is it a known thing to simulate a flute numerically?” and “I shall concentrate on the craft and play with various techniques”. My first flute was a craft-only effort, but now, when I know more, I am hooked!

OK, so that’s the news, I like to make things!

computing with space | open notebook
