Godement relation is SHUFFLE

From Pursuing Stacks by Alexander Grothendieck, extended and revised version by Mateo Carmona with the collaboration of Ulrik Buchholtz, page 4:

Godement relation is just an instance of SHUFFLE. See this for the algebraic context and this for the computational context.

It seems that this is the root cause of the commutativity behind the most “general” formalisms (the quotes signify a false statement). There should be truly general formalisms where the SHUFFLE deviation is quantified and controlled, much like in differential geometry, where curvature is just the control of the deviation from LIN. Who is willing to hunt for the myriad of reasonings which use SHUFFLE somewhere in the background, though?

Towards an overall small graph rewrites system

Let’s see what we have.

  1. Deviation from LIN measures curvature. LIN is equivalent to R3, which is an emergent rewrite, i.e. it can be deduced from R1, R2 and a passage to the limit.
  2. Deviation from COLIN measures non-commutativity. Both these properties are described in arXiv:2110.08178, but point 1 is much older, first explained in sections 3-6 of arXiv:1103.6007.
  3. R2 itself can be seen as the commutativity of multiplication of numbers, like in this post, under the name of the \varepsilon \mu rewrite. It is only a part of the rewrite NCOMM or CONCOMM (the name is not fixed yet).
  4. In Pure See we have a proof that the beta rewrite and the DIST rewrites are emergent, from SHUFFLE and a passage to the limit. Also LIN, COLIN and NCOMM are particular forms of SHUFFLE.
  5. With the introduction of a star-triangle decomposition of a dilation node into inversions, we can reformulate any of the graph rewrite systems of interest (chemlambda, dirIC, ZSS) as a small graph rewrite system. Again a form of SHUFFLE appears as MIQ (related to miquelian geometry). We can now include projective geometry in our formalism.
  6. So even if initially it seemed that graph rewriting formalisms coming from knot theory were very relevant, they are only emergent from other, simpler formalisms, coming from SHUFFLE. Here is a list of posts which gradually build some small graph rewrite systems (to be merged): [1], [2], [3], [1-problems], [2-problems], [4], [5].
  7. Knot theory appears also in a new alternative to Interaction Combinators. Indeed, as seen here, the IC appear as pairs (A,L) and (FI,FOE) of dirIC nodes, while in Zip-Slip-Smash we encounter a decomposition of crossings as pairs (FI,L) and (A,FOE), which shows that a graph rewriting formalism based on R1, R2, R3 and ZSS is equivalent to IC.

All this seems to point to the existence of a small graph rewrite system which is universal in mathematics and logic, in the sense that it covers differential calculus and differential geometry (even non-commutative), projective and inversive geometry, linear algebra and multiplicative linear logic. Moreover, it has the property that it is further generalizable to new formalisms, differently from those (generalizations from too particular examples) which are fashionable in applied category theory and logic.

Going even further down the hole, we remark that most of the computation effort is not even spent on the rewrites, but on (pseudo-)random number generation, which itself is amenable to the same (class of) formalisms; this indicates a general shape of a model of the universe, so to say.

Although this model of the universe is very different from the Wolfram physics project, there is one ingredient which is common, namely Wolfram’s Principle of Computational Equivalence, but I have never seen it used as I do, and you don’t know what I mean because I have not written a word about this here.

Definitely not a universe which is a giant graph where all rewrites are possible, along with a God’s-eye view of it or derived structures (branchial graph, ruliad, observers, etc.).

This material has to be organized and structured in the near future. It is one of my 3 projects I’m working on right now. I made this post more as a self-reminder about where some of the material is. As it took me so long to arrive here, I shall probably first switch to my other two projects, to have some fresh work. In the last years I forgot how sane it is to always have three different things to work on 🙂

Inversive algebras and commutative numbers

The following list of posts is needed:

According to [3] an inversive algebra is obtained from the transformation of emergent algebra via star-triangle relations, i.e. in this case by the decomposition of dilations into pairs of inversions:

\delta_{\varepsilon}^{x} y = o_{\varepsilon}^{x} o^{x} y

where all inversions are involutions

o^{x} o^{x} y = o_{\varepsilon}^{x} o_{\varepsilon}^{x} y = y

and such that of course the dilations satisfy the emergent algebras axioms.
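These relations can be checked numerically in a toy one-dimensional model. The concrete formulas below are only my choice for illustration (one model among many, not part of the general theory): dilations d_eps^x y = x + eps(y - x), inversions o^x y = x + 1/(y - x) and o_eps^x y = x + eps/(y - x).

```python
# Toy real-line model (an assumption, for illustration only):
#   dilation   d_eps^x y = x + eps*(y - x)
#   inversion  o^x y     = x + 1/(y - x)
#   inversion  o_eps^x y = x + eps/(y - x)

def d(x, eps, y):
    return x + eps * (y - x)

def o(x, y):
    return x + 1.0 / (y - x)

def o_eps(x, eps, y):
    return x + eps / (y - x)

x, y, eps = 2.0, 5.0, 0.3

# star-triangle decomposition: delta_eps^x y = o_eps^x o^x y
assert abs(d(x, eps, y) - o_eps(x, eps, o(x, y))) < 1e-12

# inversions are involutions: o^x o^x y = o_eps^x o_eps^x y = y
assert abs(o(x, o(x, y)) - y) < 1e-12
assert abs(o_eps(x, eps, o_eps(x, eps, y)) - y) < 1e-12
print("star-triangle and involution identities hold in this model")
```

Of course this only confirms the axioms in one commutative model; the interest of the formalism is precisely that it does not presuppose such a model.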

Moreover, there is a supplementary axiom satisfied by an inversive algebra (previously qualified as miquelian, to be explained in a future post), namely the one described in the next picture, which appears in [1]: let’s call this the MIQ relation

In [1] a proof of this relation was given, based on manipulations of fractions; the question is, what did we really do there? Can we treat these fractions as if they are real fractions?

Here is basically the same proof, but with the addition of what should complete the graphical rewriting side of the formalism, namely a rewrite

o-\delta: o^{x} \delta^{x}_{c} y  = \delta^{x}_{1/c} o^{x} y

and a rewrite baptized

\varepsilon \mu: \delta^{x}_{\varepsilon} \delta^{x}_{\mu} y = \delta^{x}_{\varepsilon \mu} y

but mind that this time \delta^{x}_{\varepsilon} y is a number, as explained in [4].

We don’t actually assume in the proof the R2 axiom, nor SHUFFLE (of course). The \varepsilon \mu rewrite is the definition of multiplication of numbers.
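Both rewrites can be sanity-checked in a toy real-line model (again only an assumption of mine, chosen to make the identities checkable with floats, not a claim about the general formalism):

```python
# Toy real-line model, as an assumption: numbers act as dilations
# d_c^x y = x + c*(y - x), inversion o^x y = x + 1/(y - x).

def d(x, c, y):
    return x + c * (y - x)

def o(x, y):
    return x + 1.0 / (y - x)

x, y, c, eps, mu = 1.0, 4.0, 2.5, 0.2, 0.7

# o-delta rewrite: o^x d_c^x y = d_{1/c}^x o^x y
assert abs(o(x, d(x, c, y)) - d(x, 1.0 / c, o(x, y))) < 1e-12

# eps-mu rewrite (definition of multiplication): d_eps^x d_mu^x y = d_{eps*mu}^x y
assert abs(d(x, eps, d(x, mu, y)) - d(x, eps * mu, y)) < 1e-12
print("o-delta and eps-mu rewrites hold in this model")
```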

In the following figure we also use the notation of Pure See, used as well in [1].

The left hand side of MIQ reduces as explained in the next two pictures:

and continued

Observe that there is a number denoted by a circled star. It corresponds to a dilation with the complicated fraction from the LHS of the last relation in the picture (taken from [1]).

Now let’s reduce the RHS of the relation MIQ. There will be two steps


Remark the appearance of the second number, denoted by a circled \Delta. This corresponds to the fraction from the RHS of the last relation in a previous picture.

These two numbers are equal if and only if we can reduce one number to the other. What does it mean exactly? Let’s see, in two steps:


So actually MIQ is equivalent to the commutativity of multiplication of numbers. In this picture I wrote “N-COMM”, but in the post about the Heisenberg group [2] I denoted this relation (or an equivalent one) by CONCOMM.

In conclusion, inversive algebras imply CONCOMM, which implies that we are either in the commutative (i.e. SHUFFLE) case or in the Heisenberg case!

ChorOS: further down the stack

It turns out that Pure See is not the bottom of the stack. There is an even lower level formalism, which I wish to call ChorOS, in honor of “choros”, aka space.

Let’s make it into an acronym:

CHemically ORiented Operating System

The name is not important, but what it does is.

For example it covers now projective geometry, which turns out to be a commutative phenomenon. Another illusion, shattered.

So I’ll bookmark today as the official day for the birth of ChorOS.

You know what’s also funny? That it fits into the hypotheses of small graph rewrite systems.

It remains to see it in action. How?

Permutation magic for inversion

Because I can’t type fractions here, I took pictures instead.

Remark that I use only two ingredients:

  • Permutations of 3 elements and their relation with the anharmonic group, like in Pure See.
  • The fact that inversions (which are denoted by the letter “o” in the pictures) transform a dilation of coefficient \varepsilon into one of coefficient 1/\varepsilon.

In the pictures \Gamma is the multiplicative group of scales.

Here is the permutation magic: we want to prove the following statement about inversions:

(add that the scalar c \not = 1…)

The proof starts by computing the left hand side:

For the right hand side:

So in order to finish we have:

In conclusion the identity concerning inversion turned into an identity of fractions.

Star-triangle relations in emergent algebras

There are two classes of emergent algebras which are very interesting: group-like and inversive. The former appear in relation with groups, or symmetric spaces. The latter appear from complex spaces.

The trick is that both of them appear from general emergent algebras just by substitution of the nodes (dilations) with triangles of other nodes, via star-triangle transforms.

Group-like emergent algebras (arXiv:1005.5031, or ar5iv) are defined by (the star-triangle relation)

\delta^{y}_{\varepsilon} x = inv^{x} inv^{x}_{\varepsilon} y

where the approximate inverse is

\delta^{\delta^{x}_{\varepsilon} y}_{\varepsilon}  inv^{x}_{\varepsilon} y =  x

and the inverse is

inv^{x} y = \lim_{\varepsilon \rightarrow 0} inv^{x}_{\varepsilon} y

Do we have a self-distributivity for inv

inv^{x} inv^{y} w = inv^{inv^{x} y} inv^{x} w

deduced from the emergent algebras and star-triangle relation?
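In the commutative (vector space) model d_eps^x y = x + eps(y - x), the answer is at least yes: solving the defining equation of the approximate inverse gives inv_eps^x y = x - (1 - eps)(y - x), hence inv^x y = 2x - y in the limit, and self-distributivity holds. (The derivation is mine and only a sketch; it does not settle the general, non-commutative question.)

```python
# Check in the commutative (vector space) model, where d_eps^x y = x + eps*(y - x).
# Solving the defining equation of the approximate inverse gives (derivation
# mine, a sketch) inv_eps^x y = x - (1 - eps)*(y - x), and inv^x y = 2x - y
# in the limit eps -> 0.

def d(x, eps, y):
    return x + eps * (y - x)

def inv_eps(x, eps, y):
    return x - (1 - eps) * (y - x)

def inv(x, y):
    return 2 * x - y

x, y, w, eps = 1.0, 3.0, 7.0, 0.4

# defining equation of the approximate inverse
assert abs(d(d(x, eps, y), eps, inv_eps(x, eps, y)) - x) < 1e-12

# star-triangle relation: delta_eps^y x = inv^x inv_eps^x y
assert abs(d(y, eps, x) - inv(x, inv_eps(x, eps, y))) < 1e-12

# self-distributivity: inv^x inv^y w = inv^{inv^x y} inv^x w
assert abs(inv(x, inv(y, w)) - inv(inv(x, y), inv(x, w))) < 1e-12
print("all three identities hold in the commutative model")
```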

Inversive emergent algebras are defined by (the star-triangle relation)

\delta^{x}_{\varepsilon} y =  o^{x}_{\varepsilon} o^{x} y

where the inversions satisfy miquelian relations … or alternatively they satisfy a relation which is fundamental for the definition of a cross ratio.

As with the self-distributivity of inv, it is not yet clear which of those relations can be deduced from the others, a situation which resembles the previously unclear status of the COLIN relation.

Why “star-triangle relation”? Pick the first (i.e. group-like) relation, as an example. Graphically it corresponds to the replacement of a 3 valent dilation node (a star, or an Y) with 3 nodes, each of them 3 valent (an “inv” node, an approximate “inv” node and a fanout node) arranged into a triangle (or Delta).

One may therefore ask what the image of the graph-rewrite formalism (of chemlambda, say) is under the star-triangle transform. This is almost trivial (i.e. a bit tedious to pass through all the proof steps in this change of graphical notation), but notice that there is a second relation (in the example, the definition of the approximate inverse) which complicates the affair. It is therefore not only a matter of graphical notation change; it leads to a new graphical formalism by the inclusion of the graphical rewrites which correspond to these star-triangle relations, back and forth.

There is a similar, but different situation in the case of inversive algebras. I wrote here one star-triangle relation; what is the other one (the one analogous to the definition of the approximate inverse)? Well, it is not a definition of inversions in terms of dilations. Instead it is a very nice relation, which implies the Miquel theorem if we were talking about usual inversive spaces. It is also necessary for a good definition of a cross-ratio, which is an even more interesting thing.

I don’t know what the exact status of this relation is! Does it imply commutativity (like COLIN previously)?

Articles running in the browser, almost there: ar5iv from arXiv

That’s great: take any arXiv article link, for example


and replace the “x” from “arxiv” by “5” in that link


and hit enter. You will be redirected to ar5iv, where you see the html version of the article, i.e.


We are getting closer to the articles which run everything, like simulations, in the browser.
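The trick is a one-line string replacement. A minimal sketch, assuming the link has the usual https://arxiv.org/abs/... form (ar5iv then redirects to its canonical address):

```python
# The "x -> 5" trick as a one-liner. Assumption: the link has the usual
# https://arxiv.org/abs/... form.

def ar5iv_link(arxiv_url: str) -> str:
    # replace the "x" of "arxiv" by "5", i.e. arxiv.org -> ar5iv.org
    return arxiv_url.replace("arxiv.org", "ar5iv.org", 1)

print(ar5iv_link("https://arxiv.org/abs/2007.10288"))
# -> https://ar5iv.org/abs/2007.10288
```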

Let me play. I take my prototype Molecular computers article which runs in the browser (from 2015, therefore “http:”)


Later I produced an arXiv version (without the simulation of the Ackermann molecule)


Converted back to html by ar5iv, gives



What a wonderful work, congratulations to ar5iv; see the repository


and don’t forget to look also at the great CorTeX


Now I save the post and click “publish”, let’s see what wordpress makes of it.

[I learned the 5/x trick from this HN post.]

UPDATE: some more words:

  • it scales! therefore old articles will be saved.
  • it is not self-sufficient; it would be nice to have any article with everything included, so that it works even without an internet connection.
  • recall Distill burnout? commented here.
  • no, Mathematica notebooks are proprietary and Jupyter notebooks are not self-sufficient or simple enough to be managed at all levels (correct me if I’m wrong, but I’ll stop listening if I have to import a library which I can’t already include in the article, or if I have to compile first to obtain a static html)
  • I keep my point that a solution has to be without dependencies and non-proprietary, even in the case where the simulations (should I say validations?) involve a huge data quantity and lots of hardware to execute.

On being rational about fads

A positive lesson of this pandemic is the effectiveness of being rational. This virus is a nanomachine which is as amenable to human control as physics is. The rational thing to do is to accept the virus and its consequences as we accept gravity, and then to do as much as is rationally possible against its effects, just as in Holland they build dams and machines to limit and use the gravitational power of the sea.

Many people treated this pandemic as if it is not physics, as if it is a matter of opinion. Many people are not rational.

Another aspect of the virus as physics is us: humans are made of meat more than we like to accept. I lost the ability to recognize from afar a good place to find chestnuts, after my limited exposure to nature during the pandemic.

Not surprisingly, none of the scientific fads had anything to help us in this respect. There is nothing to learn from string theory, category theory, AGI or quantum computers here.

Not being rational enough, we didn’t make any progress in distributed computing, the corporate cloud is more powerful than ever and eats most of the internet.

We make fun of NFTs though, as if we look more clever this way.

Please tell me, what do we learn from this virus? Which of the theories you love can explain how a simple nanomachine achieved such great effects without any coordination, high semantics and centralization of control? Which hidden symmetry fueled this phenomenon? What category theory explanation is there? How many qubits would be necessary to model this piece of physics, or a remedy to it? Is the virus intelligent in any conceivable way? What is the complexity class of this chemical computation?

It would be only rational to believe that we have much to learn and discover from it.


Concatenative programming languages use function composition as a primitive. That means they use the associativity of the composition operation. If I understand well, this leads to a stack-like form of a term.

Suppose we continue with this effort to reformulate programming languages as artificial chemistries. This was done with chemlambda and chemSKI. What about a chemFORTH?

On one side, function composition seems to me more natural than application, at least for emergent algebras (which led to chemlambda). On the other side, a model of computation based on stacks is not obviously amenable to a model of computation which admits local random rewrites everywhere, because it forces the rewrites to happen at the stack top.

So my question is how to keep both? What if I want both associativity and rewrites happening everywhere?

Is there some alternate route to the main one?

Associativity –> Stack –> Stack rewriting
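To make the associativity point concrete, here is a toy concatenative evaluator (a hypothetical sketch of what a “chemFORTH” term could look like before any rewriting; the words dup/add/mul are invented for illustration). A program is a sequence of words, composition of programs is concatenation, so (p q) r and p (q r) are literally the same program.

```python
# Toy concatenative evaluator: a program is a sequence of words, composition
# is concatenation, evaluation folds the words over a stack. Hypothetical
# sketch, not an actual formalism from this blog.

WORDS = {
    "dup": lambda s: s + [s[-1]],
    "add": lambda s: s[:-2] + [s[-2] + s[-1]],
    "mul": lambda s: s[:-2] + [s[-2] * s[-1]],
}

def run(program, stack=None):
    stack = list(stack or [])
    for w in program:
        stack = WORDS[w](stack) if w in WORDS else stack + [w]  # literals push
    return stack

p, q, r = [2], ["dup"], ["add"]
# associativity of composition: (p q) r and p (q r) are the same program
assert (p + q) + r == p + (q + r)
assert run(p + q + r) == [4]
print(run(p + q + r))   # -> [4]
```

Note that evaluation here is forced to happen at the stack top, which is exactly the tension with local random rewrites mentioned above.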

Turing complete NFTs

Indulge me, I know NFTs are hype now and many people have a justified negative reaction regarding those. There is though the project hapax [github] [mentioned here] which afaik predates the NFT craze. It may be related, or there might be a usability example for an NFT, therefore I’m going to speculate about this idea a bit.

Hapax is a way to convert a graph rewrite system into a conservative one, by using a family of small graphs as tokens (fungible) during the computation on a graph (non-fungible). This play between non-fungible graphs made of fungible graphs is interesting in itself because it has a recursive quality, but there is more.

The choice of the graph rewrites allowed during computation is so huge that we may see the graph rewrite system itself as non-fungible. As unique in the sea of possibilities. Hence “hapax”, which means “only once”.

A Turing NFT would therefore be a randomly generated graph rewrite system.

Lambda calculus (OK, graphic lambda calculus, because lambda calculus is a term rewrite system), or Interaction Combinators are just two particular choices in the sea of possibilities. Same for chemlambda or chemSKI.

As the algorithm of rewrite is always the same, the Turing NFT token (say a json of the rewrites in the hapax form) would be an input, along with graph data.
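A hypothetical sketch of such a token: a randomly generated small rule set serialized to JSON, identified by its seed. The node names and the rule format below are invented for illustration; this is not the actual hapax format.

```python
# Hypothetical sketch: a "Turing NFT" as a randomly generated small graph
# rewrite system serialized to JSON. Node names and rule format are invented
# here for illustration; this is NOT the actual hapax format.
import json
import random

NODE_TYPES = ["A", "L", "FI", "FOE", "FO", "T"]

def random_rule(rng):
    # a rule replaces a pair of node types by a (possibly empty) list of types
    return {"left": rng.sample(NODE_TYPES, 2),
            "right": rng.sample(NODE_TYPES, rng.randint(0, 2))}

def random_rewrite_system(seed, n_rules=6):
    # the seed alone identifies this "unique" (hapax) choice of rewrites
    rng = random.Random(seed)
    return {"seed": seed, "rules": [random_rule(rng) for _ in range(n_rules)]}

token = json.dumps(random_rewrite_system(seed=42), indent=2)
print(token)
```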

Which brings me to the very interesting LMAX Architecture article. I became interested in this because of the “Queues and their lack of mechanical sympathy” section, where comparisons with the performance of the Actor Model are made.

So let’s join these parts: Turing NFTs as input and output tokens which are processed by the disruptors and the local random rewrite algorithm as the Business Logic Processor.

This would allow, I believe, a sort of distributed world computer with a decent (read huge by the present standards, say of ethereum) speed.

And discrete.

Of course, each such Turing NFT would need to have a uniquely generated compiler from your language of choice, but this is an interesting problem as well.

For the interested, there is also this vague but concerning thought that the ring buffers of disruptors and the deterministic computation of the Business Logic Processor somehow resemble the decomposition of a trivalent graph into circular lists and regular lists (where have I written about this? maybe when I was doing the tongue-in-cheek rna code generation screencast here; oh, I remember, the idea is explained at pages 25-26 here).

Better 2022

This year Santa Claus brought me a surprise. It was not what I expected; instead I received a key towards a non-commutative, emergent algebra style, projective geometry.

Further down the rabbit hole.

UPDATE: Some more words about this subject. Initially my problem was to understand differential calculus and geometry as a computation based on abstract nonsense graph rewriting. This graphic formalism appeared for the first time in arXiv:math/0608536 [math.MG], and was later transformed into an algebraic (equational) formalism used to answer a problem of Gromov in sub-riemannian geometry, arXiv:0810.5042. Then the pure algebraic theory was freed from the metric geometry content and got the name emergent algebras, arXiv:0907.1520.

As I was looking for a transformation of this equational theory into a rewriting based model of computation, I took as a goal the unification of this with untyped lambda calculus. This led me to Graphic Lambda Calculus arXiv:1305.5786, which started to gather interest from a mix of programmers and mathematicians, due to its possible relations with decentralized computing and knot theory.

At this point I slowly understood that a full model of computation was needed, not only a lambda calculus style formalism (like GLC). The model should be complete, under the form of a graph rewriting formalism together with an algorithm of rewrites application which is as local in space and time as possible. This led me to chemlambda v2, a chemically branded graph rewriting model of computation.

Looking back to the initial goal, it became a quest to unify differential calculus, geometry and now logic, under the form of a graphical rewriting model of computation. It turned out that chemlambda is only one choice among many possible collections of rewrite rules, with the same algorithm of application. Other choices (like dirIC) are equivalent to Interaction Combinators, thus multiplicative linear logic is covered (at the level of a graph rewriting formalism, which is different from a term rewriting formalism).

Emergent algebras, the equational algebraic initial point, become a denotational semantics for all these formalisms, but only if we restrict to commutative emergent algebras. Recall that emergent algebras cover Pansu differential calculus and subriemannian geometry, which are non-commutative versions of the usual differential calculus and riemannian geometry. Therefore, if commutative emergent algebras are a denotational semantics for chemlambda and interaction combinators, it means that (multiplicative) linear logic as we know it is only a commutative version of something yet unknown.

At this stage we are somehow back to the initial point, anchored in differential calculus and geometry. But we know what to search for (graph rewriting models of computation) and what is the status of the algebraic theory (a denotational semantics for those graph rewriting models).

In this context appeared the surprise I mentioned, which allows me to treat complex projective geometry in the same way, with the same tools. It turns out that projective geometry also contains hidden commutativity assumptions, and now I am about to understand how to free projective geometry from those.

The interest in this project changes fast according to the interests and competences of the readers. I say this with full understanding. It is just difficult for a linear logic specialist to also understand why linear logic is not satisfactory, as a formalism, in the light of sub-riemannian geometry. Conversely, a geometer is most likely lost on the road from sub-riemannian geometry to linear logic. A programmer, no matter how good, has an idea about linear logic but a childish understanding of almost anything mathematics related. A chemist just wants to apply whatever I say is relevant for molecular computers, so he’s completely uninterested in the mathematics of projective spaces. And so on.

But the project is unitary. Advances in one direction bring better understanding in other directions.


I hope this new year will wash out, a bit, the cynicism which grew on me lately. Is this a general phenomenon? Probably. If so, then I wish you the same!

And many physical trips, starting from the next summer, when local obligations will permit this!

This year ends with non commutative projective geometry

The miquelian way works greatly. [UPDATE: I’ll explain what exactly I mean by the “miquelian” way, but at least you may figure it out by searching, perhaps in scholar, for this keyword; then you have to imagine where we can arrive if we disentangle the results from the hidden commutativity, as was done with affine geometry or with the Lie bracket in past works on emergent algebras.] It was a long year; after the holidays I look forward to learning how this complex projective, but non-commutative, geometry fits with the whole theory, logic included.

A workgroup or a course seems to be needed. I’m willing to do this next year, especially for my known or new Asian friends and researchers. Let me know if you are interested.

It is somehow curious how much we are blinded by habit. For example, there are well known constructions in projective geometry which build big pieces of mathematics from the first axioms of projective spaces. At the same time, lambda calculus builds everything computable, for example naturals, integers, folds, etc. The first, older effort from projective geometry and the newer, but almost as old, lambda calculus are practically both of the same nature. Computation redefines mathematics. Physics. Logic. We’re only blinded by habit when we don’t see this unity. The only new thing, we can almost confidently say, the only great new idea since Euclid, is computation. Which changes all.

[also on telegram]

Observational equivalence of interaction nets and commutative emergent algebras

I use D. Mazza, Observational Equivalence for the Interaction Combinators and Internal Separation, TERMGRAPH 2006. I shall use figures taken from Section 3, p.10.

Note that the rewrites which define observational equivalence are exactly those which define commutative emergent algebras [arXiv:2110.08178], namely:

  • R1 (or Reidemeister 1) for emergent algebras corresponds to:

  • R2 (or Reidemeister 2) for emergent algebras corresponds to:

  • SHUFFLE which defines commutative emergent algebras corresponds to:

While we can see Pure See as giving a denotational semantics for directed interaction combinators, we can easily pass to Interaction Combinators as explained in Alife properties of directed interaction combinators vs. chemlambda. Marius Buliga (2020).

What does it mean?

Observational equivalence is used for total nets, which exclude quine graphs. On the contrary, here we are interested more in alife graphs.

Observational equivalence is used to give a denotational semantics for interaction nets. On the contrary, here we enquire whether we can give a graph rewriting, universal description of computations in emergent algebras. As emergent algebras are an axiomatic description of differential calculus in any space, we want to understand any space related computation as a true computation in the sense of logic. We arrive at the conclusion that multiplicative linear logic, as embodied in interaction combinators, is commutative, in the sense that it is compatible only with commutative emergent algebras, which describe, in an equational theory, only usual vector spaces.

Even with these different goals, there might be an interest to understand the role of taking limits in emergent algebras (even in this limited case of commutative ones), compared with the denotational semantics given by pointed sets and observational equivalence. At first sight, it seems to me that limits can be included in a more usable version of Mazza’s denotational semantics, which would be of independent interest. Probably this could be done by interpreting limits in terms of topological filters, with the advantage of the passage to the limit as a test for observational equivalence.

But in general, it seems to me that instead of looking for a semantics of interaction nets, it is more interesting to look for non-commutative versions of linear logic, by using general emergent algebras as a tool.

Six amino acids

My old molecular computers proposal [github run-in-the-browser] [arXiv] [figshare] has not yet received the love it deserves, probably because it is not well understood.

The proposal in brief is: look in the reaction databases for two specific, concrete, patterns of chemical reactions, among probably 6 elementary nodes.

These rewrite patterns are the backbone of all chemlambda formalisms, namely:

  • the (graphical) beta rewrite

  • the DIST family of rewrites

(here I put two figures, exhibiting the mentioned patterns, from

Graph rewrites, from emergent algebras to chemlambda. © Marius Buliga (2020), https://mbuliga.github.io/quinegraphs/history-of-chemlambda.html

Version 25.06.2020, associated with arXiv:2007.10288 )

Chemlambda is significantly different from ALCHEMY, by Fontana and Buss, which is mentioned as a previous use of lambda calculus for chemistry, due to the way chemical reactions are related to lambda calculus reductions.

The proposal is closer to the Kappa language of Flamm and, later, Fontana et al., where they propose a generic (and straightforward in principle) tooling for asynchronous graph rewriting in relation with chemistry.

But it is radically different in the specificity of patterns and constituents.

As concerns the general model of computation, recall that chemlambda was initially called “the chemical concrete machine“, in honor of Berry and Boudol’s “chemical abstract machine”. The distinction is that the chemical abstract machine is a model of computation where the reactions are described as if in a list of laboratory manipulations, while in chemlambda everything is about individual molecules involved in reactions which are local in space and time, without any director in the background.

OK, this is well known and simultaneously not acknowledged, because of my decision to make the chemlambda project a flagship of new forms of open scientific communication. For some reason, people find it difficult to put links in their paper articles.

I am glad though that today, when I looked for Walter Fontana recent research, I found a very interesting article:

J. L. Andersen, R. Fagerberg, C. Flamm, W. Fontana, J. Kolčák, C. V. F. P. Laurent, D. Merkle and N. Nøjgaard
Graph Transformation for Enzymatic Mechanisms
Bioinformatics, 37, i392-i400 (2021)

where they apply their theory and tools for the chemical reactions in a database called Rhea.

Mind that I am not a chemist, nor do I have more than a very basic understanding of this subject.

But I read with high interest the following passage, while I look with amazement at this table 1, p. i397 in the article:

“Only 8 out of the 17 amino acids considered were used by at least one of the constructed sAA mechanisms. Two of these amino acids, arginine and tyrosine, are used in five and one mechanism, respectively. For these mechanisms, we also found alternatives that operate with a different amino acid (in the case of arginine: aspartate and histidine; and in the case of tyrosine: histidine and lysine). Among the remaining six amino acids, we observed predominantly common proton acceptors and donors, such as aspartate, glutamate and histidine, while cysteine, lysine and threonine were used only rarely. The number of reactions solvable by one of the six amino acids is given in Table 1.” [quote from p. i396]

Hey, either 8 or 6? This is fully compatible with the numerology of Pure See, which treats the general formalism behind chemlambda. It is also compatible with the molecular computers proposal.

[image taken from this post]

I can’t make sense of the patterns or rule sets, though, because I’m not a specialist. I can’t identify them, nor can I make sense of their numbers: is it in the hundreds?

UPDATE: Two more comments, or personal opinions and feelings. (1) With or without acknowledgements for my prior work, with or without mind boggling confusion about graph rewriting which is definitely not a semantics for any programming language (the opposite is true), I feel relief, because for years I felt guilty about the effects in the real world of making true molecular computers. Now it is no longer my responsibility, that’s what I feel. (2) I wish Wolfram, who has significant computing clout to try it, would spend less time with pixelated physics and more time actively doing the search in reaction databases. It’s a bit ironic that he does not see chemistry as the practical scene of his new kind of science. I understand that old dreams slow us, like theories of everything based on exploitation of symmetry principles. They don’t compute. As W clearly and originally states, computation leads us to a new paradigm. Go for chemistry then, the low hanging fruit now, if you have the means. (Or not, God knows where it leads…)

Mathematics videos

The wheel is turning, even in scientific communication. Here is a list of links to collections of videos of mathematics lectures, backed by important mathematical institutions:

Paul Ginsparg won the Einstein Foundation Individual Award 2021. In the description of the award we read:

“Paul Ginsparg is Professor of Physics and Information Science at Cornell University, USA. In 1991, he created the arXiv (“The Archive”), a document server for preprints, on which scientific findings are published without review and paywall restrictions.”

It is true that the next phrase kind of retreats from this bold “published” statement, by this gold open access trick:

“Preprint servers are online archives for scholarly publications…”

but the wheel is definitely turning!

Thing oriented programming

This year, after writing [github] [telegra.ph] Wittgenstein and the Rhino, I’m now thinking about TOP.

Thing oriented programming.

Following the principle that all good names are already taken, I found the white paper Ercatons: Thing Oriented Programming. While it contains the interesting quote (page 4)

“A Thing is unification and super-set of an object instance and a document”

that is obviously not what I am thinking about.

Do you have suggestions about what thing oriented programming should be?

How to select Heisenberg groups, formally

In the frame of emergent algebras, Heisenberg groups are the simplest non-commutative conical groups.

We shall use COLIN implies LIN for emergent algebras [arXiv] [github].

At the formal level of the equational theory of emergent algebras, conical groups are those emergent algebras which satisfy LIN. For the sake of a simplified mathematical notation, we shall use “*” for the emergent algebra operation with a generic scalar “epsilon” and “#” for the emergent algebra operation with a generic scalar “mu”. Then LIN is: for any points a, b, c

(LIN) a * (b # c) = (a * b) # (a * c)

A dual relation is COLIN: with the same conventions

(COLIN) (c # b) * a = (c * a) # (b * a)

We know that for emergent algebras COLIN implies LIN. Moreover, (COLIN and LIN) together are equivalent to SHUFFLE:

(SHUFFLE) (a * b) # (c * d) = (a # c) * (b # d)

Algebraically, emergent algebras which satisfy SHUFFLE turn out to be commutative groups with dilations, i.e. vector spaces in the case where the group of scalars is the multiplicative group of a topological field.
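These identities can be checked numerically in the simplest commutative model, the real line with dilations, where the operation indexed by a scalar e is a + e(b − a). The sketch below (an illustration only, not part of the formalism) verifies R1, SHUFFLE, LIN, COLIN and NCOM for concrete points and scalars:

```javascript
// Dilation model on the real line: the emergent algebra operation
// indexed by the scalar e is  op(e, a, b) = a + e*(b - a).
const op = (e, a, b) => a + e * (b - a);

const eps = 0.5, mu = 0.25;        // generic scalars (exact in binary)
const a = 1, b = 2, c = 5, d = 7;  // generic points

// R1: a * a = a
console.assert(op(eps, a, a) === a);

// SHUFFLE: (a * b) # (c * d) = (a # c) * (b # d)
const shuffleL = op(mu, op(eps, a, b), op(eps, c, d));
const shuffleR = op(eps, op(mu, a, c), op(mu, b, d));
console.assert(shuffleL === shuffleR);

// LIN: a * (b # c) = (a * b) # (a * c)
console.assert(op(eps, a, op(mu, b, c)) === op(mu, op(eps, a, b), op(eps, a, c)));

// COLIN: (c # b) * a = (c * a) # (b * a)
console.assert(op(eps, op(mu, c, b), a) === op(mu, op(eps, c, a), op(eps, b, a)));

// NCOM: a * (a # b) = a # (a * b), both equal to the operation
// indexed by eps*mu applied to a, b
console.assert(op(eps, a, op(mu, a, b)) === op(mu, a, op(eps, a, b)));
console.assert(op(eps, a, op(mu, a, b)) === op(eps * mu, a, b));
```

The scalars 0.5 and 0.25 are chosen so that all intermediate values are exact in floating point, which lets the checks use strict equality.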

Notice that for any emergent algebra we have the relation

(NCOM) a * (a # b) = a # (a * b)

because both terms are equal to a : b, where “:” is the emergent algebra operation indexed by “epsilon mu”.

Also, we always have R1:

(R1) a * a = a

Remark that NCOM is a particular case of SHUFFLE, by using R1

a * (a # b) = (a # a) * (a # b) = (a * a) # (a * b) = a # (a * b)

or even a particular case of LIN, by using R1

a * (a # b) = (a * a) # (a * b) = a # (a * b)

As well, LIN is a particular case of SHUFFLE, by using R1

a * (b # c) = (a # a) * (b # c) = (a * b) # (a * c)

and COLIN is also a particular case of SHUFFLE, by using R1

(c # b) * a = (c # b) * (a # a) = (c * a) # (b * a)

So, all in all, by using R1 we have that NCOM, LIN and COLIN are just limited variants of SHUFFLE: instead of the full freedom to pick any a, b, c, d in SHUFFLE, we are left with the freedom to pick

a, a, a, b for NCOM

a, a, b, c for LIN

c, b, a, a for COLIN

What if we are allowed to pick only b, a, a, a in SHUFFLE?

(b # a) * (a # a) = (b * a) # (a * a)

which by R1 gives us a new relation

(CONCOM) (b # a) * a = (b * a) # a

We describe CONCOM as the “commutative numbers property”.
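In the commutative dilation model on the real line (op(e, a, b) = a + e(b − a)), CONCOM is immediate: both sides reduce to b + (epsilon + mu − epsilon·mu)(a − b). A quick numeric illustration (a sketch only):

```javascript
// dilation model on the real line
const op = (e, x, y) => x + e * (y - x);

const eps = 0.5, mu = 0.25, a = 3, b = 11;

// CONCOM: (b # a) * a = (b * a) # a
const left  = op(eps, op(mu, b, a), a);
const right = op(mu, op(eps, b, a), a);
console.assert(left === right);

// both sides equal b + (eps + mu - eps*mu)*(a - b),
// which is symmetric in eps and mu
console.assert(left === b + (eps + mu - eps * mu) * (a - b));
```

The symmetry of eps + mu − eps·mu in the two scalars is exactly why the two sides agree in the commutative case.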

We have the following results:

  • COLIN implies SHUFFLE, which corresponds to the usual commutative vector spaces
  • LIN corresponds to conical groups; in the smooth world these are groups with dilations, essentially Carnot groups
  • LIN and CONCOM together correspond to “at most” 2-step Carnot groups (with the usual smoothness assumptions)

It means that Heisenberg groups, formally, are described by the conjunction of the following 3 conditions

(LIN) a * (b # c) = (a*b) # (a * c)

(CONCOM) (b # a) * a = (b * a) # a

(NON-COLIN) there are epsilon, mu, a, b, c such that

(c # b) * a != (c * a) # (b * a)

of course, all considered in the frame of emergent algebras.

So, at the graph rewriting level, Heisenberg groups are described by LIN and CONCOM, probably on the nodes A, L, FI, FOE, and the rewrites which can be deduced from those by a passage to the limit. (This will not distinguish them from the particular commutative case, unless we postulate the existence of some points for which COLIN deviates from being an identity, i.e. some form of NON-COLIN.)
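The conjunction (LIN, CONCOM, NON-COLIN) can be checked numerically in the first Heisenberg group, with its standard dilations delta_e(x, y, z) = (ex, ey, e²z) and the operation a *_e b = a · delta_e(a⁻¹ · b). The sketch below is only an illustration of the three conditions on concrete points:

```javascript
// First Heisenberg group H(1): points [x, y, z], group operation
// (x1,y1,z1).(x2,y2,z2) = (x1+x2, y1+y2, z1+z2+(x1*y2-y1*x2)/2),
// inverse (-x,-y,-z), dilations delta_e(x,y,z) = (e*x, e*y, e*e*z).
const mul = ([x1, y1, z1], [x2, y2, z2]) =>
  [x1 + x2, y1 + y2, z1 + z2 + (x1 * y2 - y1 * x2) / 2];
const inv = ([x, y, z]) => [-x, -y, -z];
const dil = (e, [x, y, z]) => [e * x, e * y, e * e * z];

// the emergent algebra operation: a *_e b = a . delta_e(a^{-1} . b)
const op = (e, a, b) => mul(a, dil(e, mul(inv(a), b)));

const eq = (u, v) => u.every((t, i) => Math.abs(t - v[i]) < 1e-12);

const eps = 2, mu = 3;
const a = [0, 1, 0], b = [1, 0, 0], c = [0, 1, 1];

// LIN holds: a * (b # c) = (a * b) # (a * c)
console.assert(eq(op(eps, a, op(mu, b, c)),
                  op(mu, op(eps, a, b), op(eps, a, c))));

// CONCOM holds: (b # a) * a = (b * a) # a
console.assert(eq(op(eps, op(mu, b, a), a),
                  op(mu, op(eps, b, a), a)));

// NON-COLIN: COLIN fails, for example for the points below
const p = [0, 0, 0], q = [1, 0, 0], r = [0, 1, 0];
console.assert(!eq(op(eps, op(mu, r, q), p),
                   op(mu, op(eps, r, p), op(eps, q, p))));
```

LIN holds for all points because each delta_e is a group automorphism commuting with delta_mu; the COLIN deviation shows up only in the z (center) coordinate, which is where the non-commutativity of H(1) lives.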

Proof on demand.

A question about the Nov 16, 2021 Sci-Hub Virtual Court Hearing

The 16/11/2021 Virtual Court Hearing [link to pdf] of the Sci-Hub India case mentions this:

“IA No. 14908/2021 (u/O 1 Rule 8A and Order 1 Rule 10(2) r/w Section 151 CPC filed by applicants/intervenors on behalf of a Group of Scholars Studying /working in Universities across the National Capital Territory of Delhi)”

Where can I read it (or these two)? Thank you.

UPDATE: It seems to be this [archived link]:

“A group of seven social science researchers has moved the Delhi High Court to protect the websites LibGen and SciHub, highlighting the adverse impact any decision to block the websites will have on them.

This is an intervention application that comes after three global publishers, Elsevier, Wiley, and American Chemical Society, filed a copyright infringement suit in the Delhi High Court against LibGen & Sci-Hub, that enable researchers worldwide to access knowledge without cost. In their suit, the publishers have asked the Court to block these websites permanently because they are ‘rogue’.

The social science researchers’ intervention application was filed with legal support from Internet Freedom Foundation, a New Delhi-based NGO on digital rights and liberties.”

UPDATE 2: Thanks to Peter Suber, who sent a better source: Social Science researchers move Delhi High Court to protect LibGen & SciHub, [archived link] from the site of the Internet Freedom Foundation.

The redacted intervention application pdf is available from this link, as a google doc. It seems to take ages to produce an archived version, so I downloaded it from that source and saved it here [link to pdf].

[See About notes of the Sci-Hub case in the High Court of Delhi for context.]

Hapax rewrites compared with chemlambda rewrites

In hapax we see a rewrite as a pair of permutations of the edge sources and targets. This way, any rewrite is conservative. Let’s exemplify with the “A-L” rewrite from chemlambda.

In hapax the LEFT and RIGHT patterns are mol files of connected graphs, with a specified edge name. See hapax-mol.js for a list of patterns used in chemlambda, as done in hapax.

Any rewrite needs a pattern and a “token”, and outputs patterns and tokens.

The tokens are small graphs (patterns). In hapax these are specified in hapax-chem.js.

For example the “A-L” rewrite, as done in hapax, is:

{kind: "A-L", needs: "Arrow-Arrow",
 gives: ["Arrow", "Arrow", "L-A"],
 pisource: [4,0,3,2,1], pitarget: [3,4,0,2,1]}

which means:

Pattern “A-L” + Token “Arrow-Arrow”

is transformed into

Pattern “Arrow” + Pattern “Arrow” + Token “L-A”

by using the following permutations of sources and targets:



Instead of the LEFT pattern from chemlambda v2, we have a pattern

mol: [["L", "c", "b", "a"],
      ["A", "a", "d", "e"]]

and a token

mol: [["Arrow", "b", "a"],
      ["Arrow", "a", "b"]]

which corresponds (i.e. recognizes the pattern in the graph) to the mol file:

L c b a
A a d e
Arrow b’ a’
Arrow a’ b’

Now, according to the type “in” or “out” of the ports, we see that we have 5 sources, colored red, and 5 targets, colored black:

L c b a
A a d e
Arrow b’ a’
Arrow a’ b’

This is transformed into another pattern, by permuting the sources and the targets. We get:

L b’ a b’
A a’ a a’
Arrow c e
Arrow d b

which is, as the rewrite claims, recognized as two patterns “Arrow”

mol: [["Arrow", "a", "b"]]

and a token “L-A”

mol: [["A", "c", "a", "c"],
      ["L", "b", "a", "b"]]

The permutation of sources is described by the array pisource and the permutation of targets by the array pitarget.

In order to understand how this is done, first recall that (in javascript in particular) array elements are numbered starting with 0. The array pisource = [4,0,3,2,1], for example, represents a permutation in the sense that position 0 of the new pattern receives source 4, position 1 receives source 0, position 2 receives source 3, etc.; equivalently, source i goes to the position where i appears in pisource.

In order to apply the permutations pisource and pitarget correctly, we need to be sure about how the pattern recognition algorithm sees the sources and targets. The problem is that we could easily pick an order of the sources and targets by hand, but how does the program pick them?

We need to know this only once, when we define the rewrites. For this we apply the pattern recognition program to the patterns themselves. (At some point in the future this has to be automated.)

The page [9] tells us exactly this, namely that for the pattern recognition program the order of the sources for this rewrite is:

a, b, e, a’, b’

and the order of the targets for this rewrite is:

a, c, d, a’, b’

You can check by hand that the pisource and pitarget permutations do indeed what they are expected to. For example, a, which is source 0, appears in position 1 in pisource, therefore it goes in the place of source 1, which is b, and so on.
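The hand check can also be scripted. The sketch below (an illustration only, not the actual hapax code) applies pisource and pitarget to the pattern above; the slot tables, which record where each source and target sits in the mol lines, are my own bookkeeping, read off by hand from the mol and from the source/target orders quoted from page [9]:

```javascript
// The LEFT pattern + token of the "A-L" rewrite, in array form:
const mol = [["L", "c", "b", "a"], ["A", "a", "d", "e"],
             ["Arrow", "b'", "a'"], ["Arrow", "a'", "b'"]];

// Slots of the sources a, b, e, a', b' and of the targets a, c, d, a', b',
// in the order given by the pattern recognition program (page [9]);
// a slot is [line, position], position 1 being the first port after the node.
const sourceSlots = [[0,3],[0,2],[1,3],[2,2],[3,2]];
const targetSlots = [[1,1],[0,1],[1,2],[3,1],[2,1]];

const pisource = [4,0,3,2,1], pitarget = [3,4,0,2,1];

// slot j of the new pattern receives the old source/target number pi[j]
function applyPermutation(mol, slots, pi) {
  const old = slots.map(([l, p]) => mol[l][p]);
  slots.forEach(([l, p], j) => { mol[l][p] = old[pi[j]]; });
}

const out = mol.map(line => line.slice());   // work on a copy
applyPermutation(out, sourceSlots, pisource);
applyPermutation(out, targetSlots, pitarget);

console.log(out.map(line => line.join(" ")).join("\n"));
// L b' a b'
// A a' a a'
// Arrow c e
// Arrow d b
```

The printed mol is exactly the RIGHT pattern given earlier: two “Arrow” patterns and the “L-A” token.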

The hapax way of doing rewrites is more general and more flexible than the usual way. It can be applied easily to interaction combinators (one needs a way to add the supplementary sources-targets information, in a way which respects the original formalism) and, more interestingly, to a variety of other graph rewrite systems which appear in science or mathematics.

Even more, it can be used for random systems of graph rewrites, which exist in vast numbers. This justifies the name “hapax”, which means “only once”. See the first descriptions of this idea in [10], [11].


[1] chemlambda moves
[2] chemlambda v2 page
[3] Y. Lafont, Interaction Combinators
[4] hapax repository
[5] small graph rewrite systems
[6] Molecular computers
[7] hapax-mol.js
[8] hapax-chem.js
[9] hapax-explore.html
[10] 14400 alternatives to the beta rewrite
[11] What is the purpose of the project hapax?

[This is a slightly edited part of the source: chemlambda and hapax.]

computing with space | open notebook