Too easy to compute

I’m looking at this two-year-old page where you can search for a graph quine among more than 9 billion possible graphs, which are generated randomly [JavaScript is needed, or just go to the GitHub repo and clone it]. You may search for chemlambda or Interaction Combinators quines…

There are newer variants and possibilities to play with, but this is not in the scope of this post.

What jumps out at me, after a pause in playing with these gadgets, is this: it is too easy to generate a graph which grows indefinitely.
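A toy sketch of why indefinite growth is so easy: one local duplication rule, applied everywhere in parallel, already gives exponential growth. The rule and the node name "D" here are hypothetical illustrations, not actual chemlambda rewrites.

```python
# Toy illustration (not chemlambda itself): a single local rewrite
# rule D -> D D, applied in parallel, grows the "graph" exponentially.

def step(graph):
    """Apply the duplicating rewrite, in parallel, to every "D" node."""
    out = []
    for node in graph:
        out += ["D", "D"] if node == "D" else [node]
    return out

graph = ["D"]
sizes = []
for _ in range(5):
    graph = step(graph)
    sizes.append(len(graph))

print(sizes)  # exponential growth: [2, 4, 8, 16, 32]
```

Any rewrite system where such a duplication is cheap to stumble upon by random generation will produce indefinitely growing graphs in abundance.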

Here is why this is a problem and what the ramifications are.

Chemlambda, dirIC, Interaction Combinators, chemSKI, are just examples of very, very simple artificial chemistries. Would they be possible in real chemistry? Judging by how simple the reactions are, there should be extremely common real chemical reactions which are compatible with them.

Let’s take that as a hypothesis. What would the universe look like, then?

If it is extremely easy to compute chemically in this way, then life would not be rare, but relatively easy to achieve.

Too easy!

But it is not only life that would be too easy. In particular, such systems are able to do universal computation. So let’s take a weaker hypothesis: that the universe is able to do universal computation with simple chemical reactions.

Then everything, soon at the scale of the universe, would turn into Ackermann goo. Maybe it is.
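The "Ackermann goo" name comes from the Ackermann function, the textbook example of a total computable function that outgrows every primitive recursive one. A minimal sketch of why a universe that computes for free would drown in it:

```python
import sys
sys.setrecursionlimit(100000)  # the naive recursion gets deep quickly

def ack(m, n):
    """Ackermann-Peter function: total, computable, explosively growing."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3))  # 9
print(ack(3, 3))  # 61
# ack(4, 2) already has 19729 decimal digits; a chemistry that runs
# such computations unchecked turns everything into "goo".
```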

Or maybe not.

I think it is very unlikely to be so easy to compute in the real universe, with only very small, local chemical reactions.

The hypothesis we made is most likely false. If it is false, then why? Because we also know that in nature, if something is possible, then it will happen. If the hypothesis is true, then what is the supplementary mechanism which inhibits things like the Ackermann goo?

Some possible inhibition mechanisms are:

  • the simple chemical reactions which lead to computations are only a small part of the possible chemical reactions, therefore the vast space of possible chemical evolutions inhibits the use of only these particular reactions,
  • that is what death is for: large molecules are less stable, or they are broken by other chemical mechanisms,
  • shuffle, which is conservative, or something analogous such as the S A B C -> (AC)(BC) reaction in chemSKI, are common, but the analogous “emergent” reactions are rare, so that it is possible, but rather difficult, to compute according to the recipe for a long enough time. If almost everything is a shuffle and only rarely is there (an equivalent of) passage to the limit, then we would only very rarely see something like beta or DIST. (This does not explain, however, why other embodiments of the simple beta and DIST do not lead to exuberant growth most of the time.)
  • in the conservative version of the rewrites, for example the one which uses tokens, an exuberant growth is quickly inhibited by the lack of tokens (money).
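The S A B C -> (AC)(BC) rule mentioned above, and the token-exhaustion inhibition, can be sketched as plain SKI term rewriting. This is a sketch only: chemSKI itself encodes these rules as local graph rewrites with tokens, which this toy does not model.

```python
# Plain SKI term reduction. Terms are "S", "K", "I", or a pair (f, x)
# meaning the application of f to x.

def reduce_once(t):
    """One leftmost reduction step; returns (term, changed?)."""
    if isinstance(t, tuple):
        f, x = t
        if f == "I":                                   # I x -> x
            return x, True
        if isinstance(f, tuple) and f[0] == "K":       # (K a) b -> a
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == "S":
            a, b, c = f[0][1], f[1], x                 # ((S a) b) c
            return ((a, c), (b, c)), True              # -> (a c)(b c)
        f2, ch = reduce_once(f)
        if ch:
            return (f2, x), True
        x2, ch = reduce_once(x)
        return ((f, x2) if ch else t), ch
    return t, False

def normalize(t, fuel=100):
    """Reduce until normal form or until the fuel (tokens) runs out."""
    while fuel:
        t, ch = reduce_once(t)
        if not ch:
            return t
        fuel -= 1
    return t  # out of fuel: mirrors the token-exhaustion inhibition

# S K K x reduces to x, so S K K behaves like I:
print(normalize(((("S", "K"), "K"), "y")))  # "y"
```

The `fuel` parameter plays the role of the tokens (money) in the conservative version: a term whose reduction grows without bound simply stops when the fuel is gone.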

I don’t find any of these possibilities very likely. There is something in nature which inhibits computation. The universe may be a non-halting computation, but why are there no small non-halting computations?

Open Science, copyright, communism and capitalism

Open Science works, in the sense that it abundantly creates new science. But then a predatory corporation scales the new ideas and takes all the money and credit. The creators ask themselves: why should they produce free work? So that later some rich dumb ass tells them that ideas are cheap and scaling is everything?

Likewise, communism works, in the sense that well-intended people work for the good of the community. There is a satisfaction in the egalitarianism, at least for creators. But then many people realize that they can have a free ride on the back of those who work. The creators ask themselves: why should they produce new stuff? So that later some politically well-oriented dumb ass tells them that ideas are as cheap as their lives and politics is everything?

In the first case the copyright system is the weapon of rich dumb asses against the creators. They can steal with the law on their side. All the latest and greatest heaps of money are made from open creations scaled, then locked by copyright.

In the second case political correctness is the weapon of propagandists against the creators. Too much originality is difficult to contain when it may spread through the big mass of free riders.

So I think that open science (or open code) is a new form of communism, with a copyright system which channels the wealth away from the creators.

Moreover, we have now super predators who are both rich dumb asses and politically correct propagandists.

For the creators, until a more subtle system appears, there is this question: we know that we can beat any corporation and any propagandist when it comes to the creation of new ideas, but why should we do it if our work is stolen and then protected by copyright, or if we are silenced for not being politically correct?

As an extension of this analogy, probably very soon, like in a few years, this capitalism (which is identical to Russian-style communism) will have the same fate as late Russian-style communism. Because without the salt and pepper of the creators you can only scale BS. How much BS can you still eat?

I remember that just before the anti-communist revolutions in Eastern Europe, there were so many naysayers saying that nah, it’s impossible, the system is too strong to fall.

Let’s be optimistic. The same is about to happen now.

Meanwhile, do you have any idea how to create better than any corporation and at the same time make it so they can’t profit from your work more than you do?

Probably we just have to push a little more and be fast enough. I’m not sure, but probably being public can be turned into an advantage if one adapts faster than bureaucratic whales can move. Humor helps. Just watch them, aren’t they funny? I always thought that in Atlas Unchained, the Atlas is not the rich dumb ass who retires; it is the creator.

The Rainbow Serpent, the Ouroboros

I had the chance to see the Rainbow Serpent.

It is as big as the world. It is life, or it works like life. I experienced it more like the trunk of a huge tree, with the horizon as the bark, the sea rising through it like a fluid in capillary vessels, and all human-made artifacts like the cells of the tree. All in a huge, symmetric and lifeless space.

Where there is life, there is no symmetry. Where there is space, there is no life until the symmetry of the space is broken. Free fall according to gravity is symmetric. Here comes life and makes a pocket. Free fall is turned into the guarantee that the pocket will hold still whatever you put into it.

From Egyptians, the Greeks inherited the Ouroboros.

It is the same huge creature which is life. It is the boundary of the sea.

In Hamiltonian mechanics we don’t experience the Rainbow Serpent. It appears though if we allow random forces and momenta, with a probability given by the shape of the accessible space (here, section 2, unilateral contact example).

I think all the properties of life (like the ability to self-reproduce, metabolism and death) emerge from the more fundamental property of lack of symmetry. It is hard to understand, but it is worth trying.

As concerns the chora, it is semantic, not real. The ouroboros makes the chora, as decoration.

Misleading content algorithm is snake oil

Per HN, Google bans distribution of misleading content. They claim that they have an algorithm for the classification of misleading content.

This is not an opinion. Detection of misleading content by an algorithm is equivalent to an algorithm for the halting problem. For it is misleading to claim that a Turing Machine with a given input does not halt when it does. If Google’s algorithm exists, then it should in particular detect such misleading statements. We know that there is no such algorithm, therefore Google lies.
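The reduction in the paragraph above can be sketched in a few lines. Everything here is hypothetical: `is_misleading` stands for the claimed perfect classifier, which cannot exist, precisely because the construction below would turn it into a halting decider.

```python
# Sketch of the reduction: a perfect misleading-content detector would
# decide the halting problem. "is_misleading" is a hypothetical oracle,
# not any real API.

def decides_halting(is_misleading):
    """Build a halting decider out of a perfect misleading-content oracle."""
    def halts(program_source, input_data):
        claim = (f"The program {program_source!r} on input "
                 f"{input_data!r} does not halt.")
        # The claim is misleading exactly when the program DOES halt,
        # so a perfect classifier of this claim decides halting.
        return is_misleading(claim)
    return halts
```

Since no algorithm decides halting, no algorithm perfectly classifies misleading content; at best the classifier is a heuristic, with all the errors that implies.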

Somehow this is not surprising. Google never respected science or mathematics, even if they work hard to give the misleading image that they do. Give me one example that they did, proportional to their economic scale. They are very easily defeated by single persons, and when they spend money on research it is usually wasteful compared with personal initiatives which are not financially supported. I am thinking about the comparison between Google Scholar and Sci-Hub, as an example. UPDATE: just these days, see as another example Odd release in conjunction with RoseTTAFold gaining traction [archived version].

To my knowledge, there is no new scientific result involving Google which was not studied by an academic or an open collaboration before.

They can only scale, they are not capable of inventing. They can supervise, they can’t create. They collect information created by others. They want to “organize”. They were favored at the beginning, when it was a good idea (for a supervising frame of mind) to scan the whole web. They were always advantaged by the mother state. They attract nerds but they have to buy creative people.

It’s clear what they are. That they lie to such a degree as to say that they have an algorithm for the halting problem is a classic, ridiculous aspect of capitalism. You know, like capitalists sell BS for money and communists make gulags.

Once, such merchants of lies were selling snake oil which could cure any disease.

At some point such practices were considered illegal. I don’t have expectations that they will be legally sanctioned. I don’t think that they are really so important. Presently they are a factor of inhibition of science (not the only one), and historically they are already not viable. Neither them, nor the communist variants of the surveillance state. But you know, the time scale of history has decades as units.

Contact update (temporary)

For independent reasons all my professional addresses are now blacklisted and for the moment mailing me there does not work reliably. This may take months to change back into a functioning address; anything relying on Google will not even put it in spam. Therefore here are some alternatives if you need to send me a message:

Do not cc to my address!

Or you could post a message here (first time needs my approval, so anyway I’ll see it), or open an issue at one of my github repos.

The first web page still works (sometimes it doesn’t): here.

The github web page works: here.

As concerns Telegram, note that there is xorasimilarity, as mentioned, and another public channel, chorasimilarity, where I post from time to time. You can see the public channel chorasimilarity either without telegram, or with telegram.

I tend to favor public notifications, with attribution (link to the mentioned article/post).

No deal in science over researchers’ heads!

[also available here]

Note. This is adapted from a part of the post Researcher Behavior Access Controls at a Library Proxy Server are Not Okay, because I think the idea is more important than the context of that post.

The trend in science publishing is to make deals over the heads of researchers.

Deals are made between publishers and academic managers, or publishers with librarians, or IT departments with librarians, and so on. If you look at BOAI, the initiator of the gold (in the pockets of publishers) open access style, it was librarians with publishers. Decades of advances were lost because the fight ignored researchers’ needs.

What do researchers need? Something arXiv like with a Sci-Hub like interface.

Tough luck: arXiv is not publishing (per BOAI) and Sci-Hub is illegal.

What did researchers get? Gold Open Access. This is the idea that since publishers can’t force readers to pay, they force authors to pay for their own creation.

We, researchers, understood that librarians were scared by publishers that their important role would decay. We understood that managers want to turn science into business, so they apply to individual researchers the criteria which were designed for journals.

But it is time to understand that researchers have to be at the core of any deal, because without researchers there is no need for librarians, university IT administrators, managers or scientific publishers.

To make deals over the researchers’ heads is not Okay.

To be clear: librarians, IT departments and managers, please at least return the respect you received from the researchers. Please stop treating researchers as cows which have to be herded to the publishers’ needs. This is not your job.

Don’t destroy science. It is not a business. Let us work.

Distill burnout shows Open Science publication is hard

UPDATE: Read also the intro I did on the telegram channel to this post. A better post title would be: burnout shows why publishing research as a special activity is obsolete. Don’t publish, give 🙂


From Distill Hiatus post:

“Over the past five years, Distill has supported authors in publishing artifacts that push beyond the traditional expectations of scientific papers. […]

But over this time, the editorial team has become less certain whether it makes sense to run Distill as a journal, rather than encourage authors to self-publish. Running Distill as a journal creates a great deal of structural friction, making it hard for us to focus on the aspects of scientific publishing we’re most excited about. Distill is volunteer run and these frictions have caused our team to struggle with burnout.”

Just look at the people behind Distill. A combination of Google with Mike Bostock (of d3.js fame) aka ObservableHQ.

Still, it seems very hard. I know it first hand, because I started it before them. The article Molecular computers was written before Distill, by this mathematician, not programmer. Mind that Github almost broke it by passing from http to https. Indeed, for animations I used iframes, so if you access the article via https then the animations will not be visible (because the iframe contains the animation link as http; every link is from Github though, so why are those http links for animations not trusted? no reason at all). (Update: I remember now that Google lost the arXiv version of the article, at some point…)

The idea of Open Science is to replace peer review by validation. It was argued that Open Science should be rwx science. Then peer review, which is essentially an authority argument, will be naturally replaced by a sort of validation. Here validation does not mean that the research finding is formally checked, nor does it mean a sort of validation mark because it was checked to be reproducible. It is simpler and more powerful. If you, researcher, give all the means you used in your research, then it is just up to the reader to produce more work based on it. Derivative works? Reviews? Edits? Comments? Proof checking? Reproductions? Any of these are others’ contributions which use your work and thus grow and further process your ideas. Just as you did in your research.

The “publication”, or “article” is only the story of the research, not the research. An article which gives all (possible) means for validation is the research.

Another advantage is that you, the researcher, don’t have to wait for your academic manager to realize we’re in the 21st century, nor for your colleagues to massively move to better scientific practices. You don’t have to sacrifice all your research accessibility just because your boomer boss, or your politically well-oriented colleagues, tell you to “publish or perish”. If a project is too ambitious for the bureaucracy, then you can release it as an Open Science article. (Mind it though: only if it is your project; if not, then trust in a collaboration between people is just as important as science, so I don’t think it is good to force opinions onto others.) If others want to use square wheels, then your round-wheeled cart will beat them in the long-term evolution game.

Indeed, these ideas are certainly correct, but it is very very hard to live by them.


Because publishing is not the right frame of mind.

As it is now, the situation is like preaching that we are all trees with beautiful flowers and tasty fruits.

Other people don’t know about this because they don’t pass near us, trees.

Like trees, we are completely at the mercy of our neighbourhood. We are imprisoned by Google.

Another problem is that it is very hard to process such a high density of information, compared with a classical article. The reader either has to cope with drinking from the hose or does not get it at all. OK, that is science, but on the other hand it is very hard to give this information in a structured way.

The structure eats the contents. If you look at the source of the Distill Hiatus article, yes, the article is at line 1012.

In that source, look for “Copyright”. There are 7 Apache licences there. Not one is a one-liner. This shows the frame of mind where structure is more important than content and where copyright is more important than structure.

Like bureaucracy, which is good for scaling but soul-crushing for creation, here structure eats the content. And moreover, copyright eats the structure and the content.

Compared with it, the source of the beautiful Distill article Growing Neural Cellular Automata is more humanly structured, because the article text is not a one-liner in a sea of boilerplate. But do you want to play with the scripts? Tough luck: just go to a Notebook, which is Google-dependent in so many ways. This is strongly against the scientific method.

It is therefore very hard not to burn out, because in this world, as it is now, doing science in the scientific way demands building the world. Repeatedly. Objectively. This is, I think, the source of the burnout. It is very hard to try another pair of contradictory things: to discuss and at the same time not to discuss.

My first solution is better in principle, because of the no-dependencies choice and because of the give-everything choice. It is not a solution for publication, which is an obsolete thing. I’m proud that I started before Distill and I am still alive after their burnout hiatus. So something is possible.

Still, my latest version is dependent on a corporation: Github.

Now I am spread between Github, Telegram, Figshare and WordPress.

I’m thinking about antennas, don’t know what this means, yet. I’ll happily take a creation and management task (I don’t know, you who sit on piles of coins, have you thought about an Open Research Institute, or a remake of an Invisible College with the 21st century power?), or collaboration, or teaching tasks.

Perhaps there is also a problem with the society where such initiatives try to survive. Look, Google supports such efforts with one hand, and guts them (unintentionally, just a small ant flattened by a very big ass, as they say) with the other. Maybe this society, which tries so hard to go down as fast as possible, is no longer science friendly. Maybe other societies which have problems, but have the huge asset of optimism, are friendlier. Somewhere, there should be an interesting sea shore, an interesting border between old and new, somehow still protected from uniformization and at the same time open enough to attract variety.

Pure See and the Moirai

Wouldn’t it be nice to tell again the story of the Moirai by using Pure See?

Now we know the correct correspondence between emergent algebras and the nodes and moves of the string of graph rewrite formalisms which I have pursued since 2012.

The Pure See draft is not yet 100% correct in its treatment of emergence (reduction by passage to the limit where things should be more symmetric, and the treatment, missing as of now, explicitly, of the degenerate fanout and fanin as emerging themselves), but nevertheless it would be a nice exercise to see where in the sequence of posts about the Moirai I was right and where not.

Recall that in the list of posts about “ancient Turing machines” there are some which attribute to the 3 Moirai (or Fates) some universal computing power. They can, by manipulating their strings (and rewritings), decide our fate (like a program which is then executed).

Here is the list of posts for you to enjoy and to play with, this time from the point of view of Pure See:

Compare also with the making of, and further passage through GLC, chemlambda v1, chemlambda v2, etc from the History of chemlambda.

I think that now an attentive reader may enjoy to play with knot diagrams and their translations, in order to understand what the story of the Moirai is about.

Why would anybody do that? Because it’s fun, what else? Now that we know, roughly, how the machine works, we may inspect some parts of it and try to make sense of them. You know, like when you and the car workshop guy both look at the engine. You see different things when you know how it works and what’s to be done.

Oh, that’s how semantics works… kind of thing.

Penrose’s Orchestrated OR characterised as stalinesque, a question

Thanks to an announcement from Louis Kauffman, I got to watch the recent talk

Sir Roger Penrose and Dr. Stuart Hameroff: Consciousness and the Physics of the Brain, Roth Auditorium – Sanford Consortium for Regenerative Medicine, La Jolla, CA

I am not a fan of consciousness research, because I believe it is too early to jump to the highest level of a huge building; let’s concentrate first, I say, on understanding biological life (which we don’t). Consciousness studies often neglect the basis and obscure very promising research avenues, like how life exists as an asemantic decentralized computation… while at the same time cartesian homunculi are kept in various disguises. My opinion!

Physics, on the other hand, is something I am a big fan of, so I started to watch the very interesting, indeed, talk by Penrose. At the meta level, I was amused by the repeated orders to the invisible human and computer machinery to change the slides, while at the same time arguing that what the brain does cannot be computation.

Then I arrived at a point in the talk where I saw that Penrose uses an argument from Dennett, but for physics. It intrigued me because I used the same argument ten years ago, but I was not aware of Penrose’s Orchestrated OR theory.

So the question is pure vanity: who used the argument first?

I asked the following and I got no usable answer, therefore I am looking to the community for help. Or maybe you were not aware of it and we can talk about it.

According to Penrose, his Orchestrated OR is a stalinesque theory of physics (the exact moment in the speech is this) and I find this characterisation intriguing, therefore I ask you for help with more information.

AFAIK it is Dennett who characterizes theories (of brain function) which explain illusions as orwellian, stalinesque or multiple drafts. It is straightforward to apply Dennett’s classification to theories of physics, arXiv:1011.4485, and I was not aware of Penrose’s Orchestrated OR, nor of its characterization as stalinesque.

The question I have is: maybe Dennett imported this classification from something Penrose wrote before? If not, is there any evidence about Penrose using Dennett?

Here is the passage (a mouse copy-paste from the pdf) from arXiv:1011.4485 I mention, where the quotes are from Dennett:

From the description given at [17], such theories can be characterized as:

  • (a) orwellian – ”the subject comes to one conclusion, then goes back and changes that memory in light of subsequent events. This is akin to George Orwell’s Nineteen Eighty-Four, where records of the past are routinely altered.”
  • (b) stalinesque – the ”events would be reconciled prior to entering the subject’s consciousness, with the final result presented as fully resolved. This is akin to Joseph Stalin’s show trials, where the verdict has been decided in advance and the trial is just a rote presentation.”
  • (c) multiple drafts – ”there are a variety of sensory inputs from a given event and also a variety of interpretations of these inputs”. From [16] [there is] ”no central experiencer [who] confers a durable stamp of approval on any particular draft”.

Translated into the physics realm, this gives several interesting interpretations.

  • (a) Such a path has been pursued in physics, by Everett’s Many-Worlds Interpretation of Quantum Mechanics [19]. More precisely, concerning interpretations of the collapsing of the wave function which are compatible with Everett theory, see Deutsch [18] and Stapp [24]. [Let me add here, in 2021, the following completion. A superficial view would be that Many-World Interpretation is rather akin to (c) multiple drafts. This is false because the Many-Worlds Interpretation is opposite to multiple drafts. Indeed, the multiple drafts and many worlds could be confused as “multiple drafts” and “multiple worlds”, but this would confuse “world” with “draft”. It is a serious confusion, one more which can be traced back to the confusion of things (like drafts) and objects (like worlds). See for more Wittgenstein and the Rhino. We don’t need a new world, or universe, to propose and interact within a draft.]
  • (b) In more general terms, not related especially to the problem of the discrete versus continuous nature of reality, we can see any theory based on extremality of an action as being of this type. However, probably due to my ignorance, I am not aware of physical theories supposing that a discrete reality conspires to give (to any observer) the appearance of being continuous. More precisely, such a theory would take as starting point a discrete reality where discrete things happen, in the limit when the graininess goes to zero, like in a continuous reality. One big and fundamental difficulty would then be to give a reasonable mechanism of how this is possible.

What is your opinion about this?

What I did during this pandemic

Here are my recollections. I’ll put only the professional stuff; the personal part is not for sharing. (The long term effect of being physically confined with only the close family is great, and one thing that everybody could relate to, I hope.)

I don’t write this as self-promotion; it is a sort of self-justification. “What did you do during this pandemic? …”

As you know, some years before the pandemic I arrived at the subject of molecular computers, following the strange paths of mathematics and computation. This subject is still fresh, not explored enough, and most likely suppressed at least since 2017.

People have been hit with a life changing pandemic and they still rather study quantum computers. One thing I learned is that people (me included) are stubborn beyond reason.

At the end of 2019 I started to put in order the large quantity of published and unpublished research. It was a mess. It was not clear what chemlambda is and is not, and the part about computing with space was still not appreciated enough. (The other thing people love a lot, besides quantum computing, is naive digital universes based on boomer mathematics. Don’t you know what boomer mathematics is? Like everything boomer, it is something (mathematics in this case) from up to the 1960s, glorified in a number of mediocre developments wrapped in self-congratulations. Fortunately mathematics has evolved a lot since then, and history will wipe all propaganda.)

Trying to make a presentable basis for this, I retraced my steps since then, and during the pandemic I finally made sense of the solution of the problem of computing with space. All pieces fell into place and it is beautiful.

So the first thing I did was to put a basis for the chemlambda project, which you can see in the official chemlambda page. It was useful, even if only a basis. You have there now chemlambda v2, dirIC, a lambda calculus to chemlambda parser, quine graphs experiments, relations with Lafont’s Interaction Combinators, all pieces of stuff I wanted to have for years.

The second thing I did was to rescue about half of the chemlambda collection of animations. Of course, this collection is not for admiring colored dots moving in pleasing ways, but a proof of how much can be done with purely local computing. How, please tell me, my dear experts, how did I invent all these molecules? Because they are living proof that asemantic computing does work.

As soon as I posted the collection (in jan 2020), it was hit by a DDoS attack. For almost a month.

In the spring and summer of 2020 I worked on the various pieces of the official chemlambda page. I gave lots of private and some public talks. Put some articles on the arXiv. Produced abundantly commented scripts.

Then I was challenged to make what became chemSKI. Before, I had not appreciated combinators as I should have.

It became clear that behind a lot of the graphical rewriting stuff there is a computing-with-space part related to the shuffle move. This received the name pure see and is still in development. What is in there: a sort of semantics related to emergent algebras, but with implications for interaction combinators. A proof that both the beta rewrite and the duplication rewrites are emergent, in a precise sense (people doing “linear” logic don’t even have this on their radar).

Coupled with the similar proofs and definitions of curvature in sub-riemannian geometry, with the proof that the R3 rewrite emerges from a passage to the limit and that the defect from R3 measures curvature, this goes into completely new territory.

Now I have entered a phase where I had to take again the route of research on paper. It happened between nov 2020 and march 2021, and I want to wash the pandemic from my mind first and then I’ll show it. It’s great!

This spring I started to do the same thing I did for the chemlambda project, but for other stuff, so I made the telegram chemlambda channel of long reads, which already has lots of things inside. Made also a github writings repository.

The COLIN implies LIN, asemantic computing and combinators stuff, and the numbers exploration started in em-convex, are partial, but compelling enough, results.

I forgot to tell you about Zip-slip-smash, aka ZSS. It is a revision of zipper logic, which is a calculus with knots and zippers which can implement interaction combinators. Its meaning is related, though, again with emergent algebras, because the smash move is nothing else but the “look down” relation from the intrinsic sub-riemannian geometry treatment.

All in all, there are about:

  • 10 technical articles
  • lots of commented programs
  • many talks to learn from
  • chemSKI, dirIC, pure see, ZSS
  • asemantic computing direction
  • writings in new media

Open Science is RWX science (reloaded)

These are motivational materials in favor of OS, written during several years when I struggled to practice what I preached. If this inspires you then the goal is achieved.

The gist is that it is much easier to do Open Science than to wait for the perfect Open Access infrastructure.

also available at:

also at:

along with many others in the chorasimilarity channel of long reads.

Wittgenstein and the Rhino

Assembled from A Wittgenstein joke and Notes for “Internet of things not Internet of objects”, there is now Wittgenstein and the Rhino.

It condenses what I think is reality and why it is not what usually people think it is.

Many times alluded to or explained here and there, it took me many years to realize where the source of the problem is. Some of you probably will not like it. Your time has passed, is my opinion.

Also available in form, in the chorasimilarity telegram channel.

Writings repository at Github

Further experimenting, I created a writings repository. You will find there, as .md files (some with pictures), some of my writings. Follow the repository for more in the future. Also as reference.

Alternatively, I experiment with the chorasimilarity channel of long reads, on telegram-telegraph, which you can access without a telegram account (and of course with a telegram account they look nicer on your phone).

ZSS: zipper logic revisited, with explanations

I took the time to explain in detail the ZSS (zip-slip-smash) graph rewrite formalism,

  • why it is universal,
  • why it is somehow dual to directed Interaction Combinators,
  • why we need to enhance the Reidemeister moves with new rewrites to obtain a system where the R moves compute.

It uses the pictures of the slides and it has links inside.

Available here, in the chorasimilarity telegram channel (but you don’t need telegram to see it). In case you do use telegram, here is the chorasimilarity channel of long reads.

Also available on github.

In article form it is on figshare.

And finally here as pdf.

COLIN implies LIN

Introduction. See the last post On the missing examples of (COLIN) condition, again and the links therein, in particular this pdf and this mathoverflow question.

Theorem. For an emergent algebra with the group \Gamma = (0,\infty) the condition (COLIN) implies the condition (LIN).

Proof. Part 1. Recall the (COLIN) condition: for any a, b \in \Gamma and for any x, y, z \in X we have

(x \circ_{a} y) \circ_{b} z = (x \circ_{b} z) \circ_{a} (y \circ_{b} z)

Fix an element e \in X, otherwise arbitrary. (COLIN) is then equivalent with

y \circ_{b} z = (e \circ_{b} z) \bullet_{a} ((e \circ_{a} y) \circ_{b} z)

If we replace z with e \circ_{a} z and then we use (R2) and some groupings of terms then we obtain the following relation equivalent with (COLIN):

y \circ_{b} (e \circ_{a} z) = \Delta_{a}^{e}(e \circ_{b} z, E)

where E is a relative operation, namely

E = e \bullet_{a} ((e \circ_{a} y) \circ_{b} (e \circ_{a} z))

We pass now with a to 0 by using the topological axiom (em) and we obtain in the limit the following

y \circ_{b} e = \Delta^{e}(e \circ_{b} z, E_{0})

where E_{0} is the limit of E, therefore the infinitesimal dilation of coefficient b, based at e. We write it like this

E_{0} = y \circ_{b}^{e} z

Part 2. The relation (COLIN) passes to the infinitesimal level. Indeed, if we replace the operations with the infinitesimal operations (dilations) based at e then (COLIN) remains true. This is true because for an arbitrary c \in \Gamma we deduce from (COLIN) the relation

(x \circ_{a,c}^{e} y) \circ_{b,c}^{e} z = (x \circ_{b,c}^{e} z) \circ_{a,c}^{e} (y \circ_{b,c}^{e} z)

where

x \circ_{a,c}^{e} y = e \bullet_{c} ((e \circ_{c} x) \circ_{a} (e \circ_{c} y))

We can then pass to the limit with c to 0 and we get the “infinitesimal” (COLIN) relation

(x \circ_{a}^{e} y) \circ_{b}^{e} z = (x \circ_{b}^{e} z) \circ_{a}^{e} (y \circ_{b}^{e} z)

But the infinitesimal emergent algebra based at e satisfies (LIN), see here, proposition 7.11. Therefore now we know that it also satisfies (COLIN). As a consequence we get that it comes from a commutative conical group. We denote with a dot this conical group operation.

Because the group is conical and commutative it follows that

(e \circ_{a} y) \cdot (e \circ_{b} y)  = e \circ_{a + b} y

via techniques explained in the em-convex paper. Therefore the conical group is a vector space, with group operation being vector addition and e \circ_{a} x equal to the scalar a which multiplies x (up to an arbitrary exponential).

Part 3. The last two relations from Part 1 are then rewritten as

y \circ_{b} e = (e \circ_{b} z^{-1})  \cdot y \cdot (e \circ_{b} (y^{-1}  \cdot z))

Commutativity of the group operation \cdot and (LIN) for the infinitesimal level gives us the equivalent

y \circ_{b} e = y \circ_{b}^{e} e = e \circ_{1-b} y

(where in the last equality we used Part 2 and we make a slight abuse of notation for the multiplication by the scalar 1-b)

Now we come back to the initial (COLIN) and we remark that with the new knowledge we can rewrite it as

e \circ_{1-a} (x \circ_{b} y) = (e \circ_{1-a} x ) \circ_{b} ( e \circ_{1-a} y)

which is equivalent with

x \circ_{b} y = e \bullet_{1-a} ((e \circ_{1-a} x ) \circ_{b} ( e \circ_{1-a} y))

We pass to the limit with a to 1 this time and we obtain that

x \circ_{b} y = x \circ_{b}^{e} y

therefore the emergent algebra is identical with the infinitesimal emergent algebra based at e. Therefore it satisfies (LIN).


We know more, actually, namely that (COLIN) is equivalent with (SHUFFLE). Indeed, we proved that (COLIN) implies that the emergent algebra is the one of a conical commutative group, and we already know that (SHUFFLE) is true if and only if we are in a conical commutative group.

As a conclusion, there is no example of an emergent algebra which satisfies (COLIN) but not (LIN).
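As a quick sanity check, consistent with the conclusion above: in the commutative vector-space model, where I assume the dilation of coefficient a based at x is x \circ_a y = (1-a)x + ay (the standard vector-space emergent algebra; the Python encoding is mine, not from the post), the (COLIN) identity can be verified numerically at random points.

```python
import random

# Sketch: numeric check of (COLIN) in the vector-space model, where the
# dilation of coefficient a based at x is assumed to be
#   x o_a y = (1 - a) * x + a * y.
def dil(a, x, y):
    return tuple((1 - a) * xi + a * yi for xi, yi in zip(x, y))

random.seed(0)
for _ in range(100):
    a, b = random.random(), random.random()
    x, y, z = [tuple(random.random() for _ in range(3)) for _ in range(3)]
    # (x o_a y) o_b z == (x o_b z) o_a (y o_b z)
    lhs = dil(b, dil(a, x, y), z)
    rhs = dil(a, dil(b, x, z), dil(b, y, z))
    assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```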

On the missing examples of COLIN condition, again

UPDATE: Solved, COLIN implies LIN.

If you follow the development of pure see [draft], and previous post [here], then you know the importance of the SHUFFLE rewrite schema. As an algebraic equality (instead of a rewrite) the SHUFFLE is the most particular one. More general are LIN and COLIN, moreover one can prove that SHUFFLE is equivalent with (LIN + COLIN).

The problem is that LIN has models and COLIN does not.

As an algebraic equality, LIN describes (choose your level of mathematical generality) Carnot groups, non-commutative vector spaces or conical groups. There are plenty of examples where LIN holds but COLIN does not hold.

Geometrically, we can measure the deviation from LIN and it turns out to be related to curvature.

COLIN does not have known models where LIN does not hold. But we can prove that infinitesimally the deviation from COLIN is related to the non-commutativity of the addition in the tangent space.

I reproduce here a post from the chorasimilarity telegram channel:

“Almost half a year ago I asked about an example of an unknown structure, described here, and today I got an amazing answer involving artificial intelligence in the text. It is the kind of answer I really have to think about. Update: it is not clear though if the example given responds to the question.”

The obvious example where COLIN holds but LIN does not hold would be one where the curvature of the space is nontrivial (LIN does not hold) but the space is riemannian (COLIN holds infinitesimally at least). This is just regular riemannian geometry. The problem is that this is not working, for reasons I don’t understand. Suppose we simplify the search for the example by supposing that the space is a group (or a symmetric space), i.e. we suppose that the space is also homogeneous (the space is the same around any of its points). Then we can prove that globally, not infinitesimally, COLIN implies LIN. Simplified: group-like structure + COLIN implies LIN.

So the search in this direction is not fruitful. COLIN without LIN has to be satisfied in spaces which don’t have a group-like structure. On the other hand we just saw that LIN alone implies that the space has a particular (nilpotent) group-like structure and that LIN+COLIN implies usual (commutative) vector space.

Where is this example? Where to search for one?

Chorasimilarity Telegram channel as a repository

I made an adjustment of the chorasimilarity telegram channel (visible without telegram) or (visible with telegram).

The goal is to use it as a repository for the long form notes, therefore I deleted many of (but not all) the posts which do not point to a note.

Mind that this is just experimentation, it is in no way a recommendation to use telegram.

Now some comments.

About the use case: I find the fast edit/publish very convenient. It looks good on phones, too.

Against the use case: I can’t put scripts there, or mathematical notation a la latex. Long form text on phones seems self-contradictory.

Mixed uses: some posts on this blog are shared there, because of the “instant view” feature of the telegram app on phones, so there is no need to turn chorasimilarity blog posts into new posts there.

What I would want: a long term repository, away from the corporate realm, where I can store and show a mix of writings, program sources and program executions in one place. A way to produce executable and animated books which could be read anytime in the future simply by downloading the source (with as few dependencies as possible). Static. Who doesn’t want this?

General comment: the blogs may be dead but this blog is not a blog in the usual sense, because it is not written as a temporal stream. It is almost not a “log”. I usually delete log-like posts after a while. However it is a bit of a “log” because I don’t rewrite old posts except by adding update links to newer or complementary versions.

Short explanation of the syntax of Pure See

This is the link to the Pure See draft which will soon be updated.

What is Pure See

It is a common language over all versions of chemlambda, like chemlambda v2, dirIC or kali. All these versions of chemlambda are machines, or models of computation. They are graph rewrite systems with an algorithm of application of the rewrite rules.

It became clear that there is a common source from which all these graph rewrite automata come, and that many more are possible. They all share some part of a set of 6 trivalent nodes and we can prove that there is only one graph rewrite schema, called SHUFFLE, which generates all particular graph rewrites encountered in these models. (The way it generates them is via a “passage to the limit” so to say, or “emergent” schema, which deserves a separate explanation.)

Thus Pure See is a language version for graphs made of these 6 nodes, the SHUFFLE rewrite schema and the passage to the limit schema.

As 6 = 3!, we need 3 words to generate, by permutations, any of the 6 nodes. These 3 words are “from”, “see” and “as”.

Hence the name “Pure See”.

Pure See syntax as map making

When we make a map of something we see, in the most abstract way we need to mention 3 things:

  • (1) where we are
  • (2) what are we looking at
  • (3) what is the representation, or name, or symbol of what are we looking at (2) as seen from where we are (1)

We say:

from a see b as c

and we mean

  • from a (where we are)
  • see b (we look at b)
  • as c (we represent it as c)

But of course that “a”, “b”, “c” are just names. What matters is the structure given by “from”, “see” and “as”.

There are 6 permutations of this structure and each of them corresponds to one (type of a) trivalent node.

Mind that trivalent nodes are just nodes with a node type and 3 ports together with an ordering of the ports. We write

Q a b c

to denote a node of type “Q” and with 1st port with name “a”, 2nd port with name “b” and 3rd port with name “c”.

Such nodes form graphs by connecting ports. Each edge has a unique name and two ports with the same name are connected by the edge with that name. The names of the edges don’t matter (we can rename them and we still have the same graph). We present the graph as a “mol” (short for “molecule”) which is just a list of node denotations. Of course, the order of the nodes in the list does not matter, we can permute the nodes and we still have the same graph.

At the language level this means that a proposition like “from a see b as c” represents a node (we’ll see how immediately), that graphs are represented as lists of such propositions, but the order of propositions does not matter, and that names like “a”, “b”, “c” can be modified by renaming into anything (except code words like “from”, “see”, “as”) with the condition that they are unique as names in the “mol”. Mind that each such name appears at most twice in a mol (i.e. twice if it represents an edge between two ports and only once if it represents a half-edge with the other end free). Therefore a renaming is a bijective transformation of the list of names into another list of names, which is then used to replace in a mol each old name with the new name.
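As an illustrative sketch (the Python encoding and the helper name `rename_mol` are my own assumptions about representation, not part of Pure See), a mol can be held as a list of (type, ports) pairs, and a renaming is a bijection applied to every port name:

```python
# Sketch: a mol is a list of (node_type, ports) pairs; renaming all edge
# names bijectively gives the same graph.
def rename_mol(mol, renaming):
    """Apply a bijective renaming of names to every port in the mol."""
    return [(ntype, tuple(renaming[p] for p in ports)) for ntype, ports in mol]

# Two Q nodes connected through the edge named "c"
mol = [("Q", ("a", "b", "c")), ("Q", ("c", "d", "e"))]
renaming = {"a": "p", "b": "q", "c": "r", "d": "s", "e": "t"}

renamed = rename_mol(mol, renaming)
# The connection structure survives: "r" still appears in both nodes
assert renamed == [("Q", ("p", "q", "r")), ("Q", ("r", "s", "t"))]
```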

Now let’s see how to represent the type of the node in this language. It is simple: there are 6 permutations of 3 things and we have 6 nodes (types) to represent. These are

  • from a see b as c, i.e. D a b c
  • see a from b as c, i.e. L a b c
  • as a from b see c, i.e. A a b c
  • see a as b from c, i.e. FI a b c
  • from a as b see c, i.e. FOE a b c
  • as a see b from c, i.e. FOX a b c

The name of the node types is partly historical, like “FOE” which comes from “external FO” (and “FO” means “fanout” and “FI” means “fanin”), or like “D” which means “dilation” (from dilation structures or emergent algebras).

Forget about these names for now, but remember them later when you read the algebraic heuristics and anharmonic group sections.

Think that a priori these are 6 possible propositions which describe map making, or the descriptions of 6 different situations encountered when we make a map.

They are though related in subtle ways with well known subjects, like lambda calculus or interaction combinators. For the relation to interaction combinators go back to the link to dirIC.
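The permutation-to-node-type correspondence above can be sketched as a small parser (the Python encoding and the function name `parse_proposition` are my own, a minimal sketch rather than an official Pure See implementation):

```python
from itertools import permutations

# The six node types, keyed by the order of the three code words,
# as listed above: e.g. "from a see b as c" is a D node.
NODE_TYPE = {
    ("from", "see", "as"): "D",
    ("see", "from", "as"): "L",
    ("as", "from", "see"): "A",
    ("see", "as", "from"): "FI",
    ("from", "as", "see"): "FOE",
    ("as", "see", "from"): "FOX",
}

def parse_proposition(text):
    """Turn e.g. 'from a see b as c' into the node ('D', ('a', 'b', 'c'))."""
    tokens = text.split()
    keywords = tuple(tokens[0::2])  # positions 0, 2, 4: the code words
    ports = tuple(tokens[1::2])     # positions 1, 3, 5: the port names
    return NODE_TYPE[keywords], ports

# All 3! = 6 permutations of the code words are covered
assert set(NODE_TYPE) == set(permutations(("from", "see", "as")))
assert parse_proposition("from a see b as c") == ("D", ("a", "b", "c"))
```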

Relation to lambda calculus

Lambda calculus is a term rewrite system. We can turn a lambda term into a graph (its syntax tree modified in a certain way) which uses 3 node types: L for lambda, A for application and a FO node for fanouts. We need fanouts because in a term rewrite system we are allowed to use many times the same variable name, so when we turn the term into a graph we need FO nodes to mark that several variable names are “the same”.

But the “FO” type of node is not among our 6 nodes. More about this later, in another part.

As lambda calculus is an inspiration for chemlambda, which in turn is in the scope of Pure See (excepting the FO node which has a special treatment), there is an association which we can make between lambda calculus and Pure See.

As concerns the lambda and application, they look like this, when we just suppose that we don’t care about repeated use of the same variable name:

  • in lambda calculus we write \x.A = B and in Pure See this is translated into “see A from x as B”
  • in lambda calculus we write A B = C and in Pure See this is translated into “as A from B see C”.

Let’s see how we do this with some examples:

I = \x.x is “see x from x as I”

K = \x.\y.x is “see x from y as a; see a from x as K”

therefore there is this new “a” which is just “a = \y.x” here, so “K = \x.a”.

When we want to write for example

S = \x.\y.\z.((x z) (y z))

then it is more difficult, because we see that “z” appears 3 times. Just ignoring this, we might write in Pure See the following thing which is not a mol:

“see a from x as S;

see b from y as a;

see c from z as b;

as x from z see d;

as y from z see e;

as d from e see c”

It is not a mol because “z” appears 3 times. In the following we shall allow this, for the sake of the exposition. (But see Substitution is objective (II) for more.)
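The “z appears 3 times” condition can be checked mechanically; here is a sketch (the encoding as (type, ports) pairs and the helper name `is_mol` are my own assumptions):

```python
from collections import Counter

def is_mol(nodes):
    """A list of nodes is a proper mol only if every name appears at most
    twice: twice for an edge between two ports, once for a half-edge."""
    counts = Counter(name for _, ports in nodes for name in ports)
    return all(c <= 2 for c in counts.values())

# S = \x.\y.\z.((x z) (y z)), written as above, nodes as (type, ports)
S = [
    ("L", ("a", "x", "S")),   # see a from x as S
    ("L", ("b", "y", "a")),   # see b from y as a
    ("L", ("c", "z", "b")),   # see c from z as b
    ("A", ("x", "z", "d")),   # as x from z see d
    ("A", ("y", "z", "e")),   # as y from z see e
    ("A", ("d", "e", "c")),   # as d from e see c
]
# "z" appears 3 times, so this is not a proper mol
assert not is_mol(S)
```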

Algebraic heuristics

Well, the notation of the S combinator is not comfortable, so why not eliminate the intermediary a, b, c, …? For the lambda it is simple:

instead of “see a from x as c” just write “c = see a from x”, rather close to “c = \x.a”.

We obtained this via the following algebraic heuristics:

see a from b as c = 0

see a from b = – as c

(see a from b) / (-as) = c

and we took as = -1 to obtain:

see a from b = c

Let’s try the same for the application:

as a from b see c = 0

as a from b = – see c

as a from b = see as c

(as a from b) / (see as) = c

(as / (see as)) a (from / (see as)) b = c

(1/see) a (from/ (see as)) b = c

We introduce new keywords

apply = 1/see

over = from/(see as)

and we get

apply a over b = c

which is very close to “a b = c” from lambda calculus.

Then, all in all we would have

S = see (see (see (apply (apply x over z) over (apply y over z)) from z) from y ) from x

which is a bit more verbose than lambda calculus, but it is recognizable.
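The two new keywords can be checked numerically against the heuristics, anticipating the next section’s assignment of scalars to rational functions (see[z] = z, from[z] = 1-z, as[z] = -1, in[z] = 1); a sketch, with names adjusted to avoid Python keywords:

```python
# The scalars of the heuristics, as functions of z (sketch; "from", "as"
# and "in" are renamed because they are Python keywords):
see = lambda z: z
frm = lambda z: 1 - z
as_ = lambda z: -1
in_ = lambda z: 1

# The definitions: apply = 1/see, over = from/(see as)
apply_ = lambda z: in_(z) / see(z)
over = lambda z: frm(z) / (see(z) * as_(z))

for z in (0.3, 0.7, 2.5):
    assert abs(apply_(z) - 1 / z) < 1e-9        # apply[z] = 1/z
    assert abs(over(z) - (z - 1) / z) < 1e-9    # over[z] = (z-1)/z
```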

Anharmonic group

The algebraic heuristics is weird, but what did we do?

The “from”, “see” and “as”, also “apply” and “over”, are like scalars.

The “a”, “b”, “c”, .. are like vectors.

Juxtaposition like apply a over b = c is vector addition.

But! The vector addition is not commutative, otherwise we collapse all permutations of the 3 “from”, “see”, “as”, into one.

Multiplication by scalars is distributive, OK.

as = -1

and any proposition is = 0.

Oh, 0 is a special vector, call it “origin”.

Recall now that the 6 permutations of 3 elements (here “from”, “see”, “as”) are a group, which is none other than (isomorphic to) the anharmonic group.

The anharmonic group is made of 6 rational functions of a parameter z (complex if you like). We can assign roles in this game so that all the syntax explained, and the algebraic heuristics, make sense, by a kind of magic.

For this let’s enumerate the 6 rational functions, with names from Pure See:

see[z] = z

from[z] = 1 – z

apply[z] = 1/z

over[z] = (z-1)/z

with[z] = 1/(1-z)

note[z] = z/(z-1)

These form a group under composition of functions. For example

see[from[z]] = from[z]

and

with[apply[z]] = note[z]

and mind that addition and multiplication of scalars is commutative.
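The group structure can be verified numerically; a sketch (identifying a function by its values at a few sample points away from z = 0 and z = 1, which is an assumption of mine, sufficient for these six functions):

```python
# The six rational functions of the anharmonic group, named as in the text.
funcs = {
    "see":   lambda z: z,
    "from":  lambda z: 1 - z,
    "apply": lambda z: 1 / z,
    "over":  lambda z: (z - 1) / z,
    "with":  lambda z: 1 / (1 - z),
    "note":  lambda z: z / (z - 1),
}

def which(f, samples=(0.3, 0.7, 2.5)):
    """Identify f by its values at a few generic points (away from 0 and 1)."""
    for name, g in funcs.items():
        if all(abs(f(z) - g(z)) < 1e-9 for z in samples):
            return name
    return None

# Closure: composing any two of the six gives one of the six
for f in funcs.values():
    for g in funcs.values():
        assert which(lambda z: f(g(z))) is not None

# The two examples from the text
assert which(lambda z: funcs["see"](funcs["from"](z))) == "from"
assert which(lambda z: funcs["with"](funcs["apply"](z))) == "note"
```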

To these 6 functions with made up names we add two more:

as[z] = -1

in[z] = 1

So then indeed

over[z] = from[z] / (see[z] as[z])

and

apply[z] = in[z]/see[z]

as we wrote in the algebraic heuristics.

As for the isomorphism of the anharmonic group with the 6 permutations which denote node types, this can be recovered from solving equations.

Indeed, we shall formally replace “from”, “see”, “as” with their functions, replace juxtaposition with the vector addition “+”, and remember that “+” for vectors is not commutative!

from a see b as c becomes

from[z] a see[z] b as[z] c = 0

(1-z) a + z b + (-1) c = 0

(1-z) a + z b = c

We associate the node D, or the proposition “from a see b as c” with the function see[z] = z from the anharmonic group.

Now watch the treatment of L

see a from b as c

becomes

z a + (1-z) b + (-1) c = 0

z a + (1-z) b = c

(1 – (1-z)) a + (1-z) b = c

which after all could be written as

from[from[z]] a + see[from[z]] b = c

that is

from[1-z] a + see[1-z] b = c

That is why we associate from[z] = 1-z with the node L, or with the proposition “see a from b as c”.

This works for the two propositions which end with “as”, but what do we do with those which end with “see”?

We already explained this for A, or “as a from b see c”, let’s do it properly now

as[z] a + from[z] b + see[z] c = 0

(-1) a + (1-z) b + z c = 0

(-1) a + (1-z) b = (-z) c

(1/z) a + (z-1)/z b = c

which is indeed

apply[z] a + over[z] b = c

but also

from[over[z]] a + see[over[z]] b = c

which tells us that A, or the proposition “as a from b see c” is associated with the function over[z] = (z-1)/z from the anharmonic group.
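The identities used here, from[over[z]] = apply[z] and see[over[z]] = over[z], can be checked numerically (a sketch; “from” is renamed because it is a Python keyword):

```python
# Sketch: numeric check that from[over[z]] = apply[z] = 1/z and
# see[over[z]] = over[z], as used above.
see = lambda z: z
frm = lambda z: 1 - z        # "from", renamed: Python keyword
over = lambda z: (z - 1) / z

for z in (0.3, 0.7, 2.5):
    assert abs(frm(over(z)) - 1 / z) < 1e-9    # from[over[z]] = apply[z]
    assert abs(see(over(z)) - over(z)) < 1e-9  # see[over[z]] = over[z]
```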

All in all the 6 propositions from Pure See have the same equivalent form

from[g[z]] a + see[g[z]] b = c

for g[z] one of the 6 functions of the anharmonic group.

All this structure is not here by accident; it has to be explained intrinsically.

Note for example that if we believe that the association with the anharmonic group has any meaning, then we should have nontrivial relations between scalars, like

as from = see over


apply from = as over
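These relations do hold for the anharmonic assignments, reading the juxtaposition of scalars as multiplication; a numeric sketch (names adjusted to avoid Python keywords):

```python
# Sketch: check "as from = see over" and "apply from = as over",
# with scalar juxtaposition read as multiplication.
see = lambda z: z
frm = lambda z: 1 - z        # "from", renamed: Python keyword
apply_ = lambda z: 1 / z
over = lambda z: (z - 1) / z
as_ = lambda z: -1

for z in (0.3, 0.7, 2.5):
    assert abs(as_(z) * frm(z) - see(z) * over(z)) < 1e-9
    assert abs(apply_(z) * frm(z) - as_(z) * over(z)) < 1e-9
```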

For the moment we don’t have any more structure in Pure See, because we have said nothing yet about the SHUFFLE schema.

But we can still play with the look of the beta rewrite in this language.

(\x.A) B

is written as (I’ll not put the “+” between “vectors”)

apply(see A from x) over B

We use the distributivity of the scalar multiplication (not yet justified)

((apply see) A (apply from) x) over B = C

(in A (as over) x) over B = C

… if the vector addition were associative, then

in A ((as over) x over B) = C

then again distributivity

in A over (as x in B) = C

which is a kind of

A[x/B] = C

if we interpret the beta rewrite as meaning

as x in B = origin

otherwise said

x = B

and as a consequence

in A over origin = C

which, because origin is the neutral element of the vector addition, reads

A = C

But why? That is for later.

Gnomons and homunculi in the space theater

I put together, in a unified presentation, a sequence of 9 posts titled Gnomons and homunculi in the space theater.

If you think like in the Wittgenstein joke on the primacy of objects and states of affairs, which have intrinsic structure and properties like colour, that there is nothing more to talk about and that everything is as neatly organized as a modern database, accessible via a search engine (telling you what is real) and a fact checker (telling you what is true), then you might be surprised.

A Wittgenstein joke

From this source. I only added the “Framing”, “Telling” and “Punchline” headlines, to make the joke clearer. All the rest is the standard English translation of Wittgenstein’s text.

Framing
The whole sense of the book might be summed up in the following words: what can be said at all can be said clearly, and what we cannot talk about we must pass over in silence.

Thus the aim of the book is to draw a limit to thought […]

It will therefore only be in language that the limit can be drawn, and what lies on the other side of the limit will simply be nonsense.

Telling
2.026 There must be objects, if the world is to have unalterable form.

2.01 A state of affairs (a state of things) is a combination of objects (things).

2.03 In a state of affairs objects fit into one another like the links of a chain.

2.031 In a state of affairs objects stand in a determinate relation to one another.

2.011 It is essential to things that they should be possible constituents of states of affairs.

2.013 Each thing is, as it were, in a space of possible states of affairs. This space I can imagine empty, but I cannot imagine the thing without the space.

1 The world is all that is the case.

2 What is the case —a fact— is the existence of states of affairs.

2.04 The totality of existing states of affairs is the world.

1.1 The world is the totality of facts, not of things.

1.2 The world divides into facts.

2.05 The totality of existing states of affairs also determines which states of affairs do not exist.

2.06 The existence and non-existence of states of affairs is reality.

2.1 We picture facts to ourselves.

2.141 A picture is a fact.

2.12 A picture is a model of reality.

2.224 It is impossible to tell from the picture alone whether it is true or false.

2.223 In order to tell whether a picture is true or false we must compare it with reality.

2.21 A picture agrees with reality or fails to agree; it is correct or incorrect, true or false.

Punchline
… the truth of the thoughts that are here communicated seems to me unassailable and definitive. I therefore believe myself to have found, on all essential points, the final solution of the problems.

computing with space | open notebook