If the jedi is stupid, does the trick work?

I bet you know what I mean, and the answer is: yes, in a stupid society, for a little bit more time.

This year I wrote in February about how 2021 is a spiral around the sink, and in October about looking for a job in non-academic research.

Now I hold much stronger opinions. Not only is the direction obvious in a big part of the world, the part where I also live, but the acceleration of this trend is striking.

It is not something I wish for, on the contrary; but just look at the desire of those boomers and their selected pupils to leave nothing on the table.

Garbage in, garbage out.

A new paper about emergent algebras

Today arXiv:2110.08178 appeared, with the title COLIN implies LIN for emergent algebras and the abstract:

Emergent algebras, first introduced in arXiv:0907.1520, are families of quasigroup operations indexed by a commutative group, which satisfy some algebraic relations and also topological (convergence and continuity) relations. Besides sub-riemannian geometry arXiv:math/0608536, they appear as a semantics of a family of graph-rewrite systems related to interaction combinators arXiv:2007.10288, or lambda calculus arXiv:1305.5786. In arXiv:1807.02058 there is a lambda calculus version of emergent algebras.


In this article we prove that for emergent algebras the condition (COLIN), or right-distributivity for emergent algebras, implies (LIN), or left-distributivity for emergent algebras. It means that any emergent algebra which is right-distributive has to come from a commutative group endowed with a family of dilations.
This is surprising, because there are many examples of emergent algebras which satisfy (LIN) but not (COLIN), namely those which are associated with non-commutative conical groups, in particular with non-commutative Carnot groups.

For those reading my telegram channel or even for those who read the mathematical content of this blog, this is no surprise.

Otherwise, it is a good introduction to the equational theory of emergent algebras.

Also available on github.
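For readers new to the notation, here is a minimal sketch in formulas (my rendering, not quoted from the paper). The model example of an emergent algebra is given by the affine dilations of a vector space, and the plain, un-indexed quasigroup forms of the two conditions are the usual distributivity identities; the paper's (LIN) and (COLIN) are the dilation-indexed versions of these.

```latex
% Model example: affine dilations on a vector space X,
% indexed by \varepsilon in the commutative group (0,\infty):
\delta^{x}_{\varepsilon} y = x + \varepsilon (y - x),
\qquad x \circ_{\varepsilon} y := \delta^{x}_{\varepsilon} y

% Plain quasigroup forms of the two conditions:
% left-distributivity (the shape of LIN):
x \circ (y \circ z) = (x \circ y) \circ (x \circ z)
% right-distributivity (the shape of COLIN):
(x \circ y) \circ z = (x \circ z) \circ (y \circ z)
```

The affine example satisfies both identities, consistent with the theorem: an emergent algebra satisfying (COLIN) has to come from a commutative group with dilations.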

When I write that emergent algebras “appear as a semantics of a family of graph-rewrite systems related to interaction combinators arXiv:2007.10288”, I really mean that commutative emergent algebras (i.e. those which satisfy SHUFFLE) can be used to decorate any of the artificial chemistry graphs of chemlambda, dirIC, or a random choice such as kali24, in a way compatible with the graph rewrites. This is shown in the Pure See draft and also explained in the detailed comments in the source js, for example in chemistry.js. Also in rhs.js from the mbuliga.github.io repo.

Logically, this leaves open the question: are non-commutative emergent algebras the semantics of other graph-rewriting systems? For example: give a graph-rewriting system which has LIN but not COLIN (thus not SHUFFLE).

An attempt is already available: ZSS, or “ZIP-SLIP-SMASH”. But it only goes halfway in the required direction, because the SMASH rewrites project any computation done at the LIN level back to commutativity.

An interesting particular case would be, at the equational level, the emergent algebras of Heisenberg groups. I do have this characterization, but I have not passed to the algorithmic level.

Coming back to the COLIN implies LIN article, it would be revealing to find out a proof at the algorithmic level, not at the equational level.

As this year made me think about many questions, I confess I don't have the answer to this fundamental one: why are some things that I do believed only after a decade, and even then in a distorted way? What am I doing wrong? And if I am doing nothing wrong, what is there to do?

That applies to other projects I have or had… Wouldn't it be nicer to try to understand and discuss more, instead of running off on a tangent with a half-understood idea? That would be a huge economy of time.

Looking for a job in non-academic research

It does not look like academic research is in the same century as the rest of the world. I wouldn't want this to happen, but especially during these years it looks more and more like academic research will soon crash.

As an academic researcher with a high interest in Open Science, as someone who has encyclopedic knowledge in fields from modelling for engineering to functional analysis and geometry, I am tired of finding, again and again, that what I do as a researcher has very little to do with the academic bureaucracy.

As I jumped aboard the boat of Open Science very early, I was always convinced that what I do would become, in a few years, the new normal in academia.

Instead, I witness a lack of awareness of the real, vibrant moment we are living through in research.

When the normal was to publish in good journals I chose arXiv; later, when the normal became arXiv, I chose github.

And so on.

That is why I am inquiring about other possibilities outside academia as I know it.

Do you need me? Not just someone like me, because there are not enough of us to form a population: people who are competent in several fields, from applied to pure mathematics, from mathematics for industry to theoretical or emerging technologies in computing.

If so, then I'd be glad to talk with you.

I shall not reply to offers from hiring agencies.

To avoid spam: my contact information is in this page.

Personal description: INFJ-A personality. Lots of counseling experience in many past research projects, privately and behind the scenes. Encyclopedic professional knowledge. I don’t like to be in the front seat. I share ideas I have because I always have another two of them for the future. Some of them happened to be successful in the past in the academic realm. I tend to live in the future. I like to build things with my hands, especially when they are beautiful.

About notes of the Sci-Hub case in the High Court of Delhi

Sreejith Sasidharan points to a wonderful link concerning the proceedings of the case ELSEVIER LTD. AND ORS. vs ALEXANDRA ELBAKYAN AND ORS. at the High Court of Delhi.

UPDATE: “Websites prove their identity via certificates, which are valid for a set time period. The certificate for delhihighcourt.nic.in expired on 10/20/2021.”

There are notes available for each hearing (in pdf form) which are very interesting to read.

Just by looking at three of them, we get a picture of what is happening, related to Elbakyan's announcement on the occasion of Sci-Hub's 10th anniversary.

On Sept 5th Alexandra ended the stalemate with the publishers by telling everybody that:

“I’m going to publish 2,337,229 new articles to celebrate the date. They will be available on the website in a few hours (how about the lawsuit in India you may ask: our lawyers say that restriction is expired already)”

The notes from the Delhi High Court complete this story.

The following are quotes from the notes associated with the hearing of Sept 15th:

“1. This application, at the instance of the plaintiff, adverts to an undertaking given by the defendant, before this Court on 24th December, 2020, which stands recorded in Para 6.2 and 6.3 of the order passed on the said date thus:

“6.2 However, given the stand taken by Mr. Sibal, Mr. Jain says no new articles or publications, in which the plaintiffs have copyright, will be uploaded or made available, by defendant no. 1/Alexandra Elbakyan, via the internet, till the next date of hearing.

6.3 The statement of Mr. Jain is taken on record.”

2. On 6th January, 2021, the aforesaid undertaking granted by Defendant No. 1 was directed to ‘continue till the next date of hearing’.

3. There is no subsequent order extending the undertaking.

4. The present application has been filed by the plaintiff, contending that, while the defendant was abiding by the aforesaid undertaking thereafter, it is now acting in breach of the undertaking. As such, the plaintiff seeks a direction from the Court, binding the respondent by the aforesaid undertaking, granted on 24th December, 2020, and extended on 6th January, 2021.

5. Mr. Gopal Sankarnarayanan, learned Counsel for the defendants, submits that no ground, for issuance of any such direction, is made out. He has placed on record judicial authorities which, according to him, clearly hold that the undertaking would continue only till the date which it was given, and not thereafter.

6. Mr. Sibal, per contra, submits that the decisions on which Mr. Sankarnarayanan relies are cases in which the interim order was extended either for a specific date or for a specific period of time, and not orders in which the extension was “till the next date of hearing”.”

Read the full notes, accessible via the previous link.

In the next hearing, on Sept 21st, we read:

“1. Instead of entering into the niceties of the interpretation of the expression “till the next date of hearing” on which both learned Senior Counsel have filed a volume of authorities, both learned counsel submit, with the fairness expected of them, that any dissertation on this aspect may be avoided if IA 12668/2020, in which pleadings are now complete, is taken up for hearing by the court.”

Moreover, the notes from the hearing of Sept 28th (i.e. on the said IA 12668/2020) only say that:

“1. As there is a Full Court Meeting at 5.00 p.m., re-notify on 7th
October, 2021.

2. The understanding will continue till the next date of hearing.”

Oct 7th is today; follow the link to the case to find out what happened.

Thank you Sreejith!

UPDATE (Oct 11 2021): Rescheduled for Nov 16, 2021, source.

Homunculus fallacies, falling rocks and the computational paradigm

[from the original, which appeared at telegra.ph]

[also at my writings repository at github]

There are two different ways a falling rock computes. There are two different homunculus fallacies. And they are all related.

In his Even beyond Physics: Introducing Multicomputation as a Fourth General Paradigm for Theoretical Science, Stephen Wolfram proposes the idea that we are about to discover or build towards a 4th paradigm in science.

The previous three paradigms are, as Stephen writes:

  • “The first, originating in antiquity, one might call the “structural paradigm”. Its key idea is to think of things in the world as being constructed from some kind of simple-to-describe elements—say geometrical objects—and then to use something like logical reasoning to work out what will happen with them.”
  • The mathematical paradigm: “the idea that things in the world can be described by mathematical equations—and that their behavior can be determined by finding solutions to these equations. […] In the mathematical paradigm one imagines having a mathematical equation and then separately somehow solving it.”
  • The computational paradigm: “define a model using computational rules (say, for a cellular automaton) and then explicitly be able to run these to work out their consequences. […] there may be no faster way to find out what a system will do than just to trace each of its computational steps.”

Putting aside the first paradigm (because it is so old and rich and… still alive everywhere), I'd reformulate the mathematical and the computational paradigms as:

  • The mathematical paradigm: find invariants of the system and then, separately, compute consequences by way of the human mind
  • The computational paradigm: define the evolution of the system as a computation and then run it, because most of the time there is no faster way.

So, how does a rock fall?

  • according to the mathematical paradigm: there are invariants of the motion, like the conservation of energy. Knowing the initial condition, we can compute the evolution of the rock (of its center of mass, say) by solving a simple equation.
  • according to the computational paradigm: the rock is composed of a big number of atoms which interact with one another, and all interact with the gravitational field. The gravitational field is a property of space and all the interactions can be expressed by simple rules. We build a model of the space, the rock and their interactions, and then we run it to see how the rock will fall.

In the case of the rock falling, we can accept that the rock is actually falling as in the computational paradigm, but we do have a much shorter computation to make in the mathematical paradigm, which will give us the way the rock will fall.
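As a toy illustration of the contrast (my example, not from the post): the same fall computed both ways, once by evaluating the closed form obtained from the invariants, once by running the system step by step.

```javascript
// Mathematical paradigm: one evaluation of the closed form.
function closedFormHeight(y0, t, g = 9.81) {
  return y0 - 0.5 * g * t * t;
}

// Computational paradigm: run the fall step by step.
function simulatedHeight(y0, t, g = 9.81, dt = 1e-4) {
  let y = y0, v = 0;
  for (let s = 0; s < t; s += dt) {
    v -= g * dt; // update velocity
    y += v * dt; // update position
  }
  return y; // approaches the closed form as dt -> 0
}

console.log(closedFormHeight(100, 2)); // 80.38
console.log(simulatedHeight(100, 2));  // close to 80.38, after 20000 steps
```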

In this simple case the computational paradigm is overkill (although for only slightly more complex systems, such as billiards, the computational paradigm does give better results, while the mathematical paradigm leaves us solving approximations of the solutions of the equations, approximations which no longer conserve the important invariants. This is well known by specialists but less known by the general public: it is indeed difficult to make a numerical algorithm for rigid motion which conserves the invariants of motion. The mathematical paradigm starts to look harder…).

A system which is as physical as the falling rock is a fly’s brain. This brings us to the homunculus fallacies.

Dennett introduces the idea of a Cartesian Theater, a remnant of the Cartesian dualism:

“Cartesian materialism is the view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of “presentation” in experience because what happens there is what you are conscious of. … Many theorists would insist that they have explicitly rejected such an obviously bad idea. But … the persuasive imagery of the Cartesian Theater keeps coming back to haunt us—laypeople and scientists alike—even after its ghostly dualism has been denounced and exorcized.”

As argued here, there is more to this than a homunculus inside our brains: there is also a scenic space, introduced through the back door in any theory of biological vision which is reducible to a Cartesian theater. (As Phil Agre would say, there's an orbiculus.)

The homunculus idea is bad because it leads to infinite regress: the homunculus has a homunculus inside.

This is one homunculus fallacy, which is really hard to avoid. Really hard.

But there is another homunculus fallacy, less known: in the original homunculus fallacy, the homunculus is inside. In the external homunculus fallacy the homunculus is outside. The fallacy functions just as well.

Let’s see an example, proposed here.

In order to understand biological fly vision, a fly is glued in the center of a huge apparatus, so that the experimenter can control what the fly sees and at the same time measure the neural activity of the fly and the movements that the fly makes.

This is a huge technical achievement, along with the detailed charting of the fly brain, down to individual neurons.

So how does the fly's vision system work? There are two explanations:

  • the mathematical explanation of the human experimenter. The explanation uses Euclidean geometry and Newtonian physics. Some equations are solved.
  • the computational explanation of the fly. There is no place in the fly's brain for Euclidean geometry. Something different from what the experimenter tells us happens in the fly's brain. It is presumably a local computation with rather simple rules which is done by the fly's brain. Like the computation of the rock when it falls, but in order to reproduce it in the computational model (on a computer) we probably need a huge computer.

You may notice that the experimenter's explanation introduces an external homunculus into the theory. The computational explanation is, at this stage, a hope.

So the external homunculus fallacy in biological vision is like the mathematical paradigm in science.

The new science, the 4th paradigm in the making, will attempt to pass beyond or to unify the previous two paradigms.

But how?

I don’t know yet, however there are clues. One of these clues is that the mathematical paradigm is based on the reasoning with invariants.

As I argued many times before, like here for ZSS, invariants don’t compute. Indeed, if we start from the hypothesis that everything is a computation, an invariant is something which does not change during the computation. We can use invariants in order to prove that two computations are the same, or that they are not the same. But we can’t compute with invariants, otherwise than separately as in the mathematical paradigm.

The mathematical paradigm is mathematical computation based on invariants. Invariants give equations which are left to be solved by human ingenuity, in order to obtain, hopefully, a shortcut computation which allows us to predict the evolution of the system without running the true computation.

The computational paradigm concentrates on the true computation and leads us to the realisation of computational irreducibility, namely that more often than not we have to rerun the system's computation in order to understand it.

Probably the 4th paradigm will help us understand how the two computations — mathematical and real — are related.

And moreover, computation is not what it used to be, in a sense. We understand now, as Stephen Wolfram writes, that we have somehow to make sense of a computational universe which is decentralized and local. Moreover, we have to discover ways to avoid the external homunculus fallacy, or the God's eye point of view, and we have to understand how to reason without global semantics.

UPDATE: (from the chorasimilarity channel on telegram)

[I used to be an admirer of the Erlangen Program and its extension to symplectic geometry and Hamiltonian mechanics. I only now realize that the thread which I followed over the last 10 years is an attack on the central dogma of groups, invariants and representations which is pervasive in physics. It was not intended.

Also, now I fully realize that I should assemble notes and publish the role of emergent algebras, or their differential calculus analogs, the dilation structures. That is because they might be a bridge between the two paradigms, showing that the group-invariants-representations dogma of the mathematical paradigm can be worked out as the artificial chemistry of Pure See, from the computational paradigm. I worked on the pieces, I know how they fit, and this could be pushed as far as desired, but until now I worked hard to be understood by the computational universe researchers. However, what is needed is a parallel effort, working in both paradigms. I understand only now why it is objectively hard to make this social effort work…]

Only now do I realize the scale of this attack. Put on one side, as a model ToE, the beautiful momentum map of a symmetry group, with particles as unitary irreps, the legacy of Souriau turned into industry by an army of pragmatic physicists. That would be the magical mathematical recipe of the universe. The cherished epitome of mathematical physics of the 20th century.

On the other side put the many small artificial universes like those of Wolfram, but also Church, Schönfinkel, Turing, Lafont, up to artificial chemistries.

The attack on the mathematical paradigm is to say that the universe can be described by the computational paradigm, embodied by one or any of the universal small universes. That even life at the chemical scale can be explored like this.

The attack is aggravated by the claim that even the mathematical basis of the worship of symmetry groups is describable by a particular chemistry of space, embodied by emergent algebras.

What a fight.

What a dirty attack on the most loved scientific discovery, that symmetry groups make invariants and their unitary irreducible representations make the elementary particles.

[But each step of the reconstruction of the mathematical paradigm inside the computational one seems accessible. Indeed, complex spaces and their extensions, symplectic manifolds, can be seen through their metric contact structure equivalents. There are intrinsic descriptions of those in terms of emergent algebras or dilation structures, which modify Pure See and the main rewrite, the shuffle, only a little. Then the intrinsic differential calculus enters into play in order to define the differential objects; also, for the mathematical paradigm only the commutative theory of Lie groups is needed (i.e. with algebras which come from dilation structures which are commutative, namely they respect the shuffle). Unitaries are just differentiable isometries; Hamiltonian systems are just smooth (intrinsically) volume-preserving flows, or in particular flows of unitaries, which turn out to have the peculiar property (as seen in the contactification of the symplectic manifold) that the flow lines have Hausdorff measure 2, with density proportional to the Hamiltonian. I have not worked out the full construction of the momentum map in the intrinsic framework; it looks to be a very interesting task. But in the end, if it works, we get a fully intrinsic description, only in terms of dilation structures, which itself could be easily (?) turned into an artificial chemistry a la Pure See. This would show that the mathematical paradigm is actually a collection of particular computations which sit naturally inside the computational paradigm.

An institute would be needed for this, or at least time and serenity on my part. But all the steps are already available; only the paradigm shift blocks this.]

I am meat

During an unexpected hospitalization and surgery I experienced, at the meat level, the organisation of a big hospital.

Findings:

  • it is a decentralized, lively organization
  • it works
  • it’s always about human interactions
  • all interactions are local
  • I am meat in this organisation

I was successfully processed and now I am home, well at least physically.

After I experienced with my meat what it means to be a small part of a functioning decentralized society, I am left wondering if this is really what I want.

To be immersed in a huge organisation like a food bite in the intestine of a dragon?

Or to be toddlered by a giant robot?

Hm.

Sci-Hub is 10 years old, India lawsuit restriction expired already

Alexandra Elbakyan exits the stalemate created by the India lawsuit. From twitter:

Today is Sci-Hub anniversary the project is 10 years old!

I’m going to publish 2,337,229 new articles to celebrate the date. They will be available on the website in a few hours (how about the lawsuit in India you may ask: our lawyers say that restriction is expired already)

E. coli transcriptional regulatory network and the Linux call graph

In the article

Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks,

Koon-Kiu Yan, Gang Fang, Nitin Bhardwaj, Roger P. Alexander, and Mark Gerstein

PNAS May 18, 2010 107 (20) 9186-9191; https://doi.org/10.1073/pnas.0914771107

the E. coli transcriptional regulatory network and the Linux call graph are compared.

Any model of molecular based life should be able to predict these differences.

Likewise, any model of decentralized computing which is based on the same hypotheses as a model of life should differ from the Linux call graph in the same qualitative ways as this E. coli transcriptional regulatory network does.

Here are two figures from the article which I consider highly relevant.

The first one [link to source] has the following description:

“The hierarchical layout of the E. coli transcriptional regulatory network and the Linux call graph. (Left) The transcriptional regulatory network of E. coli. (Right) The call graph of the Linux Kernel. Nodes are classified into three categories on the basis of their location in the hierarchy: master regulators (nodes with zero in-degree, Yellow), workhorses (nodes with zero out-degree, Green), and middle managers (nodes with nonzero in- and out-degree, Purple). Persistent genes and persistent functions (as defined in the main text) are shown in a larger size. The majority of persistent genes are located at the workhorse level, but persistent functions are underrepresented in the workhorse level. For easy visualization of the Linux call graph, we sampled 10% of the nodes for display. Under the sampling, the relative portion of nodes in the three levels and the ratio between persistent and nonpersistent nodes are preserved compared to the original network. The entire E. coli transcriptional regulatory network is displayed.”
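The three-level classification in the caption is easy to state in code. A minimal sketch (my code and names, not the paper's pipeline):

```javascript
// Classify the nodes of a directed graph, given as [from, to] edge pairs,
// into the three levels used in the paper.
function classifyNodes(edges) {
  const inDeg = new Map(), outDeg = new Map();
  for (const [from, to] of edges) {
    outDeg.set(from, (outDeg.get(from) || 0) + 1);
    inDeg.set(to, (inDeg.get(to) || 0) + 1);
    if (!inDeg.has(from)) inDeg.set(from, 0);
    if (!outDeg.has(to)) outDeg.set(to, 0);
  }
  const levels = { masterRegulators: [], middleManagers: [], workhorses: [] };
  for (const node of inDeg.keys()) {
    const i = inDeg.get(node), o = outDeg.get(node) || 0;
    if (i === 0) levels.masterRegulators.push(node);   // zero in-degree
    else if (o === 0) levels.workhorses.push(node);    // zero out-degree
    else levels.middleManagers.push(node);             // nonzero in- and out-degree
  }
  return levels;
}

// Tiny usage example, with a made-up toy network:
console.log(classifyNodes([["crp", "araC"], ["araC", "araB"], ["crp", "lacZ"]]));
// { masterRegulators: ["crp"], middleManagers: ["araC"], workhorses: ["araB", "lacZ"] }
```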

What are “persistent functions” and “persistent genes”?

“In the Linux kernel […] [persistent functions are] defined as those that exist in every version of software development. Persistent functions in software systems are analogous to persistent genes in biological systems, which are genes that are consistently present in a large number of genomes.”

The article says that while most of the persistent genes are down in the hierarchy, at the “workhorse” level, their apparently analogous persistent functions are spread across the Linux kernel at all levels, but mostly towards the top.

The second figure is about modularity. [link to source]

In the graphs, they look for the average overlap between the nodes downstream of two master nodes, and also for the average node reuse.

The problem as I understand it is not why the Linux graph is as it is, because it is obvious: it is written by programmers, who value semantics, modularity and reuse.

The problem is why the other graph is so different. A quantitative answer is needed from any computational model of biological life. Evolutionary explanations are like a proof by contradiction. Here contradiction would mean “not observed now”. For persistent genes the evolutionary explanation is (from the article):

“The idea of persistence is closely related to the rate of evolution. In biological systems, the fundamental components of life exist in every genome independently of environmental conditions. These persistent genes, say, ribosomal proteins and dnaA, are under high selective pressure and evolve very slowly.”

which seems to say that persistent genes are observable now because they evolve very slowly, due to high selective pressure (i.e. if the persistent genes were not very important for life, then random evolution would wash them out). This is a proof by contradiction; it is not constructive. Constructive proofs in well defined models of life would be very valuable, in my opinion.

Summer numerics: permutations cube

Pure See nodes are in correspondence with the 6 permutations of 3 elements. With the two extra nodes (which should not exist, btw), i.e. a literal fanin and fanout, that makes 8 nodes, which can be coded in a cube. Not surprising, but fun. Summer fun.

If we look at the cube with vertex coordinates 0 or 1, there are two vertices

000

111

which we reserve for later, and there remain 6 other vertices with the property that they contain both 0 and 1. Three of them contain two 0s and one 1:

001

010

100

and the other three contain two 1s and one 0:

011

101

110

Here is the correspondence with the permutations of 3 elements. Denote by e, a, b the 3 elements, as I do in Pure See. Among the 6 permutations, there are 3 with positive sign and 3 with negative sign. The 3 with positive sign correspond to nodes whose port 3 is out (the other two ports are in), and the 3 with negative sign correspond to nodes whose port 1 is in (the other two ports are out).

Therefore we may see the sign of the permutation encoded by the presence of two 0s (for positive permutations) or the presence of two 1s (for negative permutations).

Also, in each of the 6 strings 001, 010, 100, 011, 101, 110, there is always exactly one letter which is not like the others. We take the position of this letter to encode which of the 3 elements is in port 3 (for positive permutations) or in port 1 (for negative permutations).

These two rules make the correspondence work. Indeed, once we know the sign of the permutation and the position of one element, there is a unique way to build the permutation of the 3 elements.

The result is therefore: positive permutations give the nodes

001 – D e a b – (123)

010 – A b e a – (312)

100 – FI a b e – (231)

and negative permutations give the nodes

011 – FOX b a e – (321)

101 – FOE e b a – (132)

110 – L a e b – (213)
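As a sanity check, here is the table above as a small lookup, together with the stated sign rule (two 0s: positive, two 1s: negative). Node names, port lists and permutations are copied verbatim from the list; only the code around them is mine:

```javascript
const cube = {
  "001": { node: "D",   ports: ["e", "a", "b"], perm: "(123)" },
  "010": { node: "A",   ports: ["b", "e", "a"], perm: "(312)" },
  "100": { node: "FI",  ports: ["a", "b", "e"], perm: "(231)" },
  "011": { node: "FOX", ports: ["b", "a", "e"], perm: "(321)" },
  "101": { node: "FOE", ports: ["e", "b", "a"], perm: "(132)" },
  "110": { node: "L",   ports: ["a", "e", "b"], perm: "(213)" },
};

// Sign of the permutation, read off the bit string:
// one 1 (two 0s) means positive, one 0 (two 1s) means negative.
function sign(bits) {
  const ones = [...bits].filter(c => c === "1").length;
  return ones === 1 ? "+" : "-";
}

for (const [bits, { node, perm }] of Object.entries(cube)) {
  console.log(bits, sign(bits), node, perm);
}
```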

In the picture, positive permutation vertices are green and negative permutation vertices are red.

Phil Agre’s orbiculus

With great joy I discovered the writings of Phil Agre (via a HN post). Rather quickly I zoomed in on his Writing and Representation, and from there I learned about the “orbiculus”.

(Of course, just at the beginning he urges the reader: “Please do not quote from this version, which probably differs in small ways from the version that appears in print.” Just like in his story about photocopier supplies, I ignored this.)

Please read his writings!

What is an “orbiculus”? (boldfaced by me)

In the old days, philosophers accused one another of believing in someone called a homunculus — from Latin, roughly “little person”. For example, one philosopher’s account of perception might involve the mental construction of an entity that “resembled” the thing-perceived. Another philosopher would object that this entity did nothing to explain perception since it required a mental person, the homunculus, to look at it. Computational ideas appeal to these philosophers because they can imagine “discharging” the homunculus by, for example, decomposing it into a hierarchy of ever-dumber subsystems (Dennett 1978: 124).

But the argument about homunculi distracts from a deeper issue. If the homunculus repeats in miniature certain acts of its host, where does it conduct these acts? The little person lives in a little world — the host’s surroundings reconstructed in his or her head. This little world deserves a Latin word of its own. Let us call it the orbiculus. […]

AI is full of orbiculi. A “world model” is precisely an orbiculus; it’s a model of the world inside your head. Or consider the slogan of vision as “inverse optics”: visual processing takes a retinal image and reconstructs the world that produced it (Hurlbert and Poggio 1988). You’ll also find an orbiculus almost anywhere you see an AI person talk about “reasoning about X”. This X might be solid objects, time-extended processes, problem-solving situations, communicative interactions, or any of a hundred other things. “Reasoning about” X suggests a purely internal cognitive process, as opposed to more active phrases like “using” or “participating in” X. AI research on “reasoning about X” requires representations of X. These representations need to encode all the salient details of X so that computational processes can efficiently recover and manipulate them. In practice, the algorithms performing these abstract manipulations tend to require a choice between restrictive assumptions and computational intractability (see Brachman and Levesque 1984, Hopcroft and Krafft 1987).

Agre's orbiculus is the same as the scenic space in a Cartesian theater!

Here’s another relevant section:

Within the technologically informed human sciences, cognition is almost universally understood to involve the mental manipulation of assemblages of symbols called representations. These representations represent the individual’s world — they are the orbiculus. The vast majority of this research assumes symbolic representations to have certain properties. They are:

object-like (neither events nor processes),

passive (not possessing any sort of agency themselves),

static (not apt to undergo any reconfiguration, decay, or effacement, except through an outside process or a destructive act of some agent),

structured (composed of discrete, indivisible elements whose arrangement is significant in some fashion),

visible (can be inspected without thereby being modified), and

portable (capable of being transported to anyone or anything that might use them without thereby being altered or degraded).

Although the cognitivist understands symbolic representations as abstract mental entities, all of these properties are shared by written texts (Latour 1986). Words like “structured”, “inspected”, “modified”, “transported”, and “altered” are metaphors that liken abstractions inside of computers to physical materials such as paper. Observe also that most of these properties are deficient or absent for spoken utterances, which evaporate as quickly as they are issued (Derrida 1976: 20) and are only decomposed into discrete elements through complex effort. Thus we can speak of a writing metaphor for representation.

In short, it took me years to arrive, much later, at the understanding that this is a consequence of the Wittgenstein joke. See more in Wittgenstein and the Rhino.

What more is hidden and useful in the writings of Phil Agre? I look forward to discovering it!

Some conclusions of the Hi experiment

Some days ago I started the Hi experiment. Here are some things I observed.

MITM is the default. By this I mean that when two persons communicate via one of the principal mail providers, the provider acts by default as a MITM. There is no guarantee that the sender receives a notification when a message is blocked. There exist spam messages which consistently arrive at the receiver, mostly in the spam folder. But there are also messages which are not spam (they are replies, for example) which do not even arrive as spam at the receiver. They are blocked by the provider, and only rarely is there a notification to the sender that the message was blocked.

Lack of predictability. A message may be blocked even if the provider indicates to the sender that it was sent. Some messages are blocked and some similar messages appear at the receiver after a delay on the order of hours. There is no predictability in what happens. This lack of predictability has nothing to do with the fact that mail is decentralized, because when we use a big mail provider decentralization is only internal to the provider. Something else happens, something creepy when you have the chance of a side channel to observe it.

Phenomena not restricted to one provider. It looks like there is a common ground truth across all providers. If you tell me that this is related to spam blocking, then I don't agree. As I said, the system allows obvious spam to be sent for months, every day, and to arrive in the spam folder, while at the same time non-spam, casual communication between varied senders and receivers, is blocked or delayed far more than a purely automatic big-provider infrastructure would do it.

Phenomena not restricted to mail. With a major browser, chrome or firefox, it is impossible to know what we are looking at, why some things do not load, or do not load quickly enough, or whether we see the same thing as another person. This phenomenon is correlated with the mail blocking, as if there is an overall system which MITMs everybody, some of the time, which makes it almost impossible to establish a side channel which can be trusted for verification.

Lack of spam. When mail is openly announced, it is just weird to receive no new spam.

Lack of reaction. People who participated, voluntarily, in the experiment were either overly cautious or not cautious at all. The cautious were therefore only mildly interesting, because they limited the amount of information; the other batch shared borderline but not scientifically very interesting information.

End conclusion. If you want to contact me and I don't reply, then assume your message didn't arrive. If I seem to behave disrespectfully towards you, assume I have no information about you, or that your information about me is distorted. Don't contact me exclusively via the professional mail (imar.ro). Engage with me directly if you can, via other channels, or via several unreliable channels simultaneously. Ask for confirmation, I don't mind. Preferably, if there is a public way to contact me, use it. Better yet, write an article to criticize or interest me; I'll contact you if you have a rigorous point.

The HI experiment

I need help to understand the filtering of my correspondence. If you want to participate in the following experiment, then I'd be very glad.

My name is marius buliga. Please do send me messages to firstname dot lastname dot … in the order of appearance: pm dot me, imar dot ro or gmail dot com. If I don't reply to your message then it means I have not received it. [update: or it was MITMed by your mail provider or another party between us, see below]

Put in the subject something credible to interest a mathematician or programmer, put in the content (exactly) one link to a page which describes something you did and you are proud of, or is likely to interest me.

If you have telegram then we may extend the experiment by sending me the same to at xorasimilarity.

I added this poll as another supplementary channel. Only the participation is relevant 🙂

Thank you for the participation in this experiment.

UPDATE: Thanks to the kind persons who participated until now. The experiment continues.

Until now, the results show that some mail is MITMed, in one or both directions. By MITM I mean the blocking of a message passing in one direction by a party between me and you. For example, a message like

“Our system has detected that this message is likely unsolicited mail. To reduce the amount of spam sent to Gmail, this message has been blocked.”

which I received after I replied to a message. It tells me that, in the opinion of G, there are three kinds of mail: OK mail, spam, and messages like this one, which are blocked before they even arrive in the receiver's spam folder. Mind that a reply can't be unsolicited, by definition.

This made me realize that G, or any other party in the middle between the two communicating human parties (looking at you, firefox, which still updates the browser without my permission), has the technical capacity to MITM. There should be a law making MITM illegal. Is there?

Until now I have received exactly 0 spam, only mails from people I had already messaged before, and no mail with a link inside.

2nd day: I received the first message with very interesting content, from a previously unknown person; the ice was broken.

3rd day: I trashed and then restored the post; the experiment allowed me to make some very interesting observations. Let's continue it a bit: what will happen? Thanks again to the participants, and welcome to future ones.

The poll does not work (is it an obsolete block in wp now?).

Derivative of a rewrite

At the end of The Rainbow Serpent, the Ouroboros, there is this passage:

“As concerns the chora, it is semantic, not real. The ouroboros makes the chora, as decoration.”

The two months I need to wash away the pandemic feeling from my mind have not yet passed. The following is not part of the new stuff. You can quicken me if you challenge me. Propose something new.

Until then, here it is.

Related to this, here are some commutative diagrams which indicate that the emergence of the beta rewrite in Pure See has the same nature.

First let's recall the definition of a derivative in emergent algebras. Take a function f defined on, and with values in, the same space X. The space is endowed with a dilation structure. I explained many times, for example here, the relation between the dilations and the various quasigroup operations they induce, which in turn can be represented the same way quandles are, i.e. via crossing diagrams.

The derivative of the function f is obtained by conjugation with dilations, followed by a passage to the limit, which yields the derivative, or tangent map, of f. In the next picture we see this as:

where the operation denoted by a green dot is the dilation based at x, applied to y, say, and the operation denoted by a red dot is the inverse dilation.
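In formula form, my rendering of the same construction (the green dot being the dilation based at x and the red dot its inverse):

```latex
Df(x)\, y \;=\; \lim_{\varepsilon \to 0}\;
  \delta^{f(x)}_{\varepsilon^{-1}}\, f\!\left( \delta^{x}_{\varepsilon}\, y \right)
```

That is: transport y towards x by a dilation, apply f, then blow the result back up at f(x); the limit, when it exists, is the tangent map.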

This diagram is commutative in every sense, even if some vertical arrows are not functions, but limits.

We can arrange this commutative diagram in a different but equivalent way, like this:

Formally this can be put into a graph rewriting form, where we use crossings:

Look at the vertical side on the left. It consists of an insertion of f straight into the middle of the Reidemeister 2-like pattern of crossings. We may think of this as a graphical rewrite.

Horizontally we pass to the limit in both the initial and the final patterns of this rewrite (usually called the left hand side and right hand side patterns of the rewrite, but in the explanation of this drawing it would be confusing to use these names).

The rewrite from the left vertical passes, in the limit, to the rewrite from the right vertical.

What about the rewrite from the right vertical? It is simply the rewrite where we insert over two edges, x and y, the tangent map

Tf(x,y) = (f(x), Df(x)y)

where Df(x)y is the derivative of f at x, applied to the direction y, as seen from x.

Should we be in a Riemannian manifold, with dilations given by the geodesic exponentials lowered from the tangent space to the manifold, then the derivative of f at x applied to vector Y in the tangent space at x would be a vector in the tangent space at f(x), denoted say by df(x) Y (where I put a small “d” just to make a notational difference from the emergent algebra “D” derivative). Then

Df(x)(exp_x Y) = exp_{f(x)}(df(x) Y)

But we don't need to be restricted to the particular Riemannian case, as also explained many times. The definition makes sense in many other cases, leading to a generally non-commutative differential calculus.

Only when the dilations satisfy an algebraic condition called SHUFFLE do we fall back to the commutative case, and Pure See is built on SHUFFLE expressed as a graph rewrite. But as regards the passage to the limit, it is a different thing from the particular choice of graph rewriting formalism.

We can see the SHUFFLE rewrite as a function from graphs to graphs. Then the following diagram taken from the Pure See description

is formally the same as the previous one, modulo the detail that here we have mu instead of epsilon, and it goes to infinity instead of 0.

In this formal sense the beta rewrite is the derivative of the SHUFFLE as a function.

It would be interesting to see how much of this holds rigorously, because there is structure missing on the left vertical. Or maybe it is there from the beginning?

Too easy to compute

I'm looking at this 2-year-old page where you can search for a graph quine among more than 9 billion possible graphs, which are generated randomly [js enabled is needed, or just go to the github repo and clone it]. You may search for chemlambda or Interaction Combinators quines…

There are newer variants and possibilities to play with, but this is not in the scope of this post.

What jumps out at me, after a pause in playing with these gadgets, is: it is too easy to generate a graph which grows indefinitely.

Here is why this is a problem and what the ramifications are.

Chemlambda, dirIC, Interaction Combinators, chemSKI are just examples of very, very simple artificial chemistries. Would they be possible in real chemistry? Judging by how simple the chemical reactions are, there should be extremely common real chemical reactions which are compatible with them.

Let's take that as a hypothesis. How would the universe look, then?

If it should be extremely easy to compute chemically in this way, then life would not be rare, but relatively easy to achieve.

Too easy!

But not only would life be too easy. In particular, such systems are able to do universal computation. So let's take a weaker hypothesis: that the universe is able to do universal computation with simple chemical reactions.

Then everything, soon at the scale of the universe, would turn into Ackermann goo. Maybe it is.

Or maybe not.
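For readers who have not met the allusion: “Ackermann goo” refers to graphs which grow without bound while computing, in the way the Ackermann function explodes. As a reminder of that kind of growth (standard definition, my code):

```javascript
// The Ackermann function: the textbook example of explosive growth.
function ack(m, n) {
  if (m === 0) return n + 1;
  if (n === 0) return ack(m - 1, 1);
  return ack(m - 1, ack(m, n - 1));
}

console.log(ack(3, 3)); // 61; ack(4, 2) already has 19729 decimal digits
```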

I think it is very unlikely that it is so easy to compute in the real universe, with only very small, local chemical reactions.

The hypothesis we made is most likely false. If it is false, then why? Because we also know that in nature, if something is possible, then it will happen. If the hypothesis is true, then what is the supplementary mechanism which inhibits things like the Ackermann goo?

Some possible inhibitions are:

  • the simple chemical reactions which lead to computations are only a small part of the possible chemical reactions, therefore the vast possibilities of chemical evolution inhibit using only these particular reactions,
  • that is what death is for: large molecules are less stable, or they are broken by other chemical mechanisms,
  • shuffle, which is conservative, or something analogous, like the S A B C -> (AC)(BC) reaction in chemSKI, is common, but the analogous “emergent” reactions are rare, so that it is possible, but rather difficult, to compute according to the recipe for a long enough time. If almost everything is a shuffle and only rarely is there (an equivalent of) a passage to the limit, then we would see something like beta or DIST only very rarely. (However, this does not explain why other embodiments of the simple beta and DIST do not lead to exuberant growth most of the time.)
  • in the conservative version of the rewrites, for example the one which uses tokens, exuberant growth is quickly inhibited by the lack of tokens (money).

I don't find any of these possibilities very likely. There is something in nature which inhibits computation. The universe may be a non-halting computation, but why are there no small non-halting computations?

Open Science, copyright, communism and capitalism

Open Science works, in the sense that it abundantly creates new science. But then a predatory corporation scales the new ideas and takes all the money and credit. The creators ask themselves: why should they produce free work? So that later some rich dumb ass tells them that ideas are cheap and scaling is everything?

Likewise, communism works, in the sense that well intended people work for the good of the community. There is a satisfaction in the egalitarianism, at least for creators. But then many people realize that they can have a free ride on the backs of those who work. The creators ask themselves: why should they produce new stuff? So that later some politically well oriented dumb ass tells them that ideas are as cheap as their lives and politics is everything?

In the first case the copyright system is the weapon of rich dumb asses against the creators. They can steal with the law on their side. All the latest and greatest heaps of money are made from open creations scaled, then locked by copyright.

In the second case, political correctness is the weapon of propagandists against the creators. Too much originality is difficult to contain when it may spread into the big mass of free riders.

So I think that open science (or code) is a new form of communism, with copyright which channels the wealth away from the creators.

Moreover, we have now super predators who are both rich dumb asses and politically correct propagandists.

For the creators, until a more subtle system appears, there is this question: we know that we can beat any corporation and any propagandist when it is about the creation of new ideas, but why should we do it if our work is stolen and then protected by copyright, or if we are silenced for not being politically correct?

As an extension of this analogy, probably very soon, within a few years, this capitalism (which is identical to Russian-style communism) will have the same fate as late Russian-style communism. Because without the salt and pepper of the creators you can scale only BS. How much BS can you still eat?

I remember that just before the anti-communist revolutions in eastern Europe, there were so many naysayers saying that nah, it's impossible, the system is too strong to fall.

Let’s be optimistic. The same is about to happen now.

Meanwhile, do you have any idea how to create better than any corporation and at the same time make it so they can't profit from your work more than you do?

Probably we just have to push a little more and be fast enough. I'm not sure, but probably being public can be turned into an advantage, if one adapts faster than bureaucratic whales can move. Humor helps. Just watch them, aren't they funny? I always thought that in Atlas Shrugged, the Atlas is not the rich dumb ass who retires; the Atlas is the creator.

The Rainbow Serpent, the Ouroboros

I had the chance to see the Rainbow Serpent.

It is as big as the world. It is life, or it works like life. I experienced it more like the trunk of a huge tree, with the horizon as the bark, the sea rising through it like a fluid in the capillary vessels, and all human-made artifacts like the cells of the tree. All in a huge, symmetric and lifeless space.

Where there is life, there is no symmetry. Where there is space, there is no life until the symmetry of the space is broken. Free fall according to gravity is symmetric. Here comes life and makes a pocket. Free fall is turned into the guarantee that the pocket will hold still whatever you put into it.

From Egyptians, the Greeks inherited the Ouroboros.

It is the same huge creature which is life. It is the boundary of the sea.

In Hamiltonian mechanics we don’t experience the Rainbow Serpent. It appears though if we allow random forces and momenta, with a probability given by the shape of the accessible space (here, section 2, unilateral contact example).

I think all the properties of life (like the ability to self-reproduce, metabolism and death) emerge from the more fundamental property of lack of symmetry. It is hard to understand, but it is worth trying.

As concerns the chora, it is semantic, not real. The ouroboros makes the chora, as decoration.

Misleading content algorithm is snake oil

Per HN, Google bans distribution of misleading content. They claim that they have an algorithm for the classification of misleading content.

This is not an opinion. Detection of misleading content by an algorithm is equivalent to an algorithm for the halting problem. For it is misleading to claim that a Turing machine with a given input does not halt when in fact it does. If Google's algorithm existed, then it would in particular detect such misleading statements. We know that there is no such algorithm, therefore Google lies.
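To make the reduction explicit, a hedged sketch (detectMisleading is the hypothetical classifier; nothing here is a real API):

```javascript
// If a correct misleading-content detector existed, it would decide halting.
function makeHaltingDecider(detectMisleading) {
  return function halts(machineDescription, input) {
    const claim = `The Turing machine ${machineDescription} ` +
                  `does not halt on input ${input}`;
    // The claim is misleading exactly when the machine does halt,
    // so a correct detector decides the halting problem: impossible.
    return detectMisleading(claim);
  };
}
```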

Somehow this is not surprising. Google never respected science or mathematics, even if they work hard to give the misleading image that they do. Give me one example where they did, proportional to their economic scale. They are very easily defeated by single persons, and when they spend money on research it is usually wasteful compared with personal initiatives which are not financially supported. I am thinking about the comparison between Google Scholar and Sci-Hub, as an example. UPDATE: just these days see, as another example, Odd release in conjunction with RoseTTAFold gaining traction. [archived version].

To my knowledge, there is no new scientific result involving Google which was not studied before by an academic or by an open collaboration.

They can only scale; they are not capable of inventing. They can supervise; they can't create. They collect information created by others. They want to “organize”. They were favored at the beginning, when it was a good idea (for a supervising frame of mind) to scan the whole web. They were always advantaged by the mother state. They attract nerds, but they have to buy creative people.

It's clear what they are. That they lie to such a degree as to say that they have an algorithm for the halting problem is a classically ridiculous aspect of capitalism. You know, like capitalists sell BS for money and communists make gulags.

Once, such merchants of lies were selling snake oil which could cure any disease.

At some point such practices were considered illegal. I don't have expectations that they will be legally sanctioned. I don't think that they are really so important. Presently they are a factor of inhibition of science (not the only one), and historically they are already not viable. Neither them, nor the communist variants of the surveillance state. But you know, the time scale of history has decades as units.

No deal in science over researchers heads!

[also available here]

Note. This is adapted from a part of the post Researcher Behavior Access Controls at a Library Proxy Server are Not Okay, because I think the idea is more important than the context of that post.

The trend in science publishing is to make deals over the heads of researchers.

Deals are made between publishers and academic managers, or publishers and librarians, or IT departments and librarians, and so on. If you look at BOAI, the initiator of the gold (into the pockets of publishers) open access style, it was librarians with publishers. Decades of advances were lost because the fight ignored researchers' needs.

What do researchers need? Something arXiv like with a Sci-Hub like interface.

Tough luck: arXiv is not publishing (per BOAI) and Sci-Hub is illegal.

What did researchers get? Gold Open Access. This is the idea that, since publishers can't force readers to pay, they force authors to pay for their own creation.

We, researchers, understood that librarians were scared by publishers that their important role would decay. We understood that managers want to turn science into a business, so they apply to individual researchers the criteria which were designed for journals.

But it is time to understand that researchers have to be at the core of any deal, because without researchers there is no need for librarians, university IT administrators, managers or scientific publishers.

To make deals over the researchers' heads is not okay.

To be clear: librarians, IT departments and managers, please at least return the respect you received from the researchers. Please stop treating researchers as cows which have to be herded to the publishers' needs. This is not your job.

Don’t destroy science. It is not a business. Let us work.

Distill burnout shows Open Science publication is hard

UPDATE: Read also the intro to this post I did on the telegram channel. A better post title would be: Distill.pub burnout shows why publishing research as a special activity is obsolete. Don't publish, give 🙂

__

From Distill Hiatus post:

“Over the past five years, Distill has supported authors in publishing artifacts that push beyond the traditional expectations of scientific papers. […]

But over this time, the editorial team has become less certain whether it makes sense to run Distill as a journal, rather than encourage authors to self-publish. Running Distill as a journal creates a great deal of structural friction, making it hard for us to focus on the aspects of scientific publishing we’re most excited about. Distill is volunteer run and these frictions have caused our team to struggle with burnout.”

Just look at the people behind Distill. A combination of Google with Mike Bostock (of d3.js fame) aka ObservableHQ.

Still, it seems very hard. I know it first hand, because I started it before them. The article Molecular computers was written before Distill, by this mathematician, not a programmer. Mind that Github almost broke it by switching from http to https. Indeed, for the animations I used iframes, so if you access the article via https then the animations will not be visible (because the iframe contains the animation link as http; every link is from Github though, so why are those http links for animations not trusted? no reason at all). (Update: I remember now that Google lost the arXiv version of the article, at some point…)
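For context, the breakage is the browsers' mixed-content policy: a page served over https is not allowed to embed active http content, so such iframes are silently blocked. An illustrative client-side repair, assuming the same resources are reachable over https:

```javascript
// Rewrite every iframe src from http to https (sketch, run in the page).
for (const frame of document.querySelectorAll("iframe")) {
  frame.src = frame.src.replace(/^http:\/\//, "https://");
}
```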

The idea of Open Science is to replace peer review by validation. It was argued that Open Science should be rwx science. Then peer review, which is essentially an authority argument, will naturally be replaced by a sort of validation. Here validation does not mean that the research finding is formally checked, nor does it mean a sort of validation mark given because it was checked to be reproducible. It is simpler and more powerful. If you, the researcher, give all the means you used in your research, then it is just up to the reader to produce more work based on it. Derivative works? Reviews? Edits? Comments? Proof checking? Reproductions? Any of these are others' contributions which use your work and thus grow and further process your ideas. Just as you did in your research.

The “publication”, or “article”, is only the story of the research, not the research. An article which gives all (possible) means for validation is the research.

Another advantage is that you, the researcher, don't have to wait for your academic manager to realize we're in the 21st century, nor for your colleagues to massively move to better scientific practices. You don't have to sacrifice the accessibility of all your research just because your boomer boss, or your politically well oriented colleagues, tell you to “publish or perish”. If a project is too ambitious for the bureaucracy, then you can release it as an Open Science article. (Mind though: this holds if it is your project; if not, then trust in a collaboration between people is just as important as the science, so I don't think it is good to force opinions onto others.) If others want to use square wheels, then your round-wheels cart will beat them in the long-term evolution game.

Indeed, these ideas are certainly correct, but it is very very hard to live by them.

Why?

Because publishing is not the right frame of mind.

As it is now, the situation is like preaching that we are all trees with beautiful flowers and tasty fruits.

Other people don't know about this because they don't pass near us, the trees.

Like trees, we are completely at the mercy of our neighbourhood. We are imprisoned by Google.

Another problem is that it is very hard to process such a high density of information, compared with a classical article. The reader either has to cope with drinking from the hose or does not get it at all. OK, that is science, but on the other hand it is very hard to give this information in a structured way.

The structure eats the content. If you look at the source of the Distill Hiatus article, yes, the article is at line 1012.

In that source, look for “Copyright”. There are 7 Apache licenses there. Not one of them is a one-liner. This shows the frame of mind where structure is more important than content and where copyrights are more important than structure.

Like bureaucracy, which is good for scaling but soul-crushing for creation, here the structure eats the content. And moreover, copyright eats both the structure and the content.

Compared with it, the source of the beautiful Distill article Growing Neural Cellular Automata is more humanly structured, because the article text is not a one-liner in a sea of boilerplate. But do you want to play with the scripts? Tough luck: just go to a Notebook, which is Google-dependent in so many ways. This is strongly against the scientific method.

It is therefore very hard not to burn out, because in this world, as it is now, to do science in the scientific way demands building the world. Repeatedly. Objectively. This is, I think, the source of the burnout. It is very hard to attempt another pair of contradictory things: to discuss and at the same time not to discuss.

My first solution is better in principle, because of the no-dependencies choice and because of the give-everything choice. It is not a solution for publication, which is an obsolete thing. I'm proud that I started before Distill and that I am still alive after their burnout hiatus. So something is possible.

Still, my latest version is dependent on a corporation: Github.

Now I am spread between Github, Telegram, Figshare and WordPress.

I'm thinking about antennas; I don't know what this means, yet. I'll happily take a creation and management task (I don't know, you who sit on piles of coins, have you thought about an Open Research Institute, or a remake of an Invisible College with 21st century power?), or collaboration, or teaching tasks.

Perhaps there is also a problem with the society in which such initiatives try to survive. Look, Google supports such efforts with one hand, and guts them (unintentionally, just a small ant flattened by a very big ass, as they say) with the other. Maybe this society, which tries so hard to go down as fast as possible, is no longer science friendly. Maybe other societies, which have problems but also the huge asset of optimism, are friendlier. Somewhere there should be an interesting sea shore, an interesting border between old and new, somehow still protected from uniformization and at the same time open enough to attract variety.
