Strange Sci-Hub India case opinion about Gold OA as a legal alternative

The source of the opinion is this. The Sci-Hub India case is this, archived.

I quote first from the conclusion of the opinion.

Concluding remarks

The odds are stacked up against these self-confessed piracy repositories and there is a significant chance that the Delhi High Court might issue a dynamic injunction against them. From a legal standpoint, with Sci-Hub being a self-proclaimed piracy website, it may not be right to say that it should be permitted to function. This would set a precedent and provide impetus for other piracy websites to provide unauthorized and illegal access to copyrighted material.

The responsibility of making quality research material accessible and affordable should lie with the Government and not any rogue website (especially when most research that is done in a country like India is through public funds). Therefore, we believe that Sci-Hub and other such websites should be banned. At the same time, the Government of India should follow the footsteps of the countries that have signed national open access deals with publishers in order to help local researchers. The European Commission and the European Research Council, for instance, have launched an initiative called “cOAlition S” which aims to make full and immediate open access to research publications a reality. It is focused on “Plan S”, which mandates that research funders would have to ensure that research publications generated through grants allocated by them are openly accessible and not monetized in any way. “Project Deal”, spearheaded by the German Reactors’ Conference [sic] (on behalf of the Alliance of German Science Organizations), is tasked with negotiating open access deals with large commercial publishers. They have succeeded in signing an open access agreement with Springer Nature (a large commercial publisher). Similar plans have also been formulated by other countries like Finland and Netherlands.

The Indian Government has already unveiled an ambitious “One Nation, One Subscription” policy, through which it proposes to buy a bulk subscription of all important scientific journals in the world and provide free access to them to everyone in India. This could act as a permanent solution to the problem of exorbitant prices and keeping both the parties, the academic community and the publishers, happy. However, for the Government to be successful in doing so, it must include the academic community in the decision-making process and must act in an expedient and effective manner.”

What I understand: Gold OA is pitted against Sci-Hub. This is a wrong way to see the situation.

On one side, Gold OA is a trick in favour of the publishers. Because many scientists choose to make their research available (via arXiv or other repositories), the trick consists in classifying this research sharing practice as “archiving”, aka Green OA.

On the other side, because the readers can’t be charged, the publishers’ propagandists invented Gold OA. The name “gold” betrays their true intentions. Gold OA is classified as “publishing” and the Gold OA publishers demand a tax from the researchers: article processing charges, or APC. This tax is ridiculously high, thus the publishers can still make money from a service nobody needs any longer.

How can this trick stand?

Because academic managers support it. They work hand in hand with the publishers and they are always protected. Every time somebody blames the publishers for their greed, the role of the academic managers, who buy the publishers’ product, is ignored.

Academic managers support publishers with public money, most of the time, and in return publishers give power and prestige to the academic managers.

Sci-Hub is not the enemy of OA.

Whatever solution is politically favored, it can apply only to new articles.

Here comes Sci-Hub, which solves in one stroke the problem of almost all published articles, for free.

Where the activists of what turned out to be Gold OA spent decades, with high APC as the only result, Sci-Hub solved the problem technically, far better than any legacy publisher.

It is illegal, according to some court rulings and most of all according to the hearsay of commenters. That is because it infringes the copyright of the publishers. Publishers hold this copyright because academic managers have forced researchers, for many years, to publish under these conditions, meaning that the publishers get the article with the copyright transferred from the researcher.

Slavery was legal, should we enforce it today?

Coming back to this opinion, it is misleading to present Gold OA as a legal alternative to Sci-Hub. Gold OA is legal and Sci-Hub may not be legal, but Gold OA serves to preserve the gains of the publishers by taxing the creators, while Sci-Hub makes old and new published articles free, available only one click away.

Shuffle, mol, chora and Timaeus

In Substitution is objective (II), I proposed the following form of the SHUFFLE schema of rewrites, in combination with the “chora”:

shuffle (
in c chora (
from origin see d as u;
from origin see e as v;
) of (
see origin from u as v;
);
) as (
in c chora (
from origin see origin as origin;
) of (
see origin from origin as origin;
see origin from d as e;
);
);

Relevant notes from that post:

  1. The keywords “chora” and “of” admit mols as variables, with the syntax: in a chora (b) of (c); where “a” is a variable which does not appear in the mols (b) or (c). Semantics:
    • “chora” passes to the limit with the scale parameters in (b),
    • “of” keeps the scale parameter of (c) as is in the mol where the “chora” is,
    • “in” signals that “a” plays the role of “origin” in mols (b), (c)
  2. Clarify purpose of the “shuffle” keyword. Maybe the whole SHUFFLE
    https://mbuliga.github.io/quinegraphs/puresee.html#ReductionOfTrigrams
    should be rewritten with respect to the use of “chora” … “of”. Syntax: shuffle (a) as (b); where (a) is the particular instance of the LHS of the shuffle
    and (b) is the particular instance of the RHS of the shuffle.

It is clear then that the SHUFFLE schema can be put in the form

shuffle (
ε (
from a see b as e;
from u see w as c;
)
μ (
from e see c as d;
)
) as (
ε (
from v see p as d;
)
μ (
from a see u as v;
from b see w as p;
)
);

and “chora” becomes clearer, as something between the state (i.e. the mol available) and the rewrite schema.

That is because “in a chora (b) of (c);” on one side bonds “a” to the keyword “origin” in mols “b” and “c” (a map-making thing). On the other side, looking at the general SHUFFLE schema, “chora” also passes the scalar “ε” to the limit ε → 0, as it should, since “chora” is an infinitesimal space.
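
For intuition, here is a sketch in the vector space model of emergent algebras (only an illustration, not the general case): the scalar ε parametrizes a dilation based at a point x,

 δ^x_ε (y) = x + ε (y − x)

and as ε → 0 every point y collapses to the basepoint x. “chora” performs exactly this passage to the limit, which is why it is an infinitesimal space.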

The SHUFFLE schema in this form exposes the fact that the present formulation of pure see is still in need of clarification, because of the double use of “from”, “see”, “as” as scalars and as keywords. See the related pure see mnemonics table.

Here is Timaeus 48e:

“This new beginning of our discussion of the universe requires a fuller division than the former; for then we made two classes, now a third must be revealed. The two sufficed for the former discussion: one, which we assumed, was a pattern intelligible and always the same [here the SHUFFLE rewrite schema]; and the second was only the imitation of the pattern, generated and visible [here the mol]. There is also a third kind which we did not distinguish at the time, conceiving that the two would be enough. But now the argument seems to require that we should set forth in words another kind, which is difficult of explanation and dimly seen. What nature are we to attribute to this new kind of being? We reply, that it is the receptacle, and in a manner the nurse, of all generation [i.e the chora].”

Personal note: ten years ago, the first post of this blog was Computing with space, which has a link to this paragraph. It was a hard road 🙂 and now all the pieces are set, ready to assemble. But where are my two months of roaming free, to shake off the mental load of the pandemic?

Against copyright, for authorship

Only in the eyes of its beneficiaries are copyright and IP unlike slavery or other outdated legal concepts.

As an author I can see that the root of all wrongs concerning copyright is that it can be sold or transferred.

The solution is simple: destroy copyright, replace it with authorship, which cannot be sold or transferred.

No less than this will end the big problems created by copyright.

It is always people or corporations who are not the authors of the creation protected by copyright who complain about copyright infringement. They are producers, in music or film; they are pharma or other corporations who scale (but do not invent)… Scaling is a thing which is independent of copyright, so why do they try to protect their huge gains by appeal to copyright? Is the scaling capacity not enough?

It is also said that copyright is just a form of capitalist greed. Yet the enforcement of copyright is clearly against the narrative of capitalism. If companies A and B try to scale the same idea (previously protected by copyright), which company will win? The one which scales better, not the one which has better lawyers, or the one which is a front for a powerful government.

The proposal to replace copyright with authorship has, alone, some small benefits. Indeed, it would have the effect that if you renounce copyright in favour of the publisher then you are no longer the author! Therefore, in the fake academic world where publish is perish, we would get rid of a majority of professors who made their careers by enslaving their students to publishers. Because these professors would be the authors of almost nothing…

As much as I would like to see academia cleansed and reborn from its ashes, it is far better to also make authorship something which can’t be sold or transferred.

Application 1: Covid comes, some researchers invent a Covid vaccine, Pfizer scales the vaccine production, but of course anybody who can produce the vaccine should be welcomed. At the same time the authors of the vaccine, not Pfizer, should be praised for the invention. Pfizer should be praised for the scaling of the production, or if not Pfizer, whoever does it best. Contrary to the Atlas Unchained BS, pfizers appear everywhere (if they get a chance) while covid vaccine inventors are scarce. Everybody gets the chance to be vaccinated, quickly.

Application 2: Sci-hub comes and liberates the researchers from the control of their evil opportunist managers. The researchers are praised as the authors and the publishers go to greener pastures, because they can’t scale the dissemination of science as well as Sci-hub. Because they can’t. Elbakyan is praised for her performance at scaling, not prosecuted.

Asemantic is the search for chora, not nihilism

Many people think that my rejection of semantics is a kind of nihilism. On the contrary, see my views that asemantic computing is the right frame for distributed computing! When I say that global semantics is the enemy of reality, it is because:

  • reality is a discussion, or a thing
  • and semantics always wants to substitute reality, the thing, with evidence, the object.

Therefore global semantics is the enemy of the discussion; it is canned reality, less than reality.

As a discussion needs a place to happen, the asemantic point of view is a necessary part of the search for place, for chora.

As this search is not ideologically driven, mathematics comes to the rescue. To try mathematical models for various aspects of this search, to accept the mathematical consequences of mathematical hypotheses, is the contrary of ideology.

Part of the search for chora is the passage from term rewrite systems, from logos, to graph rewrite systems, to khumeia. As argued in the asemantic computing draft, there are clear mathematical, not ideological, reasons for the liberation from semantics. But the goal is to substitute it with something more, outside of logos.

This something more has existed since the most ancient times. One of my sources of amazement is the cultural importance of space for Aboriginal Australians, see Garak, the universe by Gulumbu Yunupingu.

Besides Plato’s chora, you find traces in hermeticism, in Buddhism, everywhere. That is why, perhaps, the original chemlambda simulations (and their comments) have such a strange hallucinatory influence.

Because I can’t tell you “it is like this”, I can only say “your ideology is not enough” and then show examples almost devoid of the constraints of the logos.

So there is a world, real, outside semantics and I think that we can explore this world in a scientific way.

Indexed but excluded

I just learned that google indexed but excluded, without any explanation and without any recourse, about a third of the posts on this blog. This means, as I noticed, that google does not show these pages in the search results.

There is no technical reason, because among the censored pages one can find both new and old ones.

That is why you can use the internal search window, once you are already here. Many pages among those excluded are about google, or about open access and open science, or about molecular computers.

There are also menus with posts by month.

There is also the recent telegram channel which you can see without telegram, or with telegram, which is rather nice on phones.

I don’t endorse any of the means I use to communicate. It is your choice; it would be nice, though, if there were means for those who want to progress to do so free from corporate horror.

Pure see and the topological point of view

Appeared first at the chorasimilarity telegram channel.

I just want to bring to your attention where pure see is going, if you were intrigued by the Substitution is objective (II) post.

In case you wonder why “substitution is objective”: this is in the sense used for “objective” in my posts.

As concerns pure see, I said that it is a formalism interesting enough, like linear logic; however, it is just a shadow of a larger, noncommutative one.

It is then natural to structure it in such a way that it holds as much as possible in the general case.

So where is it going? It has emergent algebra as a model, but there are two other, different models. The “commutativity” name comes from an emergent algebra characterization in the presence of the algebraic version of SHUFFLE, the only rewrite schema of pure see.

In the more general case everything fits into a map-making formalism, which has algebraic and topological parts, with the topological part usually left aside until the whole range of the algebraic side is clear.

But it turns out that the mathematically more profound way is to start from the topological side, which will make clear why the algebraic side is the way it is.

And so new algebraic structure appears, like the dual side of map making hinted at in the linked chorasimilarity post.

On a personal note, the complete math was done on paper while I was more than worried about the bout of Bell’s palsy which hit me in January. So it is good to stay more on paper. I was so happy that it worked that I had to write the sibylline Maps are intrinsic, charts are model theoretic.

Chorasimilarity channel on Telegram

I created today the chorasimilarity channel on telegram. Combined with telegra.ph articles, it looks good on phones.

You can see it without telegram: https://t.me/s/chorasimilarity

Come join! You can reach me as @xorasimilarity or by mbuliga@pm.me .

There are several reasons for that.

We can communicate without knowing the phone numbers of others.

It is fast and simple.

It has the possibility to write articles, again fast and simple.

Mind that the chorasimilarity channel is a channel, not a group. Let’s see how it works before making a group. Some know that I consider chat interesting only up to a limit, beyond which I react only to (article or program).

See you there, too.

Substitution is objective (II)

Follow the [source] as it may change.

Continues from: Substitution is objective (I) and [here].

Uses: Pure See

Main:

The abstracted duplication from part (I) does not treat the two variables obtained from duplication on an equal footing.

Indeed, we have a beta reduction

 (\(=d.c).T) C  -> \(=d.C).(T[c=C])

and then a copy-beta reduction

\(=d.C).(T[c=C]) -> (T[c=C])[d=C]
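
A concrete instance, using only the two rules above: take T = (d c). Then

 (\(=d.c).(d c)) C  -> \(=d.C).(d C)  -> (C C)

so C ends up duplicated, but c is substituted first (by beta) and d only afterwards (by copy-beta); the two variables are indeed not on an equal footing.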

Let’s see the treatment of duplication in Pure See style.

[continue to source] and chime in if you want to contribute.

Mind that “chora” makes an appearance! Yes, the infinitesimal place.

The chemistry of Neuromancer feat. Breaking the x86 Instruction Set

I understood something about the size of the molecular machines which implement something like chemlambda or chemSKI in the cell.

A chemist probably stares in disbelief when reading:

” Lambda calculus, or more precisely graph rewrites systems inspired from it, can be taken as first principles when designing molecular computers.”

That is because tiny modifications of molecules produce huge effects. It can’t work this way. There are no lego-brick tricks which can achieve this. It can’t be that simple.

I recently read the excellent post from 2018 The chemistry of William Gibson’s Neuromancer, and as a mathematician I stared in disbelief when I saw, for example, the almost identical structures of cocaine, atropine and scopolamine [source of the image].

Who isn’t a fan of the overall universe of Neuromancer? We live in a more complex version of it.

Just that I don’t buy into the semantic side of it; here I differ from mainstream CS thinking, but maybe I get a virtual nod from chemistry thinking when I say that chemistry is asemantic.

There’s a slide in Artificial physics for artificial chemistries which shows “The Matrix: what does it mean?”

“still he’d see the matrix in his sleep, bright lattices of logic unfolding across that colorless void” (William Gibson, Neuromancer)

versus “Blade Runner: is this alive?”

So it has to be that the molecular machinery is much BIGger than those tiny structures which we can play with.

We can play with some instructions though. Have you seen the very instructive Breaking the x86 Instruction Set? The presentation is on youtube:

The abstract says

“We’ll disclose new x86 hardware glitches, previously unknown machine instructions, ubiquitous software bugs, and flaws in enterprise hypervisors. Best of all, we’ll release our sandsifter toolset, so that you can audit – and break – your own processor.”

We kind of have access to some of the instructions, in RNA or DNA; these, we understand, exist. But the molecular machine (aka the processor) has to be bigger.

Like in the fuzzing of the x86 processor, small differences in tiny molecules can sometimes produce signs of big effects.

Likewise, even if the principle of an asemantic graph rewrite system is reasonable for biochemistry, it might mean that the mathematical graph structures considered in chemlambda are implemented in reality via big molecular chunks, which makes the search for such structures harder.

There is the possibility though to find some simpler chunks which would then embed some manifestations of life into otherwise non-living chemistry. Like a plane versus a bird.

Substitution is objective (I)

Follow the [source] as it changes. [archived first version]

Context:

– While writing I searched for linear lambda calculus with copy and found

  Operational aspects of linear lambda calculus, P. Lincoln, J. Mitchell, (1992) Proceedings of the Seventh Annual IEEE Symposium on Logic in Computer Science  

 I then adapted the naming to be (naively) compatible with that reference, but without the weight of linear logic. Maybe it is the same thing, maybe not; anyway, all misinterpretations are due to my ignorance.

– I use the asemantic computing draft [here] , [source], [archived], see also [Ghost in the web].

Main:

Decentralized reduction of lambda terms is problematic because of substitution. Indeed, where to make the substitution of a variable with a term, without any global information? Substitution of variables by terms is objective, that is, it is not up to any discussion, but the problem appears when there is more than one place where the substitution is made.

With global information available, we know how to do reduction by using de Bruijn notation. We might use the calculus proposed in

 A λ-calculus à la de Bruijn with explicit substitutions, F. Kamareddine, A. Rios, 7th international conference on Programming Languages: Implementation, Logics and Programs, PLILP95, LNCS 982, 45-62

which makes very clear that beta reduction is mainly the icing on a big cake made of alpha renaming.

Such a notation (and such a calculus) is objective and global. It is not asemantic.
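
For example, in de Bruijn notation the term \x.\y.x is written λλ2: the index 2 says that the variable is bound by the second enclosing lambda. The indices are global bookkeeping, which is exactly the non-local ingredient.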

Substitution of variables by terms is local for linear lambda terms. It can be made local even for lambda terms which are not linear, by looking at the SKI calculus. 

Let’s explore how we can make this work.

The S combinator in lambda calculus

 S = \a.\b.\c.((a c) (b c))

contains the c variable 3 times.

S is not “linear”, where “linear” just means (classically) that any free variable appears once and any bound variable appears twice (counting also the first appearance, in a lambda operation). But we could make it so if we introduce a “copy”, which has the lambda-like notation:

 =x.T  where: x is a variable and T is a term

This reads “copy T to x” and it should mean something like this:

{  x = T;     // or “store T in x” ?

   return T; }

Then S will look like this:

 S = \a.\b.\c.((a d) (b (=d.c)))

or maybe like this?

 S = \a.\b.\c.((a (=d.c)) (b d))

or perhaps we should just use the copy at the beginning

 S = \a.\b.\(=d.c).((a c) (b d))

Now we see that S contains an abstracted fanout:

 \(=d.c).T

where c, d are variables and T is a (linear perhaps) lambda term.

We can transform any lambda term into an SKI combinator and back into a new lambda term, where every variable appears at most 3 times. It follows that we might just use SKI reduction rules translated into lambda calculus with the abstracted fanouts notation.
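
The lambda-to-SKI direction is the classical bracket abstraction. Below is a minimal Python sketch, under the simplest (non-optimized) rules T[x, x] = I, T[x, M] = K M when x is not free in M, and T[x, M N] = S T[x, M] T[x, N]; the term encoding (strings for variables and combinators, 2-tuples for applications, ('lam', x, body) for lambdas) is my own choice for the sketch.

# Minimal bracket abstraction: lambda terms -> SKI combinators (a sketch).

def free_in(x, t):
    # is the variable x free in term t?
    if isinstance(t, str):
        return t == x
    if t[0] == 'lam':
        return t[1] != x and free_in(x, t[2])
    return free_in(x, t[0]) or free_in(x, t[1])

def abstract(x, t):
    # T[x, x] = I ; T[x, M] = K M if x not free in M ; T[x, M N] = S T[x,M] T[x,N]
    if t == x:
        return 'I'
    if not free_in(x, t):
        return ('K', t)
    if t[0] == 'lam':
        return abstract(x, to_ski(t))   # eliminate the inner lambda first
    return (('S', abstract(x, t[0])), abstract(x, t[1]))

def to_ski(t):
    if isinstance(t, str):
        return t
    if t[0] == 'lam':
        return abstract(t[1], to_ski(t[2]))
    return (to_ski(t[0]), to_ski(t[1]))

# Example: \x.\y.x translates to S (K K) I, which behaves as K
print(to_ski(('lam', 'x', ('lam', 'y', 'x'))))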

But how? For example

 S A B C -> (A C) (B C)

would become after 3 beta reductions

 (((\a.\b.\(=d.c).((a c) (b d))) A) B) C

 -> \(=d.C).((A C)(B d))

where the 3rd beta reduction is

 (\(=d.c).((A c)(B d))) C

 -> \(=d.C).((A C)(B d))

 Now we apply a kind of beta reduction (or is it a duplication?) adapted for the copy operation

  “copy-beta”:   \(=d.C).T -> T[d=C]

which gives

  -> \(=d.C).((A C)(B d))

  -> (A C) (B C)

 provided that C is linear.

Also

 S K K -> I

would become

 ((\a.\b.\(=d.c).((a c) (b d))) (\x.\y.x)) (\z.\u.z)

 -> \(=d.c).(((\x.\y.x) c)((\z.\u.z) d))

 -> \(=d.c).((\y.c) (\z.d))

 -> \(=d.c).c

and the last step would be a sort of

 “discard”:  if d does not occur free in T then \(=d.c).T -> \c.T

which gives in our case

 -> \(=d.c).c

 -> \c.c

Let’s see the omega combinator (in the SKI translation)

 (S I I) (S I I)

 First remark that (S I I) would become

  ((\a.\b.\(=d.c).((a c) (b d))) (\x.x)) (\z.z)

  -> \(=d.c).(c d)

 therefore the omega combinator would look like

  (\(=d.c).(c d)) (\(=e.f).(f e))

  -> (\(=d.(\(=e.f).(f e))).((\(=e.f).(f e)) d))

 … oops, not linear? Nah, it is all right, we just have to finish with a copy-beta reduction:

 -> (\(=d.(\(=e.f).(f e))).((\(=e.f).(f e)) d))

  -> (\(=e.f).(f e)) (\(=e.f).(f e))

which is linear.

 Conclusion: We proposed the following algorithm:

 1- Input a lambda calculus combinator A

 2- Convert A into SKI, obtain the combinator B

 3- Replace in B each instance of S, K, I with their lambda (plus copy) expressions

 4- Reduce using beta, copy-beta and discard.

Alternatively we may start with a linear (plus copy) term and jump directly to step 4.
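
For step 4, the S, K, I rewrites alone are easy to sketch. Here is a minimal normal-order SKI reducer in Python, a sketch only (the copy-beta and discard rules of the lambda-plus-copy notation are not included); terms are encoded as nested 2-tuples, my own choice for the sketch.

# Minimal normal-order SKI reducer.
# Terms: 'S', 'K', 'I', other strings as variables, applications as (f, a).

def step(t):
    # returns (new term, True) if one rewrite fired, else (t, False)
    if isinstance(t, tuple):
        f, a = t
        if f == 'I':                                   # I x -> x
            return a, True
        if isinstance(f, tuple) and f[0] == 'K':       # K x y -> x
            return f[1], True
        if (isinstance(f, tuple) and isinstance(f[0], tuple)
                and f[0][0] == 'S'):                   # S x y z -> (x z)(y z)
            x, y, z = f[0][1], f[1], a                 # z is duplicated here
            return ((x, z), (y, z)), True
        f2, fired = step(f)                            # leftmost-outermost
        if fired:
            return (f2, a), True
        a2, fired = step(a)
        if fired:
            return (f, a2), True
    return t, False

def normalize(t, fuel=1000):
    # fuel guards against non-terminating terms like (S I I)(S I I)
    for _ in range(fuel):
        t, fired = step(t)
        if not fired:
            break
    return t

print(normalize(((('S', 'K'), 'K'), 'x')))   # prints x : S K K acts as I

The printed result agrees with the S K K -> I example worked out above.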

We get a hint of why chemSKI works in an asemantic way: because S encapsulates an objective substitution, thus making it local.

chemSKI not Nock

I describe here the kind of project I would be interested to make, if I could, time permitting. My mathematician skills are limited in this direction, but I would replace Nock with chemSKI.

UPDATE: see my comments on the Urbit whitepaper.

I like very much the whole stack rewriting effort of Urbit. I don’t want to be disrespectful by proposing to rewrite a version where the very fundamental Nock is replaced by a more chemical version.

In the long run this could provide an advantage. Or maybe not that long a run, as we have now lived through a pandemic with very limited tools to counter it, and as we see how necessary a decentralized net is, faced with the rise of the new middle ages in the West.

A decentralized and biologically compatible net.

Taking out the root of Urbit, trading trees for mols, relying on randomness and decentralized quines, never ending computations, asemantic: this is all new. But the stack is there, at least in principle showing that it is possible.

If I were now in my 20’s, I would probably be a biochemist, or else I would write the stack myself in less than a decade. With all due respect to those who did it for Urbit, what a life work.

I’m not in my 20’s so maybe someone who is can try?

Google essential facility and more

Google is afraid of this subject. If you don’t search exactly this then you will not see relevant links in the google search.

Therefore search for “google essential facility”.

I was recently amazed by the Knuckleheads’ work on this. Then I found out that there is more and older work on the subject.

Now I believe that a “google cache as an essential facility” doctrine is not enough. Because even if there is access to the cache, this does not stop google from controlling the output of the search.

Why not this?

For any search you do on google, they donate to Common Crawl all the results they have but do not show you in the first 3 pages.

The problem users have with google is not that they crawl the web, but that they show a very biased selection.

Ghost in the web: decentralized graph reduction

By decentralized computation we usually mean a system where many small (program) computations execute locally and communicate with one another.

Decentralized graph reduction would be new.

Think about a potentially huge distributed computation. What is new is that the participants reduce pieces of a huge graph, not that they reduce the same graph (program) over and over again, here and there.

Even the huge graph distributed reduction is a misleading image in that there is no local image of a huge graph.

Think about individual computers as pixels and about distributed computation as a movie over the pixelated screen. A program execution can happen over several (many) computers, as a movie character appears spread over many pixels.

Even this image is misleading, because pixels on a screen are in fixed geometrical relative positions.

Oh, I know, let’s get back to ancient views.

Decentralized reduction is like this: the computers are the atoms of a river and the computation is what you see when you look from the river bank. The river is the same but it is always new.

It is interesting to have a computation which executes decentralized, which you can move around while it computes, independently of hardware. You may say that we can already move programs around and that there are abstractions which make them independent of hardware. But can you move the computation while it executes? No! You have to pack it into an abstraction, transmit the inert program to another machine and then execute it.

I don’t know if this kind of decentralized computation is possible (it should be, because nature functions like this) and without a prototype there is no way to find out.

Of course, the bigger goal is to have a biological-like internet. Of course chemistry is naturally more adapted to this than our computers. But with enough calls of Math.random() we can use the current computer architecture.

Graph reductions are traditionally a fancy way to do term reductions.

Graph rewrite systems and term rewrite systems are different beasts, because you can’t use a term rewrite system (inherently non-local) to do what I propose, namely distributed reduction. You can use term rewrite systems in the traditional way, namely repeated reduction of small terms.

We don’t actually know how to compute with local graph rewrite systems. Traditionally we start by writing a program term, then we parse it into a graph (this is a non-local step: you have to know the whole term to do it), then we may distribute the graph and reduce it by pieces, say, and finally we have to evaluate the graph to get a term again. But the graphs rapidly evolve into graphs which are not graphs of terms, and the evaluation works only in a limited way. In the traditional view we love terms (or graphs) which evolve to a normal state, because then we know that the computation is done. However, conveniently, we also like terms like the Y combinator because we like recursion.

Recently I was surprised to realize that chemSKI is both programmer-friendly (via the parsing to SKI) and purely local. Schönfinkel was the first, and still he is not understood enough.

If we try to treat graph rewrite systems as the principal part then we have some choices. The first is the IC system of Lafont. Great, but we actually don’t know what to do with it. Lafont proves that it can emulate any Turing machine, but this is somehow contrary to the distributed computing purpose. There are parsers of lambda calculus (that horrible term rewrite system) to IC, but the parser is non-local. Another system is chemSKI, because it is local and the SKI calculus is really great and easy to use by a programmer. Just give the programmer a parser from/to a language_of_choice to/from SKI. So the programmer is not immersed in a Turing tarpit. That is the reason chemSKI is better than IC and than chemlambda.

Finally, at the graph level there is no need to restrict to normal forms. Any of these graph rewrite systems leads to non-terminating computations, because they are Turing universal. Instead, a better possibility would be to embed the computation into a quine. That way we don’t need to redistribute a new graph or new graphs all the time.

The river is a big quine. Or a universe of quines, ghosts touching our many physical machines.

alexo quine in chemSKI

chemSKI is a purely local artificial chemistry associated to the SKI combinators calculus. Being purely local, it satisfies the requirements from the asemantic computing draft.

In [arXiv:1701.04691] section 4 there is an example of a lambda term with a strange behaviour in chemlambda and also in dirIC. This lambda term is

(\a.a a)(\x.((\b.b b)(\y.y x)))

and I called it “alexo example”. The problem with this term is the following, explained in

Lambda calculus to chemlambda. Marius Buliga (2019-2021), https://mbuliga.github.io/quinegraphs/lambda2mol.html#alexo_example

It should reduce to the lambda quine (\x.x x)(\x.x x). But in chemlambda, with the random reduction algorithm it reduces to a FOE node. Use the reference to check this.

If you change the chemistry to dirIC (by using the “change” button for chemistries) then it behaves the same in dirIC.

However, you can check that (the molecule generated from) the alexo example is a chemlambda quine. For this use the “change” button for algorithms to toggle to “older is first” and move the slider for rewrites to “GROW”. Then convert again the lambda term to mol (with the button “λ>mol”) and push “start”. It is a chemlambda quine.

But it is not a dirIC quine. Repeat the same procedure, but this time for the dirIC chemistry. It always reduces to a FOE node.

So, it is a chemlambda quine which can die, therefore different from the lambda quine (\x.x x)(\x.x x). Btw you can see this lambda quine here:

Lambda calculus to chemlambda. Marius Buliga (2019-2021), https://mbuliga.github.io/quinegraphs/lambda2mol.html#omega

In both cases (chemlambda and dirIC) it does not reduce like in lambda calculus.

chemSKI comes to the rescue!

chemSKI & chemlambda. Marius Buliga (2020), https://mbuliga.github.io/chemski/chemski.html#SKInote

Alexo example lambda term translates into the SKI combinator

(S I I) (S (K (S I I)) (S (K (S I)) (S (K K) I)))

Please notice that I don’t write “SK(SII)”, I write instead “S K (S I I)” because the space ” ” is interpreted as application. Otherwise the parser would interpret “SK” and “SII” as names of variables…
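
To see concretely what the spacing convention does, here is a toy tokenizer in Python, a sketch only (this is not the actual parser of the chemSKI page):

import re

def tokens(s):
    # runs of letters glue into ONE name; spaces separate names
    return re.findall(r'[A-Za-z]+|[()]', s)

print(tokens("SK(SII)"))      # ['SK', '(', 'SII', ')'] : two variable names
print(tokens("S K (S I I)"))  # ['S', 'K', '(', 'S', 'I', 'I', ')'] : combinators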

Copy this SKI combinator into the “λ or SKI -> mol” input window and click on the “λSKI> mol” button to convert this combinator into a chemSKI molecule.

You are now looking at the alexo quine in chemSKI.

Well, it should be, but I have not implemented an “older is first” algorithm for chemSKI. There is only the random reduction algorithm. You can check that the alexo quine does not die. It produces lots of trash pairs of nodes. You can eliminate this trash by moving the rewrites weight slider to “SLIM”.

This is a different procedure than the one used to check for chemlambda quines, but my hypothesis is that if there is an “older is first” reduction algorithm then we would get the same effect if we move the rewrites weight slider to “GROW”.

So in conclusion we can say that we look at a chemSKI quine, therefore it behaves correctly.

Is it a quine like (S I I)(S I I), i.e. the equivalent of the lambda quine (\x.x x) (\x.x x)? It does not reduce in the same way. Actually it periodically (with the slider to “SLIM”) transforms into a chemSKI molecule associated to

(((S I) I) (((S (K (S I))) ((S (K K)) I)) ((S (K ((S I) I))) ((S (K (S I))) ((S (K K)) I)))))

in the sense that the “mol -> λ or SKI” evaluator gives this SKI combinator for the smallest chemSKI molecule with 35 nodes which is part of the periodic evolution of alexo quine.

Check it! Copy this combinator and paste it into “λ or SKI -> mol” input window and click on the “λSKI> mol” button to convert this combinator into a chemSKI molecule.

UPDATE: Just after I wrote this, I found the very interesting post by Wolfram

https://writings.stephenwolfram.com/2021/03/a-little-closer-to-finding-what-became-of-moses-schonfinkel-inventor-of-combinators/

and I posted the following comment, which partially reproduces what is in this post. The gist is, I think, not alexo quine, but the greatness of the work of Schönfinkel. (Update: apparently Wolfram didn’t accept my comment.)

I came to a very recent, for me, appreciation of the great work of Schönfinkel. To the point where I wonder what the situation would be in an alternate universe with Schönfinkel and Turing switched. I don’t know which historical circumstances would lead to the use of Schönfinkel in the same way as Turing was used in the war effort, but probably today (in the alternate universe) we would have a working, chemistry like decentralized www computation.

For those interested in the various developments of SKI calculus, there is the recent Combinatory Chemistry: Towards a Simple Model of Emergent Evolution, arXiv:2003.07916, by G. Kruszewski and T. Mikolov. As a response to a problem they put there, I made chemSKI, an artificial chemistry which is purely local, described in chemSKI & chemlambda. M. Buliga (2020), https://mbuliga.github.io/chemski/chemski.html#SKInote

Coincidentally I read your post just after I wrote about a new quine graph in chemSKI, deduced from an interesting lambda term, found in arXiv:1701.04691 section 4:

(\a.a a)(\x.((\b.b b)(\y.y x)))

In SKI calculus this translates into

(S I I) (S (K (S I I)) (S (K (S I)) (S (K K) I)))

which in chemSKI is a quine (graph with a periodic evolution). The same quine is generated from

(((S I) I) (((S (K (S I))) ((S (K K)) I)) ((S (K ((S I) I))) ((S (K (S I))) ((S (K K)) I)))))

Details in [this post].

Excuses for the long message, but I am curious what these combinators give with your programs. “

The Knuckleheads’ Club: “Google’s Web Cache Is An Essential Facility”

The fact that Google’s web cache is an essential facility for humanity blows my mind. It is so true.

This is an awesome idea. More than awesome: it is correct. Read about it at the Knuckleheads’ Club site, “Google’s Got a Secret“.

They give solid arguments that Google has a natural monopoly on search. It is impossible to compete with Google. This is because Google was the first to crawl the whole www (when it was much smaller than today). After that they advanced like a good chess player. Now, according to the Knuckleheads’ Club explanation, only Google is really allowed to crawl the web, as a composite effect of the mass adoption of their great search service and the much higher demands on a competitor today.

Google had a great idea and a perfect execution. Mathematically, it now has a monopoly. Here’s where things get really interesting.

Google should give access to its cache of web crawl data. Because today Google’s web cache is an essential facility.

It’s only natural, it’s fair and so right.

And it’s the same wrong which has to be made right, exactly like the situation of scientific communication. Previously the scientific publishers were useful; now they are a nuisance. What was designed many years ago to advance knowledge is now used to limit access to scientific knowledge.

And, most important, Google’s web cache is a cache of things we humans made, just like scientific research is made by researchers, irrespective of the place where the results are reported.

Please go read the whole case. [HN discussion.]

And also visit the wonderful Common Crawl effort. And the Internet Archive Scholar.

Asemantic computing

[source]

True distributed computing which also has a global meaning (semantic) is not possible. People still stubbornly try to do it in stupid ways. They are fond of semantics.

However Nature is a true distributed computation which does not have the kind of meaning sought by humans (and programmers in particular). As concerns the biological realm, all is chemistry. Therefore we can be sure that it is possible to have the most amazing distributed computing.

What about asemantic computing?

Turing machines are asemantic. No matter what stack of programs you use to produce the initial string written on your tape, from that point on a TM works locally in time and space, without any need for global control or meaning. But a network of TMs cannot work without a global component (hence semantics), because you have many heads writing on the same tape, which leads to well known problems. While a TM can be translated into a headless TM, and these could be converted into graph rewrite systems, a network of TMs can’t be turned into a confluent graph rewrite system.

Lafont’s Interaction Combinators (IC) is a graph rewriting system (a chemistry) which can be used for true distributed computing. Lafont proves that IC is Turing universal because it can implement any Turing machine.

IC is confluent, meaning that if we have an IC graph which can be rewritten into one without any further possible rewrites, then this final state is unique (and therefore it can be attained no matter how we split and distribute the graph to the participants in the computation, nor does it matter which protocol the participants use to transfer pieces of graphs). Confluence does not say anything about graphs which don’t have a final state. Confluence is undesirable for life-like distributed computations, where final states are to be interpreted as death and have to be recycled somehow by other mechanisms. Chemlambda is an alternative.

IC and chemlambda are good for true distributed computation only as long as they are not used for implementing a sufficiently powerful term rewrite system, like lambda calculus. That is because it is not possible to have only local computations with a term rewrite system. This is simply because a term rewrite system is non-local by definition. [Lambda calculus in particular is mostly alpha-conversion, as made visible by the passage to (global) de Bruijn indices, see λs calculus.]

So if we want a true distributed computing system then we have to make it asemantic (because otherwise it is not local), we have to use graph rewrite systems and not term rewrite systems (because term rewrite systems are non-local), and the meaning we can extract from the system can only be local as well (therefore there is no point in trying to extract from the system precise global measures of agreement or synchronization).

In conclusion, it is for now unclear how to program or how to use such a system (as long as we want to program it in the old ways and to use it for communication in the old ways).

But we can try to use it as if it were a living ecosystem, an extension of the meatspace. For this we have to go even more extreme and renounce confluence.

Such a system would be certainly useful for protecting, evolving and sadly challenging the life on Earth.

Since it is possible, it has to be tried.

Pure See mnemonics

For those who follow Pure See, until I get the vaccines and update the pages, here is a useful table.

This is only partially compatible with the draft, because there are two variants of from, see, as, and only one variant is used in the draft. Although in various sections the other variant is used, and many statements are theorems (which need proof). For example, if you look at the Commands section, the functional form column uses the last 3 rows from the table and the command column uses the first 3 rows.

The draft is messy, still; that is why it is a draft. But the competent reader can see through it and, by using the rest of the material in colin.pdf and in chemlambda.github.io, can make it to the connection with linear logic.

Mind that the rational functions in square brackets are just mnemonics.

Interesting details about some Sci-Hub proxies

There is a new interview with Elbakyan: “Cognition, communism, and theft” or [archived version]. The end of the interview is very interesting: it is about a new phenomenon of sites which serve as proxies for Sci-Hub, as a kind of effort to stifle the speech which is associated with the Sci-Hub project. [This is my interpretation, it may be wrong, so go read the interview and make your own opinion about it.]

I reproduce this last part of the interview here, but you had better use the links to read the whole interview.

[Question:] “In September of 2020 the http://sci-hub.tw domain was blocked under a Website Infringement Complaints lawsuit by Elsevier using legal representation from Beijing. Can you explain the reasons behind this worldwide block and the suspicious follow-up appearance of fake look-alike Sci-Hub domains?”

[Elbakyan answers:]”I have doubts about the real reason for the Elsevier lawsuit. Why? Well, I bought the .tw domain a few years ago from one Russian Internet company and since then, sci-hub.tw was never blocked, while other domains (Sci-Hub had a lot of them) did not live long, a couple of months or so. But the .tw domain was miraculously resilient to this. I was thinking, perhaps Chinese government (back then I did not know about the conflict between mainland China and Taiwan) was silently supporting Sci-Hub because of communist ideas?… What prevented Elsevier from seizing the .tw domain, just like they did with all other domains? (Another resilient domain is .se but Pirate Party in Sweden is backing it up).

When sci-hub.tw suddenly got blocked in September, I contacted that Russian company asking them what happened, because I had no letters from the domain registrar in my mailbox that are usually sent before the domain gets blocked. They took a long pause and then responded that they had asked the company where the .tw domain was registered, but they were silent and did not reply. I asked whether I can ask them myself, and they gave me an email. I sent a letter on 29 September, but then already I felt something was not clear here. The company responded the next day, very shortly, ‘we have sent you the document, please check, thanks’ I asked whether they could send me the document again because I received nothing! After 10 days, they finally responded with a document, explaining that there was a lawsuit filed by Elsevier (I posted that on Sci-Hub Twitter).

Then it popped up. sci-hub.tw was a very popular domain, it popped up first in Google search results, 45% of Sci-Hub users were coming from Google and other search engines (now percentage of search traffic is only 22%) but after it got blocked, it disappeared and instead, some suspicious ‘Sci-Hub’ websites started to appear first in Google (I also posted about that on Twitter)

By suspicious ‘Sci-Hub’ websites I mean scihub.wikicn.top, sci-hub.tf, sci-hub.ren, sci-hub.shop, and sci-hub.scihubtw.tw. These websites are actually the same, and they worked as a proxy to Sci-Hub, so they receive request from the users, redirect it to real Sci-Hub website (using some non-blocked Sci-Hub address) and give user the response, hiding/masking the address of real Sci-Hub. Actually, such websites can, in theory, have good goals, just to unblock Sci-Hub in those places where access to real Sci-Hub is blocked, for example, scihub.unblockit.top or scihub.unblockit.lat work the same way – but we can easily see these as generic services to unblock various blocked websites.

In the case of the websites mentioned above, the first time I encountered this was when one of the Sci-Hub domains was blocked in Russia. In such cases I usually add a new Sci-Hub domain for Russian users to work. After .se was blocked in Russia back in 2019, I quickly added sci-hub.st (if I remember correctly) as a replacement but then I noticed, that surprisingly, instead of this new domain published by me, people promoted some ‘sci-hub.ltd’ website. I opened it and it worked as a proxy, and I really did not like that, also because .ltd domain means ‘limited’ and Sci-Hub should not be limited. I found their IP address and configured Sci-Hub, so that when Sci-Hub is accessed though sci-hub.ltd proxy, it shows the REAL Sci-Hub addresses that people can use instead.

After I did that, the sci-hub.ltd author contacted me, and instead of providing some good reason for his .ltd website, such as “we want to provide access to Sci-Hub where real addresses are blocked” mumbled something about promoting Sci-Hub through this domain!

Then coronavirus happened and I forgot about this, but this September it all resurfaced as a replacement for the .tw website worldwide. These websites are adding advertisements while real Sci-Hub has no advertisements. They use suspicious domains such as ‘shop’ or ‘tf’ which reads as ‘thief’. Just like previously, I replaced their content with real Sci-Hub addresses (.st .se and .do) and they were aggressively fighting it! They tried using multiple proxies to hide their IP, they were desperately replacing and removing real Sci-Hub addresses from my message, they changed my email (!) on my About page (sci-hub.do/alexandra) to some another email registered at 163.com, and later they removed link to my page completely. If they had good intentions, just to unblock Sci-Hub, they could SIMPLY provide real Sci-Hub addresses in the left menu, with an explanation that they are only a proxy to help people when Sci-Hub is inaccessible by real addresses. They did nothing, instead they started to redirect to some Sci-Hub database mirror instead, and for new articles they put a completely fake “proxy search” page, while in fact it does not search anything, is just an imitation of the real Sci-Hub.

I really suspect that these websites are kind of man-in-the-middle attack from publishers (or somebody else!), who are providing fake Sci-Hub websites instead of the real one, to manipulate or control Sci-Hub’s image. But they could not do this with the.tw domain live, they needed to block it in order to replace Sci-Hub with their fake Sci-Hub they can control. This happened soon after I posted “About me” information on Sci-Hub for everyone to read. See? Somebody might want to prevent such information from being posted, so they need a controlled Sci-Hub, so there will be no “About me” or “about Sci-Hub” pages that can provide true facts about Sci-Hub. Media is controlled, but I could post my story on Sci-Hub, and everyone respects Sci-Hub… they want to block this opportunity. Additionally, simple advertisements already create a negative impression of Sci-Hub as some shady website, while real Sci-Hub does not rely on advertisements.”

BBC advertorial for Sci-hub

According to BBC “Police warn students to avoid science website” [archived version] the “science website” Sci-Hub “offers open access to more than 85 million scientific papers and claims that copyright laws should be abolished and that such material should be “knowledge to all”.

It describes itself as “the first pirate website in the world to provide mass and public access to tens of millions of research papers”.”

Mass and public access to more than 85 million scientific papers!

But!

“But Max Bruce, the City of London Police’s cyber protection officer, has urged universities to block the website on their network because of the “threat posed by Sci-Hub to both the university and its students”.”If you’re tricked into revealing your log-in credentials, whether it’s through the use of fake emails or malware, we know that Sci-Hub will then use those details to compromise your university’s computer network in order to steal research papers,” he said.”

This is an article which makes me ask lots of questions.

Why does one have to steal credentials in order to provide mass and public access to millions of research papers?

Are the authors of the research those who keep them hidden?

Why would the students go to Sci-Hub in order to educate themselves? The university managers are very generous and they spend a lot of money to buy access to these articles. Students benefit via their university credentials.

But even so, apparently students are attracted by Sci-Hub.

This has to be an advertorial for Sci-Hub, written with a dry British humour.

Two years from discovery to explanation, what can be done

As in other posts, I can offer proof. On Jan 12, 2019 I posted Kaleidoscope, information about a new project, with the proposed logo:

“Kaleidoscope” means sight of a beautiful image, and everybody knows that a kaleidoscope toy involves 3 mirrors; the resulting image has the symmetry of the group of permutations of 3 elements, etc. There was enough information in the name 🙂

On 24 June, 2019, I find that I posted somewhere here this image

which is an almost correct, but not quite correct, version of the (nodes associated with) commands in Pure See. This pure see construction was announced in Nov 2019 in this post. It is still at a draft level, because one has to explain exactly how the formalism self-updates to “anharmonic”, without having anything manifestly anharmonic at the start. So Pure See is still not well explained online, but you can feel there is a connection with “kaleidoscope”.

The direct connection is instead with the 13 March, 2021 post Entropic relation generates duplication, which contains the fresh image

You may say that these are only images, but in program form this is used in the tool to find correct duplication rewrites. The program, like others I wrote, is rather heavily annotated, but still not obvious to understand.

So, in conclusion, there is a 2-year delay from the on-paper discovery to the online explanation.

I find this very annoying. This is not on purpose.

As said before, this is roughly the delay between what I have on paper and what I have online.

I wonder from time to time what I can do to speed up the process, and what the reasons for these delays are.

If you have any advice then I’d be happy to hear it.

And I’d be more than glad to have a more streamlined process of going from “on paper” to “published”. Sometimes I despair, but I have to find one.

Entropic relation generates duplication: explanation

There is a third kind of duplication rewrite which is generated by an entropic relation, as mentioned in the post The 3 kinds of duplication confusion explained.

This kind of duplication is special because it does not involve fanout nodes.

In Pure See or anharmonic lambda calculus (or kali) there is a tool I made, which generates all the possible duplication rewrites coming from the entropic relation. Would you want to understand how the tool works?

What is an entropic relation? It also appears under several names, like the “medial” relation, or even the “shuffle” relation. If you have two operations, say * and #, then the pair of operations satisfies the entropic relation if for any a, b, c, d we have

(a*b)#(c*d) = (a#c)*(b#d)

A relevant example is related to quandles, mentioned in the “duplication confusion” post. Indeed, take an Alexander quandle, i.e. the dilation operations in a vector space:

a*b = (1-z) a + z b

a#b = (1-1/z) a + (1/z) b

and check that they satisfy the entropic relation. (That is, in the context of emergent algebras, equivalent to the commutativity of the vector addition operation.)
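
Here is a quick numeric check of this claim, a minimal Python sketch in which plain real numbers stand in for the vector space and z is an arbitrary nonzero scalar:

# Numeric check of the entropic relation for the Alexander quandle
# dilations above (a sketch).
import random

z = 0.3

def star(a, b):    # a*b = (1-z) a + z b
    return (1 - z) * a + z * b

def hsh(a, b):     # a#b = (1-1/z) a + (1/z) b
    return (1 - 1/z) * a + (1/z) * b

for _ in range(5):
    a, b, c, d = (random.uniform(-10, 10) for _ in range(4))
    lhs = hsh(star(a, b), star(c, d))     # (a*b)#(c*d)
    rhs = star(hsh(a, c), hsh(b, d))      # (a#c)*(b#d)
    assert abs(lhs - rhs) < 1e-9
print("entropic relation holds on random samples")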

What kind of duplication is generated? The kind you saw in Lafont’s Interaction Combinators, figure 2:

Also in chemlambda there is a whole family of such rewrites, named “DIST”. (I.e. short for “distributivity”, but actually these are not related to distributivity, which induces a duplication of the 2nd kind. Confusion… clarified.) For example this one [source]:

Here “LHS” and “RHS” mean respectively the left-hand-side and the right-hand-side of the rewrite.

How does an entropic relation generate a duplication rewrite?

Suppose we have a pair of operations which satisfy an entropic relation. Then we invent two trivalent nodes (they are port nodes in the sense of Bawden), one for each operation:

Then we can write the graphical equivalent of the entropic relation as this:

We just keep adding nodes by respecting the labels on edges…

and we get hexagons. Now we could just look at one hexagon, which expresses the same entropic relation, but in a different way:

We cut now the hexagonal graph into two pieces:

and we see that we can transform the two pieces into the LHS and RHS of a rewrite which respects the labels of free half-edges:

This is how a “DIST” rewrite appears. It does not come from “distributivity”, though…

It does not contain any fanout node. It is identical to the examples given.

Graph rewrites, term rewrites and the duplication confusion

Attention: for explanatory purposes, this post will be modified several times. This post is the first part of a series. Follow the exposition in time.

In the last post, The 3 kinds of duplication confusion explained, I mentioned knot diagrams and quandles; then there was a comment which I later updated by saying that (my initial comment) is incorrect and inexact.

So let’s take as a graph rewrite system knot diagrams (but eliminate the condition to have planar graphs) with the Reidemeister rewrites. The goal will be to turn it into a term rewrite system.

And let’s take as a term rewrite system quandles, and the goal will be to turn it into a graph rewrite system.

The two rewrite systems are related, because quandles decorate edges of knot diagrams in such a way that the R moves translate into quandle rewrites.
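
As a reminder, the standard definition, stated here for self-containedness: a quandle is a set with an operation * such that a*a = a for every a, such that for every a the map b -> b*a is invertible, and such that * is self-distributive:

(a*b)*c = (a*c)*(b*c)

The three axioms correspond respectively to the Reidemeister moves R1, R2 and R3, which is exactly why quandle decorations survive the R moves.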

So we have a graph rewrite system and a term rewrite system which decorates the graph rewrite system.

Are there procedures to turn one into the other?

It is clear that we have to make precise definitions and then reason rigorously based on these definitions.

That is why this post will be modified several times.

We don’t want to be too rigid, because this means we introduce through the backdoor too big pieces of formalism, which later we shall pretend are invisible. But it may happen that this invisible formalism is in itself much more powerful than the visible part. In such a case it would be like pretending to explain something by magic.

Let’s start from the gist of the last post (and comments) applied to knot diagrams and quandles. I said there that the R3 rewrite is an example of the 2nd kind of duplication, because R3 translates into a self-distributivity axiom of quandles.

What happens is much more nuanced than that. Look: in order to build a graph rewrite system from quandles, we shall need the following three nodes

Notice that we speak about port graphs in the sense of Bawden. So each node has ports (which appear here as little colored dots).

By a knot diagram we mean a 4-valent (port) graph with two types of nodes, which correspond to the two types of crossings we have (recall that we have two types of crossings in the case we use oriented diagrams). The orientation of the edges in the diagram is though a byproduct of the graph being a port graph, so we are happy with the definition of port graphs here. Notice also that, unlike the usual knot diagrams, we don’t impose that the 4-valent graphs are planar. (We don’t introduce virtual crossings in order to make these diagrams planar again!)

We shall use freely the name “knot diagram” for these graphs.

Then any crossing in a knot diagram can be parsed in the usual way into a pair of 3-valent nodes from the list given before.

This is not enough to make a clear correspondence between knot diagrams with R moves and quandles. This is only a first step towards understanding more.

With this parsing, the left-hand-side (LHS) of one of the R3 rewrites looks like this.

In the upper part we see the LHS of the knot diagram rewrite R3, in the lower part of the figure we see the same, where each crossing is parsed.

With dotted magenta lines is marked the LHS pattern of the quandle rewrite (as translated from a quandle term to its graphical version).

Likewise now for the right-hand-side of the R3 rewrite:

We see that indeed the patterns of rewrite for the (tentative graph-rewrite system translation of the) quandle system are parts of the pattern rewrites for the knot diagrams graph rewrite system.

But there is more structure. Here is again the LHS pattern:

This time the pattern is decomposed into the one used for quandles (down) and the one not needed for quandles but needed for the R3 rewrite (middle). In this pattern from the middle there is a region (dotted magenta enclosure) which will be of interest in a moment, as well as remaining pieces of graph, made of fanout nodes.

Let’s look at the same decomposition for the RHS pattern:

The dotted region of interest is now rewired into the RHS pattern for the quandle rewrite, and more rewiring is needed to free the two fanout nodes which appear in the LHS too.

How to explain all this structure in a uniform way?

Here are some preliminary questions:

  1. Most likely the "right" translation of knot diagrams with R moves is in terms of R-matrices. Quandles appear as a particular solution to the quantum Yang-Baxter equation. Explain what you have to add to the (knot diagrams, R moves) graph rewrite system in order to turn it into a term rewrite system for R-matrices coming from quandles.
  2. We can turn a term rewrite system like quandles into a graph rewrite system by passing from terms to their abstract syntax trees and by adding fanout nodes (in order to duplicate variables). Or, there is an ambiguity in the parsing of terms into such graphs when it comes to fanout nodes and trees of them. (We are here at the level of the first attempt, in graphic lambda calculus, as described here.) Here we encounter a "duplication confusion", because in order to make a graph rewrite system from that we have to take into consideration the duplication of (quandle graphical) terms by fanout nodes, as well as the 2nd form of duplication, which does not involve fanout nodes in the LHS (like the distributivity rewrite associated with the "R3", quandle version). Propose such a system.
  3. Remark that the R3 is conservative in nodes and edges, but the distributivity rewrite (for quandles) is not. There are ways to make the distributivity rewrite conservative, by using tokens (like in hapax or chemlambda strings); see also some simple graph rewrite systems which are relevant. Explain the compatibility between the graph rewrite system of knot diagrams with R moves and the quandle diagrams rewrite system proposed at 2.

The 3 kinds of duplication confusion explained

There are actually 3 different ways in which duplication appears in various mathematical formalisms. They are often confused with one another, but they form a hierarchy.

Fanout duplication is when you have one variable, term, etc. and you pass it to a fanout to get two:

X –> X X

Graphically at left we have something which produces the X which is connected to a fanout trivalent node. At right we have two copies of the thing which produces X and as many fanouts as needed to duplicate the input to the thing which produces X.

Distributivity duplication is a duplication of a variable due to algebraic distributivity, for example

x(a+b) –> xa + xb

In knot theory, this manifests as the Reidemeister 3 rewrite. In quandle notation the R3 is a self-distributivity rewrite:

x*(a*b) –> (x*a)*(x*b)

Graphically at left we have two trivalent nodes which represent the quandle operations. At right we get three trivalent nodes (three quandle operations) and one fanout node to duplicate x.

Also, linearity is a good name for this kind of duplication: if X is a linear operator and a*b is the quandle operation of taking a ponderate average of the points a and b, then

X(a*b) –> (Xa)*(Xb)

expresses the essence of linearity.
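
As a quick numeric sanity check (my own toy examples, not from any cited source), both the conjugation operation on invertible matrices and the ponderate average on reals satisfy the self-distributivity rewrite, read as an equality:

import numpy as np

def conj(x, y):
    # conjugation quandle: x*y = x y x^(-1)
    return x @ y @ np.linalg.inv(x)

rng = np.random.default_rng(0)
x, a, b = [np.eye(2) + 0.1 * rng.random((2, 2)) for _ in range(3)]
assert np.allclose(conj(x, conj(a, b)), conj(conj(x, a), conj(x, b)))

def avg(x, y, t=0.3):
    # ponderate average: x*y = (1-t)x + ty; idempotent, since avg(x, x) = x
    return (1 - t) * x + t * y

x, a, b = 1.0, 4.0, -2.0
assert abs(avg(x, avg(a, b)) - avg(avg(x, a), avg(x, b))) < 1e-12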

In λs calculus we see that duplication is achieved by distributivity rewrites:

Entropic rewrites are used in chemlambda. They are made after a schema which does not involve any fanout node (which dumbly duplicates a variable). Here we duplicate operations, using the medial or entropic relation

(x*u)#(y*v) = (x#y)*(u#v)

This relation is transformed into the DIST rewrites which all superficially look like the previous, distributivity duplication, but there is no fanout node, there are only operation nodes.
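
A minimal numeric check (my own example): take both operations to be affine averages, with different weights, and the entropic relation holds identically.

def mk(t):
    # a "ponderate average" operation with weight t
    return lambda a, b: (1 - t) * a + t * b

star, sharp = mk(0.25), mk(0.6)       # the operations * and #
x, u, y, v = 1.0, 2.0, -3.0, 5.0
lhs = sharp(star(x, u), star(y, v))   # (x*u)#(y*v)
rhs = star(sharp(x, y), sharp(u, v))  # (x#y)*(u#v)
assert abs(lhs - rhs) < 1e-12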

To understand how this is done, look at the following image taken from Pure See, the section on emergent rewrites.

The left pattern (LHS) is made of two operation nodes, the right pattern (RHS) is made of 4 operation nodes. The topology is the same as for the distributivity duplication rewrite.

The entropic relation is used to prove that the decorations of the edges (the variables involved) are correct.

A further look at the image shows that among the 3 duplication kinds the last one is the most general. In the image “SHUFFLE” denotes the last kind of duplication, which allows for the most general edge decorations.

A more particular (smaller) family of decorations is described in the column “LINEAR”, which corresponds to the second kind of duplication. There one of the operation nodes becomes a fanout, basically because the operations are idempotent: taking a ponderate average of x with x always gives x.

An even more particular family of decorations comes in the third column, “FANOUT”. This is possible only if in the LHS one of the operation nodes behaves like a fanout (due to idempotence). Then we can decorate the RHS edges in such a way that two operation nodes behave like fanouts. It is the first kind of duplication schema.

___

More freedom in the edge decorations comes with a price: in the most general case we have to comply with the entropic, or medial, relation. In the “LINEAR” case we have something equivalent with R3, which is a weaker relation than the medial one. In the “FANOUT” case we have no supplementary relation to satisfy, but we can use it only if in the LHS we have a fanout node.

___

In Lafont's Interaction Combinators the duplication is achieved by what is called here “entropic duplication”. This is not surprising from our point of view, because IC can be described with dirIC (a variant of chemlambda) where we use the same entropic duplication.

___

In knot theory, if we propose to compute with Reidemeister moves (as opposed to the usual use of the R moves as invariants of computations), then we have available only the “linear” duplication, or in knot theory terms, the R3 move. This is the point of ZSS, where we see that it is basically not enough for universal computation. Indeed, we can prove that if there were a parser from lambda terms to knot diagrams, together with an algorithm which translates any beta rewrite into a succession of Reidemeister rewrites, then all the knot diagrams obtained from lambda terms would have to be equivalent under Reidemeister rewrites. This means that even if we could describe a universal model of computation with knot diagrams, such that the computation happens via R moves, we couldn't use them otherwise than as a notation, because there would always be a sequence of R moves which transforms the input into the output, even if there is no computation which translates into it.

Therefore we have to add more to the R moves, and in ZSS this is the SMASH move which transforms a crossing into two dirIC or chemlambda nodes, which can be further used with the entropic duplication schema to achieve universality. A SMASH move looks like this for example:

we smash the crossing by turning a fanout FO node into a chemlambda FOE operation node, thus passing to a higher level of the hierarchy of duplications.

Everything ok in critical pop science communication

A pair of critical posts from pop science bloggers appeared recently. Let's call them poster A and poster B. What are they about:

  • a complex mathematical theory [link updated] by a strong pure mathematician is again obstinately trashed by pop science poster A, incompetent in the field.
  • an unspecified theory of everything (a baby boomer fetish thing) is announced without details by an otherwise very reasonable critic of the actual failure of methods, motivations and social structure of academia. This triggers a cascade of vague criticisms, a wave of links to youtube (instead of links to the site where, good or bad, a collaborative project seems to develop), which is hosted by pop science poster B.

Lately pop science communicators seem to lack the due attention of the public 🙂

Also, I see with great satisfaction the appearance of yet another collaborative project outside academia.

Something is wrong with the world we live in, right?

Post A: ABC is still a conjecture

Post B: [Guest Post] Problems with Eric Weinstein’s “Geometric Unity”

Kali IC task

Lafont's Interaction Combinators turn out to be made of pairs of nodes, in a variant of chemlambda called dirIC, which has the same nodes as chemlambda but slightly different rewrites. The relevant translation between IC and dirIC is

as explained in Alife properties of directed interaction combinators vs. chemlambda. Marius Buliga (2020), https://mbuliga.github.io/quinegraphs/ic-vs-chem.html#icvschem

You can find the chemistry dirIC in the file chemistry.js from the quinegraphs repository. There you see that there is a common trunk of rewrites CHEMLAMBDABARE and two branches DICMOD and CHEMLAMBDAEND, so that

chemlambda (chemistry) = CHEMLAMBDABARE + CHEMLAMBDAEND

and

dirIC (or DIC chemistry) = CHEMLAMBDABARE + DICMOD
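
Schematically, in Python-flavored pseudocode (the rewrite names below are hypothetical placeholders, not the actual entries of chemistry.js):

# hypothetical rewrite tables; see chemistry.js for the real data
CHEMLAMBDABARE = {"A-L": "beta-like annihilation", "DIST": "duplication rewrites"}
DICMOD         = {"FI-FOE": "dirIC variant"}
CHEMLAMBDAEND  = {"FI-FOE": "chemlambda variant"}

chemlambda = {**CHEMLAMBDABARE, **CHEMLAMBDAEND}  # common trunk + chemlambda branch
dirIC      = {**CHEMLAMBDABARE, **DICMOD}         # common trunk + dirIC branch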

The pleasant (for some) feature of dirIC is that it does not have conflicting rewrites. (This leads to fewer interesting alife phenomena.)

At the end of the page you can play with the translation. The button IC>chemlambda transforms any IC graph (btw, there is a chemistry IC as well :)) into a dirIC graph or a chemlambda graph, they are the same. You can choose how to reduce the translation by changing between chemistries (there is a “change” button which toggles between chemlambda and dirIC).

An interesting thing happens if you play with one of the IC quines, namely 4_IC_5AB718246309

https://mbuliga.github.io/quinegraphs/ic-vs-chem.html#4_IC_5AB718246309

In IC, this is a quine. Translated into dirIC, it is no longer a quine. Why? Does that mean that the translation is bad somehow? No: because IC and dirIC have no conflicting rewrites, if you input an IC graph which can be reduced to a state where no more reductions are possible, then this final state is unique. If you translate that IC graph into dirIC then you shall always reach a unique final state, which is the translation of the final state of the IC graph.

But in the case of an IC quine there is no final state, so confluence does not tell us anything. In particular, the translation from IC to dirIC gives two nodes for one node, and an IC reduction corresponds to two dirIC reductions in parallel. In the absence of a unique final state, anything can happen if we reduce in dirIC, and the rewrites no longer correspond to rewrites in IC (because we don't force the system to always do pairs of parallel rewrites in order to preserve the correspondence of rewrites).

Convince yourself: use the IC>chemlambda button to turn the IC quine into a dirIC graph, use the “change” button to select the dirIC chemistry and hit “start”.

OK, but that’s not what I wanted to tell you in this post. I want to present you a task.

As you see, the IC nodes are pairs of chemlambda nodes: Delta is a pair of A (application) and L (lambda) nodes, Gamma is a pair of FI (fanin) and FOE (external FO) nodes. Or, the A and L nodes in chemlambda and dirIC are involved in the beta rewrite: they annihilate each other. In chemlambda and dirIC the same happens for the pair FI and FOE. Conveniently, this implies the Delta-Delta and the Gamma-Gamma rewrites from IC, where there are two kinds of annihilations.

But if you look at Pure See, then there is a third pair of nodes which annihilate each other: D (dilation) and FOX (a sort of FO node…).

So we could use this third pair to construct an IC system with 3 nodes, say Delta, Gamma and Phi. The nodes and rewrites for Delta and Gamma will be the same. Because Phi is built from D and FOX, then we also have an annihilation rewrite Phi-Phi, which will be like Gamma-Gamma.

This leaves us with the rewrites which multiply nodes, or DIST rewrites in the parlance of chemlambda. We have Gamma-Delta as in IC, but there are more, namely: Delta-Phi and Gamma-Phi.

The task is to deduce them from a completion of dirIC with the nodes D and FOX which is compatible with Pure See. This is what is behind the “kaleidoscope” project, or “anharmonic lambda calculus“.

The problem is that you have to choose among many possible DIST rewrites. There is a help page which gives you these possible rewrites. You should find a completion of dirIC which is as symmetric as possible.

Or you could change the pairing (A-L, FI-FOE, D-FOX) and produce new ones, say A-FOE, FI-FOX, D-L, which would then eventually give rules other than those of IC. Say, Delta-Delta would become a DIST rewrite and Delta-Gamma an annihilation rewrite, etc.

For chemlambda there is such a completion, called kali, which is only one of the many possible.

So what is your proposal for kali IC, an extension of dirIC?

Propose it as you like, preferably as an issue at the quinegraphs repository. Or by any other way.

All in all, a working kali IC will give a 3-node IC version, both of them most likely interesting. Why not yours?

Asemantic computing draft

It is here. As this is a draft, it probably has parts in need of rewriting. Or of criticism. Some will certainly dislike it 🙂

Also archived.

UPDATE: a related subject, not touched in the draft, is composability bloat. Composability of computations is often presented as desirable, and it is a feature of an easy-to-program system. However, in Nature there are no examples of composable computations. It is always about the context. In programming, even if at small scale composability is desirable, at large scale it produces bloated computations.

Composability should not be enforced at the fundamental level; instead it should be a welcome side effect of a polite manner of treating the participants in a distributed computation.

Unrelated: you may ask why I use telegra.ph? Because it is pure freedom 🙂 Previous uses: (internet of smells) (archived) (home remodeling) (archived) the chemical sneakernet stories.

2021, a spiral around the sink

Shall we stay home until 2022?

2020 was like a ship which navigates through a strong current. There was no way to get out, the only direction was forward. The only choice was speed. New ways to work, new things to do. More of them.

And then, in 2021 we seem to understand that we are in a vortex. We advance only to go back, to go around.

It looked like in 2021 we would find the escape. Instead, it is not even more of the same, it is turning around.

In the Western world, with the exception of Israel in my opinion, we are stalled, spiralling around the sink. Last to have vaccines, we can’t produce enough of them. We are masters of propaganda instead.

Why not be kept in the spiral one more year? In 2022 there will be other elections. If the pandemic ends then the problems we had before will appear in their true stature. They are huge. We are done! There is no future in the race for more inequality, for unethical laws (like copyright), for more power to corporations, for more objects and no discussions. We are past reality, we deny reality. In the West there is no future for the old ways. All this will blow out when the pressure of the pandemic goes down.

So let's go very, very slowly with those vaccines. Let's be afraid of the new mutations, which exist and are a real reason for concern. But let's favor them by not being stern enough.

Let's keep the lid on the pot, maybe the pressure will not build. Maybe we shall use the bad examples, the extreme behaviours of deluded persons, as the reason to cancel more, to censor more.

It won't work like that, I'm sure, but the people in power have the same kind of short-term thinking which rotted our societies. They will try, and I am afraid that 2021 will be spent in a spiral, around the sink.

A statement of Indian academics on the ongoing petition to shut down Sci-Hub in India

If you care about Open Access then support our academic fellows from India who urge the Delhi High Court to rule against the big corporate publishers' petition to block Sci-Hub in India.

Don't believe me. Read their statement and tell me if they have a point. An ethical point.

Read also the recent Interview With Sci-Hub’s Alexandra Elbakyan on the Delhi HC Case.

Previously here: Breakthrough Science Society statement: Make knowledge accessible to all. No to banning Sci-Hub and LibGen

[The following Statement is reproduced from the source Sci-Hub Case: Academics Urge Court To Rule Against ‘Extortionate Practices’ and put here as is, for the benefit of the readers of this blog]

The ongoing attempt by an international conglomerate of highly profitable publishers to shut down Sci-Hub and LibGen is an assault on the ability of scholars and students to access knowledge.

We urge the Delhi high court to consider that Sci-Hub and Libgen have thrown open the world of knowledge and helped to fire the imagination of students in India. Universities in the global south have much fewer resources than their counterparts in the north. Sci-Hub and Libgen have played a vital part in enabling Indian universities to keep up with cutting edge research the world over. Open access to scholarly knowledge points the way to the future.

We also urge the courts to recognise that scholarly publications are the result of research that is not funded by private publishers. Moreover, crucial components of the publishing process – peer review and editing – are performed for free by scholars on the understanding that they are helping to further the cause of rigorous knowledge production.

Yet, publication houses charge as much as $30-$50 per article and $2,000 to $35,000 per journal title. Based on such exorbitant pricing, big academic publishers make large profits. In 2019, for example, Elsevier’s operating profits were $1.3 billion and Wiley’s were $338 million. Elsevier’s profit margins amount to an eye-watering 35-40%.

Given this, leading US and European universities are currently refusing to subscribe to Elsevier for their extortionate practices. Increasingly scholars, government funders, and large foundations have felt that these conglomerates are holding back scientific progress.

Websites like Sci-Hub and LibGen, on the other hand, widen access and scientific progress. About 66% of respondents of a survey at top-tier Indian universities said that they are highly dependent on Sci-Hub. During the pandemic, this has risen to 77%. A 2016 analysis found that Indian scholars downloaded 3.4 million papers over a six-month period from Sci-Hub. If these were downloaded legally, it would have cost $100-125 million. This is more than half of what all research institutes in India cumulatively spend on subscriptions to paywalled scholarly literature.

To find out more about the information cited in the petition, visit:

  1. https://www.nature.com/articles/d41586-020-02708-4
  2. https://www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone
  3. https://spicyip.com/2021/01/the-sci-hub-case-why-it-is-time-to-stop-favouring-the-doctrinal-approach-to-law-over-an-empirical-one.html
  4. https://www.bloomberg.com/opinion/articles/2020-06-30/covid-19-shows-scientific-journals-like-elsevier-need-to-open-up
  5. https://www.theatlantic.com/technology/archive/2016/02/the-research-pirates-of-the-dark-web/461829/
  6. https://libraries.mit.edu/scholarly/publishing/elsevier-fact-sheet/
  7. https://newsroom.wiley.com/press-releases/press-release-details/2020/Wiley-Reports-Fourth-Quarter-and-Fiscal-Year-2020-Results/
  8. https://www.relx.com/investors/annual-reports/2019

As participants in the global community of scholarship, we urge the publishers to withdraw the lawsuit and the court to stand against the extortionate practices of publishing companies who are profiting off the unpaid labor of the global scholarly community and impeding the free-flow of knowledge and vital new discoveries.

Parenthesis hell, nonlocality of parsers: semantics delusions

A binary tree can be seen as a term built from an alphabet Leaf of leaves and the rules:

  • a in Leaf is a tree
  • if A, B are trees then (A.B) is a tree

When given a family of binary operations, a term built from these operations is a tree (in the sense just explained) with colored nodes (just replace the “.” with the name of the operation as a color).

When we use this definition of a tree as a term, we need a parser to transform this term into a tree as a trivalent graph. Whatever parser we use, it has to make repeated nonlocal passes through the term, in order to parse the parentheses. Here nonlocal means not a priori bounded.
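
To see the nonlocality concretely, here is a toy parser sketch (my own illustration): the stack grows with the nesting depth, and closing a parenthesis uses information found arbitrarily far back in the string.

def parse(term):
    # parse terms like "(a.(b.c))" into nested tuples
    stack = [[]]
    token = ""
    for ch in term:
        if ch == "(":
            stack.append([])
        elif ch in ".)":
            if token:
                stack[-1].append(token)
                token = ""
            if ch == ")":
                sub = stack.pop()          # the matching "(" may be arbitrarily far away
                stack[-1].append(tuple(sub))
        else:
            token += ch
    if token:
        stack[-1].append(token)
    return stack[0][0]

print(parse("(a.(b.c))"))   # ('a', ('b', 'c'))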

Or, this nonlocality makes the tree term unfit for decentralized computing. If we used instead the tree graph, then this nonlocality barrier would not exist.

Then the question seems to be: how to encode (as a term) the tree graph? Any edge-naming schema would induce another problem: potentially, somewhere else, somebody uses the same name for another edge of the graph. A randomness solution (use a big enough random number as a name) would make this naming clash improbable. Another solution would be to treat these names as money, as suggested in the hapax project.
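
In practice the randomness solution is one line (a sketch):

import secrets

def fresh_edge_name():
    # 128 random bits: a clash between independently generated names
    # is astronomically improbable
    return secrets.token_hex(16)

assert fresh_edge_name() != fresh_edge_name()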

Nature has an alternative for the parenthesis hell: chemistry.

But computers don’t have this alternative, unless we build new ones where magical but physical trivalent nodes are used instead of bits.

An advantage of the use of tree terms and parentheses is semantics, in the sense that any tree term has a meaning coming from a local algorithm of decoration of terms from decorations of the subterms.

It seems that we live as if there is no alternative to terms, because of this.

But this is a delusion: semantics matters only to humans, because it allows us to reason about these terms. Our computers don't need semantics to function; we historically used semantics to (build computers and to) program on them. But we, humans, are lousy at keeping track of parentheses. Witness the superhuman and therefore unused or unpopular capacities of functional languages.

The concrete reason for my rant is that I write (write! not program) an article in latex where I use reasonings involving trees of depth 4. On paper, with my pen, I can easily draw these trees and the reasoning is fluid and natural, but in latex? I could use subscripts and superscripts (not legible enough!), or I could use tikz to draw the trees (not readable in the .tex file by a human), or I could embed pictures of trees (not the same as the tree graphs themselves). Or I could use the term trees, unreadable by humans (and in particular by interested readers).

UPDATE: after some fiddling I hope this music-like notation (first line) is bearable in latex-produced documents, compared say with the third line; what do you think?

Alternatively, and probably in the future when I’ll have a parser for Pure See, I’ll just write programs in that language instead of proofs.

But you see, it’s so easy to make sense with pen and paper and eventually impossible in the present computers and languages. Even in Pure See, we write essentially terms and under the hood there is a nonlocal parser.

Sometime in the future, when we shall program in chemistry, we shall no longer have this semantic delusion.

Another speculation is that if our ancestors were squirrels, then probably we would write naturally on trees.

Space fabrics in the chemlambda collection

Although the chemlambda project has a different use of graphs compared with the Wolfram Physics project, we can produce space fabrics which look like some of the notable universes in the WP project.

Here are some of them (source: the chemlambda collection):

https://chemlambda.github.io/collection.html#135

https://chemlambda.github.io/collection.html#142

https://chemlambda.github.io/collection.html#175

https://chemlambda.github.io/collection.html#184

https://chemlambda.github.io/collection.html#216

https://chemlambda.github.io/collection.html#182

Maps are intrinsic, charts are model theoretic

Even in the commutative world of Pure See, and using that language, maps are intrinsic and charts are model theoretic.

With only the 6 instructions available, like

from a see b as c;

we can build maps of finite terms. A finite term is like a place which can be attained. We build finite terms with only

  • variables a, b, c, …
  • if (from A see B as C) and A, B are finite then C is finite,
  • if (from A see B as D; from A see C as E; from D as E see F) and A, B, C are finite then F is finite (called the difference from C to B, based at A).

If it weren't weird, in Pure See we could define finite terms as a sort of wrong lambda terms, built from the abstraction operation

see B from A as λA.B

and from the application operation

as B from A see BA

as

  • either variables a,b,c,…
  • or λA.B with A, B finite,
  • or λA.(BC) with A, B, C finite.

There would be syntactic equality of terms (not only finite ones), up to rewrites corresponding to R1 and R2 (in this Pure See version), which I don’t elaborate on. I would not use emergent rewrites, nor SHUFFLE.

Then there is the interesting equivalence of finite terms defined as A = B if the term C defined by (from A as B see C) is finite. Or, in the wrong lambda terms notation, if AB is finite (up to the R1, R2 rewrites).

This equivalence of finite terms would deserve the name “approximate equality of places”.

A map is then made from the atomic difference: it associates to a finite term a “proof of attainment”, namely to a finite F it associates

map(F) = (A,B,C)

such that

(from A see B as D; from A see C as E; from D as E see F)

Therefore to a finite term F is associated the information that “based at A, you find F as the difference between C and B”.

Think about it. A map is a collection of instructions: from here, take this road to go there. While instructions are very useful to move around, you certainly cannot know if two different strings of instructions lead you to the same place.

As when we steer a ship on the open sea, from time to time we need to take a fix of our position.

That is why we use charts.

A chart is a model of Pure See (or more general). Models of Pure See are linear spaces and a chart associates to variables points in the linear space and to

from a see b as c

the fact that c is the homothety, or dilation, based at a, applied to b (with a generic or fixed scale parameter, depending on the model). All Pure See instructions correspond to dilations in the way explained in the first link.
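
For example, in the simplest model, where the linear space is some R^n, a chart is just the dilation formula (a minimal sketch, with eps taken as a fixed scale parameter):

import numpy as np

def see(a, b, eps=0.1):
    # chart of "from a see b as c": c is the dilation based at a, applied to b
    return a + eps * (b - a)

a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
c = see(a, b)   # the point which the chart records for c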

With a chart available, we can say that A = B model theoretically if

chart(A) = chart(B) + O(ε)

Mind that the syntactic A = B implies the model theoretic A = B but not the other way around, outside finite terms as defined, because the syntactic A = B implies the stronger

chart(A) = chart(B) + ε O(ε)

where ε is the scale parameter. In the Pure See world finite terms are polynomial in ε, therefore syntactic A = B is equivalent with model theoretic A = B. But just consider an externally given function from finite terms to finite terms; then

f(A) = f(λA.B) syntactically for any finite A,B

is equivalent with f derivable.

Transcript of “Zipper logic revisited” talk

This is the transcript of the talk from September 2020, which I gave at the Quantum Topology Seminar organized by Louis Kauffman.

There is a video of the talk and a github repository with photos of slides.

My problem is whether we can compute with tangles and the R moves. I am going to tell you where this problem comes from, from my point of view, why it is different from the usual way of using tangles in computation, and then I'm going to propose an improvement of the thing called zipper logic, namely a way to do universal computation using tangles and zippers.

The problem is to COMPUTE with Reidemeister moves. The problem is to use a graph rewrite system which contains the tangle diagrams and as graph rewrites the R moves.

Can we do universal computation with this?

Where does this problem come from?

This is not the usual way to do computation with tangles. The usual way is that we have a circuit which we represent as a tangle, a knot diagram, where the crossings and maybe caps and cups are operators which satisfy the algebraic equivalent of the R moves. The circuit represents a computation. When we have two circuits, then we can say that they represent equivalent computations when we can turn one into another by using R moves.

In a quantum computation we have preparation (cups) and detectors (caps) (Louis).

R moves transform a computation into another. Example with teleportation.

R moves don’t compute, they are used to prove that 2 computations are equivalent. This is not what I’m after.

The source of interest: emergent algebras. An e.a. is a combination of algebraic and topological information…

[See https://mbuliga.github.io/colin/colin.pdf as a better source for a short description of emergent algebras]

We can represent graphically the axioms. This is the configuration which gives the approximate sum. It says that as epsilon goes to 0 you obtain in the limit some gate which has 3 inputs and 3 outputs.

We say that the sum is an emergent object, it emerges from the limit. We can also define in the same way differentiability.

We can define not only emergent objects, but also emergent properties.

Here you see an example: we use the R2 rewrites and passage to the limit to prove that the addition is commutative.

The moral is: you pass to the limit here and there, then this becomes by a passage to the limit the associativity of the operation.

Another example: if you define a new crossing (relative crossing) then you can pass to the limit and you can prove that, based on e.a. axioms. Moreover you can prove, by using only R1 and R2 and passage to the limit, that the R3 emerges from R1, R2 and a passage to the limit, for the relative crossings.

With e.a. we can do differential calculus. We use only R1, R2 and a passage to the limit. It is a differential calculus which is not commutative.

There are interesting classes of e.a.:

  • linear e.a. correspond to calculus on conical groups (Carnot groups, explanations)
  • commutative e.a. which satisfy SHUFFLE (calculus in topological vector spaces) In this class you can do any computation (Pure See https://mbuliga.github.io/quinegraphs/puresee.html )

What I want to know is: can you do universal computation in the linear case? This corresponds to the initial problem.

What does universal computation mean? There are many definitions but, among them, 3 ways to define what computation means. They are equivalent in a precise sense.

Lambda calculus is a term rewrite system (follows definition of lambda calculus).

Turing machine is an automaton (follows definition of TM).

Lafont's Interaction Combinators is a graph rewrite system, where you use graphs with two types of 3-valent nodes and one type of 1-valent node. Explanation of rewrites. These are port graphs.

Lafont proves that the grs is universal because he can implement a TM, so it has to be universal. There is a lot of work on implementing LC in a grs, but the reality is that this is extremely difficult. There are solutions, but the solutions are not what we want: you can transform a lambda term into a graph and then reduce it with the grs of Lafont, say, and then you can decorate the edges of the graph so that you can retrieve the result of the computation. But these decorations are non-local. (Explanation of local)

We have 3 definitions of what computation means, by 3 different models, which are equivalent only if you add supplementary hypotheses. For me IC is the most important one, but we don't know yet how to compute with IC.

Let me reformulate the problem of if we can compute with R moves in this way.

Notation discussion. We can associate a knot quandle to a knot diagram, simply by naming the arcs; then we get a presentation of a quandle. The presentation of a quandle is invariant wrt the permutation of relations or the renaming of the arcs. There is a problem, for example when an arc passes over in two crossings: we have a hidden fanout. The solution is to use a different notation and FIN (fanin) and FO (fanout) nodes. This turns the presentation into a mol file.

Can we compute with that?

Theorem: If there is a computable parser from lc to knot diagrams, such that there is a term sent to a diagram of the unknot, then all lambda terms are sent to the unknot.

We can compute with knot diagrams, but in a stupid way: if we use diagrams as a notational device. Example with the knot TM (flattened unknot, you may add the variations with the head near the tape, or the SKI calculus, to argue that it can be done in a local way. Develop this.)

Conclusion: you need a little something more, by reconnecting the arcs.

Let’s pass to ZSS

For this I introduce zippers. The idea is that a zipper is a thing which has two non-identical parts, so it’s made of two half-zippers, which are not identical.

We can use zippers like this: we have a tangle 1, a zipper, then a tangle 2. Explanation of the zip rewrite.
This move transforms zippers into nothing.

The move smash creates zippers. Explanation of smash.

Then we have a slip move.

Here is the new proposal for ZL. The initial proposal used tangles with zippers, but there were also fanins and fanout nodes.

The new proposal is that we have 4 kinds of 1-valent nodes, the teeth of the zippers, then we have 4 kinds of half-zippers, and then we have two kinds of crossings.

Explanation of the new proposal.

Theorem: ZSS system turns out to be able to implement dirIC, so it is universal.

Explanation of dirIC: https://mbuliga.github.io/quinegraphs/ic-vs-chem.html#icvschem

Chorasimilarity 10 years anniversary

I missed that: the chorasimilarity (name and) blog had its 10-year anniversary on Jan 2nd: link to first post.

UPDATE: and this is the 801st post. There are more than 800 posts written, but I tend to trash the personal posts after a while. What remains appears to be read many years after the writing date. Maybe this happens because there is value in these posts, or by the time circularity phenomenon, which says that because of strong intuition, I start to explain something from the conclusion (far in the future) to the beginning (in the near past). As the future of a past post is in the past of a future post, you get time circularity.

As concerns the advancement and sometimes curious time circularities, read the recent Logic in the infinitesimal place.

I folded and got my first locked smartphone

I want to keep it in pristine, normies standard state. Makes me say yikes several times every time I install something.

What to put on it? I put telegram, where you can find me as xorasimilarity.

I also put revolut, what else?

Warning 🙂 from personal experience, every time I was a very late adopter of anything, it crashed in a short time. So locked smartphones, beware!

UPDATE: I know I am comical, but for the first time I listened to music on spotify (or from a smartphone, more generally). I mean, listened to some music I know well and respect. Never in my life, including the dark years of living under a dictatorial regime, never ever have I listened to such a crappy, pamperized collection of crap. Linux and free software preserved me from such a shame, to be a protected slave. Many probably don't know that even their ears are protected from wrongsound, not only from wrongthought, by the mighty corporations who, when you ask them for lobster and champagne, give you a mcdonalds menu. The same who make you, rebel, click on “no, thanks”. What are the thanks for? OMG, in the world which is not free, things are far worse already than I thought.

Example of Google and Duckduckgo censoring

Now everybody talks about how the corporate media companies practice overt censorship, be it Google, Facebook, Twitter or the delicious Gamestop and r/WallStreetBets banning story at Discord. Continued by Robinhood is limiting purchasing of stock, (archived). What’s that about? Read the Open Letter (archived).

UPDATE: And now Google deleted over 100000 negative reviews of Robinhood. Disgusting.

The internet has existed for 30 years. The world's greatest fortunes formed in relation to the net. Still, the world's organization is not ready for it.

More than ever it is clear, as Carlin said, that it's a big club and you ain't in it.

As many others, I became aware of this some years ago, especially in relation to Google and Twitter (I never was a real FB user), and in relation to the fact that these companies can never be trusted to handle well the precious scientific information. Although I had, for a time, entertained the illusion that such corps can afford to engage in the preservation of the tiny (bitwise) amount of scientific information, for the sake of humanity.

When I criticised these companies, in the past, it was always from this point of view (which I consider very important in the long term). These behemoths not only censor individuals, say by controlling the info bubble an individual has access to. They do much more damaging work: they classify people according to random criteria and then they censor the visibility across these arbitrary boundaries. (So they build infinite walls, not only designed bubbles.) Suppose for example that a Somebody invents new bike models, but Somebody sometimes posts his criticism of a big corp. The effect is not only that Somebody's criticism is censored, but also that Somebody's bike models are very hard to find by anybody on the other side of the wall.

What about decentralized solutions to the centralization of power problem? Tough luck, not one of the corporations will show them to everybody.

OK, enough generalities. I noticed in October 2020 that Google search results regarding the chorasimilarity blog changed abruptly. I looked at the (fake) alternative Duckduckgo and noticed that the same happened. I then updated my About page with advice on how to check this.

Today I archived a google search page which shows that Google has my recent posts, but they only show them if you search much harder; otherwise almost everything from chorasimilarity disappeared.

Incidentally, this archived search page is almost all about criticism of Google. If you go in there then you'll find much older examples of Google censoring.

I’m not an important person, so why would you care? By now, it is more and more obvious why everybody should care.

Objective un-reality: quantum automata and decentralized computing

UPDATE: Questions? Just ask.

In his book The Cellular Automaton Interpretation of Quantum Mechanics, Gerard 't Hooft proposes a class of toy models of physics which are based on cellular automata and which are somewhere in between classical and quantum mechanics.

The idea is that for a synchronous reversible cellular automaton we may arrange that the update of the state of the automaton is obtained from a permutation of the previous state. As a permutation is a particular example of a unitary transformation, we can deduce formally a Hamiltonian from the chain of permutations which makes the evolution of the automaton, with the property that it makes sense also for continuous time variables.
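
Here is a minimal numpy/scipy sketch of this formal step (my own illustration, not from the book): take one step of a cyclic shift on 3 cells as a permutation matrix, extract a Hamiltonian as H = i log(P), and check that exp(-iH) reproduces the update.

import numpy as np
from scipy.linalg import expm, logm

P = np.eye(3)[[1, 2, 0]].astype(complex)   # permutation matrix: one step of a cyclic shift
H = 1j * logm(P)                           # formal Hamiltonian
assert np.allclose(H, H.conj().T)          # H is Hermitian
assert np.allclose(expm(-1j * H), P)       # exp(-iH) is the one-step update

Once H exists, exp(-itH) makes sense for any real t, which is the continuous-time extension mentioned above.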

This idea is new for me, so I learn more about it as time passes. Therefore the choice of the most relevant references is more a reflection of my learning than an authoritative account of the state of this research field.

Previously I tried to study the work of Pablo Arrighi on quantum cellular automata. See An Overview of Quantum Cellular Automata, which is (to my limited knowledge) a development of 't Hooft's proposal.

Suppose that I want to know what an asynchronous graph rewriting automaton looks like from this quantum point of view. Then I could take inspiration from Arrighi, Martiel, Quantum Causal Graph Dynamics, or I could take inspiration from a particular graph rewriting (from knot theory) as described by Kauffman, Lomonaco, Quantizing Knots and Beyond, or other articles about “mosaic quantum knots” such as Quantum Knots and Knotted Zeros.

Be it in 't Hooft's cogwheel models, Arrighi's quantum cellular automata or Kauffman and Lomonaco's mosaic quantum knots, there always is a global unitary (for example permutation) transformation. This is problematic because the same rewrite type, which acts locally, has to be associated with lots of global unitaries, depending on where the rewrite is done.

It does not look physical to me. Or not geometrical, if you like.

Why do we need a global unitary transformation?

And then, there is the more subtle time serialization, which first makes us declare that we always consider a “single thread” time evolution and later we impose as a constraint that the Hamiltonian or other constructs have to lead to the same end result for any equivalent serialization.

I notice that exactly the same problem occurs when we speak about decentralized computing.

Which makes me believe that there is something cultural going on here.

In quantum mechanics we need to know what is real and in decentralized computing we need some form of truth to save us from race conditions.

It is then, perhaps, useful to be clear about what is real and what is objective. I insisted on this historical solution, with roots in Ancient Greece and in Nordic cultures as well.

Reality is a discussion, or a trial, which happens in a public place, like a thingstead. Reality is a trial which uses evidence. The past agreements are the evidence and they are objective.

In quantum mechanics the measurement is a trial and the evidence after the trial is prescribed by the Born rule. That we know.

In quantum automata and in decentralized computing we only have a limited form of reality, the one-to-one exchanges of messages and the problem is that we don’t have a clear way to obtain evidence. Here evidence is the global state of the system, which might not exist. For decentralized computing the trial rules, the rules of discussions, are too simple to cover all the needs of a real decentralized computing.

In all approaches we lack the thingstead. We either pass to the existence of a global state and of a global serialized transformation, or we require some form of a proof of objectivity, like in the blockchain technologies.

In my Artificial physics for artificial chemistry talk, I show how one can transform chemlambda into a 't Hooft-like automaton, see also Chemlambda and hapax. Depending on the pattern detected, there is a unitary and local transformation (actually a pair of permutations). The evolution of the system is then not constrained to be described by a global unitary.

It lacks as well a thingstead, because it has instead a mechanism based on tokens, like in the project hapax, or “made of money”. We can imagine a rewrite as a sort of Feynman diagram, where a photon (the token), indistinguishable from others, interacts with the system, which changes (locally) the state and perhaps expels another particle (token). The thingstead is the diagram here, in a sense. In terms of decentralized computing there is also an interpretation possible, where we attribute property to tokens and local states, and because the input token becomes part of the state (after the rewrite), everything is made of money (tokens). We need here at least 6 one-to-one communications to implement this, therefore a thingstead begins to become visible.

We can conclude by saying that in the classical approaches we have models of objective un-reality. There is a cultural aversion to the discussion. There is evidence, therefore we may speak about the objective, but the evidence does not come from a discussion. Objective, but not real. A sign of the times.

Logic in the infinitesimal place

This is the most extreme form of time circularity I witnessed. As I contemplate the page where I put together the anatomy of an infinitesimal emergent beta rewrite, I am amazed to see how edges turn into differences and how the beta rewrite is nothing but a difference which makes two differences.

Or, just look at this almost 10-year-old post: Entering “chora”, the infinitesimal place. It was written as a support for the article Computing with space: a tangle formalism of chora and difference.

This whole blog (and its name) turns around the effort to understand computing with space.

Recently I finally understood that the limitation to tangle diagrams is (only a bit) misleading and I succeeded in the proof that (at least the multiplicative part of linear) logic is in a precise sense emergent from the geometric, algebraic or analytic content of emergent algebras, in the particular commutative case. Or, in the general case this emergence can be justified only infinitesimally, i.e. in the same sense as the Reidemeister 3 rewrite emerges infinitesimally from the other two (an old subject here).

Just put everything inside a chora and repeat the emergence of the beta rewrite. It works great; it involves some curvature-correcting terms, which is as expected.

But in the end, after the passage to the limit, everything converges and, surprise! arrows are differences (in the emergent algebra sense).

What to make of these? An interactive version of the theory, of course!

Or, by the time circularity phenomenon manifest in this blog, this is clearly reminiscent of this quote from Bateson

Or we may spell the matter out and say that at every step, as a difference is transformed and propagated along its pathways, the embodiment of the difference before the step is a “territory” of which the embodiment after the step is a “map.” The map-territory relation obtains at every step.

which I took now from the old “entering chora” post. Read also the supplementary material (section 9) in the “computing with space” article, for more.

So, while I am still amazed about the prescience of old, valuable thinkers, I am even more amazed because: (1) the rigorous math I do works, (2) how did they find out, without the math?

I hope that this year 2021 will be serene enough so that I will be permitted to write a clear, step by step account of the past research and a similarly clear description about how you get to the point where logic in the infinitesimal place is physics.

Fun with the non-commutative geometric series (II)

I was doing a lot of computations of particular examples lately, because I want to understand better the quantities (curvature and co-linearity, see this) which have to be added in the non-commutative version of the Pure See emergent rewrites. These examples are fun by themselves and useful to do. Here is one computation which may help in understanding a previous post.

[Today is also the 1 year anniversary of the posting of the salvaged chemlambda collection of animations, see this post from a year ago for more fun.]

In the post Fun with the non-commutative geometric series I wrote a proof of the convergence of an a priori non-commutative version of the geometric series. Here “non-commutative series” means the limit of a sequence of finite sums, where each sum is done with respect to a non-commutative addition operation. Part of the fun is that the proof is new even in the commutative case. But what about non-commutative cases, can we compute some examples?

One non-commutative case would be the Heisenberg group, but at closer inspection we see that we can reduce it to a commutative series computation. (We say, unpublished yet, that it has “commutative emergent numbers”, and there is a reason why the H group is like this, to be explained.)

Here is a true non-commutative example; I don't know if the expression of the convergence of the geometric series is straightforward or not in this particular case. Please tell me if you have alternative proofs of the final result. Here it is.

We are in the group N of real n \times n upper triangular matrices, with 1’s on the diagonal. This is a subgroup of the real linear group.

For any scalar e \in (0,+\infty) we define a diagonal matrix

\mathbf{e}_{ij} = e^{i} \delta_{ij}

(here \delta_{ij} = 1 if i=j, otherwise \delta_{ij} = 0; please don't confuse this with the dilation, denoted also with the letter \delta, which is introduced in the following).

Conjugation with diagonal matrices is an automorphism of N, in particular for any e \in (0,+\infty) and any \mathbf{x} \in N we have

\mathbf{e}^{-1} \mathbf{x} \mathbf{e} \in N

This allows us to define the dilation:

\delta^{\mathbf{x}}_{e} \mathbf{y} = \mathbf{x} \mathbf{e}^{-1} \mathbf{x}^{-1} \mathbf{y} \mathbf{e}

Prove that:

(a) this defines an emergent algebra over N

(b) this is a linear emergent algebra.

We can define now the geometric series with respect to the “addition” operation which is simply the matrix multiplication in the group N: (here I is the identity matrix, the neutral element of the group)

\sum_{k=0}^{\infty} \delta_{e^{k}}^{I} \mathbf{x}

and by the previous post, it does converge if e \in (0,1).

Let’s compute the finite sums. First notice that

\delta^{\mathbf{x}}_{e} \mathbf{y} = [\mathbf{x}, \mathbf{e}^{-1}]  \mathbf{e}^{-1} \mathbf{y} \mathbf{e}

where the square bracket denotes the commutator.

We then have:

\sum_{k=0}^{m} \delta_{e^{k}}^{I} \mathbf{x} = \mathbf{x}\mathbf{e}^{-1} \mathbf{x} \mathbf{e} \mathbf{e}^{-2} \mathbf{x} \mathbf{e}^{2} ... \mathbf{e}^{-m} \mathbf{x} \mathbf{e}^{m}

or equivalently:

\sum_{k=0}^{m} \delta_{e^{k}}^{I} \mathbf{x} = (\mathbf{x} \mathbf{e}^{-1})^{m} \mathbf{x} \mathbf{e}^{m}

All in all, our convergence of the geometric series result says that:

if e \in (0,1) and

[\mathbf{y}, \mathbf{e}^{-1}] = \mathbf{x}

then

\mathbf{y} = \lim_{m \rightarrow \infty}  (\mathbf{x} \mathbf{e}^{-1})^{m} \mathbf{x} \mathbf{e}^{m}.

Remark that for n=2, i.e. for the case of 2 \times 2 upper triangular matrices with 1’s on the diagonal, the convergence result is the classical geometric series convergence. Compute by hand the partial sums in this case to convince yourself that indeed the usual geometric (partial) sum appears in the (1,2) position in the matrices.
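
If you prefer to let the computer do it, here is a quick numpy check (my own sketch) for n=2, with the entry 1 in the (1,2) slot of \mathbf{x} and e=1/2:

import numpy as np

e = 0.5
x = np.array([[1.0, 1.0],
              [0.0, 1.0]])
E = np.diag([e, e**2])      # the diagonal matrix with entries e^i
m = 20
partial = (np.linalg.matrix_power(x @ np.linalg.inv(E), m)
           @ x @ np.linalg.matrix_power(E, m))
# the (1,2) entry is the usual geometric partial sum 1 + e + ... + e^m
assert np.isclose(partial[0, 1], sum(e**k for k in range(m + 1)))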

Enemies of knowledge: Twitter suspends Sci-hub account

On Hacker News there is a discussion of the Torrentfreak article Sci-Hub Founder Criticises Sudden Twitter Ban Over “Counterfeit” Content. I learned from this article that Twitter suspended the Sci-Hub account.

Twitter rules are rules of a corporation. They don't have any moral precedence over human rights. In a previous post I collected links and text about

The Universal Declaration of Human Rights, adopted in 1948, formulates in the Article 19 the freedom of opinion and expression:“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

Now I find that the proposal

Any provider of user created content is under the same constraints concerning free speech as the state is by the First Amendment.

is actually weak. The truth is that corporate media performs a cultural genocide at a global scale, after they started by providing us a better medium of communication.

It was a trap: promise a better communication medium, replace the commons by a private version, then apply commercial rules.

I am not a lawyer, but on moral grounds they tricked us into believing we are using a common place of discussion. They got rich and they turned into a reality making machine.

Or, now they just tried to erase the biggest collection of free scientific works. They side with Elsevier and the others' lawsuit in India (see also my previous post Breakthrough Science Society statement: Make knowledge accessible to all. No to banning Sci-Hub and LibGen).

Corporate media is now the enemy of the research people.

Corporate media is the enemy of knowledge.

There is no excuse, no way to deflect that.

It is very sad not only for Indian people, it is also extremely sad for their American fellow researchers. And for the image Twitter projects to the world, in their names, by trying to burn the Library of Alexandra.

Breakthrough Science Society statement: Make knowledge accessible to all. No to banning Sci-Hub and LibGen

UPDATE: Read An Interview With Sci-Hub’s Alexandra Elbakyan on the Delhi HC Case (or archived version).

The Indian Breakthrough Science Society made an online petition to protest against, quote:

“We are shocked to learn that three academic publishers — Elsevier, Wiley, and the American Chemical Society (ACS) — have filed a suit in the Delhi High Court on December 21, 2020, seeking a ban on the websites Sci-Hub and LibGen which have made academic research-related information freely available to all. Academic research cannot flourish without the free flow of information between those who produce it and those who seek it, and we strongly oppose the contention of the lawsuit.”

Read the whole petition and consider to sign it, if you think that this helps your fellow researchers and the whole society.

UPDATE: Thanks to my friend S for this link: Delhi High Court agrees to hear scientists, organisations in piracy suit by Elsevier and others against Sci-Hub, LibGen. You can see the list of the intervening scientists and their opinion about the copyright in relation to science.

After the IoT comes Gaia, version 2021

I found, via Scott Aaronson post My vaccine crackpottery: a confession, the very interesting Reverse Engineering the source code of the BioNTech/Pfizer SARS-CoV-2 Vaccine from berthub.eu.

In that post there is yet another funny link: DNA seen through the eyes of a coder. Generally, the whole post is very well written, with lots of other links to explore further.

While reading the post I had a curious sensation: I've seen something like this before 🙂 Of course, it reminded me of one of chorasimilarity's what if? posts: After the IoT comes Gaia.

Long story. After the IoT comes Gaia contains the seed of the story which later became The Internet of Smells, which in turn was used as the skeleton of the failed TED presentation Chemlambda for the people, whose slides came into my mind by looking at the Reverse Engineering… post.

UPDATE: Here's a detail with Eve's sniffer ring (from an unreleased photo; the ring is made by me)

And here’s a hypothetical image (unreleased code used) which was used in the post Data, channels, contracts, rewrites and coins

___________

There is no causal connection between the imagined world of what if? and chemlambda, and the real world where real biochemists build the real vaccine.

It is troubling though… I may say I was right and that the Internet of Smells is still in the (near, perhaps) future.

I noticed the resemblance in a previous post from last October: Pharma meets the Internet of Things (2020 update), but that post is less precise than this one as concerns the tracking of the sources.

Sometimes mathematicians are efficient armchair biochemists, or so they wish they were 🙂

The sources in the chemlambda for the people presentation are different; whatever pointed to the chemlambda collection of animations should now go to the saved collection on github. For example, this animation, which has a javascript simulation available for you to try.

Fun with the non-commutative geometric series

The geometric series is

\sum_{n=0}^{\infty} \varepsilon^{n}  = \frac{1}{1-\varepsilon}

for any \varepsilon \in (0,1).

With the notation for dilations:

\delta^{x}_{\varepsilon} y = x + \varepsilon (-x+y)

we can rephrase the geometric series as an existence result. Namely that the equation:

\delta^{S}_{\varepsilon} 0 = x

has the solution

S = \sum_{n=0}^{\infty} \left( \delta^{0}_{\varepsilon}\right)^{n} x

for any \varepsilon \in (0,1).
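
In the commutative case this reformulation is a two-line check (my own sketch):

eps, x = 0.3, 2.0

def dil(base, e, y):
    # the dilation from above: delta^base_e y = base + e(-base + y)
    return base + e * (y - base)

# here (delta^0_eps)^n x = eps^n * x, so S is the usual geometric series times x
S = sum(eps**n * x for n in range(200))
assert abs(dil(S, eps, 0.0) - x) < 1e-12   # S solves delta^S_eps 0 = x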

The non-commutative version of this fun result is given in proposition 8.4 from (journal) (pdf) (arxiv) – Infinitesimal affine geometry of metric spaces endowed with a dilatation structure, Houston Journal of Mathematics, 36, 1 (2010), 91-136

That article proposes a non-commutative affine geometry, where the usual affine spaces (over a vector space) are replaced with their non-commutative versions (over a conical group). The work uses dilation structures, which are metric versions of emergent algebras.

Let me explain in a few words what the result is in the new frame. Instead of the usual dilation we have a more general one, from which all the other algebraic structure is deduced, sometimes by passage to the limit.

One deduced, or "emergent", mathematical object is a non-commutative replacement of the usual addition operation in vector spaces. It is a non-commutative group operation.

Instead of multiplication by scalars we have application of dilations. Moreover, we have distributivity of dilations over the addition operation. This is an emergent property (i.e. we can prove it by passage to the limit) derived from a more basic one, namely that dilations are left-distributive (denoted as LIN, see the notes A problem concerning emergent algebras).

In a few words, these are like vector spaces, but with a non-commutative addition of vectors. Therefore the geometric series, as seen in the reformulation with dilations, becomes a non-commutative series, which converges just as the commutative version does.

But why?

In the article mentioned above, the proof uses the existence of a metric on the (non-commutative affine) space. We can give a simple proof without it.

Of course, the condition \varepsilon \in (0,1) has to be reformulated as: \varepsilon^{n} \rightarrow 0 as n \rightarrow \infty.

Pick a base point e, which will play the role of 0. Then we want to solve the equation in S:

\delta^{S}_{\varepsilon} e = x

for a given, arbitrary x and for an \varepsilon with the property that \varepsilon^{n} \rightarrow 0 as n \rightarrow \infty.

We want to prove that

(*) S = \sum_{n=0}^{\infty} \left( \delta^{e}_{\varepsilon}\right)^{n} x

where the sum is with respect to the non-commutative addition operation based at e. More precisely this operation is “emergent”:

\Sigma^{e} (v, w)  = \lim_{\varepsilon \rightarrow 0} \Sigma^{e}_{\varepsilon}(v,w)

where the approximate sum is

\Sigma^{e}_{\varepsilon}(v,w) = \delta^{e}_{\varepsilon^{-1}} \delta^{\delta^{e}_{\varepsilon} v}_{\varepsilon} w

Recall that the dilations satisfy the LIN property:

(LIN) \delta^{e}_{\varepsilon} \delta^{x}_{\mu} y = \delta^{\delta^{e}_{\varepsilon} x}_{\mu} \delta^{e}_{\varepsilon} y
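As a sanity check, in the commutative model \delta^{x}_{\varepsilon} y = x + \varepsilon (-x+y) the approximate sum converges, as \varepsilon \rightarrow 0, to v + w - e (the addition with base point e), and LIN holds exactly (a minimal Python sketch):

    def delta(x, eps, y):
        return x + eps * (y - x)

    # The approximate sum based at e, as defined above.
    def approx_sum(e, eps, v, w):
        return delta(e, 1.0 / eps, delta(delta(e, eps, v), eps, w))

    e, v, w = 1.0, 3.0, 5.0
    for eps in (0.1, 0.01, 0.001):
        print(eps, approx_sum(e, eps, v, w))   # tends to v + w - e = 7.0

    # LIN, checked on numbers:
    eps, mu, x, y = 0.3, 0.7, 2.0, 4.0
    lhs = delta(e, eps, delta(x, mu, y))
    rhs = delta(delta(e, eps, x), mu, delta(e, eps, y))
    print(lhs, rhs)                            # equal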

The conclusion (*) can be reformulated as: define S_{0} = x and

S_{n+1} = \Sigma^{e}(x, \delta^{e}_{\varepsilon} S_{n})

Then S = \lim_{n \rightarrow \infty} S_{n}.

But this is simple, due to the identities coming from LIN, which I invite you to prove 🙂

The first one uses the fact that, once we have defined the addition from dilations and a passage to the limit, we can prove that dilations themselves can be expressed via the addition. This gives the first identity:

\Sigma^{e}_{\varepsilon}(v,w) = \Sigma^{e}(\delta^{v}_{\varepsilon} e, w)

The second identity is easier: just use LIN, there is no passage to the limit involved.

\Sigma^{e}_{\varepsilon}(v, \delta^{e}_{\varepsilon} w)  = \delta^{v}_{\varepsilon} w
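Both identities can be checked numerically in the same commutative model, where the exact (emergent) sum based at e is v + w - e (again a sketch, not a proof):

    def delta(x, eps, y):
        return x + eps * (y - x)

    def approx_sum(e, eps, v, w):
        return delta(e, 1.0 / eps, delta(delta(e, eps, v), eps, w))

    def exact_sum(e, v, w):
        return v + w - e   # the emergent addition in the commutative model

    e, eps, v, w = 1.0, 0.3, 3.0, 5.0

    # first identity: both print 6.4
    print(approx_sum(e, eps, v, w), exact_sum(e, delta(v, eps, e), w))

    # second identity: both print 3.6
    print(approx_sum(e, eps, v, delta(e, eps, w)), delta(v, eps, w))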

With these identities, the recurrence relation of the non-commutative geometric series becomes:

S_{n+1} = \Sigma^{e}(x, \delta^{e}_{\varepsilon} S_{n}) = \Sigma^{e}_{\varepsilon}(S, \delta^{e}_{\varepsilon} S_{n}) = \delta^{S}_{\varepsilon} S_{n}

therefore

S_{n} = \left( \delta^{S}_{\varepsilon} \right)^{n} S_{0}

and the proof ends by recalling that \varepsilon^{n} \rightarrow 0 as n \rightarrow \infty, which implies

\lim_{n \rightarrow \infty} S_{n} = \lim_{n \rightarrow \infty} \delta^{S}_{\varepsilon^{n}} S_{0} = \lim_{\varepsilon \rightarrow 0} \delta^{S}_{\varepsilon} S_{0} = S

So, secretly (or infinitely recursively), the limit candidate S attracts the finite partial sums S_{n} of the geometric series.
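The whole argument can be run numerically in the commutative model: iterate the recurrence and watch S_{n} converge to the solution of \delta^{S}_{\varepsilon} e = x (a sketch):

    def delta(x, eps, y):
        return x + eps * (y - x)

    def exact_sum(e, v, w):
        return v + w - e

    e, eps, x = 0.0, 0.3, 2.0
    S_n = x                        # S_0 = x
    for n in range(60):
        S_n = exact_sum(e, x, delta(e, eps, S_n))

    print(S_n)                     # approx. x / (1 - eps) = 2.857142...
    print(delta(S_n, eps, e))      # approx. x, so S_n solves the equation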

KamiOS, a more AI version of MicrobiomeOS?

Keiichi Matsuda posted on Twitter a very interesting article about KamiOS. I quote from the article:

“Today, the big tech companies are represented on this earth by their supposedly all-powerful prophets (Siri, Alexa, Cortana, Google Assistant). Joining one of these almighty ecosystems requires sacrifice, and blind faith. You must agree to the terms and conditions, the arcane privacy policy. You submit your most intimate searches, friendships, memories.

From then you must pray that your god is a benevolent one. The big tech companies are monotheistic belief systems, each competing to be the One True God.


KamiOS is different. It is based in pagan animism, where there are many thousands of gods, who live in everything. You will form tight and productive relationships with some. But if a malevolent spirit living in your refrigerator proves untrustworthy or inefficient, you can banish it and replace it with another. Some gods serve corporate interests, some can be purchased from developers, others you might train yourself. Over time, you will choose a trusted circle of gods, who you may also allow to communicate with and manage one another.”

I like this proposal a lot, especially because in the tweet it is presented as a form of spatial computing.

It also reminds me of the older MicrobiomeOS proposal. There is also this post at chorasimilarity, with the same name. Quote:

“The programs making the operating system of your computer are made up of around ten million code lines, but your future computing device may harbour a hundred million artificial life molecular beings. For every code line in your ancient windows OS, there will be 100 virtual bacterial ones. This is your unique MicrobiomeOS and it has a huge impact on your social life and even your ability to interact with the Internet of Things. The way you use your computer, in turn, affect them. Everything, from the places we visit to the way we use the Internet for our decentralized computations influences the species of bacteria that take up residence in our individual microbiome OS.”

Mind that my project has moved to a new official page.

So, where I propose asemantic molecular computing, Keiichi wants an artificial animist kind of computing. A lesser, but more human form of AI.

Logarithm: the last piece

In mol notation, logarithm is a (graph) term which turns the pattern

FO e d b, A d a c

into the pattern

FO a d b, inv d e c

with inv emergent. Depending on the family of graphs considered, this can be embodied in lambda style (turning Church numbers into emergent numbers) or as a rewrite in itself (or even as a heuristic for the correct translation from one formalism to another).
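To make the pattern concrete, here is a hypothetical sketch of this rewrite as an operation on mol terms (a list of nodes with ports). It only illustrates the two patterns quoted above; it is not the chemlambda implementation, and inv is treated as an opaque node name:

    # Replace one occurrence of [FO e d b, A d a c] by [FO a d b, inv d e c].
    def logarithm_rewrite(mol):
        for i, (n1, e, d, b) in enumerate(mol):
            if n1 != "FO":
                continue
            for j, (n2, d2, a, c) in enumerate(mol):
                if j != i and n2 == "A" and d2 == d:
                    rest = [m for k, m in enumerate(mol) if k not in (i, j)]
                    return rest + [("FO", a, d, b), ("inv", d, e, c)]
        return mol   # no match: the mol term is unchanged

    mol = [("FO", "e", "d", "b"), ("A", "d", "a", "c")]
    print(logarithm_rewrite(mol))
    # [('FO', 'a', 'd', 'b'), ('inv', 'd', 'e', 'c')]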

This is the last piece, which, together with dirIC, em-convex, pure see and zzh, now gives us all that is needed for a new, non-commutative, linear logic 🙂 Happy 2021!

Dear Santa, 2021 wishlist: search engine for patterns, asemantic computing, Open Science system driven by researchers' needs

First of all, good health for all my family and dear friends! Then, in any possible order, I wish that 2021 brings:

  • a search engine for patterns in real chemical reactions. In order to invent molecular computers or, probably more likely, to discover them in real biochemistry, I need to be able to search in databases of chemical reactions (not only in databases of chemical molecules). There are only a handful of very general patterns which appear everywhere in mathematics, and which would allow universal computation with the simplest and most realistic algorithm: random and local. I know that the problem is not trivial, but it is entirely possible; it is mostly a problem of scale.
  • a viable proposal for asemantic decentralized computing. There are many such semantic proposals, but they are all doomed to fail. I wish for an asemantic proposal because we have to get rid of centralized computing as soon as possible, by 2024 at the latest. If you can't find such a proposal then maybe a little replacement, a token system, would do for the moment.
  • an Open Science movement driven primarily by researchers' needs. I think that we, researchers, were left behind because we were too polite. Everybody else has all sorts of needs: publishers have to make a profit, managers have to escape responsibility for their decisions, librarians are afraid for their future. I hope that in 2021 researchers will decide what is most important for them first, because the present situation is not OK.

Again, about Bizarre wiki page on ISI (and comments about DORA and the Cost of Knowledge)

There is this post from 2013, where I repeat my mantra regarding Open Access, that the stumbling block of Open Access is in the academic realm, more precisely in the management branch: Bizarre wiki page on ISI (and comments about DORA and the Cost of Knowledge).

More than 7 years have passed since then, so I revisited the bizarre page on ISI, where at the time there was this strange claim, which I cited in the previous post (I boldfaced some significant words):

“The database not only provides an objective measure of the academic impact of the papers indexed in it, but also increases their impact by making them more visible and providing them with a quality label. There is some evidence suggesting that appearing in this database can double the number of citations received by a given paper.”

This is BS, right?

I have now identified the two edits [1] [2], from 2015, which changed the text into:

“The database provides some measure of the academic impact of the papers indexed in it, and increases their impact by making them more visible and providing them with a quality label. Some anecdotal evidence suggests that appearing in this database can double the number of citations received by a given paper.”

Now this is more likely to be true. Since then, the ISI service has morphed into a more modern one, which is used worldwide. Many researchers are still judged by this service, which provides some measure of academic impact. Despite DORA, which I invite you to sign!

The anti-chemlambda tag

As a light December posting, I introduce you to the anti-chemlambda tag. It collects some public events which are slightly weird and directed against the chemlambda project.

There are far stranger events which are not public; moreover, I believe that the disclosure of these B-movie happenings would be weighed against the project 🙂

Anyway, I just browsed a part of the list of posts and added the “anti-chemlambda” tag to some of them which fit the subject. Enjoy!

2016 Elbakyan's Open Access talk: Robin Hood and Fair Trade

UPDATE: Alexandra Elbakyan's recent 2020 talk is a more interesting read than her 2016 talk. Maybe I shall comment on it later.

____

I took the time to read (via google translate) the text of a talk given by Alexandra Elbakyan on Open Science and Open Access in 2016. The talk (in Russian) is available from Alexandra's page (here is the archived version of the page, just in case).

The text left me with the impression that in 2016 Elbakyan was not very aware of the importance of Sci-hub, or of why her (and her team's) solution is so radical, far beyond the others.

A short version of her talk would be (with my excuses for any misrepresentation): in the past, knowledge was regarded as necessarily secret, until the times of Bacon and, later, Oldenburg. From then on, the scientific journals system made knowledge available to anyone, but at some point, for capitalistic reasons, an elite confiscated the knowledge again by hiding it behind publication paywalls. BOAI comes to the rescue and the rest we know.

To me this is like watching Robin Hood pretending that he is trying to support fair trade. I now understand why Elbakyan seemed surprised in the past that precisely the proponents of OA were among her critics.

No.

Scientific journals were great, at the time, but they are no longer sufficient. The scientific method is not peer review, but independent validation. The invention of scientific journals was better than private correspondence among scholars. Articles and peer reviews are no longer sufficient, as Open Science shows.

A special moment in the OA movement was the start of arXiv. Later, BOAI proponents misrepresented this great step forward as “green” OA, which is archival, not publication, as “gold” OA is.

Just as in fair trade, weird lapses in the BOAI-style OA definitions allowed publishers to charge researchers large sums of money for their own work.

This is not just another example of capitalistic perversion of the ideal communist research system.

Sci-hub, Alexandra's creation, shows by example that the whole two decades of the BOAI false OA fight were completely unnecessary.

It is of course not the last word in OA, for several reasons: it is illegal in some countries because it infringes copyright, and it is useful only for reading paywalled articles, not any article.

But it is blindingly obvious that, technically, Sci-hub is a 2-clicks solution to OA, something no other system achieved, and that it levels the field between legacy journals and gold OA journals, that new capitalistic perversion.

So we have two elegant technical solutions: arXiv as input, Sci-hub as output. Let's put them together, legally. Can't we?

Plan U comes to mind, but it is not enough, because it relies on the same old, unnecessary format of articles and system of peer review.

We should jump directly to Open Science, based on means of independent validation, which is public and has fees for hosting a scientific bit no greater than for any other bit.

Nature $10K article processing charges: send the bill to the Gold OA proponents

Bjoern Brembs asked for political endorsement [archived] some years ago. I was among the people who gave him such support, as a scientist. I would like to withdraw this endorsement, because in my opinion the years which followed did not bring anything beneficial for OA.

Bjoern has two recent posts: Are Nature's APCs ‘outrageous’ or ‘very attractive’? and High APCs are a feature, not a bug, where he describes and reacts to the latest surprise coming from the gilded Gold Open Access realm. Please read these comical posts.

Fact: Nature demands approx. $10K as article processing charges (APC) per unit of publication.

Reaction: proponents of green and gold OA are now surprised, or not really surprised, even though they publicly supported the flawed BOAI definition of OA.

Of course this is no surprise! Authors, send your bills to the proponents of the Gold and Green OA movement.

The proponents of the OA system in the form of green (archival) and gold (publication) Open Access were predictably wrong. Over the years, their actions resulted in advantages for the publishers. You can see that by looking at the outcome of their fight. Isn't it surprising? Not at all!

Moreover, even if, predictably, the ridicule of these proposals will fade into oblivion, the proposals themselves have become part of state policies.

But this is not over.

Notice that there are already similar preparations for Open Science which look set to turn the hosting of the scientific bit into a big affair. Again, you will find roughly the same people who gave us this sad perversion of OA. You can find definitions and policy proposals in OS with strange lapses concerning the application of DRM, prices for bit hosting, and so on.

So next time send the hosting bills to these propagandists.

More details. I explained several times that the separation of OA into green (archival) and gold (publication) is a move against open access. A whole generation of researchers was betrayed by a coalition of publishers, librarians and academic managers.

This is not unprecedented. At the end of the 19th century, academic art went through a revolution. As in today's academic science world, what was important was not what the artist [researcher] created, but where it was exhibited [published]. The management of art created a monster: l'art pompier.

Presently, researchers are turned into content creators for publishers. The academic management selects the researchers by using numbers designed for the evaluation of journals.

The metadata (like the name of the journal) is more important than the data (the content of the article).

Research is not this! Research is discovery.

Why should we be surprised, when a publication in Nature helps a lot with researcher evaluation, which affects the distribution of research funds? Nature could ask $100K; it could ask any price that you, but mostly your manager, agree to pay.

Why do you force researchers to pay them? This is the real question.

Combinators: A 100-year Celebration

I am watching right now Wolfram's Combinators: a 100-year celebration livestream. I did not receive the zoom link to participate… although Stephen invited me some days ago. See this related previous post.

As I watch the very interesting presentation, there are many things to say (some in disagreement), some said in private conversations, some here.

There are several points where I disagree with Stephen's picture, but the main one seems to be the following (or maybe I don't understand him well). He seems to say that the universe is a huge graph, which is the fabric of objective reality and which evolves via graph rewriting. So the graph exists, but it is huge. There exists a state of the universe, which is this huge graph.

I believe that Nature consists of similar asynchronous graph rewrite automata. There is no global state of the universe. The rules are the same, though, and a good solution emerges at different scales. For example, the same explanation of what space [programs] is [are] appears again in chemistry, where we encounter it in the way life works.

Big claims, but here is an example.

Stephen says that, for example, the universe is a huge combinator. I just say that anything which happens in the universe can be expressed with combinators.

Then, there are several technical places where I disagree.

A term rewrite system with a reduction strategy is a very different thing from a graph rewrite system with a reduction strategy. There is no bijection between them, in the sense that one can't translate back and forth between the term rewrite and the graph rewrite sides using only local rules and algorithms.

Another place where I disagree concerns the importance of confluence. In a rewrite system which is confluent, if a term (or graph) can be reduced to a state where no further reduction is possible, then that state is unique.

Confluence can be encountered in lambda calculus (as an example of a term rewrite system) and in interaction combinators (as an example of a graph rewrite system).

Confluence does not say anything about what happens when there is no final state of a term, or graph.

Even in confluent systems, like interaction combinators, one can find examples of graphs which don't have a final state and which, moreover, may have vastly different random evolutions. This is like a nature vs nurture phenomenon, where the same graph (nature) may evolve (nurture) to two different states, say one of them with a periodic evolution and another one with a different periodic evolution.
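A minimal illustration of the previous point (my own sketch in Python, not from the talk): SKI combinatory reduction is confluent, yet the term (S I I)(S I I) never reaches a normal form, so confluence guarantees nothing about its evolution.

    # Terms: "S", "K", "I", or an application as a pair (function, argument).
    def step(t):
        """One leftmost-outermost step; returns the new term, or None if t is normal."""
        if not isinstance(t, tuple):
            return None
        f, a = t
        if f == "I":                                   # I x -> x
            return a
        if isinstance(f, tuple):
            g, b = f
            if g == "K":                               # K x y -> x
                return b
            if isinstance(g, tuple) and g[0] == "S":   # S x y z -> x z (y z)
                return ((g[1], a), (b, a))
        r = step(f)                                    # otherwise reduce inside
        if r is not None:
            return (r, a)
        r = step(a)
        return (f, r) if r is not None else None

    SII = (("S", "I"), "I")
    t = (SII, SII)
    for n in range(50):
        t = step(t)
        if t is None:
            print("normal form reached at step", n)
            break
    else:
        print("no normal form after 50 steps")         # this branch is taken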

As concerns the large scale of the universe-graph, which turns out to be related to general relativity, recall that apparently innocuous assumptions (for example what a discrete Laplacian is, or what a connection is) beg the question. They contain inside them the conclusion, namely that at large scale we get a certain behaviour, or equation. One reason I know this is from sub-Riemannian geometry, which offers plenty of examples of this phenomenon.

This is not the place to discuss this. Go to the chemlambda page for more.

If you are interested in what I am doing right now: I am trying to understand a problem formulated in these notes, as well as in this post here at chorasimilarity. Even more details are in the wonderful group of emergent numbers post.

Back now to the main subject of this post. I shall not even enter into the semantics aspects. As it is, any semantics-based project will not deliver, just as happens with all the classical ones, like category theory or the types hype. Time will tell.

Time is already telling. China and India are very interesting places.
