# Graphic lambda calculus and chemlambda (II)

Chemlambda v2 is an entirely different project from GLC and chemlambda v1. This post continues from the first part. It explains the passage to chemlambda v2.

A problem of GLC and chemlambda v1 is that the research articles are opinion pieces, not validated by programs and experiments. The attempt to use GLC with the Actor Model in order to build a decentralized computing proposal, aka distributed GLC, failed because of this: does all of this actually work?

The CO-COMM and CO-ASSOC rewrites lead to the situation that, in order to be useful, either:

• they have to be applied by a human or by some (unknown) very clever algorithm,
• or they are applied randomly in both directions, which implies that no GLC or chemlambda v1 reduction ever terminates.
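To see why blind application in both directions blocks termination, here is a toy sketch (the encoding is mine, not actual chemlambda code): CO-COMM swaps the two out-ports of a FO (fanout) node, so it is its own inverse, and a reducer that keeps rewriting while any rewrite applies never reaches a normal form.

```python
# Toy illustration (encoding mine, not actual chemlambda code): CO-COMM
# swaps the two out-ports of a FO (fanout) node. The rewrite is its own
# inverse, so a strategy that applies it blindly in both directions can
# always undo its own progress and never terminates.

def co_comm(fo_node):
    """Swap the two out-ports of a fanout node (self-inverse rewrite)."""
    inp, out1, out2 = fo_node
    return (inp, out2, out1)

g = ("in", "a", "b")
assert co_comm(co_comm(g)) == g  # self-inverse: no progress measure exists

# A reducer that rewrites while any rewrite applies never stops here,
# because CO-COMM is always applicable; we cap the steps by hand.
state, steps = g, 0
while steps < 1000:
    state = co_comm(state)
    steps += 1
print(steps)  # 1000: cap reached, the reduction did not terminate
```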

Here is an early visual tutorial which introduces the nodes of chemlambda v2. At the end of it you are pointed to a gallery of examples which mixes chemlambda v1 with chemlambda v2, like these:

Or, another example, the Y combinator. In chemlambda v1, without using CO-COMM and CO-ASSOC, the Y combinator applied to an unspecified term behaves like this. In chemlambda v2, where there is a supplementary node and other rewrites, the Y combinator behaves almost identically, but some nodes (the yellow FOE here instead of the green FO before) are different:

# Graphic lambda calculus and chemlambda (I)

UPDATE: The article Graph rewrites, from emergent algebras to chemlambda explains the history of the subject, with all the needed information.

________

Looks like there is a need to make a series of posts dedicated to the people who try to use this blog as a source in order to understand graphic lambda calculus (aka GLC) and chemlambda. This is the first one.

Sources for GLC:

• the best source is the article M. Buliga, Graphic lambda calculus. Complex Systems 22, 4 (2013), 311-360   (link to article in journal) (link to article in arXiv)
• you can see the GLC page (link) here, which has been updated many times after chemlambda appeared, but it is unmodified starting from the section “What is graphic lambda calculus?”, and there are links to many other posts here which explain GLC as it is, that is, before chemlambda.

GLC is a graph rewriting system for fatgraphs made of trivalent or 1-valent nodes. The trivalent nodes used are A (application), L (lambda), FO (fanout) and epsilon (for dilations). There is one 1-valent node, T (termination). Loops with no nodes, as well as arrows (i.e. oriented edges with no nodes), are accepted as well. There is no algorithm proposed for the reduction of these graphs.
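A minimal sketch of one possible encoding of such graphs (the encoding and names are mine, not from the article): each node kind has a fixed valence, and oriented edges connect (node id, port) pairs.

```python
# Possible encoding of GLC graphs (encoding and names are mine, for
# illustration): node kinds with fixed valences, plus oriented edges.

VALENCE = {"A": 3, "L": 3, "FO": 3, "eps": 3, "T": 1}

def node(nid, kind):
    assert kind in VALENCE, f"unknown node kind: {kind}"
    return {"id": nid, "kind": kind, "valence": VALENCE[kind]}

def edge(src, src_port, dst, dst_port):
    """Oriented edge from port src_port of node src to dst_port of dst."""
    return ((src, src_port), (dst, dst_port))

# A lambda node whose output is wired into an application node:
nodes = [node(0, "L"), node(1, "A")]
edges = [edge(0, 2, 1, 0)]
print(nodes[1]["kind"], nodes[0]["valence"])  # A 3
```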

The graph rewrites are:

– local ones (i.e. involving only a finite number, a priori given, of nodes and edges)

• graphic beta move, which is like Lamping's graph rewrite, only purely local, i.e. there is no limitation on the global shape of the graph; thus it is somehow more general than the beta rewrite from untyped lambda beta calculus. A possible interpretation is: C = let x=B in A rewrites to x=B and C=A

• CO-COMM and CO-ASSOC rewrites for the FO (fanout) which are, due to the orientation of the edges, really like graphical, AST forms of co-commutativity and co-associativity
• local pruning group of rewrites, which describe the interaction of the trivalent nodes with the T node; incidentally, T and FO interact as if T were a co-unit for FO
• a group of rewrites for the dilation nodes, coming from emergent algebras, which involve the fanout FO and the dilation nodes

– global rewrites:

• global fan-out
• global pruning

In section 3 of the article on GLC an algorithm is given for the conversion of untyped lambda terms into graphs; it is little more than a modification of the AST of the lambda term, done in such a way that the orientation of the edges is respected. That is because the lambda node L has one incoming edge and two outgoing edges!
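The core of that conversion can be sketched as follows (the function names and the tuple encoding of terms are mine, for illustration; the port conventions match the mol files that appear in the chemlambda posts here, A fun_in arg_in out and L body_in var_out out; the FO trees for variables used more than once are omitted to keep it short).

```python
import itertools

# Hedged sketch of the section-3 conversion: one A node per application,
# one L node per abstraction; edges for variables are named after them.
# Names and encoding are mine; FO trees for duplicated variables omitted.

def to_graph(term, nodes=None, fresh=None):
    """term: ('var', x) | ('lam', x, body) | ('app', f, a).
    Returns (output edge name, node list)."""
    if nodes is None:
        nodes, fresh = [], itertools.count()
    kind = term[0]
    if kind == 'var':
        return term[1], nodes           # the edge carries the variable
    if kind == 'lam':
        _, x, body = term
        b_edge, _ = to_graph(body, nodes, fresh)
        out = f"e{next(fresh)}"
        nodes.append(("L", b_edge, x, out))   # one in, two out
        return out, nodes
    _, f, a = term                      # 'app'
    f_edge, _ = to_graph(f, nodes, fresh)
    a_edge, _ = to_graph(a, nodes, fresh)
    out = f"e{next(fresh)}"
    nodes.append(("A", f_edge, a_edge, out))  # two in, one out
    return out, nodes

# The identity λw.w becomes a single node, like the mol line "L w w e0":
out, nodes = to_graph(('lam', 'w', ('var', 'w')))
print(nodes)  # [('L', 'w', 'w', 'e0')]
```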

I then prove that GLC can be used for untyped lambda beta calculus, but also for emergent algebras, and also for a variant of knot-theoretic graphs where there is no need for them to be planar. Finally, I show that there might be other “sectors” of graphs which may be interesting, regardless of their meaning (or lack of it) with respect to lambda calculus.

The problem of GLC is that it has these global rewrites. Can these be replaced by local rewrites?

That’s how chemlambda appeared.

I was aware that some applications of global fan-out can be done with local rewrites, but not all. Also, to my dismay, the interest in GLC did not extend to an interest in emergent algebras.

On the other hand, I became obsessed with the idea that if the global fan-out can be replaced entirely by local rewrites, then it should be possible in principle to see the rewrites as chemical reactions between individual molecules. A bigger problem would then be: would these reactions reduce the graph-molecules under the dumbest algorithm of all, the random one? Nature functions like this, so any algorithm which would be global in some sense is completely excluded.

This led me to write the article: M. Buliga, Chemical concrete machine (link to DOI) (link to arXiv version). This is chemlambda v1, which is actually a transition from GLC to something else, or better said, to another research subject in the making.

Other sources for this GLC/chemlambda v1 mix:

Chemlambda v1, or the “chemical concrete machine”, is a graph rewriting algorithm which uses the trivalent nodes A, L, FI (fan-in), FO, and the 1-valent node T and only local rewrites:

• the graphic beta rewrite (between the nodes L and A) and the fan-in rewrite (between the nodes FI and FO)

• the CO-COMM and CO-ASSOC rewrites

• two DIST (from distributivity) rewrites, for the pairs A-FO and L-FO

• local pruning rewrites (involving the 1-valent node T)

•  and elimination of loops

As you see, all rewrites except CO-COMM and CO-ASSOC are like the ones from Lafont's article Interaction combinators, except that the graphs are directed and the nodes don't have a principal port for interaction. Here are the interaction combinators rewrites:
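The graphic beta rewrite from the list above can be sketched as a local pattern match (the encoding is mine, in the mol conventions used on this blog): when the out-port of an L node is wired to the function in-port of an A node, both nodes are deleted and two Arrow edges are produced, body to output and argument to bound variable.

```python
# Hedged sketch (my own minimal encoding) of the graphic beta rewrite:
#     L a b c   +   A c d e   ->   Arrow a e   +   Arrow d b
# i.e. the body edge a is wired to the output e, and the argument d is
# wired to the bound-variable edge b. Purely local: only two nodes match.

def beta_step(nodes):
    for Lnode in nodes:
        if Lnode[0] != "L":
            continue
        for Anode in nodes:
            if Anode[0] == "A" and Anode[1] == Lnode[3]:  # L out -> A fun-in
                a, b = Lnode[1], Lnode[2]
                d, e = Anode[2], Anode[3]
                rest = [n for n in nodes if n is not Lnode and n is not Anode]
                return rest + [("Arrow", a, e), ("Arrow", d, b)]
    return nodes  # no redex found

# (λw.w) applied to something on edge "arg", result expected on "res":
g = [("L", "w", "w", "c"), ("A", "c", "arg", "res")]
print(beta_step(g))  # [('Arrow', 'w', 'res'), ('Arrow', 'arg', 'w')]
```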

The chemical concrete machine article mentions Berry and Boudol's chemical abstract machine and Fontana and Buss' alchemy, but not Lafont. My fault. From what I knew then, the beta rewrite came from Lamping, or better Wadsworth; the fan-in came from Turaev (knotted trivalent graphs); and the DIST rewrites came from the graphical version of linear emergent algebras, or even better, from the Reidemeister 3 rewrite in knot theory.

In this article there is no algorithm for the application of the rewrites. (You shall see that in chemlambda v2, which is an artificial chemistry, there are actually several, among them the deterministic greedy one, with a priority list for the rewrites, because otherwise there are collisions, and the random one, which interested me the most.)

However, I suggest that the rewrites can be seen as done in a truly chemical sense, mediated by (invisible) enzymes, each rewrite type with its own enzyme.

I proved that we can translate from GLC to chemlambda v1 and that global fan-out and global pruning can be replaced by sequences of chemlambda v1 rewrites. There was no proof that this replacement of the global rewrites with cascades of local rewrites can be done by the algorithms considered. I proved Turing universality by using the BCKW system of combinators.

The chemical concrete machine article ends however with:

“With a little bit of imagination, if we look closer to what TRUE, FALSE and IFTHENELSE are doing, we see that it is possible to adapt the IFTHENELSE to a molecule which releases, under the detection of one molecule (like TRUE), the ”medicine” A, and under the detection of another molecule (like FALSE) the ”medicine” B.”

The chemlambda page (link) here is reliable for chemlambda v1, but it also contains links to newer versions.

[UPDATE: I retrieved this view from 2014  of the story.]

# A quine in Lafont's Interaction combinators

I continue with a second post about Y. Lafont's Interaction combinators. Here is the first one.

Figure 3 of the article gives an example of a nonterminating computation:

This is a quine. Indeed it is a graph which has a periodic evolution under the deterministic greedy reduction algorithm, using the interaction rules (i.e. the graph rewrites) of interaction combinators.

By comparison, a chemlambda quine is a molecule (graph in chemlambda) which has a periodic evolution under the deterministic greedy reduction algorithm which uses the chemlambda graph rewrites, with the priority of the rewrites set to “viral”, i.e. the DIST family of rewrites comes first. In chemlambda a priority of rewrites is needed (for the deterministic algorithm) because there exist conflicts between the rewrites, i.e. overlapping left patterns.
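The conflict resolution just described can be sketched in a few lines (the matching machinery and names are mine; only the rewrite family names come from the post): collect all applicable rewrites, then let the priority order decide between overlapping left patterns.

```python
# Sketch of the deterministic greedy strategy with a priority order:
# when two redexes overlap, the rewrite earlier in PRIORITY wins.
# The "viral" setting puts the DIST family first, as described above.

PRIORITY = ["DIST", "BETA", "FAN-IN", "PRUNE"]  # "viral": DIST first

def pick(rewrites):
    """rewrites: list of (rewrite name, redex). Return the winner."""
    return min(rewrites, key=lambda r: PRIORITY.index(r[0]))

found = [("BETA", "redex1"), ("DIST", "redex2")]
print(pick(found))  # ('DIST', 'redex2'): DIST wins under "viral"
```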

About a third of the molecules from the library are chemlambda quines, and they are interesting mostly when reduced with the random reduction algorithm. While for interaction combinators the random reduction algorithm brings nothing new (the system is confluent), for chemlambda the system is not confluent under random reduction and the chemlambda quines may die. All of the quines from the library are at best immortal in the probabilistic sense, i.e. the probability of death does not depend on the age of the molecule.

A reason for this phenomenon is that all these chemlambda quines don't use termination nodes (which correspond to the epsilon nodes of interaction combinators). The smallest chemlambda quine without T nodes is the 9_quine, which has 9 nodes. But we may use termination nodes and produce a quine which is similar to Lafont's example:

In mol notation this quine is:

FO 1 2 3
T 2
FOE 3 4 1
T 4
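The mol notation used above is simple enough to parse in a few lines; here is a small sketch (the arity table is inferred from the examples in these posts, not from a spec).

```python
# A small parser for the mol notation used in this post: one node per
# line, the node type followed by its port edge names. The arity table
# below is inferred from the examples here, not from a specification.

ARITY = {"A": 3, "L": 3, "FI": 3, "FO": 3, "FOE": 3, "Arrow": 2,
         "T": 1, "FRIN": 1, "FROUT": 1}

def parse_mol(text):
    nodes = []
    for line in text.strip().splitlines():
        parts = line.split()
        if not parts:
            continue
        kind, ports = parts[0], parts[1:]
        assert len(ports) == ARITY[kind], f"bad arity on line: {line}"
        nodes.append((kind, *ports))
    return nodes

quine = """\
FO 1 2 3
T 2
FOE 3 4 1
T 4
"""
print(len(parse_mol(quine)))  # 4 nodes
```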

The animation is obtained with the scripts from the chemlambda repository, in the same way as those used for the comparison with the Dynamic GoI Machine. In order to obtain a deterministic reduction, all weights (i.e. all parameters “wei_*” from the relevant awk script) were set to 0.

You see that this is really a 6-node quine. Why? Because even in Lafont's example, a deterministic greedy reduction would lead at step 2 to the simultaneous application of rewrites which increase the number of nodes and of rewrites which decrease it, so a correct application of the deterministic greedy algorithm would be similar to the chemlambda example.

Maybe it would be interesting (as a tool) and straightforward to modify the chemlambda scripts into an interaction combinators version.

# Universality of interaction combinators and chemical reactions

In the foundational article Interaction combinators, Yves Lafont describes interaction rules as having the form

He then gives three examples of particular families of interaction rules which can be used to simulate Turing Machines, Cellular Automata and Unary Arithmetics.

The main result of his article (Theorem 1) is that there is an algorithm which allows one to translate any interaction system (i.e. any collection of interaction rules which satisfy some natural conditions) into the very simple system of his interaction combinators:

In plain words, he proves that there is a way to replace the nodes of a given interaction system by networks of interaction combinators in such a way that any of the interaction rules of that interaction system can be achieved (in a finite number of steps) by the interaction rules of the interaction combinators.

Because he has the example of Turing Machines as an interaction system, it follows that the interaction combinators are universal in the Turing sense.

The most interesting thing for me is that Lafont has a notion of universality for interaction systems, the one he uses in his Theorem 1. This universality of interaction combinators is somehow larger than the universality in the sense of Turing. It is a notion of universality at the level of graph rewrite systems, or, if you want, at the level of chemical reactions!

Indeed, why not proceed as in chemlambda and see an interaction rule as if it were a chemical reaction? We may add an “enzyme” per interaction rule, or we may try to make the reaction conservative (in the number of nodes and wires), as we did in chemlambda strings.

Probably the rewrites of chemlambda are also universal in the class of directed interaction networks. If we take seriously that graph rewrites are akin to chemical reactions then the universality in the sense of Lafont means, more or less:

any finite collection of chemical reactions among a finite number of patterns of chemical molecules can be translated into reactions among chemlambda molecules

But why keep talking about chemlambda and not about the original interaction combinators of Lafont? Let's make the same hypothesis as in the article Molecular computers and deduce that:

such molecular computers, which embody the interaction combinators rewrites as chemical reactions, can indeed simulate any other finite collection of chemical reactions, in particular life.

For me that is the true meaning of Lafont universality.

# Kaleidoscope

Unexpectedly, and somehow contrary to my fresh posting about my plans for 2019, during the week of Jan 7-12, 2019 a new project appeared, which is temporarily named Kaleidoscope. [Other names, until now: kaleidos, morphoo. Other suggestions?]

This post marks the appearance of the project in my log. I spent some time on a temporary graphical label for it:

I have the opinion that new, very promising projects need a name and a label, as much as an action movie superhero needs a punchline and a mask.

So what is the kaleidoscope? It is as much about mechanical computers (or physically embedded computation) as it is about graph rewrite systems, about space in the sense of emergent algebras, and about probabilities. It is a physics theory, a computation model and a geometry at the same time.

What more could I wish for, research-wise?

Yes, so it deserves to be tried and verified in all details, and this takes some time. I do hope that it will survive my bug hunt so that I can show it and submit it to your validation efforts.

# Twitter lies: my long ago deleted account appears as suspended

9 months ago I deleted my Twitter account, see this post.  Just now I looked to see if there are traces left. To my surprise I get the message:

This is a lie. I feel furious that this company shows misleading information about me, long after I deleted my account.

# Projects for 2019 and a challenge (updated at the end of 2019)

It’s almost the end of 2018, so I updated my expectations post from a year ago, you may find it interesting. Update (dec. 2019): And now I updated this post.

Now, here is a list of projects which are almost done on paper and which deserve attention or reserve some surprises for 2019. Then a challenge for you, dear creative reader.

• I mentioned Hydrogen previously. This is a project to build a realistic hydrogen atom purely in software. This means that I need a theory (a lambda calculus like) for state spaces, then for quantum mechanics and finally for a hydrogen atom. [UPDATE: too early at the moment, maybe for 2020]
• Space is of special interest (and needed to build hydrogen), a lambda calculus for space is proposed in the em project. Now I am particularly fascinated by numbers. [UPDATE: now there is anharmonic lambda and pure see, in the making]
• The needs project is a bridge towards chemlambda. It's entirely written, in pieces; it is about permutation automata. Only the main routine is public. [UPDATE: this project morphed into hapax]
• And what would life be without luck, aka computable probabilities? This is the least advanced project, for the moment it covers some parts of classical mechanics, but it is largely feasible and a pleasure to play with it in the year to come. [UPDATE: see arXiv:1902.04598 and these slides about chemlambda as a hamiltonian system with dissipation]
• [UPDATE: several other things happened, for example quine graphs]

I have a strong feeling that these projects look very weird to you, so I have a proposal and a challenge. The proposal for you is to ask for details. I am willing to give as much as (or perhaps more than) you need.

The challenge is the following.  As you know my banner is “can do anything”.  So let’s test it:

• propose a subject of research where you are stuck. Or better, you want to change the world (in a good way).
• I’ll do something about it as quick as possible, if you get me interested.
• Then I’ll ask for means. And for fairness.
• Then we’ll do it to the best of our capacities.

Well, happy 2019, soon!

# More experiments with the dynamic GoI abstract machine visualiser and chemlambda

This post continues from Diagrammatic execution models (Lambda World Cadiz 2018) compared with chemlambda . For the context, I quote from  this MonkeyPatchBlog post

Koko Muroya and Steven Cheung, working under the direction of Dan Ghica, gave a fantastic overview of their work on diagrammatic execution model.  […] There is a nice demo of their work hosted on github

The demo is the “GoI Visualiser”. In this post I continue to play with this visualiser and with chemlambda as well.

You can play too by using the link to the GoI Visualiser demo and an archive available here, which contains all that is needed for running chemlambda and for producing anything from this post.

There is a readme.txt inside which you may enjoy, with instructions to use and some background.

OK, let’s play.

The GoI Visualiser comes with the untyped lambda term

A = ((λf. λx. f (f x)) ((λy. y) (λz. z))) (λw. w)

and in the preceding post I reduced this term in chemlambda. Now I take this term and notice that it contains 3 identity terms: (λy. y), (λz. z) and (λw. w). That is why I take the term

B = (λu.(((λf.λx.f(fx))(uu))u))(λw.w)

which has the property that by one BETA reduction it becomes A.
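As a quick sanity check (encoding the terms as Python lambdas, not as graph reductions), both A and B should reduce to an identity function, since B beta-reduces to A in one step and A reduces to λw.w:

```python
# Sanity check by direct evaluation (my own check, not a graph
# reduction): encode A and B as Python lambdas; both should behave as
# the identity function on any argument.

A = (lambda f: lambda x: f(f(x)))((lambda y: y)(lambda z: z))(lambda w: w)
B = (lambda u: (lambda f: lambda x: f(f(x)))(u(u))(u))(lambda w: w)

for probe in (0, "hi", [1, 2]):
    assert A(probe) == probe and B(probe) == probe  # both are identities
print("A and B both behave as the identity")
```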

I filmed the reduction of B with the call-by-need, in the GoI Visualiser:

and, out of curiosity, remarked that call-by-name does not work (actually it does, but I misunderstood what I saw!).

What about chemlambda? The “molecule” (i.e. chemlambda graph) for the lambda term B can be either goi-5.mol, i.e.

A out2 in2 out3
L out in1 out2
A 1 2 out
A 3 4 1
L 5 f 3
FO f 7 9
A 7 8 6
A 9 x 8
L 6 x 5
A 10 11 4
FO in1 2 13
FO 13 10 11
L w w in2

or the goi-6.mol, i.e.

A out2 in2 out3
L out in1 out2
A 1 2 out
A 3 4 1
L 5 f 3
FO f 7 9
A 7 8 6
A 9 x 8
L 6 x 5
A 10 11 4
FO in1 13 2
FO 13 10 11
L w w in2

These two molecules differ in only one place:

FO in1 2 13  (goi-5.mol)   vs.  FO in1 13 2  (goi-6.mol)

Everything is in the archive! Go check for yourself, please 🙂

This difference is due to the fact that the algorithm for building a chemlambda molecule from a lambda term leaves freedom in the way variables are duplicated. Here, in the case of the term B, this freedom concerns the variable “u”. Indeed

B = (λu. … (uu))u) …

and to produce 3 u’s from one we need two FO (fanout) nodes, but we may arrange them in two ways.
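The two arrangements are exactly the two wirings of a two-node FO tree; a tiny check (edge names follow the mol files above, the helper function is mine) shows both deliver the same three copies of the input.

```python
# The goi-5 / goi-6 difference: in the convention "FO in out1 out2",
# the two wirings differ only in which out-port of the first FO feeds
# the second FO node. Edge names mimic the mol files; helper is mine.

goi5 = [("FO", "in1", "2", "13"), ("FO", "13", "10", "11")]
goi6 = [("FO", "in1", "13", "2"), ("FO", "13", "10", "11")]

def copies(tree):
    """Out-edges of a FO tree that do not feed another FO in the tree."""
    ins = {n[1] for n in tree}
    return sorted(p for n in tree for p in n[2:] if p not in ins)

# Both wirings deliver the same three copies of "u" (edges 2, 10, 11):
print(copies(goi5), copies(goi6))  # ['10', '11', '2'] ['10', '11', '2']
```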

The algorithm is the expected one, from section 3 of M. Buliga, Graphic lambda calculus, Complex Systems 22, 4 (2013), 311-360, arXiv:1305.5786. In chemlambda we transform lambda calculus terms by this algorithm, but mind that in Graphic Lambda Calculus (GLC) there is only one fanout node, the FO. In chemlambda there are two fanout nodes, FO and FOE. What is interesting is that there are no other nodes but the fanin FI, the two fanouts FO and FOE, application A, lambda L, arrow Arrow, free input FRIN, free output FROUT, and termination T. There are no brackets, boxes or tags!

You want to see again the rewrites of chemlambda? arXiv:1811.04960

So the algorithm of conversion is the one from GLC, with only FO nodes in the initial molecule for the term. There are two possible ways to convert the term B, these are goi-5.mol and goi-6.mol.

The goi-6.mol behaves very cool all the time (i.e. under the random reduction algorithm of chemlambda)

The goi-5.mol behaves very well in about 87.5% of cases (i.e. in 7/8 of cases), from experiments. In most of the cases the reduction ends with an identity, as it should, but in 1/8 of cases we end with this Siamese-twins identity:

which is in mol notation:

FI 1 2 out
L z z 1
L v v 2

i.e. a fanin FI is left alone and it does not know that its left and right inputs are the same.

Why is that?

From all the examples where I reduced molecules from lambda terms, I encountered this phenomenon only once; see the story of The factorial and the little lisper. In that case, I was able to produce a working example, and as well some funny ones, like this version of the factorial of 5:

(taken from the Chemlambda for the people html/js slides)

This happens because there is a mix between execution and duplication which sometimes goes astray. For example if I take the molecule goi-2.mol

FO out out1 out2
A 1 2 out
A 3 4 1
L 5 f 3
FO f 7 9
A 7 8 6
A 9 x 8
L 6 x 5
A 10 11 4
L y y 10
L z z 11
L w w 2

which is exactly like the molecule goi.mol for the lambda term A (seen in the previous post), only with

FO out out1 out2

added, then the execution (reduction) of the two copies of A, while they duplicate, goes great:

That is because the nodes FO, FOE and FI satisfy the shuffle trick, which guarantees the duplication of FO trees from lambda term molecules (in particular).

I suspect that there is a choice of the FO fanout trees in the conversion of a lambda term into a molecule which does the job.

Don’t know how to prove it 🙂

# Diagrammatic execution models (Lambda World Cadiz 2018) compared with chemlambda

Via this MonkeyPatchBlog post I learned about a keynote presented at the Lambda World Cadiz 2018, on Diagrammatic execution models, quote:

Koko Muroya and Steven Cheung, working under the direction of Dan Ghica, gave a fantastic overview of their work on diagrammatic execution model.   […]

There is a nice demo of their work hosted on github, which was used during the presentation. […]

Applied category theory is booming right now, and this work led me to wonder if they were considering describing their work in a categoretic way (yes, it seems). Some of the demos they showed were reminiscent of chemlambda: a graph evolving given rewriting rules (which incidently provided the illustration for the ACT 2019 announcement).”

So I wanted to see how the lambda term reduces in chemlambda (with the random reduction).

Here is the GoI Visualiser in action: the term ((λf. λx. f (f x)) ((λy. y) (λz. z))) (λw. w) reduced with call-by-need looks like this:

Comparison with chemlambda. The mol file goi.mol for this lambda term is:

A 1 2 out
A 3 4 1
L 5 f 3
FO f 7 9
A 7 8 6
A 9 x 8
L 6 x 5
A 10 11 4
L y y 10
L z z 11
L w w 2

I prepared an archive with all that is needed, taken from the chemlambda repository. You may just download it, and then you shall see the goi.mol file in the mol folder. To produce a reduction you write in the terminal

bash quiner_shuffle.sh

then you write

goi.mol

then you see that a file goi.html appeared. You write

firefox goi.html &

and you see this:

or something equivalent. So that’s how the reduction of this term looks in chemlambda. 🙂 Well, the animated gif shows that again and again…

UPDATE: Thank you for the interest and the nice words. If you don't like my way of writing code, which is to be expected because I am a geometer, there is this Haskell implementation.

My opinion is still that the most interesting ideas of chemlambda are:

Concerning the first point, this justifies the emphasis on the dumbest, random, local reduction algorithm, and the experiments beyond graphs which represent lambda terms (from all those chemlambda quines to mixtures of busy beavers and lambda terms).

As for the second point, there is really a lot of mathematics, perhaps logic too, to be explored here.

# Chemlambda collection afterlife [updated]

UPDATE: Chemlambda collection of animations is the new version of the collection hosted on github. The original site of the revived collection is under very heavy traffic (in Jan 2020). Small images, about a 1/2 of the collection, due to memory limitations. But you can play the simulations in js!

Still, hundreds of posts are available via reshares from the chemlambda collection. Recall that I deleted the collection some time ago, see here. [See also the photos from posts.]

I arrived at the conclusion that there is no reason to hide recent or (sometimes) older research from the public, just because I believe academic publishing is close to collapse.

So I started by posting on arXiv a text version of the experimental article Molecular computers, available now as arXiv:1811.04960.  The JS animations are replaced with links and there is a note to the reader added.

The same article is also posted at Figshare:

https://doi.org/10.6084/m9.figshare.7339103.v1

# Open Science is rwx science

Preamble: this is a short text on Open Science, written a while ago, which I now put here. It is taken from this place at telegra.ph. The link (not the content) appeared here in the Chemlambda for the people post. I can't find other traces, except the empty github repository “creat”, described as a “framework for research output as a living creature”.

__________________

I am a big fan of Open Science. For me, a good piece of research is one which I can Read Write eXecute.

Researchers use articles to communicate. Articles are not eXecutable. I can either Read others’ articles or Write mine. I have to trust an editor who tells me that somebody else, whom I don’t know, read the article and made a peer-review.

No. Articles are stories told by researchers about how they did the work. And since the micromanagement era, they are even less: fungible units to be used in funding applications, by the number or by the keyword.

This is so strange. I'm a mathematician and you probably know that mathematics is the most economical way to explain something clearly. Take a 10-page research article. It contains the intensive work of many months. Now, compress the article even further by the following ridiculous algorithm: throw away everything but the first several bits. Keep only the title, the name of the journal, the keywords, maybe the Abstract. That's not science communication, that's massive misuse of brain material.

So I'm an Open Science fan; what should I do instead of writing articles? Maybe I should push my article in public and wait after that for somebody to review it. That's called Open Access and it's very good for the readers. So what? The article is still only Readable or Writable, pick only one option, otherwise it's bad practice. What about my time? It looks like I have to wait and wait for all the bosses, managers, politicians and my fellow researchers to switch to OA first.

It's actually much easier to do Open Science: remember, something that you can Read, Write and eXecute. As an author, you don't have to wait for the whole society to leave the old ways and embrace the new ones. You can just push what you did: stories, programs, data, everything. Any reader can pull the content and validate it, independently: eXecute what you pushed, Read your research story and Write derivative works.

I tried this! Want to know how to build a molecular computer which is indiscernible from how we are made? Use this playground called chemlambda. It’s a made up, simple chemistry. It works like the real chemistry does, that is locally, randomly, without any externally imposed control. My bet is that chemlambda can be done in real life. Now, or in a few years.

I use everything available to turn this project into Open Science. You name it: old form articles, html and javascript articles, research blog, Github repository, Figshare data repository, Google collection [update: deleted], this 🙂

Funny animations obtained from simulations. Those simulations can be run on your computer, so you can validate my research. Here’s what chemlambda looks like.

[Here come some examples and animations. ]

During this project I realized that it went beyond a Read Write Execute thing. What I did was to design many interesting molecules. They work by themselves, without any external control. Each molecule is like a theorem and the chemical evolution is the proof of the theorem, done by a blind, random, stupid, universal algorithm.

Therefore my Open Science attempt was to create molecules, some of them exhibiting a metabolism, some of them alive. Maybe this is the future of Open Science: to create a living organism which embodies in its metabolism the programs and research data. It's valid if it lives, grows, reproduces, even dies. Let it cross-breed with other living creatures. In time, natural selection will do marvels. Life is not different from Science. Science is not different from life.

# I deleted the Google+ chemlambda collection

UPDATE: … and now the collection is back! 🙂  Chemlambda collection of animations is the version of the collection hosted on github. The original site is under very heavy traffic (in Jan 2020). Small images, about a 1/2 of the collection, due to memory limitations. But you can play the simulations in js!

_____

This 400 posts collection, 60 000 000 views,  was as much a work of research popularization as a work of art. Google cannot be trusted with keeping high density data (scientific, art, etc). Read here about this.

It pained me to delete it, but it had to be done. It was harder than when I quit Facebook, Twitter.

The collection and richer material exist; I have them. Still, the Github repository is available, as well as the github.io demos. For example, the dodecahedron multiplication animation used as background for a conference site of statebox.io was made from a screencast of a d3.js simulation which can be seen here.

Mail me for access to more material. I have to think what I am going to do with them, long term. Meanwhile look for updates at my professional homepage or the alternative page.

# John Baez’ Applied Category Theory 2019 post uses my animation without attribution [updated]

The post, dated Oct 2, appears at John Baez Azimuth blog. Here is what I see today Oct 4th:

UPDATE: now there is a link to the chemlambda repository, but see also the comments, there and here. The real problem is related to the attitude concerning  Open Science. Link to archived post.

This is the gif which illustrates the chemlambda github repository.

The original animation appeared for the first time in the chemlambda collection post Metabolism as failed replication. The later post (Sept 2016) contains more about this idea and useful links.

[ UPDATE: Recently, I deleted the chemlambda collection. The content of it will become public again in a new form. Meanwhile mail me for access. However, the github repo, libraries, demos and articles are public.]

The chemlambda molecule which is used is available at the chemlambda library of molecules, as tape_long_4653_2.mol . You can download the simulation itself (which was used to make the animation) from the Chemlambda collection of simulations at Figshare, the file tape_long_4653_2.js.

The last time one of my animations was used without attribution, the situation was quickly resolved. I explained then that the chemlambda project is an Open Science project and that correct attribution is the fair thing to do.

Now, I would expect more from an academic researcher.

Anyway, again the magic of chemlambda strikes. Let me tell you what the animation is really about. Metabolism and replication are two fundamental ingredients of life. Which came first? Are these independent?  I prepared the molecule and experimented with it to show that (in the artificial toy chemistry chemlambda) metabolism and replication may be related, in the sense that metabolism may appear as failed replication.

The molecule in question is a “tape”, topologically the same as a DNA loop. On the tape there is a very small part which triggers the duplication of the tape molecule. The duplication works perfectly, there are several examples in the chemlambda collection. But this time I took a tape which duplicates without problems and I modified it in a single place. The result is a failed duplication which is spectacular in the sense that the tape molecule produces a number of disconnected graphs (i.e. other molecules), some of them are quines.

# Torsor rewrites

With the notation conventions from em-convex, there are 3 pairs of torsor rewrites.  A torsor, figured by a fat circle here, is a term $T$ of type  $T: E \rightarrow E \rightarrow E \rightarrow E$

with two pairs of rewrites (shown in the figures).

Finally, there is a third pair of rewrites which involve terms of the form $\circ A$ for $A: N$

The rewrite T3-1 says that the torsor is a propagator for $\circ A$; the rewrite T3-2 is an apparently weird form of a DIST rewrite.
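The rewrite figures are not reproduced here. For orientation only, here is the classical algebraic notion that the fat-circle node suggests: a torsor (heap) operation. These identities are a standard reference point, not necessarily the exact content of the two pairs of rewrites in the figures:

```latex
% classical torsor (heap) identities, with T applied as T a b c;
% in a group G one model is T a b c = a b^{-1} c
T\, a\, b\, b = a, \qquad
T\, a\, a\, b = b, \qquad
T\, (T\, a\, b\, c)\, d\, e = T\, a\, b\, (T\, c\, d\, e)
```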

Now, the following happen:

• if you add the torsor rewrites to em-convex then you get a theory of topological groups which have a usual, commutative smooth structure, such that the numbers from em-convex give the structure of 1-parameter groups
• if you add the torsor rewrites to em, but without the convex rewrite, then you get a more general theory, which is not based on 1-parameter groups, because the numbers from em-convex give a more general structure
• if you look at the emergent structure from em without convex, then you can define torsor terms which satisfy the axioms, but of course there is no em-convex axiom.

Lots of fun, this will be explained in em-torsor soon.

For me this is the only sane reaction to the EU Copyright Directive. The only thing to do is to keep your copyright. Never give it to another. You can give non-exclusive rights of dissemination, but not the copyright of your work.

So: if you care about your piece of work then hodl the copyright; if you don’t care about it (produced it to satisfy a job demand, for example) then proceed as usual, it is trash anyway.

For my previous comments see this and this.

If you have other ideas then share them.

# The second Statebox Summit – Category Theory Camp uses my animation

UPDATE: the post was initially written as a reaction to the fact that the Open Science project chemlambda needs attribution when some product related to it is used (in this case an animation obtained from a dodecahedron molecule which produces 4 copies; it works because it is a Petersen graph). As can be seen in the comments, everything was fixed with great speed, thank you Jelle. Here’s the new page look

The rest of the post follows. It may be nice because it made me think about two unrelated little facts: (1) I was told before about the resemblance between chemlambda molecules and the “vajra chains” (2) the structure and rewrites of the I CHING hexagrams are close to the two families of chemlambda rewrites, especially as seen in the “genes” shadow of a molecule. Putting these two things together, stimulated to find an even more hallucinatory application of chemlambda, I arrived at algorithmic divination. Interested? Write to me!

__________________________________________________

I hope they’ll fix this; the animation is probably taken from the slides I prepared for TED, Chemlambda for the people (html+js).

Here’s a gif I made from what I see today Saturday 20:20 Bucharest time.

Otherwise I’m interested in the subject and open to discussions, if any, of substance rather than category theory PR.

UPDATE: second thoughts

• the hallucinatory power of chemlambda manifests again 🙂
• my face is good enough for a TED conference (source), and now my animation is good for a CT conference, but not my charming personality and ideas
• here is a very lucrative idea, contact me if you like it, chemlambda OS research could be financed from it: I was notified about the resemblance between chemlambda molecules and the vajra chains of awareness, therefore what about making an app which would use chemlambda as a divination tool? Better than a horoscope, if well made, a huge market. I can design some molecules and the algorithm for divination.

# On the origin of artificial species

I read Newton but not Darwin’s On the Origin of Species, until now. As chance would have it, looking for new things to read in the tired landscape of libraries, I stumbled upon a translation of Darwin’s famous book. It is wonderful.

While reading it I was struck by the fact that genetics was unknown to him. And yet, what a genius. I’m almost a professional reader (if you understand what I mean) and I went through Newton, as I said, in the original, and through some of the ancient Greek philosophers (an even greater experience). Now, as I’m reading Darwin in a translation, I am aware of the translation’s limitations, but I can’t stop thinking that, before reading it, I lived this experience.

The main gain of the chemlambda project was for me the building of a world which undoubtedly has an autonomous existence, whatever your opinions may be. In my dreams, as I read Darwin, I see a rewrite of this book based on observations of chemlambda’s 427 valid molecules (eliminate from the chemlambda library of molecules those from this list; what you get are all the valid molecules).

What I don’t see, perhaps because of my ignorance, is that the logical last implication of Darwin’s work is that the theory of evolution refutes any semantics, in particular the semantics of species.

Perhaps in probabilities lies the possible blend of individual evolution and species evolution into a new theory, not unlike the theory of evolution, but as different as possible from any actual political theory. A dream, of course, a Hari Seldon dream 🙂 because probabilities look as much like semantics as space does.

Who really knows? Funding bodies, especially the private high-risk takers, don’t seem to have the balls to take risks in the field of fundamental research, the riskiest activity ever invented. Who knows? I may know, if this little cog in the evolution machine ever had a chance to.

# Summer report 2018, part 2

Continues from Summer report 2018, part 1.

On the evolution of the chemlambda project and social context.

Stories about the molecular computer. The chemlambda project evolved in a highly unexpected way, from a scientific quest done completely in the open to a frantic exploration of a new territory. It became a story-generating machine. I was “in the zone” for almost two years. Instead of the initial goal of understanding the computational content of emergent algebras, the minimalistic chemlambda artificial chemistry concentrated on the molecular computer ideas.

This idea can be stated as: identify molecules and chemical reactions which work like the interaction-net style rewrites of chemlambda. See the article Chemlambda strings for a simple explanation, as well as a recent presentation of the newest (available) version of chemlambda: v3. (It is conservative in the number of nodes and links; the presentation is aimed at a larger audience.)

This idea is new. Indeed, there are many other efforts towards molecular computing. There is the old ALCHEMY (algorithmic chemistry), where lambda calculus serves as inspiration, taking the application operation as a chemical reaction and lambda abstraction as a reactive site in a molecule. There is the field of DNA and RNA computing, where computations are embodied as molecular machines made of DNA or RNA building blocks. There is the pi calculus formalism, as pure in a sense as lambda calculus, based exclusively on the names of communication channels, which can be applied to chemistry. There is the idea of metabolic networks based on graph grammars.

But nowhere is there the idea to embed interaction net rewrites into real chemical reactions. So not arbitrary graph grammars, but a highly selected class. Not metabolic networks in general, but molecules designed so that they individually compute. Not solutions well stirred in a lab. Not static or barely dynamic lego-like molecules. Not computing with boolean gates, but functional-programming-like computing.

From the CS side, this is also new, because instead of concentrating on these rewrites as a tool for understanding lambda calculus reductions, we go far outside the realm of lambda calculus terms into a pure random calculus with graphs.

But it has to be tried, right? Somebody has to try to identify this chemistry. Somebody has to try to use the basic concepts of functional programming from the point of view of the machine, not the programmer.

For the mathematical and computing aspects see this mathoverflow question and answers.

For the general idea of the molecular computer see these html/js slides. They’ve been prepared for a TED talk with a very weird, in my opinion, story.

For the story side and ethical concerns see for example these two short stories posted at telegra.ph : Internet of smells, Home remodeling (a reuse of Proust).

In order to advance there is the need to find both funding and brain time from a dedicated team. Otherwise this project is stalled.

I tried very hard to find the funding and I have not succeeded (other weird stories, which maybe some day I will tell).

I was stalled and I had to go back to my initial purpose: emergent algebras. However, being so close to reverse engineering nature’s OS gives new ideas.

After a year of efforts I understood that it all comes down to stochastic luck, which can be groomed and used (somehow). This brings me to the stories of the present, for another post.

# Summer report 2018, part 1

In this report I intend to present explanations about the scientific evolution of the chemlambda project, about the social context, and about new projects (em and stochastic evolution). I shall also write about my motivations and future intentions.

On the evolution of the chemlambda project and social context.

Inception and initial motivations. This open notebook contains many posts witnessing the inception of chemlambda. I started to learn about and understand some aspects of the theory of computation as a geometer. My goal was to understand the computational content of working with the formalism of emergent algebras. I thought that basically any differential geometric computation reduces to a graph rewrite automaton, without passing through the usual road, which is non-geometrical, i.e. reduction to cartesian numerical manipulations. The interest is obvious to me, although I discovered soon that it is not obvious to many. A preferred analogy which I used concerns the fly and the researcher who tries to understand the visual system of the fly. The fly’s brain, a marvel of nature, does not work with, nor does it contain a priori knowledge of, cartesian geometry, while at the same time the explanations of the researcher are almost completely based on cartesian geometry considerations. What is, then, the way of the fly’s brain? And why can it be explained by recourse to sophisticated (for a fly) abstractions?

That’s why I advanced the idea that there is an embedded mechanism in nature which makes abstractions concrete and runs them by the dumbest algorithm ever: random and local. In this sense, if all differential geometric computations can be executed by a graph rewrite automaton, using only random rewrites, applied only locally (i.e. using only a small number of nodes and arrows), then the fly’s brain way and the researcher’s brain way are simply the same; only the semantics (which the researcher has) is different, being merely a historical construction based on centuries of reverse-engineering techniques called geometry, physics, mathematics.
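As an illustration of what “random and local” means operationally, here is a minimal sketch in Python. The graph representation and the rule set are hypothetical placeholders of my own, not actual chemlambda rewrites; the point is only the shape of the loop: pick a random site, try to match a rule using local information, apply it, repeat.

```python
import random

def random_local_reduction(graph, rules, steps, seed=0):
    """graph: a set of local facts; rules: (match, apply) pairs,
    each looking only at a small neighborhood of the chosen site."""
    rng = random.Random(seed)
    for _ in range(steps):
        if not graph:
            break
        site = rng.choice(sorted(graph))   # pick a random site
        for match, apply_ in rules:
            if match(site, graph):         # local pattern match
                apply_(site, graph)        # local rewrite
                break
    return graph

# toy rule on a "graph" of integers: a local rewrite that erases
# an even element (a stand-in for consuming a small local pattern)
rules = [(lambda s, g: s % 2 == 0, lambda s, g: g.discard(s))]
result = random_local_reduction(set(range(10)), rules, steps=1000)
print(sorted(result))
```

The algorithm never inspects the whole graph at once, which is exactly the “dumbest algorithm” constraint: no scheduler, no global strategy, only random chance and local matches.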

The emergent algebras formalism actually has two parts: the first, which can be easily reduced to graph rewrites, and the second, which concerns passing to the limit in a precise sense and thereby obtaining new, “emergent” rewrites and equivalences of rewrites. At that initial point I had neither the pure graph rewrites formalism nor the passing-to-the-limit formalism, except for some particular results (in metric geometry and, intriguingly, in some problems related to approximate groups, then a hot topic of work by Tao and collaborators).

That is how GLC (graphic lambda calculus) appeared. It was, in retrospect, a particular formulation analogous to interaction graphs, with the new idea that it is applicable to geometry, via the fact that the emergent algebra rewrites are of the same kind as the interaction graph rewrites. Interaction graphs are an old subject in CS, only my point of view was completely different from the classical one. Where the functional programming wizards were interested in semantics, global concepts and the power of humanly designed abstractions, I was interested in the minimal, machine-like (or fly’s-brain-like), random and automatic aspects.

Because approximate groups were a hot subject then, I embedded a little part of what I was thinking about into a grant collaboration financed locally. Recall that I was always an Open Science researcher, therefore I concentrated on openly (i.e., back then, via arXiv) constructing the fundamentals from which particular applications to approximate groups would have been low hanging fruit. However, for what I believe are political reasons (I publicly expressed, as usual, my strong feelings against academic and political corruption, which debases me as a citizen of my country, which I have always loved very much), my grant funding was cancelled even though I did a lot of relevant work, by far the most publicly visible and original. Oh well, that’s life; I was never much interested in these political aspects. I learned the hard way a truth: my country has a great pool of talents which makes me proud, but at the same time talent here is choked by a group of mediocre and opportunistic managers. They thrive not because of their scientific talent, which is not nonexistent, but only modest; they thrive because of their political choices. This state of affairs created an inverted pyramid of power (as seen from the talent point of view).

I therefore filed in my notebooks the problem of understanding how a linear dilation structure emerges from an approximate group. There was nothing to stop me from going full OS.

I therefore wrote the Chemical concrete machine paper because I meant it: there should be a way to make a machine, the dumbest of all, which works like Nature. This was an advance over GLC, because it had almost all rewrites local (except the global fan-out) and because it advanced the idea of the dumbest algorithm, and that the dumbest algorithm is the way Nature works.

Moreover, the interest in GLC soared and I had the occasion to talk a lot with Louis Kauffman, a wonderful researcher whom I have always admired, the king of knot theory. There were also lots of CS guys interested in GLC and they tried to convince me that maybe GLC holds the key to true decentralized computing. A project with some of them and with Louis (contained in this arXiv paper) was submitted to an American agency. Unfortunately, even if the theoretical basis was appreciated, the IT part was not well done; actually it was almost nonexistent. My problem was that the ideas I advanced were not accepted (even by Louis, sometimes); I needed somebody (I am a mathematician, not a programmer, see?) to write some pretty simple programs and let them run, to see whether I was right and semantics is just human BS or not.

For an artificial life conference I wrote with Louis another presentation of chemlambda, after the GLC project was not accepted for US funding. The formalism was still not purely local. There, Louis presented his older and very interesting points of view about computation and knot theory. These were actually different from mine, because for me knot theory is yet another graph rewriting automaton (without a defined algorithm for functioning). Moreover, recalling emergent algebras, I have not managed to interest Louis in my point of view that the Reidemeister 3 move is emergent, not fundamental.

Louis Kauffman is the first programmer of chemlambda. Indeed, he succeeded in making some reductions in chemlambda using Mathematica. I don’t have Mathematica, as I never use anything on my computers which is not open. I longed for somebody, a real programmer, to make those darned simple programs for chemlambda.

I was interested back then in understanding chemlambda quines and complex reductions. On paper it was very, very hard to make progress.

Also, I did not succeed in gathering interest for the emergent algebras aspect. Chemlambda simplified the emergent algebra side by choosing a minimal set of nodes, some of which had an emergent algebra interpretation, but nobody cared. It is hard, though, to find anybody familiar with modern metric geometry and analysis who is also familiar with interaction nets.

After some depressing months I wrote the programs in two weeks and got the first chemlambda reduction made with a combination of awk programs and d3.js.  The final repository is here.

The version of chemlambda (call it v2) used is explained in the article Molecular computers. It is purely local.

From there my choice was to make chemlambda a flagship of Open Science. You know much of the story, but you may not know how and why I built more than 400 chemlambda molecules. The truth is that behind the pretty animations, almost each molecule deserves a separate article; or, otherwise stated, when you look at a chemlambda molecule in action you see a visual version of a mathematical proof.

The chemlambda formalism has been externally validated, first by chemlambda-py (which has one rewrite wrongly implemented, but is otherwise OK), then by chemlambda-hask, which is much more ambitious, being a platform for a Haskell version.

As for the connection with knot theory, there is the Zipper Logic article (though, like chemlambda v1, it is not a purely local algorithm; but it can easily be made so by the same techniques as chemlambda v2).

I also used figshare for the chemlambda collection of simulations (which covers the animations shown in the chemlambda collection on G+, see them starting from an independent list).

As concerns the social communication aspects of this OS project, it was a huge success.

# A stochastic version for hamiltonian inclusions with convex dissipation

Appeared as arXiv:1807.10480 (it was previously available as (draft))

A stochastic version and a Liouville theorem for hamiltonian inclusions with convex dissipation

Abstract: The statistical counterpart of the formalism of hamiltonian systems with convex dissipation arXiv:0810.1419  arXiv:1408.3102 is a completely open subject. Here are described a stochastic version of the SBEN principle and a Liouville type theorem which uses a minimal dissipation cost functional.

just in time for the birthday of my son Matei 🙂

UPDATE: I asked again today (Sept 12) after the vote on the EU Copyright Directive.

___________

As a researcher I would very much appreciate answers to the following questions:

• suppose I put an article in arXiv, then it appears in a journal. Are the uses of the link to the arXiv version affected in any way?
• continuing, will the choices of licenses, among those used by arXiv, lead to different treatments?
• does the EU copyright reform apply to articles which are already available on arXiv  (and also in journals)?
• is there anything in the EU copyright reform which harms the arXiv?
• what about other repositories, like figshare for example? what about zenodo?

I insist on the arXiv example because in some research fields, like mathematics or physics, the usual way things happen with articles is this: first the researcher submits the article to arXiv, then the article is published in a legacy journal. Sometimes the article is not published in journals, but it is cited in other articles published in journals. Most articles are therefore available in two places: arXiv (say) and a journal. From what I read about the EU copyright reform, I can’t understand whether the use of the arXiv version of an article will be affected by this reform.

While I can understand that there are many problems concerning open source software repositories, I would like to see a clear discussion about this subject, which is close to but different from the subject of open source software repositories.

# Groups are numbers (3). Axiom (convex)

This post will be updated as the em draft progresses. Don’t just look; ask and contribute.

UPDATE 3: Released: arXiv:1807.02058.

UPDATE 2: Soon to be released. I found something so beautiful that I took two days off, just to cool down. I wish to release this first em-convex article in a week, because the finding does not modify the story told in that article. Another useful side effect of writing this is that I found a wrong proof in arXiv:0804.0135, so I’ll update that too.

UPDATE: Don’t mind my rants too much; I have this problem, that what I am talking about is in the future with respect to what I show. For example, I tried to say several times, badly, that chemlambda may indeed be related to linear logic, because both are too commutative. Chemlambda is as commutative as linear logic because in chemlambda we can do the shuffle. And the shuffle is equivalent to commutativity; that’s what I tried to explain last time in Groups are numbers (1). There is another, more elaborate point of view, a non-commutative version of chemlambda, in the making. In the process, though, I went “oh, shiny thing, what’s that” several times, and now I (humbly try to) retrace the correct steps, again, in a form which can be communicated easily. So don’t mind my bad manners, I don’t do it to look smart.

The axiom (convex) is the key of the Groups are numbers (1) (2) thread. Look at this (as it unfolds) as a combination of:

• the construction of the field of numbers in projective geometry and
• the Gleason and Montgomery-Zippin solution to the Hilbert 5th problem

I think I’ll leave the (sym) axiom and the construction of coherent projections for another article.

Not in the available draft are about 20 pages about the category of conical groups: why it is not compact symmetric monoidal (so goodbye linear logic), but it has Hilb as a sub-category. This will probably make another article.

I sincerely doubt that the article form will be enough. I can already imagine anonymous peer reviews where clueless people will ask me (again and again) why I don’t do linear logic or categorical logic (not that it is useless, but in its present form it is heavily influenced by a commutative point of view; it is a fake generalization from a too particular case).

A validation tool would be great. Will the chemlambda story repeat, i.e. will I have to make, alone, some mesmerizing programs to prove what I say works? I hope not.

But who knows? Very few people deserve to be part of the Invisible College. People who have the programming skills (the easy part) and the lack of prejudices needed to question linear logic (the hard, hacker part).

# Groups are numbers (2). Pattern matching

As in divination, pattern matching. Continues from Groups are numbers (1).

We start from elementary variables, then we define number terms by two operations: subtraction and multiplication.

• Variables are terms.
• Subtraction (first line): if a is a variable and b is a term, then $a-b$ is a term.
• Multiplication (second line): if a and b are terms, then $ab$ is a term.
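The grammar above can be sketched as a small Python datatype. The names `Var`, `Sub`, `Mul` and the checker `is_term` are mine, chosen for illustration; they are not from the em draft:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:           # an elementary variable
    name: str

@dataclass(frozen=True)
class Sub:           # a - b, where a must be a variable, b a term
    a: "Var"
    b: "object"

@dataclass(frozen=True)
class Mul:           # ab, where a and b are terms
    a: "object"
    b: "object"

def is_term(t) -> bool:
    """Check that t is built by the grammar: variables; subtraction
    with a variable as first argument; multiplication of terms."""
    if isinstance(t, Var):
        return True
    if isinstance(t, Sub):
        return isinstance(t.a, Var) and is_term(t.b)
    if isinstance(t, Mul):
        return is_term(t.a) and is_term(t.b)
    return False

# (a - b)d is a term; (ab) - d is not, because the first argument
# of a subtraction must be a variable
a, b, d = Var("a"), Var("b"), Var("d")
print(is_term(Mul(Sub(a, b), d)))    # True
print(is_term(Sub(Mul(a, b), d)))    # False
```

The asymmetry of subtraction (variable on the left, arbitrary term on the right) is exactly what the first line of the grammar imposes.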

By pattern matching we can prove for example this:

[update: figure replaced, the initial one was wrong by pattern matching only. The difference is that in this correct figure appears “(a-b)d” instead of the wrong “d(a-b)”]

What does it mean? These are just binary trees. Well, let’s take a typing convention

where e, x, … are elements of a vector space and the variables are invertible scalars. Moreover take $e = 0$ for simplicity.

Then the previous pattern matching dream says that

$(1-(a-b))c x + (a-b)(c-d) x = (c - (a-b)d)x$

which is true, but for entirely irrelevant reasons (vector space structure, associativity and commutativity of addition, distributivity, etc.):

$(c- ac + bc + ac -ad - bc + bd) x = (c - ad + bd) x = (c - (a-b)d)x$

With the previous typing conventions it reads:

$(c-(a-b))x = (1-b)(c-a)x + b(1- a^{-1})(c-a) x + (bc a^{-1})x$

which is true because the right hand side is:

$((1-b)(c-a) + b(1- a^{-1})(c-a) + bc a^{-1} )x =$

$= (c-a-bc+ab+bc-ab-bca^{-1} +b+bc a^{-1}) x = (c-a+b) x = (c-(a-b))x$

Which is funny because it does not make any sense.
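Both identities above can also be checked mechanically. Here is a small Python sketch using exact rational arithmetic at random points (the vector $x$ is dropped, since the identities are scalar); the function name `check` is mine:

```python
from fractions import Fraction
from random import randint

def check(trials=100):
    """Verify both identities at random nonzero rational points."""
    for _ in range(trials):
        a, b, c, d = (Fraction(randint(1, 50), randint(1, 50))
                      for _ in range(4))
        # first identity: (1-(a-b))c + (a-b)(c-d) == c - (a-b)d
        assert (1 - (a - b))*c + (a - b)*(c - d) == c - (a - b)*d
        # second identity (a is invertible, i.e. nonzero):
        # c-(a-b) == (1-b)(c-a) + b(1-1/a)(c-a) + bc/a
        assert c - (a - b) == ((1 - b)*(c - a)
                               + b*(1 - 1/a)*(c - a) + b*c/a)
    return True

print(check())   # True
```

This is of course only numerical evidence at sample points, but since both sides are polynomial (respectively rational) in a, b, c, d, agreement on enough points settles the identities.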

# Groups are numbers (1), the shuffle trick and brackets

What I call the shuffle trick is this rewrite. It involves, at left, a pattern of 3 decorated nodes, with 5 ports (the root and 1, 2, 3, 4). At the right there is almost the same pattern, only the decorations of the nodes have changed and the ports 2, 3 are shuffled.

I have not specified the orientation of the edges, nor what the decorations (here the letters “a”, “b”) are. As previously with chemlambda or emergent algebras, these graphs are in the family of trivalent oriented ribbon graphs. You can find them everywhere: in physics, topology, knot theory or interaction graphs. Usually they are defined as a pair of two permutations, A and B, over the set of “half-edges” (which are really half edges). The permutation A has the property that AAA=id and the orbits of A are the nodes of the graph. Translated, this gives a circular order of the edges incident to a node. The permutation B is such that BB=id and its orbits are the unoriented edges. Indeed, an edge is made of two half edges. The orientation of an edge is given by picking one of the two half edges which make it, or equivalently by replacing the set of two half edges, say half-edge and B(half-edge), by a list of the two half edges.
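The two-permutation description can be sketched in a few lines of Python. The example graph (two trivalent nodes joined by three parallel edges, the “theta” graph) and the half-edge labels 0–5 are mine, chosen only to show the encoding:

```python
# A trivalent ribbon graph as two permutations on half-edges:
# A with AAA = id (orbits = nodes, giving the cyclic order of
# half-edges at each node), B with BB = id and no fixed points
# (orbits = unoriented edges).
A = {0: 1, 1: 2, 2: 0,  3: 4, 4: 5, 5: 3}   # two 3-cycles = two nodes
B = {0: 3, 3: 0, 1: 4, 4: 1, 2: 5, 5: 2}    # three 2-cycles = edges

def orbits(perm):
    """Return the orbits of a permutation given as a dict."""
    seen, out = set(), []
    for h in perm:
        if h not in seen:
            orb, x = [], h
            while x not in seen:
                seen.add(x)
                orb.append(x)
                x = perm[x]
            out.append(tuple(orb))
    return out

nodes = orbits(A)
edges = orbits(B)

# sanity checks: AAA = id and BB = id
assert all(A[A[A[h]]] == h for h in A)
assert all(B[B[h]] == h for h in B)
print(nodes, edges)
```

Orienting the edges then amounts to turning each 2-element orbit of B into an ordered pair, as described in the text.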

I prefer, though, another description of these graphs, using sticks and rings, or, what is the same, using a partially defined SUCC function (which defines the sticks or rings) and a GLUE permutation with GLUE GLUE = id. That is what I use behind the description of the chemlambda strings, and what you can grasp by looking at the needs repository.

With the sticks and rings notation, here is an example of the shuffle trick:

The shuffle trick is very important in chemlambda. It is this one, in a static diagram. You see that the decorations of the nodes are “FO” and “FOE” and that, in chemlambda, this is actually achieved via two rewrites.

More about the chemlambda shuffle trick in the all-in-one illustrated shuffle trick post, where it is explained why the shuffle trick is so important for duplication. An animation taken from there is this one, the dynamical version of the previous static picture. [The molecule used is this; you can see it live here.]

But the shuffle trick is relevant to emergent algebras as well. This time we play with oriented binary trees, with nodes which can be white or black, decorated with “a”, “b” from a commutative group. To refresh your memory a bit, here are the rules of the game for emergent algebras; look at the first two columns and ignore the third one (it is old notation from graphic lambda calculus). An emergent algebra over a set X is a collection of operations indexed by a parameter in a commutative group. We can represent these operations using oriented trivalent ribbon graphs (left side) or just binary trees (right side), here with the leaves at the right and the root at the left.

(image taken from this post).   (Image changed)

In this post we’ll use Reidemeister moves (they are related to the true Reidemeister moves).

(Emergent algebras have one more property, but for this we need a uniform structure on X, because we need to take limits wrt the parameter which are uniform wrt the leaves… not needed here for the moment.)

Further on I’ll use the representation with binary trees, i.e. I’ll not draw the orientation of the edges; recall: from the leaves, at right, to the root, at left.

By using the Reidemeister 2 move twice, we can make the following version of a shuffle trick (in the figure below the orientation of the edges is from right to left, i.e. from the leaves 1, 2, 3, 4 to the root)

Encircled is a graph which quantifies the difference between the left and right sides of the original shuffle trick. So if we want a real shuffle trick in emergent algebras, then we would like this graph to transform into an edge, i.e. the following rewrite

If this rewrite is possible in an emergent algebra, then we’d have a shuffle trick for it. Conversely, if the shuffle trick would be valid, then the graph from the left would transform to the one from the right by that shuffle trick and two Reidemeister 2 moves.

But look closer at this graph: reading from right to left, it looks like a bracket, or commutator in a group. It has the form $b^{-1} a^{-1} b a$, or almost!

This is because we can prove that it is indeed an approximate version of the commutator, and that the shuffle trick in emergent algebras is possible only in the commutative case. We shall apply this further to chemlambda, to show that it is, in a sense, valid in a commutative frame. We can also work non-commutatively, the first example being the Heisenberg group. Lots of fun!

# Open Access Movies

Dear Netflix, Dear Hollywood, Dear Cannes Competition, etc etc etc

Dear artists,

You face tough times. Your audiences are bigger than you ever imagined. Your movies, your creations are now very easy to access, by anybody, from everywhere. But you can’t monetize this as much as you want. The artists can’t be paid enough, the producers can’t profit enough. There is no respect left for your noble profession. A screaming idiot with a video camera is more popular than a well-thought-out, well-funded movie with a dream cast.

There is a solution which I humbly suggest. Take the example of academic scientists. They are an absolute disaster as concerns communication talent, but, reluctantly, you may grant them some intellectual capacities, even if used in the weirdest way and to their least profit.

They invented Gold Open Access and I believe that it is a great idea for your business.

You have to recognize the problem, which is that you can no longer ensure a good profit from selling your movies. The audience will follow the cheapest path and will not pay you. That’s the sad truth.

But, what about the movie makers? They have money.

People in the audience are always seeking the best movies. Give them the best movies for free!

You are the gatekeepers. Dissemination of movies is trivial. Make the movie makers pay!

You are the ones who can select the best movies. Of the thousands of movies made each year, only a hundred are among the best, for a global audience.

Therefore, artists, if you want to work with the best, then you have to be in the cast of one of the best 100 movies of the year. Then you’re good, and with some luck (there are lots and lots of artists, you know) you’ll have the chance to be in the cast of next year’s best movies.

Producers, why don’t you use your connections with politicians and convince them to take money from taxes and give it to you, in a competition akin to the various research grant competitions in the academic world?

Producers can always split the money with the dissemination channels. They will be part both of the juries which decide which movie gets more funding and of the juries which decide which movie is transmitted by Netflix, which one will deserve to be called a Hollywood production or, for the Europeans, which one makes the list of movies for the next Cannes competition.

In this way the producers make profit before the movie is stolen by the content pirates.

In this way the dissemination channels (Netflix, etc) have the best movies to show, vetted by respected professionals, and already paid by the various competition budgets.

In this way politicians can always massage their message to the populace as they want. And finance future campaigns.

So the great idea, borrowed from the intelligent academic research community, is to make the creators compete and pay for the honor of being among the 100 best creators, with money from taxes, taxes taken from the audience who sees the movies for free.

# Groups are numbers (0)

I am very happy because today I managed to finish a thread of work concerning computing and space. In future posts I’ll take the time, with great pleasure, to explain it, with this post serving as an impressionistic introduction.

Eight years ago I was obsessing about approximate symmetric spaces. I had one tool, emergent algebras, but not the right computational side of it. I was making a lot of drawings, using links, claiming that there has to be a computational content deeply hidden not in the topology of space, but in the infinitesimal calculus. I reproduce here one of the drawings, made during a time spent at the IHES, in April 2010. I was talking with people who asked me to explain with words, not drawings; I was starting to feel that category theory did not give me the right tools; I was browsing Kauffman’s “Knots and Physics” in search of hints. (Later we collaborated about chemlambda, but knots are not quite enough either.)

This is a link which describes what I thought was a good definition of an approximate symmetric space. It uses conventions later explained in Computing with space, but the reason I reproduce it here is that, at the moment, I thought it was totally crazy. It is organic, as if it’s alive; it looked like a creature to me.

There was an article about approximate symmetric spaces later, but not with such figures. Completely unexpectedly, these days I had to check something in my notes from back then and I found the drawing. After the experience with chemlambda I totally understand the organic feeling, and also why it resembles the “ouroboros” molecules, which are related to the Church encoding of numbers and to the predecessor.

Because, akin to the Church encoding in lambda calculus, there is an encoding in emergent algebras, which makes these indeed universal, so that a group (to simplify) encodes numbers.

And it is also related to the Gleason-Yamabe theorem discussed here previously. That’s a bonus!

# Quines in chemlambda (2)

Motivated by this comment I made on HN, reproduced further below, I thought about making an all-in-one page of links concerning various experiments with quines in chemlambda. There are too many for one post though. In the Library of chemlambda molecules about 1/5 of the molecules are, or involve, quines.

[EDIT: see the first post Quines in chemlambda from 2014]

If you want to see some easy to read (I hope) explanations, go to the list of posts of the chemlambda collection and search for “quine”. Mind that there are several other posts which do not have the word “quine” in the title, but do have quine-relevant content, like those about biological immortality, or about senescence, or about “microbes”.

There’s a book to be written about it, with animated pages. Or a movie, with simulations in a uniformised style. Call me if you want to start a project.

Here is the comment which motivated this post.

on “autocatalytic quines”. The Introduction section explains very nicely the history of the uses of quines in artificial life.

There are some weird parts in all this, though, namely that we may think about different properties of life in terms of quines:

1) Metabolism, where you take one program, consume it and produce the same program

2) Replication, where you take one program, consume it and produce two copies.

3) Death

I thought about this a lot during my chemlambda alife project, where I have a notion of a quine which might be interesting, given the turn of these comments.

A chemlambda molecule is a particular trivalent graph (imagine a set of real molecules; the graphs don’t have to be connected). Chemical reactions are rewrites, like in reality: when a certain pattern is detected (by an enzyme, say), the pattern is rewritten.

There are two extremes in the class of possible algorithms. One extreme is the deterministic one, where rewrites are done whenever possible, in the order of preference from a list, so that possible conflicting patterns are always resolved in the same way. The other extreme is the purely random one, where patterns are randomly detected and then executed or not according to a coin toss.
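The two extreme strategies can be sketched in code. This is my own toy framing, not the project’s code: “graphs” are plain strings and a rewrite is a (pattern, replacement) pair, which is much poorer than chemlambda’s graphs and rewrites, but the control flow is the point.

```python
import random

def deterministic_step(g, rules):
    """Apply the most preferred applicable rewrite, at the leftmost match."""
    for pat, rep in rules:              # rules listed in order of preference
        i = g.find(pat)
        if i >= 0:
            return g[:i] + rep + g[i + len(pat):]
    return g                            # no rewrite available

def random_step(g, rules, rng):
    """Detect a random pattern, then execute it or not on a coin toss."""
    matches = [(i, pat, rep) for pat, rep in rules
               for i in range(len(g)) if g.startswith(pat, i)]
    if not matches:
        return g
    i, pat, rep = rng.choice(matches)
    return g[:i] + rep + g[i + len(pat):] if rng.random() < 0.5 else g

rules = [("ab", "ba"), ("ba", "")]      # two made-up rewrites, by preference
g = "abab"
for _ in range(10):
    g = deterministic_step(g, rules)
print(repr(g))                          # the deterministic run terminates: ''
```

The deterministic run always resolves conflicts the same way, so it is reproducible; the random run, on the same input, can take a different path every time.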

Now, a quine in this world is by definition a graph which has a periodic evolution under the deterministic algorithm.

The interesting thing is that a quine, under the random algorithm, has some nice properties, among them that it has a metabolism, can self-replicate and it can also die.

Here is how a quine dies. Simple situation. Take a chemlambda quine of period 1. Suppose that there are two types of rewrites: the (+) one, which turns a pattern of 2 nodes into a pattern of 4 nodes, and the (-) one, which turns a pattern of 2 nodes into a pattern of 0 nodes (by gluing the 4 remaining dangling links in the graph).

Then each (+) rewrite gives you 4 possible new patterns (one per node) and each (-) rewrite gives you 2 possible new patterns (because you glued two links). Mind that you may get 0 new patterns after a (+) or (-) rewrite, but if you think that a node has an equal chance to be in a (+) pattern or in a (-) pattern, then it is twice as likely that a new pattern comes from a (+) rewrite than from a (-) rewrite.

Suppose that in the list of preferences you always put the (+) type in front of the (-) one. It looks like, this way, graphs will tend to grow, right? No!

In a quine of period 1, the number of (+) patterns equals the number of (-) patterns.

Hence, if you use the random algorithm, the non-execution of a (+) rewrite is twice as likely to affect future available rewrites as the non-execution of a (-) rewrite.

In experiments, I noticed lots of quines which die (there are no more rewrites available after a while), some which seem immortal, and no example of a quine which thrives.”
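The death of quines can be caricatured in code. This is my own drastic simplification, not a chemlambda simulation: forget the graph and track only the node count; each step picks a 2-node pattern which, as in a period-1 quine, is (+) or (-) with equal probability, and executes it on a coin toss. The node count then performs a symmetric random walk, and a symmetric random walk eventually hits zero.

```python
import random

# Crude caricature (mine, not a faithful chemlambda run): a period-1
# quine has as many (+) patterns as (-) patterns, so model the molecule
# by its node count alone. A (+) rewrite turns 2 nodes into 4 (net +2),
# a (-) rewrite turns 2 nodes into 0 (net -2).

def lifetime(nodes, max_steps, rng):
    for step in range(max_steps):
        if nodes < 2:
            return step              # no 2-node pattern left: the quine died
        if rng.random() < 0.5:       # coin toss: rewrite not executed
            continue
        nodes += 2 if rng.random() < 0.5 else -2   # (+) or (-), equally likely
    return None                      # still alive after max_steps

rng = random.Random(0)
deaths = sum(lifetime(8, 10_000, rng) is not None for _ in range(200))
print(f"{deaths}/200 runs died")     # the large majority die
```

In this caricature there are also runs which happen to drift upward for a long time, matching the observation that some quines seem immortal while most die.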

I’m fine. I still exist and my life is better. I write this after several years of experiments with Open Science in social media. I still keep a presence with Google because I don’t want to delete the chemlambda collection. But in no way am I satisfied with this.

UPDATE 4: In January 2019, my long ago deleted Twitter account appears as suspended. So these liars pretend that my account is not deactivated, but suspended by them.

UPDATE 3: I deleted the chemlambda collection.

UPDATE 1: I deleted my Medium account, it was just another Twitter sht.

UPDATE 2: The fight of legacy media against FB is so stupid. But amusing. It may be useful, like an infection with a gut parasite which makes the immune system able to kill a more dangerous viral infection. When this is done, the gut parasite is easy to get rid of…

I explained what I think is wrong with corporate social media, from the point of view of a researcher who wants to share scientific content and discuss it. For example in the Twitter “moment” which no longer exists. (I think very few people saw it because it was hidden by Twitter 🙂 )

Dissatisfied with the careless treatment of scientific data, precious data, by corporate social media, in this “moment” I explain that I tried, probably successfully, to socially hack Google Plus. It worked, reasons here.

The other reason for which social media is not good for Open Science is that successful collaborations via said media are very rare. Most of the interactions have almost no scientific value. It is an endless stream of hot air bubbles coming from a school of bored goldfish.

The reason is not that people [who are willing to interact via social media] are stupid. Don’t believe this shallow explanation. I think this is because of the frame of mind cultivated by social media. People there consume, they don’t build. They are encouraged to click, not to reason. They have to do everything as quickly as possible. Why? They don’t know. I imagine though that [some hackers excepted] there is not much rational thought in the brain of a casino client.

There are therefore two reasons which make social media bad for Open Science:

• bad, disrespectful treatment of scientific data, despite low volume and high density
• bad medium for rational interaction, despite being presented as an enhanced one.

I’ll go into a little bit of detail concerning Facebook and Twitter, because until now I wrote mainly about my experience with Google.

Facebook. I tried several times to use Facebook for the same purpose. But I failed, because of a complete lack of any chance of visibility. Even the 10s animations from real simulations were badly presented on FB (hence technical reasons). Moreover the algorithms were clearly not in favor of the kind of posts I made on Google Plus. But I have to admit that it was partly a matter of chance that Google had this idea of collections, plus the superior technical possibilities, which made the chemlambda collection very visible.

Twitter. I have had a presence on Twitter since, I think, 2011. I intentionally kept a low count of people I followed, varying it and keeping only those who posted interesting tweets. However, it has been clear to me for a long time that there is heavy censorship, or call it editing, same thing, on what I see and what my followers see.

From time to time I made or consumed political tweets. I am free to do this, as far as I know, and I am a grownup whose youth was spent under heavy thought police, so allow me to be furious to see the new thought police enforced exactly by those whom I admire in principle.

Going back to Google, the same thing happened, btw. Here is a clear case where I was allowed a rare glimpse over the algorithmic wall, where I and my interlocutor each saw a comment censored by Google in the other’s worldview:

Story here.

People are more and more furious now (i.e. 2 years after), especially about Facebook and Cambridge Analytica. But let’s ignore politics and go back to using social media for science.

Well, corporate social media does not care about this. Moreover, censorship (aka algorithmic editing) can have very bad consequences for scientific communication, like: the inhibition of better scientific ideas in order to protect a worse technical solution which brings money, or straight scientific theft, when a big data company obtains good scientific ideas for free.

OK, what about the Invisible College?

I think we really are on the brink of a scientific revolution. We do have the technical means to interact scientifically. Most of the scientists who ever existed on Earth are alive. Rational, educated thought and brain power, from professionals of many fields, from inquisitive minds, from creative freaks, these are in an unprecedented quantity. Add computing power to that mix, add the holy Internet. Here we are, ready to pass to a new level.

If you look back to the last scientific revolution, then one of the places where it happened was in a precursor group of the Royal Society of London called The Invisible College.

What a great idea!

Look, these people really were like us. In a past post I shared the front page of a famous book by Newton (which appeared posthumously) where you can recognize the same ideas as today.

I am sure that there are other members, most of them perhaps future ones, of the Invisible College of the 21st century, where we have to solve these problems: how to treat scientific information fairly, among us, the members, and how to interact rationally and thoughtfully. Long term.

Because social media failed us. Because who cares about politics?

Here, there are lots of good names from that period, like the related “College for the Promoting of Physico-Mathematical Experimental Learning” from 1660, but my liking goes to the Invisible College.

I end with two invitations for more private discussions

but I fully encourage you to discuss here as well, if you want.

# Blockchain categoricitis

The following conditions predispose you to categoricitis and this can be very bad for your savings:

• baby boomer
• you were a programmer once or you made money from that
• you are a known researcher but your last original idea was last century
• interested in pop science
• you think an explanation by words can be shorter than one by mathematics
• don’t know mathematics at the researcher level
• you think you’re smart
• you are all about internet, decentralization and blockchains
• you believe in ether or may have pet alternative theories about the universe
• you are not averse to a slight cheating, if this is backed by solid institutions or people.

The more of these conditions are present, the more you are at risk.

(If you work in the money business then you are immune. For you, those people with categoricitis are a golden opportunity.)

The most dangerous case is when you feel the need to be blockchain cool, but you missed the Bitcoin train. Your categoricitis then grabs you, making you smell like money. You feel the need to invest. Hey, what’s the problem? It’s backed by math, banks and M$. You’ll be relieved rather sooner than later 🙂

# The “Chemlambda for the people” PM

Let’s have some fun with the release of the original recording of the talk rehearsal “Chemlambda for the people”. I’ve told the story in this post; you can see the slides I used here (needs js!).

The rehearsal took approx. 31 min with discussions, so I split the original mp4 into 3 parts, video and sound as they were. Enjoy:

# A question about binary trees

I need help to identify where the following algebra of trees appears.

UPDATE: it seems that it does not appear anywhere else. Thanks for the input, even if it led to this negative result. Please let me know, though, if you recognize this algebra somewhere!

We start with the set of binary trees:

• the root I is a tree
• if A and B are trees then AB is the tree obtained from A and B by adding to the root I the LEFT child A and the RIGHT child B

We think therefore about oriented binary trees, so that any node which is not a leaf has two children, called the left and the right child.

On the set of these binary trees we define two operations:

• a 1-ary operation denoted by a *
• a 2-ary operation denoted by a little circle

The operations are defined recursively, as in the following picture:

I am very interested to learn about the appearance of this algebra somewhere. My guess is that this algebra has been studied. For the moment I don’t have any information about this and I kindly ask for help. If not, what can be said about it?

It is easy to see that the root is a neutral element for the operation “small circle”. Probably the operation “small circle” is associative; however, this is less clear than I first thought.
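The set of trees itself is easy to make concrete. Below is a sketch in Python with my own tuple encoding (`(A, B)` for the tree with left child A and right child B); the two operations are defined by the picture and are not reproduced here, so only the carrier set is shown.

```python
# Trees as nested tuples: I is the root alone, and (A, B) is the tree
# obtained by attaching A as LEFT child and B as RIGHT child of a new root.
I = "I"

def trees(n):
    """All oriented binary trees with exactly n two-child (internal) nodes."""
    if n == 0:
        return [I]
    out = []
    for k in range(n):               # k internal nodes go to the left subtree
        for a in trees(k):
            for b in trees(n - 1 - k):
                out.append((a, b))
    return out

# Sanity check: the counts are the Catalan numbers 1, 1, 2, 5, 14, ...
counts = [len(trees(n)) for n in range(5)]
print(counts)                        # [1, 1, 2, 5, 14]
```

Anyone who wants to experiment with the two operations can plug their recursive definitions (from the picture) on top of this encoding and test associativity of the small circle on small trees by brute force.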
If you think this structure is too dry to be interesting, then just try it, take examples and see what it gives. How long does it take to compute the result of an operation, etc…

Thank you for help, if any!

# Creepy places to be

Google is creepy. Facebook, I heard, is creepy. Maybe you don’t know yet, but Firefox is creepy. Twitter is a creepy bad joke. Hacker News is creepy by omission, if that matters to anybody.

If you want to talk then mail me at one of the addresses down the first page of this article. Or open an issue at one of my repositories. Or come see me. Or let me see you.

See you 🙂

# Chemlambda strings

I uploaded Chemlambda strings at Figshare.

“Chemlambda is an asynchronous graph rewrite automaton which uses a carefully selected family of graph rewrites of the kind encountered in Interaction Nets (IN). In this article is given a version of the graphs and rewrites which is more chemistry friendly. It is argued that real chemistry has enough place for accommodating chemlambda. The use of IN rewrite patterns in real chemistry, as templates of concrete chemical reactions, is an unexplored direction towards molecular computers. The simulations which validate chemlambda as a toy chemistry show that there is a big potential in this direction.”

The article is paired with the needs repository. Look down the first page of the article for the contact mail.

So what’s new with respect to chemlambda?

1. It is conservative. I said previously that it can be done, but here is the proof now.

2. It is open to vast generalization. I explained previously that there is not much lambda in chemlambda; as a proof see Turing machines, chemlambda style. Now it has the form (it can easily be put into the form) of a permutation automaton. A permutation automaton is simply a program which takes as input a (maybe huge) permutation, probably with decorations on it (i.e.
is a permutation of some big set, specified, not only a permutation of 1, …, N) and then it applies (randomly) pre-defined templates of permutations, whenever it detects a pattern in the permutation and, moreover, the random number generator produces an output of a certain difficulty 🙂

3. The paired needs repository already contains the main program. You can figure out how it functions, even if I have not yet added the function libraries.

4. It is chemically friendly… Read the article.

Why chemlambda strings? Because now we think about chemlambda molecules as being made of lists with sticky ends. These are the strings. Each list (string) has two ends. So if there are N strings, then there are 2N nodes (ends of strings) and 3N edges. An edge is given by the fact that every list end appears in the interior of another list. Attention, it is not forbidden (it actually happens, though not for graphs associated to lambda terms) to also have loops. A loop is a list whose start and end you cut off and then glue back together.

So if you take such a structure then you have succ and pred functions, as well as a function gamma which takes a list end and gives you that list end’s place inside another list. For simplicity one can duplicate the nodes (so that now we have 4N nodes instead of 2N) and think about gamma as connecting the node which is the end of a list with the node which is an element of another (or the same) list.

Tell me if that rings a bell to you!

# What I expect from 2018 (updated at the end of 2018)

In the About section I wrote: “This blog contains ideas from the future”. Well, let me look into my crystal ball. Then, at the end of 2018, I shall update this post and compare.

This is about stuff I expect to do in 2018, stuff I expect to happen in 2018, or even things I hope to happen in 2018.

Before that, a short review of what I think is significant to remember at the end of 2017:

• all soft and hard is wrecked beyond any paranoid dream.
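The strings-with-sticky-ends picture can be sketched concretely. The names below (`strings`, `gamma`, `succ`) are my own hypothetical encoding, not taken from the needs repository: each string is a list with a head end and a tail end, and gamma glues every end into an interior slot of some (possibly the same) string; the 2N nodes / 3N edges count then follows from trivalence.

```python
# Hypothetical encoding (mine): two strings, each with interior slots,
# and a gamma map sending each of the 2N string ends into an interior
# slot of some string.
strings = {
    "s1": ["a", "b"],
    "s2": ["c", "d"],
}

gamma = {
    ("s1", "head"): ("s2", 0),   # the head end of s1 sits at slot 0 of s2
    ("s1", "tail"): ("s2", 1),
    ("s2", "head"): ("s1", 0),
    ("s2", "tail"): ("s1", 1),
}

def succ(pos):
    """Next slot along the same string, or None past the end of the list."""
    s, i = pos
    return (s, i + 1) if i + 1 < len(strings[s]) else None

N = len(strings)
ends = [(s, e) for s in strings for e in ("head", "tail")]
assert len(ends) == 2 * N                       # 2N nodes: the string ends
assert all(gamma[e][0] in strings for e in ends)  # every end lands in a string
assert succ(("s1", 0)) == ("s1", 1)
print(2 * N, "nodes,", 3 * N, "edges")          # trivalent graph: 3N edges
```

A loop would be a string whose gamma images all fall inside itself; the sketch allows that, matching the remark that loops are not forbidden.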
There is nothing we can trust. Trust does not exist any more, as an effect.

• in particular there is no trust in deterministic randomness 😉 so boy, how safe are your bitcoins…
• all of the corporate Net is dead for the few intelligent people, but at the same time many discover the Net today and they love it! It is the new TV; don’t tell me that you expect TV to be interactive. You hold the remote, but otherwise you “Got thirteen channels of shit on the T.V. to choose from“.
• the corporate Net is hand in hand with the legacy publishers, for a simple reason: science brings authority, so you don’t mess with science dissemination. If you mess with it then you question authority, and in this human world there is nothing but authority which keeps things going as expected.

Now, what I expect to do in 2018:

• [true, see arXiv:1807.02058, arXiv:1807.10480, arXiv:1811.04960] write articles in the classic arXiv style, short if possible, with program repositories if possible (projected: NN, space, thermodynamics, engineering, computing)
• [unexpected things happened] if these articles (on new subjects! or on older subjects but with new techniques) make you want to change the world together, then I exist in the meatspace, 3d version, and I welcome you to visit me or me to visit you; all else is futile
• I shall do mostly mathematics, but you know the thing about mathematics…

What I expect to happen in 2018:

• [true] the Net will turn into TV completely
• [not yet true] “content creators”, i.e. those morons who produce the lowest possible quality (lack of) content for TV and cinema, will be fucked by big data. It is enough to get all the raw cinema footage and some NN magic in order to deliver, on individual demand, content better than anything those media professional morons can deliver.
And of course much cheaper, therefore…

• [partially true, see also the blockchain categoricitis forecast] I expect a big fat nice money bubble to burst, because money laundering is a fundamental part of the actual economy

What I hope to happen in 2018:

• [not true] new hardware
• [only very limited] real meatspace randomness devices
• [happened, then burst] more distributed automata (like bitcoin) to suck up the economy
• [not true] diversification, because not anybody can be among the handful of deep state corporate beings.

OK, what do you think? Lay here your forecast, if you wish… or dare.

# Open Science: “a complete institution for the use of learners”

The quote is from 1736. You can see it on the front page of the book “The method of fluxions and infinite series” by Newton, “translated from the author’s Latin original not yet made publick” (nobody is perfect; we know now where this secrecy led in the dispute with Leibniz over the invention of the differential calculus).

That should be the goal of any open science research output. What do we have at the end of 2017?

• Sci-hub. Pros: not corporate. It does not matter where you output your article, as long as it becomes available to any learner. Cons: only old style articles, nothing more. So not a full solution.
• ArXiv. Pros: simple, proved to be reliable long term. Cons: only articles.
• Zenodo. Pros: not corporate, lots of space for present needs. Cons: not playable.
• Github. Pros: good for publicly and visibly sharing and discussing articles and programs. Cons: corporate, not reliable in the long term.
• Git in general. Pros: excellent tool.
• Blockchain. Pros: excellent tool.

I have not added anything about BOAI inspired Open Access because it is something from the past.
It was just a trick to delay the demise of the legacy publishing style. It was done over the heads of researchers, basically a deal between publishers and academic managers, for them to be able to siphon research $ and stifle the true open access movement.

Conclusion: at the moment there are only timid and partial proposals for open science as “a complete institution for the use of learners”. Open science is not a new idea. Open science is the natural way to do science.

There is only one way to do it: share. Let’s do it!

# Genocide as a win-win situation

Imagine. A company appears in your town and starts by making the public place more welcoming. Come play, says the company, come talk, come here and have fun. Let us help you with everything: we’ll keep your memories, your traditions, we’ll take care to remind you about that friend you lost track of some years ago. We’ll spread your news to everybody, we’ll get customers for your business. Are you alone? No problem, many people are like you; what if you could talk and meet them, whenever you want?

We don’t want anything important, says the company, just let us put some ads in the public place. It’s a win-win situation. Your social life will get better and we’ll make profit from those ads.

Hey, what if you let us manage your photos? All that stuff you want to keep, but there’s too much of it and it’s hard for you to preserve. We’ll put it on a cloud. Clouds are nice, those fluffy things which pass over your head on a sunny morning, while you, or your kids, play together in the public place.

Remember how it was before? The town place was not nearly as alive as it is now. You had fewer friends than now, your memories were less safe. Let us take care of all your cultural self.

Let us replace the commons. We are the commons of the future. We, the company…

We’ll march together and right all wrongs. Organize yourselves by using the wonderful means we give you. Control the politicians! Keep an eye on those public contracts. Do you have abusive neighbours? Shame their bad habits in the public place.

The public place of the future. Kindly provided by us, the company. A win-win situation.

# The IHX relation in the chemlambda strings

While hunting for RNA graph rewrites in the literature, it dawned on me that some chemlambda strings rewrites have all the ingredients of an IHX relation, only one sign is wrong. But this sign can be corrected by an AS relation. For example the FI-FO rewrite (seen as a chemical reaction) appears like this:

You can see the IHX (related to Jacobi identity) and the AS relation in the wiki page about the Kontsevich invariant.

Compare this image with the FI-FO rewrite as it appears here:

in the Chemlambda strings draft.

Or check out this handmade animation 🙂

# Reality soon to become illegal, what’s next

All kinds of intermediaries, from media to banks, see their existence threatened by the web. Their increasing panic created this bubble of apparent silence. It is now official: from the point of view of the corporate media, reality is illegal.

This can’t last.

For the record:

• Bitcoin is not a bubble
• Censorship is not compatible with democracy
• Cloud based web is ridiculous
• People are moral beings

Only later did I become thankful for all this. Because even if that’s only one step away, it is clear that people are far from idiots (even the corporate kind, even if they behave like idiots).

People are far more moral than officially accepted, like reality, even if illegal, both.

Therefore I’m very optimistic about the future. Very optimistic about the new possibilities which now become visible due to the gift of available time, otherwise lost in censored places.

All this to say thank you 🙂

# Transparency is superior to trust

I am fascinated by this quote. I think it’s the most beautiful quote, in its terseness, that I’ve seen in a long time. Wish I had invented it!

It is not, though, the motto of Wikileaks; it’s taken from the section on Reproducibility of this Open Science manifesto.

To me, this quote means that validation is superior to peer review.

It is also significant that the quote says nothing about the publishing aspects of Open Science. That is because, I believe, we should split publishing from the discussion about Open Science.

Publishing, scientific publishing I mean, is simply irrelevant at this point. The strong part of Open Science, the new, original idea it brings forth is validation.

Sci-Hub acted as the great leveler, as concerns scientific publication. No interested reader cares, at this point, if an article is hostage behind a paywall or if the author of the article paid money for nothing to a Gold OA publisher.

But science communication is a far greater subject of interest. And validation is one major contribution to a superior scientific method.

# Chemlambda for the people (with context)

UPDATE: Here are the associated slides. The context of this is very weird and it still continues as I write this update (Aug 6 2017).

UPDATE 2: With even more details, on Medium: How I became a face model for TED. (archived version, I’ve deleted my Medium account, as well as Twitter, FB, see this)

In Jan 2017 I was contacted by a curator of TEDGlobal 2017 and asked if I would like to give a talk. What follows is to be read with the slides (which have quotes from Neuromancer).

“We can program a computer to do anything. What if we had the same power over the molecules of our bodies? Let’s imagine how this could change our lives.

For example… this version of the scenario [3].

Adam and Eve meet at a party. She likes him. Her sniffer ring can sense Adam’s biomolecules floating in the air between them. One of them triggers a warning. Eve forwards the warning to Adam’s phone.

Back home, Adam files a bug report with his internet slash health provider. The bug report contains his biological ID and the DNA code received by the warning message.

The bug report is opened.

The ID and DNA code are converted into a digital chemistry. Technical staff manipulate this chemistry, like hackers about to debug a program, in Neuromancer style.

“still he’d see the matrix in his sleep, bright lattices of logic unfolding across that colorless void”
William Gibson, Neuromancer

Things like making lists, just, fold up inside themselves. Come out the other way around. Crazy things.”
Pseudo — William Gibson
https://aphyr.com/posts/340-reversing-the-technical-interview#comment-2763

They find a digital molecule which solves Adam’s problem. A medicine. They convert the solution back to a DNA code which they send to Adam’s router.

The router can turn DNA code back into real biomolecules. Why? It’s a Venter 9000 digital-to-biological converter. Version one looks like this [1].

It’s a bit larger than a router, for the moment. But, in a few years, the 9000 version will be in everybody’s home.

The router emits these biomolecules into Adam’s bedroom. They enter the body and so the bug report is solved, the medicine is delivered and Adam is in perfect health again.

Can we really do this?

I think so, there are 3 steps to make.

Step 1. Build a digital chemistry which we can program. In a digital chemistry data and programs are all graph like structures, digital molecules which “fold up inside themselves and come out the other way around” only they do it randomly, like in real chemistry.

We would create and manipulate digital molecules as if we write programs made from a very few elementary bricks. Then we could simulate their behaviour on a computer, to be sure they work right.

Step 2. Use Nature to simulate this digital chemistry. There’s no computer as powerful as Nature, let’s use it. Find a digital-to-biological dictionary from the elementary bricks of the digital chemistry to real biomolecular bricks.

Step 3. Build digital-to-biological converters and biological-to-digital sensors. Craig Venter gave us the first generic DBC converter. Sensors as performant as Eve’s sniffer ring, as a part of the Internet of Things, are possible.

OK, so the program is simple. Let’s do it right away!

Well, I’m not a chemist, I’m a mathematician, and I built a digital chemistry which does work like real chemistry. It is indeed inspired by stuff related to Lisp and Haskell (but goes in wild directions). It is called chemlambda [6], it is an Open Science project and I hope it can be used in reality.

Molecules in chemlambda are graphs made of colored nodes and links between them. The chemical reactions are done by enzymes rewiring small patterns in these graphs.

Chemlambda is Turing universal, meaning that you can translate any computer program into one of these molecules and execute it via random digital chemical reactions.

In my simulations I used things like the Ackermann function or the factorial, but think: any program! You could do anything with the Nature’s computer.

More generally, going far outside the small world of computer programs interesting to the neighbourhood programmer, you could design molecules from first principles.

Instead of shooting in the dark by doing many experiments with real world molecules, kind of like a barbarian who finds new uses for the tiny things discovered in a clock workshop, instead of this you could design what you need, then turn it into reality.

Colonize Mars? Deposit all Netflix shows in lichen spores?

Just applications.

Some frightening, of course.

But: understand life at molecular level? What a worthy goal. This may (or may not) help.

If the step 2 is realized, here’s the bottleneck.

I am very willing to try the step 2 of the program. I think this can be done by a combination of clever searches in available chemical databases and collaborative work.

After all, chemlambda is an Open Science project. That means it may scale, with luck.”

________

[1] Digital-to-biological converter for on-demand production of biologics, Kent S Boles, Krishna Kannan, John Gill, Martina Felderman, Heather Gouvis, Bolyn Hubby, Kurt I Kamrud, J Craig Venter and Daniel G Gibson
https://www.nature.com/nbt/journal/vaop/ncurrent/full/nbt.3859.html

[2] The chemlambda repository README is the entry point to the project.

[3] Internet of Smells, http://telegra.ph/Internet-of-Smells-04-26

…. so that was the talk proposal.

I was very surprised because I had mixed feelings:

(positive) this could boost even more the interest in my proposal of molecular computers based on chemical reactions which mimic interaction nets rewrites

(negative) this is a subject which is fundamental science with possible worrying consequences,

(positive) but maybe I could talk about my personal experience with Open Science?

(negative) definitely not the touchy-feely kind, there’s nothing to be happy about yet, the fight for OS is tough and it continues,

so I agreed.

In May 2017 I was told that I would indeed talk at the TED event. Yay, because since January I had realised I could use this talk for several good purposes. Something fishy though, my feelings were complex, you see? There was this question in the back of my head: OK, so this is the best public talks organization. We are in 2017. They need 4 months to start organizing? Hm, OK.

And from that point on I fell into a bureaucracy nightmare.

They baptized me BULIGIA for some time, even though my Google mail is Marius.Buliga@gmail.com. They sent me official invitations and other stuff… To me? Nah, to BULIGIA. I sent them something like 5-6 polite mails until they finally figured out what was wrong.

I was joking with my friends: maybe they think I’m Italian. Marius Buligia! Somebody said that “buligia” sounds like the name of a disease. So I’m going to spread the buligia disease in the fancy circles of TED. Funny! So be it.

Then they told me that I’d have 6 minutes. … Right, what can I do with less than 900 words? I can talk about Open Science. I quickly wrote a draft, gave it the name ringo.txt  😉 and sent it to them.

This draft is now available as Open Science is RWX science. A quote from the original:

“I use everything available to turn this project into Open Science. You name it: old form articles, html and javascript articles, research blog, Github repository, Figshare data repository, Google collection, this 🙂 TED talk”

No, they wanted me to talk about chemlambda. Moreover (and this was one expression they kept repeating, Borg-like), GitHub, Figshare, micromanagement, etc. are “insider words”. Don’t use insider words.

Well, this is strange, because one thing which made me very interested in this talk was the audience. You see, apparently the best thing about a main TED event is the audience in the room. I could see that they fall into one of these categories: founders of the main web or computer services, people rich from less well known but clever businesses involving computers, representatives of investment funds, NY or Silicon Valley intellectuals, and others making up about 10 percent. (And the speakers.)

So maybe a regular editor thinks a talk reaches a bigger audience if it is made for people with a 1500-word vocabulary, drooling over their keyboard while they look at the talk with empty eyes.

Does not apply here, right?

Moreover, the bigger audience is not reached anyway, because presently there are huge audiences for anything interesting. There are so many people on the Net today that you don’t have to sink to the barely human level to attract them. No public talks show has a 2 billion audience. If they get several million people interested in a talk, that’s great. And there are always several million intelligent people interested in a public talk on science, computing, biology, whatever, who are bored to death by the stupidity of the generic shows proposed by the regular media.

OK, so what can I do with less than 900 words to explain chemlambda? Images, of course 🙂

I offered them a choice between a second version of the script (more chemlambda, less Open Science) and a bolder one based on the Internet of Smells story.

They picked the first, and after talking with the science editor (who’s a nice guy), and after encountering some more “insider words” remarks, I said what the heck and just sent them a script based on the Internet of Smells.

They liked it a lot! It is basically what became “Chemlambda for the people”.

Great! This took them almost 2 months. (The year is 2017.)

I was very flexible concerning their suggestions not because I was forced to, but because I wanted to take all this as a challenge and also to learn from this interaction.

I prepared the slides, sent them and waited for the first rehearsal. At this point they had my script, which they liked, and my slides, html with high res movies and animations included.

They proposed to use for the rehearsal something professional (they said); I won’t give the name.

Then I had a first rehearsal with the TED team, where they did not use my slides. They asked me to use screen capture from my laptop, so I couldn’t see what they saw. I could barely hear them (and see them) but I imagined that they knew what they were doing.

For those quick to point out that maybe the connection was bad because I was in Romania, think again. Romania, and especially Bucharest, is one of those little places in the world with top-speed web connections. Moreover, many people I talk to by video know that we can have a decent, lively exchange even though I’m based in Romania. So leave your stupid racism at the door, please, and continue.

I was very worried inside; something was wrong. I was screaming at them and speaking only rarely, because the conditions I experienced were like wind howling during a storm.

And the movies and animations I had prepared, which they had but did not use? I didn’t know then, but learned later that the screencast turned into random renderings at a frequency of 1/s.

Anyway, after this professional rehearsal they were not happy to learn that I’d be away for 2 weeks. People with small kids know that you have to make reservations months before the vacation.

Next day I received the edited side of the transmission (so I could see what they saw, hear what they heard).

Finally, just before leaving for the vacation I had a (normal, not professional) video talk with the main curator and the scientific editor, where they said that there was not enough time to prepare the talk further, and they decided not to let me talk. But I could still participate at the event, in the audience.

That hurt! For those who don’t know me, I’m a professional mathematician. I have given hundreds of public talks, most of them in English or French, and I have given university courses, so I definitely never had a problem with public speaking. My pride was hurt!

Never mind, it’s their show, I said. I’ll think about coming and let them know. Then I mailed them and refused to come just to sit in the audience.

During the vacation I started to have doubts about all this. Wait a moment, that was not a rehearsal, that was a setup, or it looked like one. Or maybe it’s my pride, again? Hm…

I was still receiving their general announcements and I saw they made public the list of speakers.

I did not want to link to their site, for privacy reasons, but I could still use a search engine. Surprise! I appeared as a speaker on CNN. What’s this?

I went to the CNN site and I was not there. However, Google caches the page seen by the crawler. I got it and … I was there. For some time, until the page was edited.

I saved the cached page. Photos, like this one

are still available; see this link (archived version).

OK, so they made a mistake, right? They just sent to CNN the list of speakers (after they took me off), but they forgot to take me off from that list.

Back home I opened the cached HTML of the CNN page and looked closer. Do you notice something weird here?

Yes, I appear twice, haha. At positions 33 and 44.

OK, mistakes are made all the time, even by the most professional teams. Forget about it. Let’s stop wondering wtf all this was.

Yesterday, Aug 5, more than 2 weeks after they announced the speakers, I visited the Tedglobal2017 site.

Somebody familiar was looking at me. Yeah, that guy, second row in the middle!

Huh? Naah, my figure again. Is it? Let’s look closer; use a small window, like a phone or tablet. That gives this. (For the original image, this link and archived version.)

Yep, that’s me!

# The Library of Alexandra

“Hint: Sci-Hub was created to open papers that are not available online at all. You cannot find these papers in Google or in open access” [tweet by @Sci_Hub]

“Public Resource will make extracts of the Library of Alexandra available shortly, will present the issues to publishers and governments.” [tweet by Carl Malamud]

# More experiments with Open Science

I still don’t know which format is better for Open Science. I’m long past the article format, for obvious reasons. Validation is a good word and concept because with it you don’t have to rely entirely on the opinions of others, and that’s how the world works. This is not the whole story though.

I am very fortunate to be a mathematician, not a biologist or biochemist. Still I long for the good format for Open Science, even if, as a mathematician, I don’t have the problems biologists or chemists have, namely loads and loads of experimental data and empirical approaches. I do have a world of my own to experiment with, where I do have loads of data and empirical constructs. My mind, my brain are real and I could understand myself by using tools of chemists and biologists to explore the outcomes of my research. Funny right? I can look at myself from the outside.

That is why I chose not to jump directly to making Hydrogen, but instead to treat the chemlambda world, again, as a guinea pig for Open Science.

There are 427 well written molecules in the chemlambda library of molecules on GitHub. There are 385 posts in the chemlambda collection on Google+, most of them with animations from simulations of those molecules. It is a world; how big is it?

It is easy to make first a one page direct access to the chemlambda collection. It is funnier to build a phylogenetic tree of the molecules, based on their genes. That’s what I am doing now, based on a work in progress.

Each molecule can be decomposed into “genes”, say, by a sequencer program. Then one can use a distance between these genes, first to estimate how they cluster and later to build a phylogenetic tree.
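The distance step can be sketched like this: a minimal Levenshtein edit distance over genes written as strings. The gene strings in the example are made up; the real sequencer output lives in the repository.

```python
# Plain dynamic-programming Levenshtein distance between two gene strings.

def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

print(edit_distance("AFOL", "AFOEL"))  # 1: one inserted letter
```

Pairwise distances like this one are what the heatmap below is built from.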

Here is the first heatmap (using the edit distance between single occurrences of genes in molecules) of the 427 molecules.

It is a screenshot, proving that my custom programs work 🙂 (one understands more by writing some scripts than by taking ready-made tools from others, at least at this stage of research).

By using the edit distance I can map the explored chemlambda molecules. In the following image the 427 molecules from the library are represented as nodes, and for each pair of molecules at an edit distance of at most 20 there is a link. The nodes are in a central gravitational field, each node has the same charge and the links between nodes act as springs.
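A hedged sketch of this map construction, on three made-up molecules: link every pair whose edit distance is at most a threshold. The post uses 20 on the real 427 molecules; here the data and the threshold are toy values.

```python
# Build the link list of the molecule map: one link per pair of molecules
# whose gene-string edit distance is within a threshold.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

molecules = {"m1": "AFOL", "m2": "AFOEL", "m3": "TTTT"}  # toy stand-ins
threshold = 2   # the post uses 20 on the real library

names = sorted(molecules)
links = [(x, y) for i, x in enumerate(names) for y in names[i + 1:]
         if edit_distance(molecules[x], molecules[y]) <= threshold]
print(links)  # [('m1', 'm2')]
```

Feeding `links` to a force-directed layout (d3.js in the actual simulations) gives the springs-and-charges picture described above.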

This is a screenshot of the result, showing clusters and trees, connecting them. Not very sophisticated, but enough to give a sense of the explored territory. In the curated collection, such a map would be useful to navigate through the molecules, as well as for giving ideas about which parts are not as well explored. I have not yet made clear which parts of the map cover lambda terms, which cover quines, etc.

Moreover, I see structure! The 427 molecules are made of copies of 605 different linear “genes” (i.e. sticks with colored ends) and 38 ring-shaped ones. (It is easy to prove that lambda terms have no rings, when turned into molecules.) There are some interesting curved features visible in the edit distance of the sticks.

They don’t look random enough.

It is clear that a phylogenetic tree is within reach; then what else but connecting the G+ collection posts with the molecules used, arranged along the tree…?

Can I discover which molecules are coming from lambda terms?

Can I discover how my mind worked when building these molecules?

Which are the neglected sides, the blind places?

I hope to be able to tell by the numbers.

Which brings me to the main subject of this post: which is a good format for an Open Science piece of research?

Right now I am in between two variants, which may turn out to not be as different as they seem. An OS research vehicle could be:

• like a viable living organism, literally
• or like a viable world, literally.

Only the future will tell which is which. Maybe both!

# Update the Panton Principles please

There is a big contradiction between the text of The Panton Principles and the List of the Recommended Conformant Licenses. It appears to be intentional; I’ll explain in a moment why I write this.

Here is the evidence.

1. The second of the Panton Principles is:

“2. Many widely recognized licenses are not intended for, and are not appropriate for, data or collections of data. A variety of waivers and licenses that are designed for and appropriate for the treatment of data are described [here](http://opendefinition.org/licenses#Data). Creative Commons licenses (apart from CCZero), GFDL, GPL, BSD, etc are NOT appropriate for data and their use is STRONGLY discouraged.

*Use a recognized waiver or license that is appropriate for data.* ”

As you can see, the authors clearly state that “Creative Commons licenses (apart from CCZero) … are NOT appropriate for data and their use is STRONGLY discouraged.”

2. However, if you look at the List of Recommended Licenses, surprise:

Creative Commons Attribution Share-Alike 4.0 (CC-BY-SA-4.0) is recommended.

3. The CC-BY-SA-4.0 is important because it has a very clear anti-DRM part:

“You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.” [source CC 4.0 licence: in Section 2/Scope/a. Licence grant/5]

4. The anti-DRM clause is not a “must” in the Open Definition 2.1. Indeed, the Open Definition clearly uses “must” in some places and “may” in other places. See

“2.2.6 Technical Restriction Prohibition

The license may require that distributions of the work remain free of any technical measures that would restrict the exercise of otherwise allowed rights. ”

5. I asked why this is here. Rufus Pollock, one of the authors of The Panton Principles and of the Open Definition 2.1, answered:

“Hi that’s quite simple: that’s about allowing licenses which have anti-DRM clauses. This is one of the few restrictions that an open license can have.”

“Thanks Rufus Pollock but to me this looks like allowing as well any DRM clauses. Why don’t include a statement as clear as the one I quoted?”

Rufus:

“Marius: erm how do you read it that way? “The license may prohibit distribution of the work in a manner where technical measures impose restrictions on the exercise of otherwise allowed rights.”

That’s pretty clear: it allows licenses to prohibit DRM stuff – not to allow it. “[Open] Licenses may prohibit …. technical measures …”

Then:

“Marius: so are you saying your unhappy because the Definition fails to require that all “open licenses” explicitly prohibit DRM? That would seem a bit of a strong thing to require – its one thing to allow people to do that but its another to require it in every license. Remember the Definition is not a license but a set of principles (a standard if you like) that open works (data, content etc) and open licenses for data and content must conform to.”

I gather from this exchange that indeed the anti-DRM is not one of the main concerns!

6. So, until now, what do we have? Principles and definitions which aim to regulate what Open Data means while avoiding an anti-DRM stance. At the same time they strongly discourage the use of an anti-DRM license like CC-BY-SA-4.0. However, on a page which is not as visible they recommend, among others, CC-BY-SA-4.0.

There is one thing they do say: “you may use anti-DRM licenses for Open Data”. It means almost nothing; it’s up to you, not important for them. And they write that all CC licenses except CCZero are bad! Notice that CC0 does not have anything anti-DRM.

Conclusion. This ambiguity has to be settled by the authors. Or not; it’s up to them. For me this is a strong signal that we are witnessing one more attempt to tweak a well-intended movement for cloudy purposes.

The Open Definition 2.1. ends with:

Richard Stallman was the first to push the ideals of software freedom which we continue.

Don’t say, really? Maybe it is the moment for a less ambiguous Free Science.

# Back to the drawing board: all strings

UPDATE: Better, look at “chemlambda strings”: it eliminates enzymes and is conservative! Link to original and link to archived version.

All is strings. Make and break strings.

Define backbone moves.

But any machine would do.

# The price of publishing with GitHub, Figshare, G+, etc

Three years ago I posted The price of publishing with arXiv. If you look at my arXiv articles then you’ll notice that I barely posted on arXiv.org since then. Instead I went into territory which is even less recognized as serious by a big part of academia. I used:

The effects of this choice are laid out in front on my homepage, so go there to read them. (Besides, it is a good exercise to remember how to click on links and use them, that lost art from the age when the internet was free.)

In this post I want to explain what is the price I paid for these choices and what I think now about them.

First, it is a very stressful way of living. I am not joking; as you know, stress comes from realizing that there are many choices and one has to choose. Random reward from social media is addictive. The discovery that there is a way out of the situation which keeps us locked into the legacy publishing system (validation). The realization that the problem is not technical but social. A much more cynical view of the undercurrents of the social life of researchers.

The feeling that I can really change the world with my research. The worries that some possible changes might be very dangerous.

The debt I owe concerning the scarcity of my explanations. The effort to show only the aspects I think are relevant, putting aside those which are not. (Btw, if you look at my About page then you’ll read “This blog contains ideas from the future”. It is true because I have already pruned the 99% of paths leading nowhere interesting.)

The desire to go much deeper, the desire to explain once again what and why, to people who seem either to lack long-term attention or to hold shallow pet theories.

It is like fishing for Moby Dick.

# Synergistics talks through his chemlambda Haskell version

… in a very nice and clear 9:30 presentation. I especially enjoyed it from 5:32, when he describes what enzymes are, and onward, but all of the presentation is instructive because it starts from 0.

The video talk is this

His github repository chemlambda-hask is this

Thank you J, very nice!

# Pharma meets the Internet of Things

Pharma meets the Internet of Things: some commented references for this future trend. Use them to understand it.

[0] After the IoT comes Gaia
https://chorasimilarity.wordpress.com/2015/10/30/after-the-iot-comes-gaia/

There are two realms of computation, which should and will become one: the IT technology and biochemistry.

General stuff

The notion of computation is now well known, we speak about what is computable and about various models of computation (i.e. how we compute) which always turned out to be equivalent in the sense that they give the same class of computable things (that’s the content of the Church-Turing thesis).

It is interesting though how we compute, not only what is computable.

In IT perhaps the biggest (and socially relevant) problem is decentralized asynchronous computing. So far there is no really working model of computation which is:
– local in space (decentralized)
– local in time (asynchronous)
– with no pre-imposed hierarchy or external authority which forces coherence

In biochemistry, people know that we, and anything living, are molecular assemblies which work:
– local in space (all chemical interactions are local)
– local in time (there is no external clock which synchronizes the reactions)
– random (everything happens without any external control)
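The three properties can be sketched as a toy loop: at each step an independent site is picked at random and a purely local rule fires there, with no global clock and no controller. The state and the rule are invented for illustration; they only demonstrate the regime, not chemlambda's actual rewrites.

```python
# Random, local, asynchronous updates: no scheduler, no synchronization,
# each step touches exactly one randomly chosen site.

import random

random.seed(0)                       # deterministic demo run
state = ["a", "a", "b", "a", "b", "a"]

def local_rule(x):
    # Purely local: looks at one site only (invented toggle rule).
    return "b" if x == "a" else "a"

for _ in range(100):
    site = random.randrange(len(state))    # local in space: one random site
    state[site] = local_rule(state[site])  # local in time: no external clock
print(len(state))  # 6: only single sites were ever touched
```

In chemlambda the "site" is a small graph pattern and the "rule" is a graph rewrite, but the random-local-asynchronous shape of the loop is the same.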

Useful links for an aerial view on molecular computing, seen as the biochemistry side of computation:

Some history and details are provided. A quote from the end of the section “Biochemistry-based information technology”:

“Other experiments have shown that basic computations may be executed using a number of different building blocks (for example, simple molecular “machines” that use a combination of DNA and protein-based enzymes). By harnessing the power of molecules, new forms of information-processing technology are possible that are evolvable, self-replicating, self-repairing, and responsive. The possible applications of this emerging technology will have an impact on many areas, including intelligent medical diagnostics and drug delivery, tissue engineering, energy, and the environment.”

A detailed historical view (written in 2000) of the efforts towards “molecular electronics”. Mind that’s not the same subject as [1], because the effort here is to use biochemistry to mimic silicon computers. While [1] also contains such efforts (building logical gates with DNA, etc), DNA computing does propose also a more general view: building structure from structure as nature does.

– “Microscopic machine mimics the ribosome, forms molecular assembly line”
– “Biological computer can decrypt images stored in DNA”

Article about Craig Venter from 2016, found by looking for “Craig Venter Illumina”. Other informative searches would be “Digital biological converter” or anything “Craig Venter”

Interesting talk by an interesting researcher Lee Cronin

[6] The Molecular Programming Project http://molecular-programming.org/

Worth browsing in detail to see the various trends and results.

Sitting in the middle, between biochemistry and IT:

[1] Algorithmic Chemistry (Alchemy) of Fontana and Buss
http://fontana.med.harvard.edu/www/Documents/WF/Papers/alchemy.pdf

Walter Fontana today: http://fontana.med.harvard.edu/www/index.htm

[2] The Chemical Abstract Machine by Berry and Boudol

http://www.lix.polytechnique.fr/~fvalenci/papers/cham.pdf

[3] Molecular Computers (by me, part of an Open Science project, see also my homepage http://imar.ro/~mbuliga/ and the chemlambda github page https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md )

http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

On the IT side there’s a beautiful research field, starting of course with lambda calculus by Church. Later on this evolved in the direction of rewriting systems, then graph rewriting systems. I can’t even start to write all that’s done in this direction, other than:

[1] Y. Lafont, Interaction Combinators
http://iml.univ-mrs.fr/~lafont/pub/combinators.ps

but see as well the Alchemy, which uses lambda calculus!

However, it would be misleading to reduce everything to lambda calculus. I have come to the conclusion that lambda calculus and Turing machines are only two among vast possibilities, and not very important ones. My experience with chemlambda shows that the most relevant mechanism turns around the triple of nodes FI, FO, FOE and their rewrites. Lambda calculus is obtained by the addition of a pair of A (application) and L (lambda) nodes, along with standard compatible moves. One might as well use nodes related to a Turing machine instead, as explained in

http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

Everything works just the same. The center, what makes things work, is not related to Logic or Computation as they are usually considered. More later.
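For reference, here is a small data-structure sketch of the node alphabet just mentioned. The in/out port counts follow the usual chemlambda descriptions (all these nodes are trivalent); the port naming convention is my own, for the demo only.

```python
# The core triple (FI, FO, FOE) plus the lambda-calculus layer (A, L),
# recorded as in/out port counts. Port-count values are the commonly
# described ones; treat the dictionary layout as illustrative.

NODES = {
    "FI":  {"in": 2, "out": 1},   # fan-in
    "FO":  {"in": 1, "out": 2},   # fan-out
    "FOE": {"in": 1, "out": 2},   # the extra fan-out node of chemlambda v2
    "A":   {"in": 2, "out": 1},   # application (lambda calculus layer)
    "L":   {"in": 1, "out": 2},   # lambda abstraction
}

def valence(name):
    n = NODES[name]
    return n["in"] + n["out"]

assert all(valence(k) == 3 for k in NODES)
print("all nodes trivalent")
```

Swapping the A/L pair for Turing-machine-related nodes, as the linked page does, changes only this outer layer; the FI/FO/FOE core stays the same.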

# How to use the chemlambda collection of simulations

The chemlambda_casting folder (1GB) of simulations is now available on Figshare [1].

How to use the chemlambda collection of simulations? Here’s an example. The synthesis from a tape video [2] is reproduced here with a cheap animated gif. The movie records the simulation file 3_tape_long_5346.html which is available for download at [1].

That simple.

If you want to run it on your computer then all you have to do is download 3_tape_long_5346.html from [1], and download from the same place d3.min.js and jquery.min.js (which are there for your convenience). Put the js libs in the same folder as the html file. Open the html file with a browser; I strongly recommend Safari or Chrome (not Firefox, which blocks on these d3.js animations, for reasons related to d3). In case your computer has problems with the simulation (I used a MacBook Pro with Safari), slow it down like this: edit the html file (with any editor) and look for the line starting with

return 3000 + (4*(step+(Math.random()*

and replace the “4” with “150”; that should be enough.
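The same edit can be done with a small script instead of an editor. Only the quoted prefix comes from the post; the tail of the demo line below is invented, and the point is just the 4 → 150 replacement on the matched line.

```python
# Slow down the animation by patching the timing factor on the line
# that starts with the quoted prefix. The line's tail here is made up.

marker = "return 3000 + (4*(step+(Math.random()*"
line = "    " + marker + "steps))/divider);"   # stand-in for the real html line

if line.lstrip().startswith(marker):
    line = line.replace("(4*", "(150*", 1)     # 4 -> 150, first occurrence only
print(line.strip())
```

Run over the downloaded 3_tape_long_5346.html line by line, this does exactly the manual replacement described above.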

Here is a longer explanation. The best would be to read the README [4] carefully.

“Advanced”: If you want to make another simulation for the same molecule then follow these steps.

1. The molecule used is 3_tape_long_5346.mol which is available at the library of chemlambda molecules [3].

2. So download the content of the gh-pages branch of the chemlambda repository at github [4] as explained in that link.

3. Then follow the steps explained there and you’ll get a shiny new 3_tape_long_5346.html, which of course may differ in details from the initial one (it depends on the script used; if you use the random rewrites scripts then the order of rewrites may be different).

[1] The Chemlambda collection of simulations
https://doi.org/10.6084/m9.figshare.4747390.v1

[2] Synthesis from a tape

[3] The library of chemlambda molecules
https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic/mol

# The chemlambda collection is a social hack, here’s why

People from data-deprived places turn to available sources for scientific information. They have the impression that Social Media may be useful for this. The reality is that it is not, by design.

But we can socially hack the Social Media for the benefit of Open Science.

Social Media is not fit for Open Science by design. They are Big Data gatherers, therefore they are interested not in the content per se, but in the metadata. The huge quantity of metadata they suck from the users tells them about the instantaneous interests and social links or preferences. That is why cat pics are everywhere: the awww moment is data poor but metadata rich.

Open Science aims to share scientific data and rigorous means of validation. For free! Therefore Open Science is data rich. It is also, by design, metadata poor, because, at least while a piece of research is not yet popular, there is not much interaction (useful for example to advertisers or to tech companies or governments) to be encoded in metadata.

The public impression is that science is hard and many times boring. There are however many people interested in science, like for example smart kids or creative people living in data-deprived places. There are so many people with access to Social Media that, in principle, even the most seemingly boring science project may gather the attention of tens of thousands of them. If well done!

Such science projects may never see the light of media attention because classical media works with big numbers and very low level content. Classical media has still to adapt to the new realities of the Net. One of them is that the Net people are in such great numbers that there is no need to adapt a message for a majority which is not, generically, interested in science.

Likewise, Social Media is by design driven by big numbers (of metadata, this time). They couldn’t care less about the content provided that it generates big data exhaust (Zuboff, Big other: surveillance capitalism and the prospects of an information civilization).

They can be tricked!

This was the purpose of the chemlambda collection [deleted]

… and now revived:

UPDATE: Chemlambda collection of animations is the version of the collection hosted on github. The original site is under very heavy traffic (in Jan 2020). Small images, about a 1/2 of the collection, due to memory limitations. But you can play the simulations in js!

… beautiful animations, data rich content hidden behind for those interested. My previous attempts to use classical channels for Open Science gave only very limited results. Indeed, the same is true for a smart kid or a creative person from Africa.

If you are not born in the right place, did not study at the right university and did not make the right friends, then your ideas will not spread through the classical channels unless they are useful to a privileged team. You, smart kid or creative person from Africa, will never advance your ideas to the world unless they are useful first not to you, but to privileged people from far away places. If this happens, the best you can expect is to be a useful servant for them.

So, with these ideas and experiences, I tried to socially hack the Big Data gatherers. I presented short animations (under 10s) obtained from real scientific simulations. I chose them among those which are visually appealing. Each of them can be reproduced and researched by anybody interested via a GitHub repository.

It worked. The Algorithmic Gods from Google decided to make chemlambda a featured collection. I had more than 50,000 followers and more than 50 million views of these scientific, original simulations.

To compare, another collection, dedicated to censorship on social media, had no views!

I shall make, according to my limited access to data, an analysis of the people who saw the collection.

It seems to me that there were far more women than men. Probably the algorithms used the prior that women, stupid as they are, are more interested in pictures than text. Great, let’s hack this stupid prior and turn it into a chance to help women access science 🙂

There were far more people from Asia and Africa than from the West. Because, of course, they are stupid and don’t speak the language (English), but they can look at the pictures. Great, let’s turn this snobbery into an advantage, because they are the main public which could benefit from Open Science.

The amazing (for me) popularity of this experiment showed that there is something more to dig in this direction!

Science can be made interesting and remain rigorous too.

Science and art are not as different as they look, in particular for this project the visual arts.

And the chemlambda project is very interesting, of course, because it is a take on life at the molecular level done by a mathematician. Biologists need this: not only mathematical tools, but also mathematical minds. Biologists, like the Social Media companies, sit on heaps of Big Data.

Finally, there is the following question I’d like to ask. Scientific data is, in bits, a tiny proportion of the Big Data gathered every day. It is tiny, ridiculously tiny.

Question: where to put it freely, so that it stays free and is treated properly, I mean as visible and easy to access as a cat pic? Would it be so hard to dedicate something like 1/10 000 of the servers used for Big Data in order to keep Open Science alive? In order to not let it rot along with older cat pics?

# Preparing the microscope

For a new experiment.

If you want to discuss then open an issue at my chemlambda repository and propose me your way.

# Easy rules to estimate censorship in social media

Shared from here.

# Google segregation should take blame

Continuing from the last post, here is a concrete example of segregation performed by the corporate social media. The result of the US election is a consequence of this phenomenon.

Yesterday I posted on Google+ the article Donald Trump is moving to the White House, and liberals put him there | Thomas Frank | Opinion | The Guardian, and I received an anti-Trump comment (reproduced at the end of this post). I was OK with the comment and did nothing to suppress it.

Today, after receiving some more comments, this time bent towards Trump, I noticed that the first one disappeared. It was marked as spam by a Google algorithm.

I restored the comment classified as spam.

The problem is, you see, that Google and Facebook and Twitter, etc, all corporate media are playing a segregation game with us. They don’t let us form opinions based on facts which we can freely access. They filter our worldview.  They don’t provide us means for validation of their content. (They don’t have to, legally.)

The idiots from Google who wrote that piece of algorithm should be near the top list of people who decided the result of these US elections.

______________________

UPDATE: Bella Nash, the identity who posted that comment, now replies the following:

“It says the same thing on yours [i.e. that my posts are seen as spam in her worldview] and I couldn’t reply to it. I see comments all over that  google is deleting posts, some guy lost 28 new and old replies in an hour. How the hell can comments be spam? I’m active on other boards so I don’t care what google does, it’s their site and their ambiguous rules.”

Theory of spam relativity 🙂

______________________

To be clear, I’m rather pleased about the results, mainly because I’m pissed beyond limits by these tactics. This should not limit the right to be heard of other people, at least not in my worldview. Let me decide if this comment is spam or not:

“In Chicago roughly a thousand headed for the Trump International Hotel while chanting against racism and white nationalism. Within hours of the election result being announced the hashtag #NotMyPresident spread among half a million Twitter users.”

UPDATE 2: Some people are so desperate that I’m censored even on 4chan 🙂 I tried to share this post there, several times, and got a timeout. I tried to share this ironical Disclaimer

which should be useful on any corporate media site, and it disappeared.

The truth is that the algorithmic idiocy started with walled garden techniques. If you’re on one social media site, then it is made hard to follow a link to another place. After that, it became hard to know about people with different views. Discussions became almost impossible. This destroys the Internet.

There is much more about these chemical transactions and their proofs. First, transactions are partially independent of the molecules. The blockchain may be useful only for having a distributed database of transactions and proofs, available for further use. But there’s more.

Think about this database as one of valid computations, which can then be reused in any combination or degree of parallelism. This database then becomes the arena of several competitions.

The same transaction can have several proofs, shorter or longer. It can have a big left pattern, and is therefore costly to use in another computation. Or a transaction may go on too long, and therefore not be useful in combination with others.

When there is a molecule to reduce, the application of a transaction means:
– identify a subgraph isomorphic with the left pattern and pick one such subgraph
– apply the transaction to this particular subgraph (which is equivalent to: reduce only that subgraph of the molecule and freeze the rest of the molecule, but do it in one step, because the sequence of reductions is already pre-computed)
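The two steps above can be sketched as follows, under the strong simplifying assumption that subgraph matching is literal line matching on the mol file (real chemlambda matching is up to graph isomorphism and port renaming, and all mol lines and pattern names here are hypothetical):

```python
# Hedged sketch: a molecule as a list of mol-file lines, a transaction as a
# (left pattern, right pattern) pair of such lists. Matching is literal,
# which only illustrates the two steps; it is not the real matching.

def apply_transaction(molecule, left, right):
    # step 1: identify a subgraph matching the left pattern (literal match)
    if not all(line in molecule for line in left):
        return None  # the transaction does not apply
    # step 2: rewrite only that subgraph, freezing the rest of the molecule,
    # in a single step (the cascade of rewrites is already pre-computed)
    rest = [line for line in molecule if line not in left]
    return rest + right

mol = ["L 1 2 3", "A 3 4 5", "FO 5 6 7"]   # hypothetical mol lines
left = ["L 1 2 3", "A 3 4 5"]
right = ["Arrow 1 5"]                       # hypothetical right pattern
print(apply_transaction(mol, left, right))  # → ['FO 5 6 7', 'Arrow 1 5']
```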

Now, which is more convenient: to reduce the molecule by using the random algorithm and the available graph rewrites, or to use some transactions which fit? The latter is fast (as concerns step 2) but costly (as concerns step 1); moreover, there may be a transaction with a shorter proof for that particular molecule, one which mixes parts of several available precomputed transactions.

Therefore the addition of transactions and their proofs (needed in order to validate them) into the database should be made in a way which profits from this competition.

If I see the reduction of a molecule (which may itself be distributed) as a service, then besides the competition for making available the most useful transactions with the shortest proofs, there is another competition between reducing it by brute force and using the available transactions, with all the time costs they need.

If well designed, these competitions should lead to the emergence of clusters of useful transactions (call such a cluster a “chemlisp”) and also to the emergence of better strategies for reducing molecules.

This will lead to more and more complex computations which are feasible with this system, probably so complex that they will become very hard to understand by a human mind, or even with IT tools available to a limited part of the users of the system.

# Chemical transactions and their proofs

By definition a transaction is either a rewrite from the list of accepted rewrites (say of chemlambda) or a composition of two transactions which match. A transaction has a left pattern, a right pattern and a proof (which is the transaction expressed as a cascade of accepted rewrites).

When you reduce a molecule, the output is a proof of a transaction. The transaction proof itself is more important than the starting molecule. Indeed, if you think of the transaction proof as a list

rm leftpattern1
add rightpattern1

where leftpattern1 is a list of lines of a mol file, same for rightpattern1,

then you can deduce from the transaction proof only the following:
– the minimal initial molecule needed to apply this transaction, call it the left pattern of the transaction
– the minimal final molecule appearing after the transaction, call it the right pattern of the transaction

and therefore any transaction has:
– a left pattern
– a right pattern
– a proof made of a chain of other transactions which match (the right pattern of transaction N contains the left pattern of transaction N+1)

It would be useful to think in terms of transactions and their proofs as the basic objects, not molecules.
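As an illustration-only sketch of this definition (patterns are modeled as sets of mol-file lines, matching is taken literally rather than up to graph isomorphism, and all mol lines and rewrite names are hypothetical):

```python
class Transaction:
    # A transaction: a left pattern, a right pattern, and a proof, the proof
    # being the cascade of accepted rewrites. Real matching is up to graph
    # isomorphism and port renaming, which this sketch deliberately ignores.
    def __init__(self, left, right, proof):
        self.left, self.right, self.proof = set(left), set(right), list(proof)

    def matches(self, other):
        # the right pattern of self contains the left pattern of other
        return other.left <= self.right

    def compose(self, other):
        if not self.matches(other):
            raise ValueError("transactions do not match")
        right = (self.right - other.left) | other.right
        return Transaction(self.left, right, self.proof + other.proof)

# a basic transaction is a single accepted rewrite
t1 = Transaction(["A 1 2 3", "L 4 1 5"], ["Arrow 4 3"], ["BETA"])
t2 = Transaction(["Arrow 4 3"], [], ["COMB"])
t = t1.compose(t2)   # its proof is the chain ["BETA", "COMB"]
```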

# An exercise with convex analysis and neural networks

This is a note about a simple use of convex analysis in relation with neural networks. There are many points of contact between convex analysis and neural networks, but I have not been able to locate this one; thanks in advance for pointing me to a source, if any exists.

Let’s start with a directed graph with set of nodes $N$ (these are the neurons) and a set of directed bonds $B$. Each bond has a source and a target, which are neurons, therefore there are source and target functions

$s:B \rightarrow N$   , $t:B \rightarrow N$

so that for any bond $x \in B$ the neuron $a = s(x)$ is the source of the bond and the neuron $b = t(x)$ is the target of the bond.

For any neuron $a \in N$:

• let $in(a) \subset B$ be the set of bonds $x \in B$ with target $t(x)=a$,
• let $out(a) \subset B$ be the set of bonds $x \in B$ with source $s(x)=a$.

A state of the network is a function $u: B \rightarrow V^{*}$ where $V^{*}$ is the dual of a real vector space $V$. I’ll explain why in a moment, but it’s nothing strange: I’ll suppose that $V$ and $V^{*}$ are dual topological vector spaces, with duality product denoted by $(u,v) \in V \times V^{*} \mapsto \langle v, u \rangle$ such that any linear and continuous function from $V$ to the reals is expressed by an element of $V^{*}$ and, similarly, any linear and continuous function from $V^{*}$ to the reals is expressed by an element of $V$.

If you think that’s too much, just imagine $V=V^{*}$ to be finite euclidean vector space with the euclidean scalar product denoted with the $\langle , \rangle$ notation.

A weight of the network is a function $w:B \rightarrow Lin(V^{*}, V)$, you’ll see why in a moment.

Usually the state of the network is described by a function which associates to any bond $x \in B$ a real value $u(x)$. A weight is a function which is defined on bonds and with values in the reals. This corresponds to the choice $V = V^{*} = \mathbb{R}$ and $\langle v, u \rangle = uv$. A linear function from $V^{*}$ to $V$ is just a real number $w$.

The activation function of a neuron $a \in N$ gives a relation between the values of the state on the input bonds and the values of the state on the output bonds: any value of an output bond is a function of the weighted sum of the values of the input bonds. Usually (but not exclusively) this is an increasing continuous function.

The integral of an increasing continuous function is a convex function. I’ll call this integral the activation potential $\phi$ (suppose it does not depend on the neuron, for simplicity). The relation between the input and output values is the following:

for any neuron $a \in N$ and for any bond $y \in out(a)$ we have

$u(y) = D \phi ( \sum_{x \in in(a)} w(x) u(x) )$.

This relation generalizes to:

for any neuron $a \in N$ and for any bond $y \in out(a)$ we have

$u(y) \in \partial \phi ( \sum_{x \in in(a)} w(x) u(x) )$

where $\partial \phi$ is the subgradient of a convex and lower semicontinuous activation potential

$\phi: V \rightarrow \mathbb{R} \cup \left\{ + \infty \right\}$

Written like this, we are done with any smoothness assumptions, which is one of the strong features of convex analysis.
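In the scalar case $V = V^{*} = \mathbb{R}$ with the softplus potential, the smooth relation $u(y) = D \phi ( \sum_{x \in in(a)} w(x) u(x) )$ is an ordinary forward pass. A minimal sketch (the tiny graph, the weights and the state values are hypothetical):

```python
import math

def logistic(z):
    # derivative of the softplus potential phi(z) = ln(1 + exp(z))
    return 1.0 / (1.0 + math.exp(-z))

# bond name -> (source neuron, target neuron)
bonds = {"x1": ("in1", "a"), "x2": ("in2", "a"), "y": ("a", "out")}
w = {"x1": 0.7, "x2": -0.4, "y": 1.0}   # weight on each bond
u = {"x1": 2.0, "x2": 1.0}              # state on the input bonds

def in_bonds(neuron):
    return [b for b, (s, t) in bonds.items() if t == neuron]

def out_bonds(neuron):
    return [b for b, (s, t) in bonds.items() if s == neuron]

# u(y) = D phi( sum_{x in in(a)} w(x) u(x) ) for every output bond y of a
a = "a"
z = sum(w[x] * u[x] for x in in_bonds(a))
for y in out_bonds(a):
    u[y] = logistic(z)
```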

This subgradient relation also explains the maybe strange definition of states and weights with the vector spaces $V$ and $V^{*}$.

This subgradient relation can be expressed as the minimum of a cost function. Indeed, to any convex function $\phi$ is associated a sync (short for “synchronized convex function”, a notion introduced in [1])

$c: V \times V^{*} \rightarrow \mathbb{R} \cup \left\{ + \infty \right\}$

$c(u,v) = \phi(u) + \phi^{*}(v) - \langle v, u \rangle$

where $\phi^{*}$ is the Fenchel dual of the function $\phi$, defined by

$\phi^{*}(v) = \sup \left\{ \langle v, u \rangle - \phi(u) \, : \, u \in V \right\}$

This sync has the following properties:

• it is convex in each argument
• $c(u,v) \geq 0$ for any $(u,v) \in V \times V^{*}$
• $c(u,v) = 0$ if and only if $v \in \partial \phi(u)$.

With the sync we can produce a cost associated to the neuron: for any $a \in N$, the contribution to the cost of the state $u$ and of the weight $w$ is

$\sum_{y \in out(a)} c(\sum_{x \in in(a)} w(x) u(x) , u(y) )$.

The total cost function $C(u,w)$ is

$C(u,w) = \sum_{a \in N} \sum_{y \in out(a)} c(\sum_{x \in in(a)} w(x) u(x) , u(y) )$

and it has the following properties:

• $C(u,w) \geq 0$ for any state $u$ and any weight $w$
• $C(u,w) = 0$ if and only if for any neuron $a \in N$ and for any bond $y \in out(a)$ we have

$u(y) \in \partial \phi ( \sum_{x \in in(a)} w(x) u(x) )$

so that’s a good cost function.

Example:

• take $\phi$ to be the softplus function $\phi(u) = \ln(1+\exp(u))$
• then the activation function (i.e. the subgradient) is the logistic function
• and the Fenchel dual of the softplus function is the (negative of the) binary entropy $\phi^{*}(v) = v \ln(v) + (1-v) \ln(1-v)$ (extended by $0$ for $v = 0$ or $v = 1$ and equal to $+ \infty$ outside the closed interval $[0,1]$).
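The three properties of the sync can be checked numerically for this softplus example. A minimal sketch (the sample points and tolerances are my own choices):

```python
import math

def softplus(u):
    # activation potential phi(u) = ln(1 + exp(u)), computed stably
    return max(u, 0.0) + math.log1p(math.exp(-abs(u)))

def logistic(u):
    # subgradient (here: derivative) of softplus
    return 1.0 / (1.0 + math.exp(-u))

def binary_neg_entropy(v):
    # Fenchel dual phi*(v) = v ln v + (1-v) ln(1-v) on [0,1], +inf outside
    if v < 0.0 or v > 1.0:
        return math.inf
    if v in (0.0, 1.0):
        return 0.0
    return v * math.log(v) + (1.0 - v) * math.log(1.0 - v)

def sync(u, v):
    # c(u, v) = phi(u) + phi*(v) - <v, u>
    return softplus(u) + binary_neg_entropy(v) - v * u

# c(u, v) >= 0 everywhere, and c(u, v) = 0 exactly when v = D phi(u),
# i.e. when v is the logistic function of u (up to rounding)
for u in (-2.0, 0.0, 1.5):
    assert sync(u, logistic(u)) < 1e-12
    assert sync(u, 0.3) >= 0.0
```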

________

[1] Blurred maximal cyclically monotone sets and bipotentials, with Géry de Saxcé and Claude Vallée, Analysis and Applications 8 (2010), no. 4, 1-14, arXiv:0905.0068

_______________________________

# Euclideon Holoverse virtual reality games revealed

Congratulations! Via a comment by roy.  If there is any other news you have then you’re welcome here, as in the old days.

Bruce Dell has a way to speak, to choose colors and music which is his own. Nevertheless, to share the key speaker honor with Steve Wozniak is just great.

It rubs me a bit in the wrong direction when he says that he has the “world first new virtual lifeforms” at 7:30. Can they replicate? Do they have a metabolism? On their own, in random conditions?

If I sneeze in a Holoverse room, will they cough the next day? If they run into me, shall I dream new ideas about bruises later?

# A library of chemlambda molecules

More than 400 molecules are now available at the github repository for chemlambda, at this link. Many of them have been used to produce the animations from the chemlambda collection at google+.

There are more than 200 animations in that collection, which attracted an average stream of 150000 views/day and more than 30000 followers. I am proud about that because the subject is rather hard and almost all posts contain original research animations.

If you want to identify the mol file (i.e. the molecule) which has been used to create a certain animation, then follow this path:

• click on the animation, you’ll be presented with a page where the animated gif runs
• try to save the gif, you’ll see a name.gif
• in another window go to the library of molecules and look for name.mol.

In most of the cases this works, but there might be rare cases where I forgot to preserve the correspondence between name.gif and name.mol.

During the time these animations have been produced, I used various versions of the scripts (all available at the repository). They should all be compatible, but it is possible that some mol files will not work as input for the scripts. If this happens, then it is because I used, mistakenly, a port variable in a bad place and then forgot to delete the faulty version. Please excuse me for that, in case it happens (perhaps 4 or 5 of the roughly 440 mol files are like this).

To see how to use the repository please go to the README.md file.

It is important to understand how I made the animations.

• you need a linux or a mac, because the scripts are in shell or awk
• I used a mac. I went to the folder where the scripts and the mol files are (so if you copy the mol files from the library, copy them into the same folder as the folder called “dynamic”, before you use the scripts). In a terminal window I typed, for example, “bash quiner_shuffle.sh”. A list of all the mol files in that folder appears.
• I type the complete name.mol and hit enter
• then the main script does the magic and I obtain name.html
• mind that the parameters for the computation are in the most important part, the script quiner_shuffle.awk (and quiner_shuffle.sh is just a wrapper of this, same for all the pairs of scripts .sh and .awk)
• I used a browser to see name.html. Important: Safari works best, by far, then Chrome. Firefox sucks for very obscure reasons. There is a solution for making name.html work on Firefox as well: find in quiner_shuffle.awk the line “time_val=4; ” and modify it into something like “time_val=120; “, for example. This variable controls the speed of the javascript animation: the bigger it is, the slower the animation.
• You’ll see that the d3.js animation can take, depending on the molecule (and on the number of steps given by this line “cycounter=10000;” in quiner_shuffle.awk), from minutes to hours.
• I made a screen capture of the animation and then I sped it up, for example with ffmpeg.
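Under the assumption that quiner_shuffle.sh reads the mol file name from standard input (as the interactive prompt described above suggests), those steps could be scripted, for example:

```python
import os
import subprocess

def render(mol_name, scripts_dir="dynamic"):
    # Run the wrapper script and feed it the mol file name, as if typed at
    # the prompt; the awk script then produces mol_name.html in that folder.
    cmd = ["bash", "quiner_shuffle.sh"]
    if not os.path.exists(os.path.join(scripts_dir, "quiner_shuffle.sh")):
        return cmd  # repository scripts not present; just report the command
    subprocess.run(cmd, cwd=scripts_dir,
                   input=mol_name + ".mol\n", text=True, check=True)
    return cmd
```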

Enjoy!

If you make nice stuff with it, then tell me and I’ll be glad to host your creation here and in the chemlambda collection.

# Tay has been retired, Blade Runner style

It is always unexpected when fiction becomes real. Tay, the adolescent AI, survived for about 24 hrs on Twitter. She turned into something socially unacceptable. Then she has been retired. See Gizmodo story.

Almost two years ago I posted Microbes take over and then destroy the HAL 9000 prototype. I gave 9 seconds as an estimate for the chance of life of an AI in the real world, where there is no control and faced with  the “extremely dynamic medium of decentralized, artificial life based computation we all use every day“(that post suggests an artificial life, not AI version of the future internet).

Now, the story of Tay seems unbelievably close to the Blade Runner world. The genius of Philip K. Dick manifests here because  he mixes AI with synthetic life with real life.

Real people socially hacked Tay. Virtual microbes destroyed HAL 9000. The common themes: a decentralized environment, and AI vs. life (real or virtual).

Not many people understand that today obsessions with security, control, privacy are all lost battles.

# Let’s discuss the 3 Sci-Hub ideas

The site http://sci-hub.io/ has a part called “Sci-Hub ideas”. I have not seen any discussion about this in the commercial social networks, where almost everybody is a lawyer, apparently.
What if we look at these ideas, a bit more?

Further are my opinions about those:

1. Knowledge to all. I totally support this idea. That is why I always supported Green OA and not Gold OA. Open Science, which is a far more general and future oriented concept than OA, proposes the same, because the only scientific knowledge is the one which can be independently validated. This is not possible if there are walls around knowledge.

A more sensible point is the “inequality in knowledge access across the world”. This inequality has to be recognized as such and we should fight it.

2. No copyright for scientific and educational resources. It is very convenient to forget that the copyright has been a barrier for progress, several times in the past. Aviation and PC hardware are two examples. Some people understand that: “All our Patents are Belong to You”.

3. Open access. The most puzzling reaction against Sci-Hub, at least for me, was the one coming from some of the proponents of OA. I agree that Sci-Hub is not a solution for OA publishing of new articles. It is not an OA publishing model. OK. But OA itself is a very murky thing. Is arXiv.org OA? According to many OA advocates, it is not, it is only an open repository. However, arXiv.org was a real solution for publishing, i.e. fast dissemination of knowledge. People used arXiv.org (and they use it now as well) in order to learn and communicate, via scientific articles, open and fast. There was no publishing revolution, just people using a better system than what the legacy publishers proposed. Likewise, Sci-Hub responded to a big need of many researchers, as witnessed by the fact that the site is heavily used. I think the support of Sci-Hub for OA is only lip service; what they really want to say is that they created a solution for a real problem which is not solved by OA.

# SciHub and patent wars

The Wright brothers used their patents to block the building of new airplanes. The historical solution was a pool of patents, eventually. Now everybody can fly.

We all have and use PCs because the patent wars around computer hardware were lost by those who tried to limit the production of it.

Elon Musk announced in 2014 that All Our Patent Are Belong To You.

These days publishers complain that SciHub breaks their paywalls. They hold the copyrights for research works which are mostly publicly funded.

This is a new version of a patent war and I believe it will end as others in the past.

# Sci-Hub is not tiny, nor special interest

“Last year, the tiny special-interest academic-paper search-engine Sci-Hub was trundling along in the shadows, unnoticed by almost everyone.” [source: SW-POW!, Barbra Streisand, Elsevier, and Sci-Hub]

According to the info available in the article Meet the Robin Hood of science, by Simon Oxenham:

[Sci-Hub] “works in two stages, firstly by attempting to download a copy from the LibGen database of pirated content, which opened its doors to academic papers in 2012 and now contains over 48 million scientific papers.”

“The ingenious part of the system is that if LibGen does not already have a copy of the paper, Sci-hub bypasses the journal paywall in real time by using access keys donated by academics lucky enough to study at institutions with an adequate range of subscriptions. This allows Sci-Hub to route the user straight to the paper through publishers such as JSTOR, Springer, Sage, and Elsevier. After delivering the paper to the user within seconds, Sci-Hub donates a copy of the paper to LibGen for good measure, where it will be stored forever, accessible by everyone and anyone. ”

“As the number of papers in the LibGen database expands, the frequency with which Sci-Hub has to dip into publishers’ repositories falls and consequently the risk of Sci-Hub triggering its alarm bells becomes ever smaller. Elbakyan explains, “We have already downloaded most paywalled articles to the library … we have almost everything!” This may well be no exaggeration.”

Is that tiny? I don’t think so. I have at hand the comparisons I made in
ArXiv is 3 times bigger than all megajournals taken together and, if we trust the publicly available numbers, then:

• Sci-Hub is tiny
• arXiv.org is minuscule with about 1/40 of what (is declared as) available in Sci-Hub
• all the gold OA journals have no more than 1/100 of the “tiny” baseline, therefore they are, taken together, infinitesimal

Do I feel a dash of envy? A subtle spin in favor of gold OA? Maybe because Alexandra Elbakyan is from Kazakhstan? More likely it is only an unfortunate formulation, but the thing is that if this info is true, then it’s huge.

UPDATE: putting aside all legal aspects, where I’m not competent to have an opinion, it appears that the 48 million collection of paywalled articles is the result of the collective behaviour of individuals who “donated” (or whatever the correct word is) them.

My opinion is that this collective behaviour shows a massive vote against the system. It is not even intended to be a vote; people (i.e. individual researchers) just help one another. Compare this behaviour with that of academic managers and of all kinds of institutions which a) manage public funds and negotiate prices with publishers, b) use metrics based on commercial publishers for distributing public funds as grants and promotions.

On one side there is the reality of individual researchers, who create and want to read what others like them created (from public funds, basically), and on the other side there is this system in academia which rewards compliance with an obsolete medium of dissemination of knowledge (presently turned upside down and replaced with a system which puts paywalls around research articles; it’s amazing).

Of course, I am not discussing here if Sci-hub is legal, or if commercial publishers are doing anything wrong from a legal point of view.

All this seems to me very close to the disconnection between politicians and regular people. These academic managers are like politicians now, the system ignores that it is possible to gauge the real opinion of people, almost in real time, and instead pretends that everything is OK, on paper.

____________________

# Neurons rewrites

A real neural network is a huge cascade of chemical rewrites. So I can try my chemlambda on that task. From a programming point of view, the problem is to understand neural networks as a graph rewrite model of computation, together with a (yet undiscovered) discipline of using them.

Further are some pretty images showing the first tries. They are all made by filming real simulations obtained with chemlambda.

Before giving them, I tell you that this task seems hard and now I believe that an easier one would be to use the ideas of chemlambda in the frame of quantum computing. (Do I have to add that in a new way, different from what was proposed in the many graphical formalisms associated to category theory? Probably! All those formalisms fall into the family: topological changes do not compute. Wait and see.)

# Open peer review is something others should do, Open science is something you could do

This post follows Peer review is not independent validation, where it is argued that independent validation is one of the pillars of the scientific method. Peer review is only a part of the editorial process. Of course that peer review is better than nothing, but it is only a social form of validation, much less rigorous than what the scientific method asks.

If the author follows the path of Open science, then the reader has the means to perform an independent validation. This is great news, here is why.

It is much easier to do Open science than to change the legacy publishing system.

Many interesting alternatives to the legacy publishing have been proposed already. There is green OA, there is gold OA (gold is for \$), there is arXiv.org. There are many other versions, but the main problem is that research articles are not considered really serious unless they are peer reviewed. Legacy publishing provides this, it is actually the only service they provide. People are used to review for established journals and any alternative publishing system has to be able to compete with that.

So, if you want to make an OA platform, it’s not serious unless you find a way to make other people peer review the articles. This is hard!

People are slowly understanding that peer review is not what we should aim for. We are so used to the idea that peer review is that great thing which is part of the scientific method. It is not! Independent validation is the thing; peer review is an old, unscientific way (very useful, but not enough to allow research findings to pass the validation filter).

The alternative, which is Open science, is that the authors of research findings make open all the data, procedures, programs, etc, everything they have. In this way, any other group of researchers, anybody else willing to try can validate those research findings.

The comparison is striking. The reviewers of the legacy publishing system don’t have magical powers, they just read the article, they browse the data provided by the very limited article format and they make an opinion about the credibility of the research findings. In the legacy system, the reviewer does not have the means to validate the article.

In conclusion, it is much simpler to do Open science than to invent a way to convince people to review your legacy articles. It is enough to make open your data, your programs, etc. It is something that you, the author can do.

You don’t have to wait for the others to do a review for you. Release your data, that’s all.

# Peer review is not independent validation

People tend to associate peer review with science. As an example, even today there are still many scientists who believe that an arXiv.org article is not a true article, unless it has been peer reviewed. They can’t trust the article, without reading it first, unless it passed the peer review, as a part of the publishing process.

Just because a researcher puts a latex file in the arXiv.org (I continue with the example), it does not mean that the content of the file has been independently validated, as the scientific method demands.

The part which slips from the attention is that peer review is not independent validation.

Which means that a peer reviewed article is not necessarily one which passes the scientific method filter.

This simple observation is, to me, the key for understanding why so many research results communicated in peer reviewed articles can not be reproduced, or validated, independently. The scale of this peer reviewed article rot is amazing. And well known!

Peer review is a part of the publishing process. By itself, it is only a social validation. Here is why: the reviewers don’t try to validate the results from the article because they don’t have the means to do it in the first place. They have access only to a story told by the authors. All the reviewers can do is read the article and express an opinion about its credibility, based on the reviewer’s experience, competence (and biases).

From the point of view of legacy publishers, peer review makes sense. It is the equivalent of the criteria used by a journalist in order to decide to publish something or not. Not more!

That is why it is very important for science to pass from peer review to validation. This is possible only in an Open Science frame. Once more (in this Open(x) fight) the medical science editors lead. From “Journal Editors To Researchers: Show Everyone Your Clinical Data” by Harlan Krumholz, a quote:

“[…] last Wednesday, the editors of the leading medical journals around the world made a proposal that could change medical science forever. They said that researchers would have to publicly share the data gathered in their clinical studies as a condition of publishing the results in the journals. This idea is now out for public comment.

As it stands now, medical scientists can publish their findings without ever making available the data upon which their conclusions were based.

Only some of the top journals, such as The BMJ, have tried to make data sharing a condition of publication. But authors who didn’t want to comply could just go elsewhere.”

This is much more than simply saying “peer review is bad” (because is not, only that it is not a part of the scientific method, it is a part of the habits of publishers). It is a right step towards Open Science. I repeat here my opinion about OS, in the shortest way I can:

There are 2 parts involved in a research communication:   A (author, creator, the one which has something to disseminate) and R (reader). The legacy publishing process introduces a   B (reviewer).  A puts something in a public place, B expresses a public opinion about this and R uses B’s opinion as a proxy for the value of A’s thing, in order to decide if A’s thing is worthy of R’s attention or not.  Open Access is about the direct interaction of A with R, Open Peer-Review is about transparent interaction of A with B, as seen by R and Validation (as I see it) is improving the format of A’s communication so that R could make a better decision than the social one of counting on B’s opinion.

That’s it! The reader is king and the author should provide everything to the reader, for the reader to be able to independently validate the work. This is the scientific method at work.

# The replicant

This is a molecular machine designed as a patch which would upgrade biological ribosomes. Once it attaches to a ribosome, it behaves in almost the same way as the synthetic ribosome Ribo-T, recently announced in Nature 524, 119–124 (06 August 2015) doi:10.1038/nature14862 [1]. It thus enables an orthogonal genetic system (i.e., citing from the mentioned Nature letter, “genetic systems that could be evolved for novel functions without interfering with native translation”).

The novel function it is designed for is more ambitious than specialized new protein synthesis. It is, instead, a two-way translation device between real chemistry and a programmable artificial chemistry.

It behaves like a bootstrapper in computing. It is itself simulated in chemlambda, an artificial chemistry which was recently proposed as a means towards molecular computers [2].  The animation shows one of the first successful simulations.

With this molecular device in place, we can program living matter by using living cells themselves, instead of using, for example, complex, big 3D DNA printers like the ones developed by Craig Venter.

The only missing step, until recently, was the discovery of the basic translation of the building blocks of chemlambda into real chemistry.

I am very happy to make public a breakthrough result by Dr. Eldon Tyrell/Rosen, a genius who went out of academia some years ago and pursued a private career. It appears that he got interested early in this mix of lambda calculus, geometry and chemistry and he arrived to reproduce with real chemical ingredients two of the chemlambda graph rewrites: the beta rewrite and one of the DIST rewrites.

He tells me in a message  that he is working on prototypes of replicants already.

He suggested the name “replicant” instead of a synthetic ribosome because a replicant, according to him, is a device which replicates a computer program (in chemlambda molecular form) into a form compatible with the cellular DNA machine, and conversely, it may convert certain (RNA) strands into chemlambda molecules, i.e. back into  synthetic form corresponding to a computer program.

[1] Protein synthesis by ribosomes with tethered subunits,  C. Orelle, E. D. Carlson, T. Szal,  T. Florin,  M. C. Jewett, A. S. Mankin
http://www.nature.com/nature/journal/v524/n7563/full/nature14862.html

[2] Molecular computers, M Buliga
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

[This post is a reply to +Yonatan Zunger  post
where he shows that the INCEPT DATE of the Blade Runner replicant Roy Batty appears to be 8 Jan, 2016.
So here is a replicant, in the inception phase 🙂 ]

PS: The post appeared as well in the chemlambda collection.

# Mind tricks

One of my goals is to uncover the geometry in the computations. Most people see visualizations as cute but unnecessary additions, maybe with some very limited pedagogical value. Not the real thing.

The really funny thing is that, on average, people tend to take a visualization too seriously. Some animations trigger all sorts of reflexes which mislead the viewers into seeing too much.

The animal is there, skin deep. Eye deep.

A recent example of using visualizations for research is how I came to build a kinesin-like molecule by looking at the Y combinator and permutations.

This is a phenomenon which appeared previously in the artificial chemistry chemlambda. Recall how the analysis of the predecessor lambda term led to the introduction of chemlambda quines?

Same here.

Chemical computation is a sort of combinatorial movement, if this makes any sense. Lambda calculus and other means towards rigorous notions of computation clash with reality: chemical computations, in living organisms say, are horrendously complex movements of atoms and rearrangements of bonds. There is no input, nor output, written with numbers. That’s all there is: movements and rearrangements.

Chemlambda marks some points by showing how to take lambda calculus as inspiration, then how to see something interesting, related to movements and rearrangements, in the behaviour of the lambda term, then how to exploit this to design some pure (artificial) chemistry tour de force of unsupervised cascades of reactions which achieve some goal. Unsupervised, random!

OK, so here is a kinesin built in chemlambda. I see it works and I want to play a bit with it and also to show it.

The following animation has stirred some attention on 4chan, and less attention on google+, of course compared with others from the amazing chemlambda collection 🙂 (why I deleted it)

It makes sense, you can relate to the two kinesins which meet, they salute each other, then they go their way. One of them detaches from the microtubule (a particularly designed one, which allows kinesins to go in both directions, hm, because I can). The other roams a bit, quick, concerned.

It’s the result of randomness, but it conveys the right info, without introducing too much unrelated stuff.

The next one is about 4 kinesins on a circular microtubule.

This is a bit too much. They look like suspiciously quick moving spiders… Not something to relate to.

But still, there is no false suggestion in it.

People love the following one more, where there are 8 kinesins.

It looks like a creature which tries to feel the boundary of the frame. Cool, but misleading, because:

• the coordinates of nodes of the graph in this representation are irrelevant
• the boundary of the frame is not part of the model, it means nothing for the molecule.

In chemlambda there is a choice made: chemistry is separated from physics. The chemistry (so to say) part, i.e. the graph rewrites and the algorithm of application, is independent from the d3.js rendering of the evolution of the graph.

But people love to see graphs in space, they love to see boundaries and they relate with things which have an appearance of life (or some meaning).

That’s how we are made, no problem, but it plays mind tricks on us.

A clever influencer would play these tricks in favor of the model…

The viewers, if asked to support the research, would be less willing to do it after seeing the fast moving spiders…

I find this very entertaining!

For the record, here is another direction of thinking, inspired by the same permutations which led me to kinesins.

# Do triangulations of oriented surfaces compute?

In a precise sense, which I shall explain, they do. But the way they do it is hidden behind the fact that the rewrites seem non-local.

1. They compute, because ribbon graphs with colored, trivalent nodes and directed edges do compute, via the encoding of untyped lambda terms into this family of graphs, provided by chemlambda. Indeed, a chemlambda molecule is a ribbon graph with these properties. If you want to encode a lambda term into chemlambda there is a simple procedure: start from the lambda term in a form which eliminates the need for any alpha conversion. Then build the syntactic tree and replace the nodes by A nodes for application and L nodes for lambda abstraction (don’t forget that L nodes have one in and 2 out ports, unlike the syntactic tree node for lambda abstraction). Then eliminate the variables which are at the leaves: by grafting trees of FO (green fanout) nodes from the lambda abstraction node to the places where the variables occur, by grafting T (terminal) nodes to any lambda node which issues a variable that does not occur later, or simply by erasing the variable label for those variables which are not issued from an abstraction. That’s it: you get a ribbon graph which is open (it has at least the root half-edge and maybe the half-edges for the variables which don’t come from an abstraction), but then you may add FRIN (free in) and FROUT (free out) nodes, think about them as tadpoles, and you get a trivalent ribbon graph. The dual of this graph is (equivalent to) a triangulated, oriented surface, with colored faces (corresponding to the nodes of the graph) and directed edges, such that no face has its 3 edges directed in a cyclic way.
2. How do they compute? Chemlambda uses a set of graph rewrites which contains some classic ones, like the Wadsworth-Lamping graphical version of the beta move, but it has two types of fanouts (FO and FOE), one FANIN, and rules for distributivity different from the usual ones. Look at the moves page to see them. All these rewrites are local, in the sense that there is a small number, fixed a priori, which is an upper bound for the number of nodes and edges which enter (in any way) into the graph rewrite (as a condition, as the left pattern, or as the right pattern). The algorithm of application of the rewrites is a very important piece, needed to make a model of computation. The algorithm is very simple and can be deterministic or random: in the deterministic case it applies as many rewrites as possible, with priority for the distributivity moves in case of conflict; in the random case it just applies rewrites at random.
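To make the locality of step 2 concrete, here is a minimal sketch in Python of the beta rewrite acting on a port graph. The port names (root, var, body for L; func, arg, out for A) and the graph data structure are my own simplifications for illustration, not the official chemlambda mol-file conventions; the example reduces (Lx.x)(Ly.y) by random application of beta until no pattern is left.

```python
import random

# A molecule as a port graph. Port names (root/var/body for L, func/arg/out
# for A) are a minimal convention for illustration, NOT the official
# chemlambda mol-file format.
class Graph:
    def __init__(self):
        self.nodes = {}   # node id -> node type ('L', 'A', 'FROUT', ...)
        self.peer = {}    # (node, port) -> (node, port), stored symmetrically
        self.fresh = 0

    def add(self, kind):
        self.fresh += 1
        self.nodes[self.fresh] = kind
        return self.fresh

    def link(self, a, b):
        self.peer[a] = b
        self.peer[b] = a

def beta_candidates(g):
    # beta pattern: the func port of an A node wired to the root of an L node
    return [(l, a) for l, kl in g.nodes.items() if kl == 'L'
            for a in [g.peer.get((l, 'root'), (None, None))[0]]
            if a in g.nodes and g.nodes[a] == 'A'
            and g.peer.get((a, 'func')) == (l, 'root')]

def beta(g, l, a):
    # (lambda x. body) arg : rewire out <- body and var -> arg, drop L and A.
    # For the identity (var wired straight to body) the second rewiring reads
    # the link just updated by the first, so the chain collapses correctly.
    g.link(g.peer[(a, 'out')], g.peer[(l, 'body')])
    g.link(g.peer[(l, 'var')], g.peer[(a, 'arg')])
    for n in (l, a):
        del g.nodes[n]

def reduce_random(g):
    while True:
        c = beta_candidates(g)
        if not c:
            return g
        beta(g, *random.choice(c))

# Build (Lx.x)(Ly.y): two L nodes, one A node, one FROUT for the result.
g = Graph()
i1, i2, app, out = g.add('L'), g.add('L'), g.add('A'), g.add('FROUT')
g.link((i1, 'var'), (i1, 'body'))   # identity: var wired to body
g.link((i2, 'var'), (i2, 'body'))
g.link((app, 'func'), (i1, 'root'))
g.link((app, 'arg'), (i2, 'root'))
g.link((app, 'out'), (out, 'in'))
reduce_random(g)
print(sorted(g.nodes.values()))     # -> ['FROUT', 'L']
```

Each rewrite touches a bounded number of nodes and edges (here: the two deleted nodes and four rewired ports), which is exactly the sense of locality described above.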

Here is an example, where I play with the reduction of false omega id in chemlambda

1. Now let’s pass to the duals, the triangulated surfaces. The nodes of the triangulated surface correspond to the faces of the ribbon graph. But the faces of the ribbon graph are global notions, because they are the orbits of a permutation. After one of the rewrites, the faces (of the ribbon graph) change in a way which has to be non-local, because one has to compute again the orbits of the permutation for the new graph, and there is no upper bound on the number of half-edges which have to be visited for doing that.
2. So triangulated, oriented surfaces do compute, but the rewrites and the algorithm of application are hidden behind this duality. They are non-local for triangulated surfaces, but local for ribbon graphs.
3. Finally, a word of attention: these surfaces do compute not by being arrows in a category. They don’t compute in the usual, say Turaev, kind of way. They compute by (the duals of) the rewrites; there is nothing else than triangulated surfaces, colored by 3 colors (red, green, yellow), there is no decoration which actually does the computation by substitution and evaluation. I don’t know why, but this seems very hard to understand by many. Really, these surfaces compute by rewrites on the triangulations, not by anything else.
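The contrast between locality and the global nature of faces can be seen in a few lines. A ribbon graph is classically encoded by two permutations on the set of half-edges (this encoding is standard combinatorics, though the conventions below are my own minimal choice, not the chemlambda implementation): an involution alpha pairing half-edges into edges and a permutation sigma giving the cyclic order at each vertex. The faces are the orbits of h -> sigma(alpha(h)), and following an orbit may visit arbitrarily many half-edges, which is the non-local part:

```python
# A ribbon graph as two permutations on the half-edges {0, ..., n-1}:
#   alpha : an involution pairing half-edges into edges (local data)
#   sigma : the cyclic order of half-edges around each vertex (local data)
# The faces are the orbits of h -> sigma(alpha(h)); computing an orbit may
# visit arbitrarily many half-edges, so faces are global information.

def orbits(perm):
    seen, result = set(), []
    for start in range(len(perm)):
        if start not in seen:
            orbit, h = [], start
            while h not in seen:
                seen.add(h)
                orbit.append(h)
                h = perm[h]
            result.append(orbit)
    return result

def faces(sigma, alpha):
    return orbits([sigma[alpha[h]] for h in range(len(sigma))])

# Theta graph: 2 vertices joined by 3 edges, hence 6 half-edges.
alpha = [3, 4, 5, 0, 1, 2]          # edges 0-3, 1-4, 2-5
V, E = 2, 3

sigma_sphere = [1, 2, 0, 5, 3, 4]   # opposite cyclic orders at the vertices
F = len(faces(sigma_sphere, alpha))
print(F, V - E + F)                 # -> 3 2  (3 faces, Euler char 2: sphere)

sigma_torus = [1, 2, 0, 4, 5, 3]    # same graph, other cyclic order
F = len(faces(sigma_torus, alpha))
print(F, V - E + F)                 # -> 1 0  (1 face, Euler char 0: torus)
```

The same graph, with the same local data (vertices and edges), has 3 faces in one embedding and 1 in the other, so the faces, hence the dual triangulation, are genuinely global information.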

ADDED: If you look at the tadpoles as pinches, then make the easy effort to see what the SKI formalism looks like, and you’ll see funny things. The I combinator is the sphere with one pinch (the plane), the K combinator is the sphere with two pinches (the cylinder) and the S combinator is the torus with one pinch. But what is SKK=I? What is KAB=A? What you see in the dual (i.e. in the triangulation) depends globally on the whole term, so these reductions do not appear as the same topological manipulations in different contexts.

# Res vs objectus

Objects are evidence. If reality is the territory, then objects are on the map.  Objective reality is to be compared with bureaucracy.
If reality is not objective, then how is it? Real, of course. Evidence is a map  of the real. Passive, done already, laid further in the court, ready to be used in the argumentation.
Who makes the map has the power over reality, in the same way as bureaucrats have the power over the people.
The confusion  between res and objectus has very concrete effects in our society.
We communicate on the net via evidence. The technical solutions are these, issued from historical reasons, like wars and analytic philosophy.
We are discontent about the lack of privacy of evidence.
Objects as evidence of happiness are not the same as happiness. We are discontent because objects are not enough, when we are told that they should be.
In this setting, who controls the map making mechanism, who controls the data processing, has the power.
Ultimate bureaucracy presented as the unique way. As the only real way. A lie.

# After the IoT comes Gaia

They say that sneakernet does not scale. But think about Amazon’s latest product, the AWS Import/Export Snowball: this clumsy suitcase holds less data than a grain of pollen.

Reason from these arguments:

• the Internet of Things is an extension of the internet, where lots of objects in the real world will start to talk and to produce heaps of data
• so there is a need for a sneakernet solution in order to move these data around,  because the data are only passive evidence and they need to be processed,
• compared though with biology, this quantity of data is tiny
• and moreover biology does not function via signal transmission, it functions via signal transduction, a form of sneakernet,

you’ll get to the unavoidable conclusion that the IoT is only a small step towards a global network which works with chemical-like interactions, transports data (which are themselves active) via signal transduction, and extends real-world biology.

After the IoT comes Gaia. A technological version, to be clear.

Some time in the future, but not yet at the point when we could say that the Gaia extension has appeared, there will still be a mixture of old-style IoT ways and new, biological-like ways. Maybe there will be updates, say of the informational/immunity OS, delivered via anycasts issued from tree-like antennas which produce pollen particles. The “user” (what an ugly, reductionistic name) breathes them and the update starts to work.

The next scene may be one which describes what happens if somebody finds out that some antennas produce faulty grains. Maybe some users have been alerted by their (future versions of) smartwatches that they inhaled a possibly terminal vector.

The faulty updates have to be identified, tracked (chemically, in the real world) and annihilated.

The users send a notification via the old internet that something is wrong and somewhere, perhaps on the other side of the planet, a mechanical turk identifies the problem, runs some simulations of the real chemistry with his artificial chemistry based system.

His screen may show something like this:

Once a solution is identified, the artificial chemistry solution is sent to a Venter printer close to the location of the faulty antenna and turned real. In a matter of hours the problem is solved, before the affected users metabolisms go crazy.

# Local machines

Suppose there is a deep conjecture which haunts the imagination of a part of the mathematical community. By the common work of many, maybe even spread over several centuries and continents, slowly a solution emerges and the conjecture becomes a theorem. Beautiful, or at least horrendously complex theoretical machinery is invented and put to the task. Populations of family members experienced extreme boredom when faced with the answers to the question “what are you thinking about?”. Many others expressed a moderate curiosity in the weird preoccupations of those mathematicians, some, say, obsessed with knots or zippers or other childish activities. Finally, a constructive solution is found. This is very very rare and much sought after, mind you, because once we have a constructive solution then we may run it on a computer. So we do it, perhaps for the immense benefit of the finance industry.

Now here is the weird part. No matter what programming discipline is used, no matter what the programmers’ preferences and beliefs are, the computer which runs the program is a local machine, which functions without any appeal to meaning.

I stop a bit to explain what a local machine is. These things are well known, but maybe it is better to have them clear in front of the eyes. Whatever happens in a computer is only a physically local modification of its state. If we look at the Turing machine (I’ll not argue about the fact that computers are not exactly TMs; let’s take this as a simplification which does not affect the main point), then we can describe it as well as a stateless Turing machine, simply by putting the states of the machine on the tape and reformulating the behaviour of the machine as a family of rewrite rules on local portions of the tape. This is fully possible, well known, and it has the advantage of working even if we don’t add one or many moving heads into the story, or indirection, or any ingredient other than that these rewrites are done randomly. Believe it or not (if not then read

Turing machines, chemlambda style
http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

for an example), but that is a computer, regardless of what technological complexities are involved in really making one.
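For a toy version of the state-on-the-tape trick, consider the following sketch (the machine and the encoding are mine, chosen as small as possible, not the one from the linked page): each transition of a little machine that walks right turning the tape to 1s becomes a rewrite on a two-cell window containing the state token, and rewrites are applied at randomly chosen matching positions.

```python
import random

# A state-on-the-tape Turing machine: the tape holds ordinary symbols plus a
# single state token, and every transition is a rewrite of a 2-cell window.
# There is no head and no global control; matching rewrites are applied at
# randomly chosen positions. Machine and encoding are a minimal toy example.

RULES = {
    ('q', '0'): ('1', 'q'),   # in state q over 0: write 1, move right
    ('q', '1'): ('1', 'q'),   # in state q over 1: keep 1, move right
}                             # no rule for ('q', '_'): the machine halts

def step(tape):
    """Apply one matching rewrite at a random position; None when halted."""
    sites = [i for i in range(len(tape) - 1)
             if (tape[i], tape[i + 1]) in RULES]
    if not sites:
        return None
    i = random.choice(sites)
    tape[i], tape[i + 1] = RULES[(tape[i], tape[i + 1])]
    return tape

tape = ['q', '0', '1', '0', '0', '_']   # state token sits left of the head cell
while step(tape):
    pass
print(''.join(tape))                    # -> 1111q_
```

Since the tape holds a single state token, at most one site matches at any moment, so the random scheduler cannot go wrong. Which is the point: no head, no global control, just local rewrites.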

(this is an animation showing a harmonious interaction between a chemical molecule derived from a lambda term, in the upper side of the image, and a Turing machine whose tape is visible in the lower side of the image)

Let’s get back to the algorithmic form of the solution of the mathematical problem. On the theoretical side there are lots of high meanings and they were discovered by a vast social collaboration.

But the algorithm run by the computer, in the concrete form it is run, edits out any such meaning. It is a well prepared initial tape (say “intelligently designed”, hope you have taken your daily dose of humour), which is then stupidly, randomly, locally rewritten until there’s no more reaction possible. That gives the answer.

If it is possible to advance a bit, even with this severe constraint to ignore global semantics, then maybe we find really new stuff, which is not visible under all these decorations called “intelligent”, or high level.


# Life at molecular scale

Recently there are more and more amazing results in techniques allowing the visualization of life at molecular scale. Instead of the old story about soups of species of molecules, now we can see individual molecules in living cells [1], or that the coiled DNA has a complex chemical configuration [2], or that axons and dendrites interact in a far more complex way than imagined before [3]. Life is based on a complex tangle of evolving individuals, from the molecular scale onwards.

To me, this gives hope that at some point chemists will start to consider seriously the possibility of building such structures, such molecular computers [4], from first principles.

The image is a screencast of a chemlambda computation, done with quiner mist.

[1] Li et al., “Extended Resolution Structured Illumination Imaging of Endocytic and Cytoskeletal Dynamics,” Science.

[2] Structural diversity of supercoiled DNA, Nature Communications 6, Article number: 8440, doi:10.1038/ncomms9440,
http://www.nature.com/ncomms/2015/151012/ncomms9440/full/ncomms9440.html

[3] Saturated Reconstruction of a Volume of Neocortex, Cell, Volume 162, Issue 3, p648–661, 30 July 2015
http://www.cell.com/cell/abstract/S0092-8674%2815%2900824-7
and video:

[4] Molecular computers
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

# Molecular computers in real life

A molecular computer [1] is a single molecule which transforms, by a cascade of random chemical reactions mediated by a collection of enzymes, into another, predictable molecule, without any external control.

We could use the artificial chemistry chemlambda to build real molecular computers. There is a github repository [2] where this model is implemented and various demos are available.

By using molecular bricks which can play the role of the basic elements of chemlambda we can study the behaviour of real molecules which suffer hundreds or thousands of random chemical reactions, but without having to model them on supercomputers.
A molecule designed like this will respect the chemlambda predictions for a while… We don’t know for how long, but there might be a window of opportunity which would allow a huge leap in synthetic biology. Imagine, instead of simple computations with a dozen boolean gates, the possibility to chemically compute recursive but not primitive recursive functions.

More interesting, we might search for chemlambda molecules which do whatever we want them to do. We can build arbitrarily complex molecules, called chemlambda quines, which have all the characteristics of living organisms.

We may dream bigger. Chemlambda can unite the virtual and the real worlds… Imagine a chemical lab which takes as input a virtual chemlambda molecule and outputs the real world version, much like Craig Venter’s printers. The converse is a sensor, which takes a real chemical molecule, compatible with chemlambda and translates it into a virtual chemlambda molecule.

Applications are huge, some of them beneficial and others really scary.

For example, you may extend your immune system in order to protect your virtual identity with your own, unique antibodies.

As for using a sensor to make a copy of yourself at the molecular level, this is out of reach in the near future, because a real living organism works by computations at a scale which dwarfs the human technical possibilities.

The converse is possible though. What about having a living computer, of the size of a cup, which performs at the level of the whole collection of computers available now on Earth? [3]

References:

[1] this is the definition which I use here, taken from the articles Molecular computers and Build a molecular computer (2015)

# How I hit a wall when I used the open access and open source practices when applying for a job

UPDATE 11.10.2015. What happened since the beginning of the “contest”? Nothing. My guess is that they are going to follow the exact literal sense of their announcement. It is a classic sign of cronyism. They write 3 times that they are going to judge according to the file submitted (the activity of the candidate as it looks from the file), but they don’t give criteria other than the ones from an old law. In my case I satisfy these criteria, of course, but later on they write about “candidates considered eligible”, which literally means candidates that an anonymous board considers eligible, and not simply candidates eligible according to the mentioned criteria.

Conclusion: this is not news, it is dog bites man.

I may be wrong. But in case I’m right, the main subject (namely what happens in a real situation with open access practices in the case of a job opening) looks like a frivolous, alien complaint.

The split between:
– a healthy, imaginative, looking to the future community of individuals and
– a Kafkaesque old world of bureaucratic cronies
is growing bigger here in my country.

__________

UPDATE 14.10.2015: Suppositions confirmed. The results have been announced today, only verbally, the rest is shrouded in mystery. Absolutely no surprise. Indeed, faced with the reality of local management, my comments about open access and open source practices are like talking about a TV show to cavemen.

Not news.

There is a statement I want to make, for those who read this and have only access to info about Romanians from the media, which is, sadly, almost entirely negative.

It would be misleading to judge the local mathematicians (or other creative people, say) from these sources. There is nothing wrong with many Romanian people. On the contrary, these practices, which show textbook signs of corruption, are typical for the managers of state institutions in this country. They are to blame. What you see in the media is the effect of the usual handshake between bad leadership and poverty.

Which, sadly, manifests everywhere in the state institutions of Romania, in ways far beyond the ridiculous.

So next time when you shall interact with one such manager, don’t forget who they are and what they are really doing.

I am not going to pursue a crusade against corruption in Romania, because I have better things to do. Maybe I’m wrong and what is missing is more people doing exactly this. But the effect of corrupt practices is that the state institution becomes weaker and weaker. So, for psychohistorical reasons 🙂 there is no need for a fight with dying institutions.

Let’s look to the future, let’s do interesting stuff!

________________________

This is real: there are job openings at the Institute of Mathematics of the Romanian Academy, announced by the pdf file

The announcement is in Romanian, but you may notice that they refer to a law from 2003, which asks for a CV, a research memoir, a list of publications and ten documents, from kindergarten to PhD. On paper.

That is only the ridicule of bureaucracy, but the real problems were somewhere else.

There is no mention of the selection criteria or of the members of the committee, but the announcement states 3 times that every candidate’s work will be considered only as it appears from the file submitted.

They also ask that the scientific part, so to say, of the submission be sent by email to two addresses which you can grasp from the announcement.

So I did all the work and I hit a wall when I submitted by email.

I sent them the following links:

– my homepage which has all the info needed (including links to all relevant work)
http://imar.ro/~mbuliga/

– link to my arxiv articles
http://arxiv.org/a/buliga_m_1
because all my published articles (and all my cited articles, published or not) are available at arXiv

– link to the chemlambda repository for the programming, demos, etc part

I was satisfied to have finished this, when I got a message from DanTimotin@imar.ro telling me that I have to send them, as attachments, the pdf files of at least 5 relevant articles.

In the paper file I put 20+ of these articles (selected from 60+), but they wanted also the pdf files.

I don’t have the pdfs of many legacy published articles, because they are useless for open access: you can’t distribute them publicly.
Moreover I keep the relevant work I do as open as possible.

Finally, how could I send the content of the github repository? Or the demos?

So I replied by protesting about the artificial difference he makes between a link and the content available at that link, and I sent a selection of 20 articles with links to their arXiv versions.

He replied with a message announcing that if I want my submission to be considered, then I have to send 5 pdfs attached.

I visited Dan Timotin in person, to talk and to understand why a link is different from the content available at that link.

He told me that these are the rules.

He told me that he is going to send the pdfs to the members of the committees, and that it might happen that they don’t have access to the net when they look at the work of the candidate.

He told me that they can’t be sure that the arXiv version is the same as the published version.

He has nothing to say about the programming/demo/animations part.

He told me that nobody will read the paper file.

I asked if he is OK with me making this weird practice public, and he agreed.

Going back to my office, I managed to find 9 pdfs of the published articles. In many other cases my institute does not have a subscription to the journals where my articles appeared, so I don’t think it is fair to be asked to buy back my own work, only because of the whims of one person.

Therefore I sent Dan Timotin a last message where I attached these 9 pdfs and explained that I can’t access the others, but I firmly demanded that all the links sent previously be forwarded to the (mysterious, anonymous, net-deprived, and lacking public criteria) committee, otherwise I would consider this an abuse.

I wrote that I regret this useless discussion, provoked by the lack of transparency and by the hiding behind an old law, which should not stop a committee of mathematicians from judging the work of a candidate as it is, and not as it appears after an abusive filtering.

After a couple of hours he replied that he will send the files and the links to the members of the committee.

I have to believe his word.

That is what happens, in practice, with open access and open science, at least in some places.

What could be done?

Should I wait for the last bureaucrat to stop passively supporting the publishing industry by actively opposing open access practices?

Should I wait for all politicians to pass fake PhDs under the supervision of a very complacent local Academia?

Should I feel ashamed of being abused?

# Deterministic vs random, an example of grandiose shows vs quieter, functioning anarchy

In the following video you can see, at the left, the deterministic and, at the right, the random evolution of the same molecule, duplex.mol from the chemlambda repository. They take about the same time.

The deterministic one is like a ballet, it has a comprehensible development, it has rhythm and drama. Clear steps and synchronization.

The random one is more fluid, less symmetric, more mysterious.

What do you prefer, a grand synchronized show or a functioning, quieter anarchy?

Which one do you think is more resilient?

What is happening here?

The molecule is inspired from lambda calculus. The computation which is encoded is the following. Consider the lambda term for the identity function, i.e. I=Lx.x. It has the property that IA reduces to A for any term A. In the molecule it appears as a red trivalent node with two ports connected, so it looks like a dangling red globe. Now, use a tree of fanouts to multiply (replicate) this identity 8 times, then build the term

(((II)(II))((II)(II)))(((II)(II))((II)(II)))

Then use one more fanout to replicate this term into two copies and reduce all. You’ll get two I terms, eventually.
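As a sanity check in ordinary lambda calculus (with Python functions standing in for terms; this is plain evaluation, not the graph reduction, and says nothing about the replication phase):

```python
# Plain lambda calculus sanity check, with Python functions standing in for
# terms: every application of I to I yields I, so the balanced 16-leaf tree
# of applications collapses back to the identity.
I = lambda x: x

II = I(I)              # II -> I
quad = II(II)          # (II)(II) -> I
oct_ = quad(quad)      # ((II)(II))((II)(II)) -> I
term = oct_(oct_)      # the 16-leaf term from above -> I

print(term is I)       # -> True
print(term(42))        # -> 42, it still behaves as the identity
```

Chemlambda reaches the same answer, but it has to pay explicitly for the replication of the I’s through the fanout tree, which is what the videos show.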
In the deterministic version the following happens.

– the I term (seen as a red dangling node) is replicated (by a sequence of two rewrites, detail) and gradually the tree of fanouts is destroyed

– simultaneously, the tree of applications (i.e. the syntactic tree of the term, but seen with the I’s as leaves) replicates by the fanout from the end

– because the reduction is deterministic, we’ll get 16 copies of the I’s exactly when we get two copies of the application tree, so in the next step there will be a further replication of the 16 I’s into 32, and then there will be two disconnected copies of the molecule which represents ((II)(II))((II)(II))

– after that, this term-molecule reduces to (II)(II), then to II, then to I, but recall that there are two copies, therefore you see this twice.

In the random version everything mixes. Anarchy. Some replications of the I’s reach the tree of applications before it has finished replicating itself, then reductions of the kind II –> I happen at the same time as replications of other pieces. And so on.
There is no separation of the stages of this computation.
And it still works!
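The fact that it still works is, in this toy view, confluence: the II –> I reductions commute, so any interleaving reaches the same normal form. A string-rewriting caricature (my own illustration, deliberately much simpler than chemlambda, which also interleaves replication):

```python
import random

# Caricature of the reduction above: the application of X to Y is the string
# "(XY)" and the only rewrite is "(II)" -> "I", applied at a randomly chosen
# position. Every interleaving reaches the same normal form in the same
# number of steps: the order of the reductions does not matter.

def reduce_random(term, rng):
    steps = 0
    while True:
        sites = [i for i in range(len(term)) if term.startswith('(II)', i)]
        if not sites:
            return term, steps
        i = rng.choice(sites)
        term = term[:i] + 'I' + term[i + 4:]
        steps += 1

# Build the 16-leaf term (((II)(II))((II)(II)))(((II)(II))((II)(II))).
term = 'I'
for _ in range(4):
    term = '({0}{0})'.format(term)

for seed in range(5):
    result, steps = reduce_random(term, random.Random(seed))
    print(result, steps)   # -> I 15, for every seed
```

Here confluence is cheap because the rewrite sites never overlap; the interesting part of chemlambda is that the mix of replication and reduction still behaves this well.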

I used quiner_experia, with the mol file duplex.mol. The first time, I set all the weights to 0 (to have deterministic application) and took the rise parameter = 0 too (this is specific to quiner_experia, not present in quiner), because the rise parameter lowers the probabilities of new rewrites, exponentially, during the same step, in order to give fair chances to any subset of all the possible rewrites.
Then I made a screencast of the result, without speeding it up, using Safari to run it.
For the random version I took all the weights equal to 1 and the rise parameter equal to 8 (empirically, this gives the smoothest evolution for a generic molecule from the list of examples). I ran the result with Safari and screencasted it.
Then I put the two movies one near the other (deterministic at the left, random at the right) and made a screencast of them running in parallel. (Almost: there is about a 1/2 second difference, because I started the deterministic one first, by hand.)
That’s it, enjoy!
For chemlambda look at the trailer from the collections of videos I have here on vimeo.

# Replication, 4 to 9

In the artificial chemistry chemlambda there exist molecules which can replicate, which have a metabolism and which may even die. They are called chemlambda quines, but a convenient shorter name is: microbes.
In this video you see 4 microbes which replicate in complex ways. They are based on a simpler microbe whose life can be seen live (as a suite of d3.js animations) at [1].
The video was done by screencasting the evolution of the molecule 5_16_quine_bubbles_hyb.mol and with the script quiner_experia, all available at the chemlambda GitHub repository [2].

[1] The birth and metabolism of a chemlambda quine. (browsers recommended: safari, chrome/chromium)
chorasimilarity.github.io/chemlambda-gui/dynamic/A_L_eggshell.html

# In the mood for a rant: attention conservation notices

I see attention conservation notices at the beginning of posts  belonging to some rather interesting collections. And I wonder: what is the goal of the author of such announcements?

Should I put one too?

Well, if I would put one, then it would be like this:

Wait, let me first give you some context, in the form of a rant. Then I’ll write down the attention conservation notice.

Context. I am one of those researchers who want to create new things, in new ways, in this new connected world. I fell in love with the Net the first time I saw a glimpse of it.

My position is the following: research needs to pass through a liberating process exactly like art did a hundred years ago. At a much bigger scale, of course, but the idea is the same.

Much like a revolutionary impressionist at the time of the Art Academies, this is a thrilling and also ridiculous position.

Besides the mediocre but respectable art channels provided by the exhibitions of the art academies, there is only worse. The revolutionary painters did have the street to show their works. On the street, the cute portraits and the boooring visual memes are the rule. Not to mention that on the street there are many other revolutionaries, who are either too cool to paint, or just looking for relief from the monkey inheritance which pushes all of us to pretend we are really different.

Art academies are full of good, but statistically mediocre painters, who want to advance in their careers with great determination. For them painting is not the goal, but the means of ensuring a comfortable life. They are job oriented, like everybody else on the street. They speak the language of the street: they are professionals who, incidentally, spend their time splatting pigments on rectangular surfaces. The works are then reviewed by other professionals and finally shown (at different heights, the best ones at eye level) to their peers, mostly.

Also  to anybody else willing to spend a free afternoon in a pleasant way, by visiting a reputable exhibition. Going back home, then, acquainted with the professional artistic last trends, the enlightened art lover may pick, from the street, something which is surely less expensive, but cute enough or modern enough to deserve the eye level place in the art lover’s home.

These guys are certainly not going to feed a Van Gogh, except by accident. First, because he is on the street. Secondly, because his work does not look professional: don’t you see that the guy uses randomly splashed colours and, even worse, you can see the traces of the brushes, instead of the polished, varnished, shitty brown finish? Thirdly, look at that cuute little boy pissing! Or that cat, btw.

You see where I’m going, right? The art lover just wants to spend some pleasant time off work. Just wants to feel he or she has human interests. And to show the Joneses he or she has that special artistic bent.

Now tell me, is an attention conservation notice going to help? Certainly, for somebody who does 5 min portraits for a living. And for those portraits, not for the other stuff. Not for the really good stuff, because the really good stuff takes work to appreciate.

In conclusion, even if I wish sometimes to put the following attention notice:

This is openly shared work. You have to sweat to get it. If you are an academic looking for promotion, please don’t steal it, because you’ll be easy to find. If you are just looking for distraction, then watch TV, not this post. If you want to discuss, then do it after you have spent the time to get acquainted with the content: you clicked, read and understood all sources. Because otherwise you either show disrespect for my work or you look stupid.

but I refrain from it.

# The first trailer of the whole artificial cell movie

Continues Who wants to make a movie.

Compare with the wonderful, real chemistry visualization from Li et al., “Extended Resolution Structured Illumination Imaging of Endocytic and Cytoskeletal Dynamics,” Science.

______

# Tree traversal magic

UPDATE: you can see and play with this online in the salvaged collection of g+ animations.

Also, read the story of the ouroboros here.

This is an artificial creature which destroys a tree only to make it back as it was. Random graph rewrites algorithm!

The creature is, of course, a walker, or ouroboros, check out the live demo, with the usual proviso that it works much better in chrome, chromium or safari than in firefox.

# Artificial chemistry suggests a hypothesis about real biochemistry

What is the difference between a molecule encoded in a gene and the actual molecule produced by a ribosome from a copy of the encoding?

Here is a bootstrapping hypothesis based on the artificial chemistry chemlambda.

This is a universal model of computation which is supposed to be very simple, though very close to real chemistry of individual molecules. The model is implemented by an algorithm, which uses as input a data format called a mol file.
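For orientation, here is a minimal sketch in Python of how such a mol file can be read, assuming each line holds a node type followed by the names of the edges attached to its ports; the example lines and the parser are illustrative only, the authoritative format is the one in the chemlambda repository.

```python
# Hypothetical sketch of reading a mol-style file: each non-empty line is
# assumed to be a node type followed by the edge names on its ports,
# e.g. "L a b c" for a lambda node whose three ports touch edges a, b, c.

def parse_mol(text):
    """Return a list of (node_type, port_edges) pairs."""
    nodes = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        fields = line.split()
        nodes.append((fields[0], fields[1:]))
    return nodes

example = """
L a b c
A c d e
"""
print(parse_mol(example))
# [('L', ['a', 'b', 'c']), ('A', ['c', 'd', 'e'])]
```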

The language used does not matter, although there may be more elegant versions than mine, like the one by +sreejith s  (still work in progress though) [1].

Since the model is universal it implies that the algorithm and the mol data structure can be themselves turned into an artificial molecule which reacts with the usual invisible enzymes from the model and does the computation of the reduction of the original molecule.

The bootstrapping hypothesis is that the original molecule is like the synthesized molecule and that the mol file format turned into a molecule is the stored version of the molecule.

A previous post [2] is mentioned, where this was tried by hand for a molecule called the 20 quine; now there are new scripts in the chemlambda repository which allow one to do the same for any molecule (limited by the computer and the browser, of course).

The final suggestion is that the hypothesis can be continued along these lines, by saying that the “enzymes” which do the rewrites are (in this bootstrapping sense) the rewriting part of the algorithm.

Synthetic stuff. Now there is a script which allows one to synthesize any chemlambda molecule, like in the previous Invisible puppeteer post.
Look in the chemlambda repository, namely at the pair synthetic.sh and synthetic.awk from this branch of the repository (i.e. the active branch).
In this animation you see the result of the “synthetic” bigpred.mol (which was the subject of the recent hacker news link).

What I did:
– bash synthetic.sh and choose bigpred.mol
– the output is the file synt.mol
– bash quineri.sh and choose synt.mol (I had a random evolution, with cycounter=150 and time_val=5 for safari; for chromium I chose time_val=10 or even 20)
– the output is synt.html
– I opened synt.html with a text editor to change a few things: at lines 109-110 I chose a bigger window (var w = 800, h = 800) and smaller charges and gravity (look for the force layout settings and modify them to .charge(-10).gravity(.08))

Then I opened synt.html with safari. (It also worked with chromium.) It’s a hard time for the browser because the initial graph has more than 1400 nodes. The problem does not come from setTimeout (compared with other experiments, there are not as many calls), but from an obscure problem of d3.js with prototypes; anyway, this makes firefox lousy, which is a general trend at firefox, I don’t know why, while chromium is OK and safari great. I write this as a faithful user of firefox!
In this case even the safari had to think a bit about life, philosophy, whatnot, before it started.

I made a screencast with Quicktime and then sped it up progressively to 16X, in order to be able to fit it into less than 20s.

Then I converted the .mov screencast to .gif and I proudly present it to you!

It worked!

Now that’s a bit surprising, because, recall, what I do is to introduce lots of new nodes and bonds, approx 6X the initial ones, which separate the molecule I want to synthesize into a list of nodes and a list of edges. The list of edges is transformed into a permutation.

Now, during the evolution of the synt molecule, what happens is that the permutation is gradually applied (because of randomness) and it mixes with the evolution of the active pieces, which already start to rewrite.
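To picture the “list of edges turned into a permutation” step (with made-up port names, not the actual script’s encoding): each edge is a transposition of two ports, the transpositions are disjoint, so they can be applied gradually, in any random order, and the final wiring is always the same.

```python
import random

# Made-up illustration: an edge list read as an involutive permutation
# on port names. Each edge (a, b) is a disjoint transposition, so the
# rewiring can be applied gradually, in random order, with one outcome.

def edges_to_permutation(edges):
    perm = {}
    for a, b in edges:
        perm[a], perm[b] = b, a
    return perm

edges = [("p1", "p2"), ("p3", "p4"), ("p5", "p6")]

shuffled = list(edges)
random.shuffle(shuffled)  # apply the transpositions in a random order

wiring = {}
for a, b in shuffled:
    wiring[a], wiring[b] = b, a

assert wiring == edges_to_permutation(edges)  # order did not matter
```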

But it worked, despite the ugly presence of a T node, which is the one which sometimes may create problems due to such interferences if the molecule is not very well designed.

At the end I recall what I believe is the take away message, namely that the mol file format is a data structure which itself can be turned into a molecule.

# Summer news, Ackermann related observations and things to come

1. after a chemlambda demo appeared on hackernews  (link) I saw a lot of interest from hacker brains (hopefully). Even if slightly misplaced (title reads: D3.js visualization of a lambda calculus) it is an entry gate into this fascinating subject.

2. Sreejith S (aka 4lhc) works on a python port for chemlambda, called chemlambda-py. It works already, I look forward to try it when back home. Thank you Sreejith! Follow his effort and, why not, contribute?

3. Herman Bergwerf set out to write a chemlambda IDE, another great initiative which I am very eager to see working, judging just by Herman’s beautiful MolView.

4. During discussions with Sreejith, I noticed some funny facts about the way chemlambda computes the Ackermann function. Some preliminaries: without cheats (i.e. closed forms), memoization, caching, etc., it is hopeless to try to compute Ack(4,2). The problem is not so much that the function takes huge values, but that in order to compute even modest values there are lots and lots of calls. See the rosettacode entry for the Ackermann function about that. Compared with those examples, the implementation of chemlambda in awk does not behave badly at all. There are now several mol files for various Ackermann function values which you may try. The only one which takes lots of time (but not huge memory, if you except the html output, which you can eliminate by commenting out with a # all printf lines in the awk script) is ackermann_4_1. This one works, but I still have to see how much time it takes. The interesting observation is that the number of steps (in the deterministic version) of chemlambda (mind: steps, not rewrites!) differs by 1 or 2 from the number of steps of this beautiful stack-based script: ackermann script. It means that somehow the chemlambda version records the intermediary values in space, instead of stacking them for further use. Very strange, to explore!

5. There exists, on paper, the chemlambda v3 “enzo”. It’s very very nice, you’ll see!
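For reference, the stack based evaluation mentioned in point 4 can be sketched like this (a standard stack evaluation of the Ackermann recursion, written here in Python; this is not the awk implementation or the linked script itself):

```python
def ackermann(m, n):
    """Evaluate the Ackermann function with an explicit stack of pending
    m-arguments, instead of deep recursion."""
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1                  # Ack(0, n) = n + 1
        elif n == 0:
            n = 1
            stack.append(m - 1)     # Ack(m, 0) = Ack(m - 1, 1)
        else:
            n -= 1
            stack.append(m - 1)     # Ack(m, n) = Ack(m - 1, Ack(m, n - 1))
            stack.append(m)
    return n

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```

Counting the iterations of the while loop would give a step count to compare against the chemlambda runs.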

# Permutation-replication-composition all-in-one

This is the permutation cycle 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 1, which is replicated and composed with itself at the same time.

Done with pwheel_8_compo.mol from the chemlambda repo, and with quiner.sh, in the deterministic variant (all weights set to 0). The result is a pair of cycles 1 -> 3 -> 5 -> 7 -> 1 and 2 -> 4 -> 6 -> 8 -> 2.
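The splitting can be checked with a few lines of Python: composing the 8-cycle with itself sends 1 to 3, 2 to 4, and so on, and the orbits of the composed permutation are exactly the two 4-cycles above.

```python
# The permutation cycle 1 -> 2 -> ... -> 8 -> 1 as a dictionary.
cycle = {i: i % 8 + 1 for i in range(1, 9)}

# Composition of the permutation with itself.
composed = {i: cycle[cycle[i]] for i in range(1, 9)}

def orbit(perm, start):
    """Follow perm from start until it returns to start."""
    out, i = [start], perm[start]
    while i != start:
        out.append(i)
        i = perm[i]
    return out

print(orbit(composed, 1))  # [1, 3, 5, 7]
print(orbit(composed, 2))  # [2, 4, 6, 8]
```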

See other plays with permutations in the (now deleted) collection!

_________________________

The article

# The mesh is a network of microtubule connectors that stabilizes individual kinetochore fibers of the mitotic spindle

announces the discovery of a new structure in the cell: the mesh.

From the abstract:

“Kinetochore fibers (K-fibers) of the mitotic spindle are force-generating units that power chromosome movement during mitosis. K-fibers are composed of many microtubules that are held together throughout their length.
Here, we show, using 3D electron microscopy, that K-fiber microtubules (MTs) are connected by a network of MT connectors. We term this network ‘the mesh’.
The K-fiber mesh is made of linked multipolar connectors. Each connector has up to four struts, so that a single connector can link up to four MTs.  […]
Optimal stabilization of K-fibers by the mesh is required for normal progression through mitosis.
We propose that the mesh stabilizes K-fibers by pulling MTs together and thereby maintaining the integrity of the fiber. “

My speculation is that the mesh has not only the role of a scaffold for the microtubule structure.

Together with the microtubules and with some other (yet undiscovered or, on the contrary, very well known) parts, this is the computer.

DNA, which fascinates us, is more like a memory device.

But the computation may be as in chemlambda. The dynamical reorganization of the mesh-microtubule-other proteins structure very much resembles a chemlambda molecule (or even a chemlambda quine).

I mentioned this here because there is an evocative image

____________________

# Inceptionism: an AI built on pragmatic principles turns out to be an artificial Derrida

Not willing to accept this, now they say that the artificial neural network dreams. Name: Google Deep Dream.

The problem is that the Google Deep Dream images can be compared with dreams by us humans.  Or in the general case of an automatic classifier of big data there is no term of comparison.  How can we, the pragmatic scientists, know that the output obtained from data (by training a neural network on other data) is full of dreamy dog eyes or not?

If we can’t trust the meaning obtained from big data by pragmatic means, then we might as well renounce analytic philosophy and turn towards continental (so called) philosophy.

That is seen as not serious, of course, which means that the ANN dreams, whatever that means. With this stance we transform a  kick in the ass of our most fundamental beliefs into a perceived progress.

___________________________________________________________________

# The inner artificial life of a cell, a game proposal

The  inner life of a cell is an excellent, but passive window

It is also scripted according to usual human expectations: synchronous moves, orchestrated reactions at a large scale. This is of course either something emergent in real life, or totally unrealistic.

As you know, I propose to use the artificial chemistry chemlambda for making real, individual molecular computing devices, as explained in Molecular computers.

But much more can be aimed at, even before the confirmation that molecular computers, as described there,  can be built by us humans.

Of course Nature builds them everywhere; we are made of them. It works without any external control, not as a sequence of lab operations, asynchronously, in a random environment. It is very hard to tell whether there is a meaning behind the inner life of a living cell, but it works nevertheless, without the need of a semantics to streamline its workings.

So obvious, however so far from IT way of seeing computation.

Despite the huge and exponential advances in synthetic biology, despite the fact that many of these advances are related to IT, despite the fact that there are more and more ways to control biological workings, I believe that there has to be a way to attack the problem of computations in biology from its foundations. Empirical understanding is great and will fuel this amazing synthetic biology evolution for some time, but why not think about understanding how life works, instead of using biological parts to make functions, gates and other components of the actual IT paradigm?

As a step, I propose to try to build a game-like artificial life cell, based on chemlambda. It should look and feel like the Inner life of a cell video, only that it would be interactive.

There are many bricks already available: some molecules (each with its own story and motivation) are in the chemlambda repository, others are briefly described, with animations, in the chemlambda collection.

For example:
– a centrosome and the generated microtubules like in

– kinesins as adapted walkers like in

– molecules built from other ones like RNA from DNA

– programmed computations (mixing logic and biologic)

– all in an environment looking somehow like this

Like in a game, you would not be able to see the whole universe at once, but you could choose to concentrate to this part or that part.
You could build superstructures from chemlambda quines and other bricks, then you could see what happens either in a random environment or in one where, for example, reductions happen triggered by the neighbourhood of your mouse pointer (as if the mouse pointer is a fountain of invisible enzymes).

Videos like this, about the internal working of a neuron

would become tasks for the gamer.

______________________________________________________________

# Artificial life which can be programmed

Artificial life

which can be programmed

__________________________________________________________________________________


# An apology of molecular computers and answers to critics

This is what a molecular computer would look like, if seen with a magically powerful microscope. It is a single molecule which interacts randomly with other molecules, called “enzymes”, invisible in this animation.

There is no control over the order of the chemical reactions. This is the idea, to compute without control.

The way it works is like this: whenever a reaction happens, this creates the conditions for the next reaction to happen.

There is no need to use a supercomputer to model such a molecule, nor is it reasonable to try, because of the big number of atoms.

It is enough instead to find real molecular assemblies for nodes, ports and bonds, figured here by colored circles and lines.

The only computations needed are those for simulating the family of rewrites, i.e. the chemical reactions. Every such rewrite involves up to 4 nodes, therefore the computational task is manageable.

Verify once that the rewrites are well done, independently of the situation where you want to apply them, that is all.

Once such molecular compounds are found, the next task is to figure out how to build (by chemical reactions) such molecules.

But once one succeeds in building one molecule, the rest is left to Nature’s way of computing: random, local, asynchronous.

From this stage there is no need to simulate huge molecules in order to know they work. That is something given by the chemlambda formalism.

It is so simple: translate the rewrites into real chemistry (they are easy), then let go of the unneeded control from that point on.

This animation is a screencast of a part of the article Molecular computers
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html
and everything can be validated (i.e. verified on your own) by using the chemlambda repository
https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic

Now I’ll pass to a list of criticisms which, faced with the evidence, look uninformed:
1. Chemlambda is one of those rewriting systems everybody knows. An ignorant claim: while it is true that some rewrites appear all over the place, from string theory to knot theory to category theory to the geometry of interaction, the family of graphs considered is not the same, because these graphs are purely combinatorial objects and they don’t need a global embedding in a schematic space-time, as all the other formalisms do. Moreover, the choice of the rewrites is such that the system works only by local rewriting, with no global control over the cascade of rewrites. No other formalism in the family does that.

2. It is well known that all this is already done in the category theory treatment of lambda calculus.

False. If one really reads what they do in category theory with lambda calculus, one quickly figures out that they can’t do much for untyped lambda beta calculus, that is, without eta reduction. This is mentioned explicitly in Barendregt, for example, but the hype around categories and lambda calculus is so pervasive that people believe more than what is actually there.

3.  Chemical computing is old stuff: DNA computing, membrane computing, the chemical abstract machine, algorithmic chemistry.

Just because it is chemical computing, it does not mean that it is in the family mentioned.

The first name of chemlambda was “chemical concrete machine” and there are comparisons with the chemical abstract machine
http://arxiv.org/abs/1309.6914
(btw I see that some people now discover “catalysts” without credits in the written papers)
The cham is a formalism working with multisets of molecules, not with individual ones, and the computation is done by what corresponds to lab operations (splitting a solution in two, heating, cooling, etc.).
The membrane computing work is done around membranes which enclose containers of multisets of molecules, the membranes themselves being abstract concepts of a global nature, while in reality, as well as in chemlambda, everything is a molecule. Membranes exist in reality, but they are made of many molecular compounds.
DNA computing is an amazing research subject, which may be related to chemlambda if there is a realization of chemlambda nodes, ports and bonds, but not otherwise, because there is not, to my knowledge, any model in DNA computing with the properties: individual molecules, random reactions, no lab operations.
Algorithmic chemistry is indeed very much related to chemlambda, by the fact that it proposes a chemical view on lambda calculus. But from this great insight, the paths are very different. In algorithmic chemistry the application operation from lambda calculus represents a chemical reaction and the lambda abstraction signals a reactive site. In chemlambda the application and the lambda abstraction correspond to atoms of molecules. Besides, chemlambda is not restricted to lambda calculus: only some of the chemlambda molecules can be put in relation with lambda terms, and even for those, the reactions they enter don’t guarantee that the result is a molecule for a lambda term.

Conclusion: if you are a chemist, consider chemlambda, there is nothing like it already proposed. The new idea is to let control go and instead chain the randomly appearing reactions by their spatial patterns, not by lab operations, nor by impossibly sophisticated simulations.
Even if in reality there would be more constraints (coming from the real spatial shapes of the molecules constructed from these fundamental bricks) this would only influence the weights of the random encounters with the enzymes, thus not modifying the basic formalism.
And if it works in reality, even for only situations where there are cascades of tens of reactions, not hundreds or thousands, even that would be a tremendous advance in chemical computing, when compared with the old idea of copying boolean gates and silicon computers circuits.

______________________________________

Appeared also in the chemlambda collection microblog

______________________________________

# What if… it can be done? An update of an old fake news post

In May 2014 I made a fake news post (with the tag WHAT IF) called Autodesk releases Seawater. It was about this big name who just released a totally made up product called Seawater.

“SeaWater is a design tool for the artificial life based decentralized Internet of Things.”

Featured in the post is this picture

[source]

… and I wrote:

“As well, it could be  just a representation of the state of the IoT in a small neighbourhood of you, according to the press release describing SeaWater, the new product of Autodesk.”

Today I want to show you this:

or better go and look in fullscreen HD this video

The contents is explained in the post from the microblogging collection chemlambda

27 microbes. “This is a glimpse of the life of a community of 27 microbes (aka chemlambda quines). Initially the graph has 1278 nodes (atoms) and 1422 edges (bonds). There are hundreds of atoms refreshed and bonds made and broken at once.”

Recall that all this is done with the most simple algorithm, which turns chemlambda into an asynchronous graph rewrite automaton.

A natural development would be to go further, exactly like described in the Seawater post.

Because it can be done 🙂

_________________________________

# The Internet can be your pet

or you could have a pet which embeds and runs a copy of the whole Internet.

The story in this post starts from this exploratory posting on Google plus from June 2nd, 2015, which zooms from sneakernet to delay-tolerant networking to Interplanetary Internet to nanonetworks to DNA digital data storage to molecular computers.

I’ll reproduce the final part, then I’ll pass to the Internet-as-your-pet thing.

“At this stage things start to be interesting. There is this DNA digital data storage technique
http://en.wikipedia.org/wiki/DNA_digital_data_storage
which made the news recently by claiming that the whole content of the Net fits into a spoon of DNA (prepared so to encode it by the technique).

I have not been able to locate the exact source of that claim, but let’s believe it because it sounds reasonable (if you think at the scales involved).

It can’t be the whole content of the net; it must mean the whole passive content of the net. Data. An instant (?) picture of the data, no program execution.

But suppose you have that spoonful of DNA, how do you use it? Or what about also encoding, at the molecular level, the computers which use this data?

You know, like in the post about one Turing Machine, Two Turing Machines https://plus.google.com/+MariusBuliga/posts/4T19daNatzt
if you want classical computers running on this huge DNA tape.

Or, in principle, you may be able to design a molecular google search … molecule, which would interact with the DNA data to retrieve some piece of it.

Or you may just translate all programs running on all computers from all over the world into lambda calculus, then turn them into chemlambda molecules, maybe you get how much, a cup of molecular matter?

Which, attention:
– it executes as you look at it
– you can duplicate it into two cups in a short matter of time, in the real world
– which makes the sneakernet simply huge relative to the virtual net!

Which brings of course molecular computers proposal to the fore
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html  ”

Let’s develop this a bit! (source)

The following are projections about the possible future of biochemical computations with really big data: the whole, global data produced and circulated by humans.

They are based on estimates of the information content in the biosphere from the article [1] and on a proposal for life like molecular computing.

Here are the facts. In [1] there are given estimates about the information content of DNA in the biosphere, which are used further.

One estimate is that there are about 5 X 10^10 tonnes of DNA, in a biomass of about 2 X 10^12 tonnes, which gives a proportion of DNA in biomass of about 1/40.

This can be interpreted as: in order to run the biochemical computations within 1g of DNA, about 40g of biochemical machinery are needed.

From the estimate that the biomass contains about 5 X 10^30 cells, it follows that 4g of DNA are contained (and thus run in the biochemical computation) in 10^13 cells.

The Internet has about 3 X 10^9 computers and the whole data stored is equivalent with about 5g of DNA [exact citation not yet identified, please provide a source].

Based on comparisons with the Tianhe-2 supercomputer (which has about 3 X 10^6 cores), it follows that the whole Internet processes the equivalent, in order of magnitude, of 10^3 such supercomputers.
From [1] (and from the rather dubious equivalence of FLOPS with NOPS) we get that the whole biosphere has a power of 10^15 X 10^24 NOPS, which gives for 10^13 cells (the equivalent of 4g of DNA) about 10^17 NOPS. This shows that the power of the biochemical computation of 4g of DNA (embedded in biochemical machinery of about 160g) is approximately of the same order as the power of computation of the whole Internet.

Conclusion until now: the whole Internet could be run in a “pet” living organism of about 200g. (Comparable to a rat.)

This conclusion holds only if there is a way to map silicon and TM based computers into biochemical computations.

There is a huge difference between these two realms, which comes from the fact that the Internet and our computers are presently built as a hierarchy, with multiple levels of control, while in the same time the biochemical computations in a living cell do not have any external control (and there is no programming).

It is therefore hard to understand how to map the silicon and TM based computations (which run one of the many computation paradigms embedded into the way we conceive programming, as a discipline of hierarchical control) into a decentralized, fully asynchronous biochemical computation in a random environment.

But this is exactly the proposal made in [2], which shows that in principle this can be done.

The details: in [2] an artificial chemistry is proposed (instead of the real world chemistry), together with a model of computation which satisfies all the requirements of biochemical computations.
(See very simple examples of such computations in the chemlambda collection https://plus.google.com/u/0/collection/UjgbX )

The final conclusion, at least for me, is that provided there is a way to map this (very basic) artificial chemistry into real chemical reactions, then one day you might have the whole Internet as a copy which runs in your pet.

[1] An Estimate of the Total DNA in the Biosphere,
Hanna K. E. Landenmark,  Duncan H. Forgan,  Charles S. Cockell,
http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002168

[2] Molecular computers,
Marius Buliga, 2015
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

# Crossings as pairs of molecular bonds; boolean computations in chemlambda

I collect here the slightly edited versions of a stream of posts on the subject from the microblogging chemlambda collection. Hopefully this post will give a clearer image of this thread.

Here at the chorasimilarity open notebook, the subject has been touched on previously, but perhaps the handmade drawings made it harder to understand (than with the modern 🙂 technology from the chemlambda repository):

Teaser: B-type neural networks in graphic lambda calculus (II)

(especially the last picture!)

All this comes with validation means. This is a very powerful idea: in the future validation will replace peer review, because it is more scientific than hearsay from anonymous authority figures (aka old peer review) and because it is simpler to implement than a network of hearsay comments (aka open peer review).

All animations presented here are obtained by using the script quiner.sh and various mol files. See instructions about how you can validate (or create your own animations) here:

Here starts the story.

(Source FALSE is the hybrid of TRUE, boole.mol file used)

Church encoding gives a way to define boolean values as terms in the lambda calculus. It is easy:

TRUE= Lx.(Ly.x)

FALSE= Lx.(Ly.y)

So what? When we apply one of these terms to two other, arbitrary terms X and Y, look what happens (arrows are beta reductions, (Lx.A)B -> A[x:=B]):

(TRUE X) Y -> (Ly.X) Y -> X (meaning Y goes to the garbage)

(FALSE X) Y -> (Ly.y) Y -> Y (meaning X goes to the garbage)

It means that TRUE and FALSE select a way for X and Y: one of them survives, the other disappears.
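The selection can be checked directly, writing the lambda terms as Python lambdas:

```python
# Church booleans, written as Python lambdas.
TRUE = lambda x: lambda y: x   # Lx.(Ly.x)
FALSE = lambda x: lambda y: y  # Lx.(Ly.y)

X, Y = "X", "Y"

print(TRUE(X)(Y))   # X  (Y goes to the garbage)
print(FALSE(X)(Y))  # Y  (X goes to the garbage)
```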

Good: this selection is an essential ingredient for computation

Bad: too wasteful! why send a whole term to the garbage?

Then, we can see it otherwise: there are two outcomes, S(urvival) and G(arbage), and there is a pair of terms X and Y.

– TRUE makes X to connect with S and Y to connect with G

– FALSE makes X to connect with G and Y to connect with S

The terms TRUE and FALSE appear as molecules in chemlambda, each one made of two red nodes (lambdas) and a T (termination) node. But we may dispense with the T nodes, because they lead to waste, and keep only the lambda nodes. So in chemlambda the TRUE and FALSE molecules are, each, made of two red (lambda) nodes and they have one FROUT (free out).

They look almost alike, only they are wired differently. We want to see what it looks like to apply one term to X and then to Y, where X and Y are arbitrary. In chemlambda, this means we have to add two green (A) application nodes, so TRUE or FALSE applied to some arbitrary X and Y appears, each, as a 4 node molecule, made of two red (lambda) and two green (A) nodes, with two FRIN (free in) nodes corresponding to X and Y and two FROUT (free out) nodes: one corresponding to the deleted termination node, thus the G(arbage) outcome, and the other to the “output” of the lambda term, thus the S(urvival) outcome.

But the configuration made of two green A nodes and two red L nodes is the familiar zipper which you can look at in this post

In the animation you see TRUE (at left) and FALSE (at right), with the magenta FROUT nodes and the yellow FRIN nodes.

The zipper configurations are visible as the two vertical strings made of two green, two red nodes.

What’s more? Zippers, they do only one thing: they unzip.

The wiring of TRUE and FALSE is different. You can see the TRUE and FALSE in the lower part of the animation. I added four Arrow (white) nodes in order to make the wiring more visible. Arrow nodes are eliminated in the COMB cycle, they have only a fleeting existence.

This shows what is really happening: look at each (TRUE-left, FALSE-right) configuration. In the upper side you have 4 nodes, two magenta, two yellow, which are wired together at the end of the computation. In the case of TRUE they end up wired in an X pattern, in the case of FALSE they end up wired in a = pattern.

At the same time, in the lower side, before the start of the computation, you see the 4 white nodes which, in the case of TRUE, are wired in an X pattern and, in the case of FALSE, in a = pattern. So what is happening is that the pattern (X or =) is teleported from the 4 white nodes to the 4 magenta-yellow nodes!

The only difference between the two molecules is in this wire pattern, X vs =. But one is the hybrid of the other: hybridisation is the operation (akin to the product of knots) which was explained in the post about senescence and used again in more recent posts. You just take a pair of bonds and switch the ends. Therefore TRUE and FALSE are hybrids of each other.
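The bond-switching operation is simple enough to sketch on edge lists. In this toy Python fragment (the function name and port names are mine, not from the chemlambda sources) a wiring is a list of bonds, and hybridisation swaps the ends of two chosen bonds, turning a = pattern into an X pattern:

```python
# A wiring is a list of bonds; each bond is a pair of port names.
# Hybridisation: take a pair of bonds and switch the ends.
def hybridise(bonds, i, j):
    (a, b), (c, d) = bonds[i], bonds[j]
    out = list(bonds)
    out[i], out[j] = (a, d), (c, b)
    return out

# '=' pattern: two parallel bonds. After hybridisation: an 'X' pattern.
equal = [("in1", "out1"), ("in2", "out2")]
cross = hybridise(equal, 0, 1)
print(cross)  # [('in1', 'out2'), ('in2', 'out1')]
```

Note that applying the operation twice gives back the original wiring, consistent with TRUE and FALSE being hybrids of each other.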

(Source: Boolean wire, boolewire.mol file used)

If you repeat the pattern which is common to the TRUE and FALSE molecules then you get a boolean wire, which is a more impressive “crossings teleporter”. This time the boxed crossings have been flattened, but the image is clear:

Therefore, TRUE and FALSE represent choices of pairs of chemical bonds! Boolean computation (as seen in chemlambda) can be seen as management of promises of crossings.

(Source: Promises of crossings, ifthenelsecross.mol file used)

You see 4 configurations of 4 nodes each, two magenta and two yellow.

In the upper left side corner is the “output” configuration. Below it and slightly to the right is the “control” configuration. In the right side of the animation there are the two other configurations, stacked one over the other, call them “B” (lower one) and “C” (upper one).

Connecting all this there are nodes A (application, green) and L (lambda, red).

You see a string of 4 green nodes, approximately vertical, in the middle of the picture, and a “bag” of nodes, red and green, in the lower side of the picture. This is the molecule for the lambda term

IFTHENELSE = L pqr. pqr

applied to the “control” then to the “B” then to the “C”, then to two unspecified “X” and “Y” which appear only as the two yellow dots in the “output” configuration.

After reductions we see what we get.

Imagine that you put in each 4 nodes configuration “control”, “B” and “C”, either a pair of bonds (from the yellow to the magenta nodes) in the form of an “X” (in the picture), or in the form of a “=”.

“X” is for TRUE and “=” is for FALSE.

Depending on the configuration from “control”, one of the “B” or “C” configurations will form, together with its remaining pair of red nodes, a zipper with the remaining pair of green nodes.

This will have as an effect the “teleportation” of the configuration from “B” or from “C” into the “output”, depending on the crossing from “control”.

You can see this as: based on what “control” senses, the molecule makes a choice between “B” and “C” promises of crossings and teleports the choice to “output”.
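The choice mechanism is the usual Church-encoded conditional, IFTHENELSE = L pqr. pqr, given above. A small Python sketch (with strings "B" and "C" as stand-ins for the two promise configurations) shows how the control boolean selects which branch reaches the output:

```python
# Church booleans: TRUE selects its first argument, FALSE its second.
TRUE = lambda x: lambda y: x    # the "X" crossing
FALSE = lambda x: lambda y: y   # the "=" crossing

# IFTHENELSE = λp.λq.λr. p q r : the control p picks q ("B") or r ("C").
IFTHENELSE = lambda p: lambda q: lambda r: p(q)(r)

print(IFTHENELSE(TRUE)("B")("C"))   # control TRUE: "B" goes to the output
print(IFTHENELSE(FALSE)("B")("C"))  # control FALSE: "C" goes to the output
```

In the molecule, this selection is what unzipping one of the two zippers achieves: the chosen configuration is “teleported” to the output.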

(Source: Is there something in the box?, boxempty.mol used)

I start from the lambda calculus term ISZERO and then I transform it into a box-sensing device.

In lambda calculus the term ISZERO has the expression

ISZERO = L a. ((a (Lx.FALSE)) TRUE)

and it has the property that ISZERO N reduces either to TRUE (if N is the number 0) or FALSE (if N is a number other than 0, expressed in the Church encoding).

The number 0 is
0 = FALSE = Lx.Ly.y

For the purpose of this post I take also the number 2, which is in lambda calculus

2=Lx.Ly. x(xy)

(which means that x is applied twice to y)

Then, look: (all reductions are BETA: (Lx.A)B – – > A[x:=B] )

ISZERO 0 =
(L a. ((a (Lx.FALSE)) TRUE) ) (Lx.Ly.y) – – >
((Lx.Ly.y) (Lx.FALSE)) TRUE – – >
(Ly.y)TRUE – – > (remark that Lx.FALSE is sent to the garbage)
TRUE (and the number itself is destroyed in the process)

and

ISZERO 2 =
(L a. ((a (Lx.FALSE)) TRUE) ) (Lx.Ly. x(xy)) – – >
((Lx.Ly. x(xy)) (Lx.FALSE)) TRUE – – > (fanout of Lx.FALSE performed secretly)
(Lx.FALSE) ((Lx.FALSE) TRUE) – – >
FALSE ( and (Lx.FALSE) TRUE sent to the garbage)

Remark that in the two cases there was the same number of beta reductions.

Also, the use of TRUE and FALSE in the ISZERO term is… none! The same reductions would have been performed with an unspecified “X” as TRUE and an unspecified “Y” as FALSE.

(If I replace TRUE by X and FALSE by Y then I get a term which reduces to X if applied to 0 and reduces to Y if applied to a non zero Church number.)
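The reductions above can be checked directly by encoding the terms as Python lambdas (same Church encodings as in the post; `as_bool` and `PICK` are helper names of mine, used only to read off the results):

```python
TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y           # also the Church numeral 0
TWO = lambda x: lambda y: x(x(y))       # 2 = λx.λy. x(xy)

# ISZERO = λa. ((a (λx.FALSE)) TRUE)
ISZERO = lambda a: a(lambda x: FALSE)(TRUE)

# Read a Church boolean off as a Python bool.
def as_bool(b):
    return b(True)(False)

print(as_bool(ISZERO(FALSE)))  # True: 0 is zero
print(as_bool(ISZERO(TWO)))    # False: 2 is not zero

# The X/Y remark: the same skeleton with unspecified X and Y
# in place of TRUE and FALSE still makes the right choice.
PICK = lambda a: a(lambda _: "Y")("X")
print(PICK(FALSE))  # X: applied to 0
print(PICK(TWO))    # Y: applied to a non-zero Church numeral
```

The `PICK` variant makes the point of the parenthetical above concrete: TRUE and FALSE play no role in ISZERO beyond being two distinguishable outputs.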

Of course we can turn all this into chemlambda reductions, but in chemlambda there is no garbage and, moreover, I want to make the crossings visible. But then, where are the crossings, if they don’t come from TRUE and FALSE (because it should work with X instead of TRUE and Y instead of FALSE)?

Alternatively, let’s see (a slight modification of) the ISZERO molecule as a device which senses whether there is a number equal to or different from 0, then transforms, according to the two cases, into an X crossing or a = crossing.

Several slight touches are needed for that.

1. Numbers in chemlambda appear as stairs of pairs of FO (fanout, green) and A (application, green) nodes, as many stairs as the number which is represented. The stairs are wrapped into two L (lambda, red) nodes and their bonds.
We can slightly modify this representation so that it appears like a box of stairs with two inputs and two outputs, by adding a dangling green (A, application) node with its output connected to one of its inputs (this makes no sense in lambda calculus, but behaves well in the beta reductions as performed in chemlambda).

In the animation you can see, in the lower part of the figure:
-at left the number 0 with an empty box (there are two Arrow (white) nodes added for clarity)
-at right the number 2 with a box with 2 stairs
… and in each case there is this dangling A node (code in the mol file of the form A z u u)

2. The ISZERO molecule is modified so as to have two FRIN (free in, yellow) and two FROUT (free out, magenta) nodes which will be involved in the final crossing(s). This is done by a clever (I hope) change of the translation of the ISZERO molecule into chemlambda: the two yellow FRIN nodes represent the “X” and the “Y” (which, recall, replace FALSE and TRUE), and a FOE (another fanout node, yellow) and a FI (fanin node, red) are added in strategic places.

________________________________

# ArXiv is 3 times bigger than all megajournals taken together

How big are the “megajournals” compared to arXiv?
I use data from the article

[1] Have the “mega-journals” reached the limits to growth? by Bo-Christer Björk https://dx.doi.org/10.7717/peerj.981 , table 3

and the arXiv monthly submission rates [2].

To have a clear comparison I shall look at the window 2010-2014.

Before showing the numbers, there are some things to add.

1. I saw the article [1] via the post

[3] Have we reached Peak Megajournal? http://svpow.com/2015/05/29/have-we-reached-peak-megajournal/

I invite you to read it, it is interesting as usual.

2. Usually, the activity of counting articles is that dumb thing which managers hide behind in order not to be accountable for their decisions.
Counting articles is a very lossy compression technique, which associates to an article a very small number of bits.
I indulged in this activity because of the discussions from the G+ post

and its clone

[4′] Eisen’ “parasitic green OA” is the apt name for Harnad’ flawed definition of green OA, but all that is old timers disputes, the future is here and different than both green and gold OA https://chorasimilarity.wordpress.com/2015/05/28/eisen-parasitic-green-oa-is-the-apt-name-for-harnad-flawed-definition-of-green-oa-but-all-that-is-old-timers-disputes-the-future-is-here-and-different-than-both-green-and-gold-oa/

These discussions made me realize that the arXiv model is carefully edited out from reality by the creators and core supporters of green OA and gold OA.

[see more about in the G+ variant of the post https://plus.google.com/+MariusBuliga/posts/RY8wSk3wA3c ]
Now, let’s see those numbers. Just how big is that arXiv thing compared to “megajournals”?

From [1]  the total number of articles per year for “megajournals” is

2010: 6,913
2011: 14,521
2012: 25,923
2013: 37,525
2014: 37,794
2015: 33,872

(for 2015 the number represents  “the articles published in the first quarter of the year multiplied by four” [1])

ArXiv: (based on counting the monthly submissions listed in [2])

2010: 70,131
2011: 76,578
2012: 84,603
2013: 92,641
2014: 97,517
2015: 100,628 (by the same procedure as in [1])

This shows that arXiv is 3 times bigger than all the megajournals taken together, despite the fact that:
– it is not a publisher
– it does not ask for APCs
– it covers fields far less attractive and prolific than the megajournals.
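The “3 times” claim can be checked directly from the two tables above, over the comparison window 2010-2014:

```python
# Articles per year, 2010-2014, from table 3 of [1] and the arXiv
# monthly submission counts [2].
mega = [6913, 14521, 25923, 37525, 37794]     # megajournals
arxiv = [70131, 76578, 84603, 92641, 97517]   # arXiv

total_mega, total_arxiv = sum(mega), sum(arxiv)
print(total_mega)    # 122676
print(total_arxiv)   # 421470
print(round(total_arxiv / total_mega, 2))  # 3.44
```

So over 2010-2014 the ratio is in fact a bit above 3.4.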

And that is because:
– arXiv answers a real demand from researchers: to communicate their work fast and reliably to their fellows, in a way which respects their authorship
– it is also a reaction of support for what most of them think green OA means, namely to put their work where it is away from the publishers’ locks.

_____________________________________

# Eisen’ “parasitic green OA” is the apt name for Harnad’ flawed definition of green OA, but all that is old timers disputes, the future is here and different than both green and gold OA

See this post and the replies on G+ at [archived post].

My short description of the situation: the future is here, and it is not gold OA (nor the flawed green OA definition which ignores arXiv). So, visually:

It has never occurred to me that putting an article in a visible place (like arXiv.org) is “parasitic green OA”. Eisen calls it parasitic because he supposes that this has to come along with the real publication. But what if not?

[Added: Eisen writes in the body of the post that he uses the definition Harnad gave to green OA, which ignores reality. It is very convenient for gold OA to have a definition of green OA which does not apply to the oldest (1991) and fully functional example of a research communication experiment which is OA and green: arXiv.org.]
Then, compared to that, gold OA appears as progress.
http://www.michaeleisen.org/blog/?p=1710

I think gold OA, in the best of cases, is a waste of money for nothing.

A more future-oriented reply is
http://svpow.com/2015/05/26/green-and-gold-the-possible-futures-of-open-access/
which sees two possible futures, green (without the assumption from Eisen’s post) and gold.

I think that the future comes faster. It is already here.

Relax. Try validation instead of peer review. It is more scientific.

Definition. Peer-reviewed article: published by the man who saw the man who claims to have read it, but does not back the claim with his name.

The reviewers are not supermen. They use the information from the traditional article. The only thing they are supposed to do is to read it. This is what they use to give their approval stamp.

Validation means that the article provides enough means so that the readers can reproduce the research by themselves. This is almost impossible with an article in the format inherited from the time when it was printed on paper. But when the article is replaced by a program which runs in the browser, which uses databases, simulations, whatever means facilitate the validation, then the reader can, if he so wishes, form a scientifically motivated opinion about it.

Practically the future has come already and we see it on Github. Today. Non-exclusively. Tomorrow? Who knows?

Going back to the green-gold OA dispute, and to Elsevier’s recent change of its policy on sharing and hosting articles (which of course should have been the real subject of discussion, instead of waxing poetic about OA, only a straw man).

This is not even interesting. The discussion about OA revolves around who has the copyright and who pays (for nothing).

I would be curious to see discussions about DRM, who cares who has the copyright?

But then I realised that, as I wrote at the beginning of the post, the future is here.

Here to invent it. Open for everybody.

I took the image from this post and modified the text.

_____________

Don’t forget to read the replies from the G+ post. I archived this G+ post because the platform went down. Read here why I deleted the chemlambda collection from G+.

____________________________________________________

# Real or artificial chemistries? Questions about experiments with rings replications

The replication mechanism for circular bacterial chromosomes is known. There are two replication forks which propagate in two directions, until they meet again somewhere and the replication is finished.

[source, found starting from the wiki page on circular bacterial chromosomes]

In the artificial chemistry chemlambda something similar can be done. This leads to some interesting questions. But first, here is a short animation which describes the chemlambda simulation.

The animation has two parts, where the same simulation is shown. In the first part some nodes are fixed, in order to ease the observation of the propagation of the replication, which is like the propagation of the replication forks. In the second part no node is fixed, which makes it easy to notice that eventually we get two ring molecules from one.

____________

If the visual evidence convinced you, then it is time to go to the explanations and questions.

But notice that:

• The replication of circular DNA molecules is done with real chemistry
• The replication of the circular molecule from the animation is done with an artificial chemistry model.

The natural question to ask is: are these chemistries the same?

The answer may be more subtle than a simple yes or no. As more visual food for thinking, take a look at a picture from the Nature Nanotechnology Letter “Self-replication of DNA rings” http://www.nature.com/nnano/journal/vaop/ncurrent/full/nnano.2015.87.html by Junghoon Kim, Junwye Lee, Shogo Hamada, Satoshi Murata & Sung Ha Park

[this is Figure 1 from the article]

This is a real ring molecule, made of patterns which themselves are made of DNA. The visual similarity with the start of the chemlambda simulation is striking.

But this DNA ring is not a DNA ring as in the first figure. It is made by humans, with real chemistry.

Therefore the boldfaced question can be rephrased as:

Are there real chemical assemblies which react as if they were nodes and bonds of the artificial chemistry?

Like actors in a play, there could be a real chemical assembly which plays the role of a red atom in the artificial chemistry, another real chemical assembly which plays the role of a green atom, another for a small atom (called “port”) and another for a bond between these artificial chemistry atoms.

From one side, this is not surprising: for example, a DNA molecule is represented as a string of letters A, C, T, G, but each letter stands for a real molecule. Take A (adenine)

[source]

Likewise, each atom from the artificial chemistry (like A (application), L (lambda abstraction), etc) could be embodied in real chemistry by a real molecule. (I am not suggesting that the DNA bases are to be put into correspondence with artificial chemistry atoms.)

Similarly, there are real molecules which could play the role of bonds. As an illustration (only), I’ll take deoxyribose, which is involved in the backbone structure of a DNA molecule.

[source]

So it turns out that it is not so easy to answer the question, although for a chemist it may be much easier than for a mathematician.

___________

0. (a few words about validation) If you have the alternative to validate what you read, then that is preferable to authority statements or hearsay from editors. Most often they use anonymous peer-reviews which are unavailable to the readers.

Validation means that the reader can form an opinion about a piece of research by using the means which the author provides. Of course, if the means are not enough for the reader, then it is the author who takes the blame.

The artificial chemistry animation has been done by screen recording of the result of a computation. As the algorithm is random, you can produce another result by following these instructions.
I used the mol file model1.mol and the script quiner.sh. The mol file contains the initial molecule, in the mol format. The script quiner.sh calls the program quiner.awk, which produces an html and javascript file (called model1.html), which you can see in a browser.

I  added text to such a result and made a demo some time ago

http://chorasimilarity.github.io/chemlambda-gui/dynamic/model1.html

(when? look at the history in the github repo, for example: https://github.com/chorasimilarity/chemlambda-gui/commits/gh-pages/dynamic/model1.html)

1. Chemlambda is a model of computation based on artificial chemistry which is claimed to be very close to the way Nature computes (chemically), especially when it comes to the chemical basis of computation in living organisms.
This is a claim which can be validated through examples. This is one of them. There are many others; the chemlambda collection shows some of them in a way which is hopefully easy to read (and validate).
A stronger claim, made in the article Molecular computers (link below), is that chemlambda is real chemistry in disguise, meaning that there exist real chemicals which react according to the chemlambda rewrites and also according to the reduction algorithm (which does random rewrites, as if produced by random encounters of the parts of the molecule with invisible enzymes, one enzyme type per rewrite type).
This claim can be validated by chemists.
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

2. In the animation the duplication of the ring molecule is achieved, mostly, by the propagation of chemlambda DIST rewrites. A rewrite from the DIST family typically doubles the number of nodes involved (from 2 to 4 nodes), which vaguely suggests that DIST rewrites may be achieved by a DNA replication mechanism.
(List of rewrites here:
http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html )
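The node-doubling shape of a DIST rewrite can be sketched abstractly. The toy Python fragment below is mine, not the chemlambda implementation: it models a molecule as a mere list of node types, ignoring the port wiring entirely, and shows only the 2-nodes-in, 4-nodes-out bookkeeping of the rewrite:

```python
# Toy sketch: a DIST-shaped rewrite replaces an adjacent FO-A pair
# (2 nodes) by two FO and two A nodes (4 nodes). Port wiring omitted.
def dist_step(nodes):
    for i in range(len(nodes) - 1):
        if nodes[i] == "FO" and nodes[i + 1] == "A":
            return nodes[:i] + ["FO", "A", "FO", "A"] + nodes[i + 2:]
    return nodes  # no left pattern found: nothing happens

mol = ["L", "FO", "A", "L"]
print(len(mol))             # 4 nodes before
print(len(dist_step(mol)))  # 6 nodes after: the pair was doubled
```

Repeated application of such a step along the ring is the propagation you see in the animation, analogous to a replication fork moving along the chromosome.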

So, from my point of view, the question I have is: are there DNA assemblies for the nodes, ports and bonds of chemlambda molecules?

3. In the first part of the animation you see the ring molecule with some (free in FRIN and free out FROUT) nodes fixed. Actually you may attach more graphs to the free nodes (yellow and magenta 1-valent nodes in the animation).

You can clearly see the propagation of the DIST rewrites. In the process, if you look closely, there are two nodes which disappear. Indeed, the initial ring has 9 nodes, while the two copies have 7 nodes each. That is because the site where the replication is initiated (made of two nodes) is not replicated itself. You can see the traces of it as a pair of bonds which connect, each, a free in with a free out node.

In the second part of the animation, the same run is repeated, this time without fixing the FRIN and FROUT nodes before the animation starts. Now you can see the two copies, each with 7 nodes, and the remnants of the initiation site, as two pairs of FRIN-FROUT nodes.

_________________________________________