Category Archives: Uncategorized

Quarantine garden

During these two months I spent a lot of time in a garden. A year ago there was nothing there. I worked the few patches of earth, threw away the debris and started to plant things. This year the efforts are beginning to show.

 

There are now roses and jasmine, a small grapevine,

IMG_3869

and a lot of ivy, downtown in a big city

IMG_3859

I also made some garden drawings

IMG_3877

 

IMG_3875

Lots of things waiting to grow.

 

Later:

IMG_4010

IMG_4005

IMG_3998

 

Quine graphs (3), ouroboros, hapax and going public

Several news:

I decided that I’m going to go public progressively, with a combination of arXiv, Github and Zenodo (or Figshare), and publication. But there is a lot of material I have to publish, which is why this will happen progressively. It should be nice to watch, because it is interesting, for me at least, to answer the question:

What the … does a researcher do when publishing? What is it for? Why?

Seriously, the questions are not at all directed against classical publication, nor are they biased against OA. When you publish serially, like a researcher, you often tell a story again and again, a story which evolves in time. To make a comparison, it is like a sequence of frames in a movie.

Only it is not that simple. It is not quite like a sequence of frames; it is like a sequence of pictures, each one with its repeating tags, again and again.

Not at all compressed. And not at all like an evolving repository of programs which get better with time.

What is a chemlambda quine?

UPDATE 4: See Interaction combinators and Chemlambda quines.

UPDATE 3: I made a landing page for my pages to play and learn.

UPDATE 2: And now there is Fractalize!

UPDATE: The most recent addition to the material mentioned in the post is Find a Quine, which lets you generate random 10-node graphs (there are 9 billion of them) and search for new quines. They are rare, but today I found 3 (two of them are shown as examples). If you find one, mail me the code (instructions on the page).

__

The ease of use of the recently written chemlambda.js makes it easier to share past ideas (from the chemlambda collection) as well as new ones.

Here is some material and some new thoughts. Before this, just recall that the *new* work is in hapax. See what chemlambda has to do with hapax, especially towards the end.

A video tutorial about how to use the rest of the new demos.

The story of the first chemlambda quine, deduced from the predecessor of a Church number. Especially funny is that this time you are not watching an animation; it happens in front of you 🙂

More quines and random eggs, if you want to go further into the subject of chemlambda quines. The eggs are 4-node graphs (there are 720 of them). They manifest an amazing variety of behaviour. I think the most interesting point is that there are quines, and there are also graphs which have a reversible evolution without being quines. Indeed, in chemlambda a quine is a graph which has a periodic evolution (thus is reversible) under the greedy algorithm of rewrites. But there is also the reversible but not quine case, where you can reverse the evolution of a graph by picking a suitable sequence of rewrites.
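To make the definition concrete, here is a minimal sketch in Python of the periodicity test it suggests. This is not the chemlambda.js code: the greedy_step function, standing for one deterministic pass of greedy rewrites returning the new graph in a canonical hashable form, is an assumption of mine.

def is_quine_candidate(graph, greedy_step, max_steps=1000):
    # Detect a periodic evolution under a deterministic greedy rewrite step.
    # graph: hashable canonical representation of the molecule (assumed).
    # Returns the period if the evolution revisits a state, else None.
    seen = {graph: 0}                 # state -> first step at which it appeared
    for t in range(1, max_steps + 1):
        graph = greedy_step(graph)
        if graph in seen:
            return t - seen[graph]    # periodic evolution: a quine candidate
        seen[graph] = t
    return None                       # no period detected within max_steps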

Finally, if you want to look at famous animations too, there is feed the quine. This contains some quines but also some other graphs which featured in the chemlambda collection.

Most of all, come back to see more, because I’m going to update and update…

Blockchain categoricitis 2, or life as an investor and a category theory fan

… or the unreasonable effectiveness of category theory in blockchain investments.

A year ago I wrote the post Blockchain categoricitis and now I see my prediction happening.

Categoricitis is the name of a disease which infects the predisposed fans of category theory, those who are not armed with powerful mathematical antibodies. Show them some diagrams from the height of your academic tower, tell them you have answers for real problems and they will believe.

Case in point: RChain. See Boom, bust and blockchain: RChain Cooperative’s cryptocurrency dreams dissolve into controversy.

Update: Epilogue? (28.02.2020)

Yes, just another cryptocurrency story… Wait a moment, this one is different, because it is backed by strong mathematical authority! You’ll practically see all the actors from the GeekWire story mentioned in the posts linked further.

Look:

Guestpost at John Baez blog: RChain (archived)

“Programmers, venture capitalists, blockchain enthusiasts, experts in software, finance, and mathematics: myriad perspectives from around the globe came to join in the dawn of a new internet. Let’s just say, it’s a lot to take in. This project is the real deal – the idea is revolutionary […]”

“RChain is light years ahead of the industry. Why? It is upholding the principle of correct by construction with the depth and rigor of mathematics.”

__________

Another one, in the same place: Pyrofex (archived). This is not a bombastic guestpost, it’s authored by Baez.

“Mike Stay is applying category theory to computation at a new startup called Pyrofex. And this startup has now entered a deal with RChain.”

Incidentally (but which fan reads everything?) in the same post Baez is candid about computation and category theory.

“When I first started, I thought the basic story would be obvious: people must be making up categories where the morphisms describe processes of computation.

But I soon learned I was wrong: […] the morphisms were equivalence classes of things going between data types—and this equivalence relation completely washed out the difference between, say, a program that actually computes 237 × 419 and a program that just prints out 99303, which happens to be the answer to that problem.

In other words, the actual process of computation was not visible in the category-theoretic framework.” [boldfaced by me]

(then he goes on to say that 2-categories are needed in fact, etc.)

In Applied Category Theory at NIST (archived) we read:

“The workshop aims to bring together two distinct groups. First, category theorists interested in pursuing applications outside of the usual mathematical fields. Second, domain experts and research managers from industry, government, science and engineering who have in mind potential domain applications for categorical methods.”

and we see an animation from the post  “Correct-by-construction Casper | A Visualization for the Future of Blockchain Consensus“.

________________________

I never trusted these ideas. I had interactions with some of the actors in this story (example) (another example), basically around distributed GLC. Between 2013 and 2015, instead of writing programs, the fans of GLC practically killed the distributed GLC project, because it was presented all the time in misleading terms of agents and processes, despite my dislike. Which made me write chemlambda, so eventually that was good.

[hype] GLC and chemlambda are sort of ideal Lisp machines which you can cut in half and they still work. But you have to give up semantics for that, which makes this description very different from the actual Lisp machines. [/hype]

 

 

Category theory is not a theory, here’s why: [updated]

Category theory does not make predictions.

This is a black and white formulation, so there certainly are exceptions. Feel free to contradict.

___________________________________________

UPDATE: As I’m watching Gromov on probability, symmetry, linearity, the first part:

I can’t stop noticing several things:

  • he repeatedly says “we don’t compute”, “we don’t make computations”
  • he rightly says that the classical mathematical notation hides the real thing behind it, for example by using numbers, sets, enumerations (of states, for example)
  • and he clearly thinks that category theory is a more evolved language than the classical one.

Yes, my opinion is that indeed the category theory language is more evolved than the classical one. But there is an even more evolved stage: computation theory made geometrical (or more symmetric, without the need for states, enumerations, etc).

Category theory is some kind of trap for those mathematicians who want to say that something is computable, or that something is, or should be, an algorithm, but don’t know how to say it correctly. Correctly means without the burden of external, unnatural baggage, like enumeration, naming, evaluations, etc. So they resort to the category theory language, because it allows them to abstract over sets, enumerations, etc.

There is, as yet, no fully geometrical version of computation theory.

What Gromov wants is to express himself in that ideal computation theory, but instead he only has category theory language to use.

Gromov computes and then he says this is not a computation.

Grothendieck, when he soaks the nut in the water, lets the water compute. He just builds a computer and lets it run. He reports the results; that’s what classical mathematical language permits.

That’s the problem with category theory: it does not compute, properly speaking, it just reports the results of computation.

___________________________________________

As concerns the real way humans use category theory…

Mathematicians use category theory as a tool, or as a notation, or as a thought discipline, or as an explanation style. Definitely useful for the informed researcher! Or a life purpose for a few minds.

All hype for the fans of mathematics, computer science or other sciences. To them, category theory gives the false impression of understanding. Deep inside, the fan of science (who does not want to, or have time to, understand anything of the subject) feels that all creative insights are based on a small repertoire of (apparently) simple tricks. Something that the fan can do, something which looks science-y, without the effort.

Then, there are the programmers, wonderful clever people who practice a new science and long for recognition from the classics 🙂 Category theory seems modular enough for them. A tool for abstraction, too, something they are trained in. And — why not admit it? — with that eternal polish of mathematics, but without the effort.

This is exploited cynically by good  public communicators with a creativity problem.  The recipe is: explain. Take an older, difficult creation, wash it with category theory and present it as new.

Kaleidoscope

Unexpectedly and somehow contrary to my fresh posting about my plans for 2019, during the week of Jan 7-12, 2019 a new project appeared, temporarily named Kaleidoscope. [Other names, until now: kaleidos, morphoo. Other suggestions?]

This post marks the appearance of the project in my log. I lost some time making a temporary graphical label for it:

chi-kai-s-min

I have the opinion that new, very promising projects need a name and a label, as much as an action movie superhero needs a punchline and a mask.

So what is the kaleidoscope? It is as much about mechanical computers (or physically embedded computation) as it is about graph rewrite systems, about space in the sense of emergent algebras, and about probabilities. It is a physics theory, a computation model and a geometry at the same time.

What more can I wish for, research-wise?

Yes, so it deserves to be tried and verified in all details, and this takes some time. I do hope that it will survive my bug hunt so that I can show it and submit it to your validation efforts.

 

Twitter lies: my long ago deleted account appears as suspended

9 months ago I deleted my Twitter account, see this post. Just now I looked to see if there are traces left. To my surprise I got the message:

“This account has been suspended. Learn more about why Twitter suspends accounts or return to your timeline.”

See for yourself: link.

This is a lie. I am furious that this company shows misleading information about me, long after I deleted my account.

Projects for 2019 and a challenge (updated at the end of 2019)

It’s almost the end of 2018, so I updated my expectations post from a year ago; you may find it interesting. Update (dec. 2019): And now I updated this post.

Now, here is a list of projects which are almost done on paper and which deserve attention or reserve some surprises for 2019. Then a challenge for you, dear creative reader.

  • I mentioned Hydrogen previously. This is a project to build a realistic hydrogen atom purely in software. This means that I need a theory (a lambda calculus-like one) for state spaces, then for quantum mechanics and finally for a hydrogen atom. [UPDATE: too early at the moment, maybe for 2020]
  • Space is of special interest (and needed to build hydrogen); a lambda calculus for space is proposed in the em project. Now I am particularly fascinated by numbers. [UPDATE: now there is anharmonic lambda and pure see, in the making]
  • The needs project is a bridge towards chemlambda. It is entirely written, in pieces; it is about permutation automata. Only the main routine is public. [UPDATE: this project morphed into hapax]
  • And what would life be without luck, aka computable probabilities? This is the least advanced project; for the moment it covers some parts of classical mechanics, but it is largely feasible and a pleasure to play with in the year to come. [UPDATE: see arXiv:1902.04598 and these slides about chemlambda as a hamiltonian system with dissipation]
  • [UPDATE: several other things happened, for example quine graphs]

 

I have a strong feeling that these projects look very weird to you, so I have a proposal and a challenge. The proposal for you is to ask for details. I am willing to give as much as (or perhaps more than) you need.

The challenge is the following.  As you know my banner is “can do anything”.  So let’s test it:

  • propose a subject of research where you are stuck. Or better, you want to change the world (in a good way).
  • I’ll do something about it as quickly as possible, if you get me interested.
  • Then I’ll ask for means. And for fairness.
  • Then we’ll do it to the best of our capacities.

Well, happy 2019, soon!

 

Open Science is rwx science

Preamble: this is a short text on Open Science, written a while ago, which I now put here. It is taken from this place at telegra.ph. The link (not the content) appeared here at the Chemlambda for the people post. I can’t find other traces, except the empty github repository “creat”, described as “framework for research output as a living creature“.

__________________

I am a big fan of Open Science. For me, a good piece of research is one which I can Read Write eXecute.

Researchers use articles to communicate. Articles are not eXecutable. I can either Read others’ articles or Write mine. I have to trust an editor who tells me that somebody else, whom I don’t know, read the article and made a peer-review.

No. Articles are stories told by researchers about how they did the work. And since the micromanagement era, they are even less: fungible units to be used in funding applications, by the number or by the keyword.

This is so strange. I’m a mathematician and you probably know that mathematics is the most economical way to explain something clearly. Take a 10-page research article. It contains the intensive work of many months. Now, compress the article even further by the following ridiculous algorithm: throw away everything but the first several bits. Keep only the title, the name of the journal, keywords, maybe the Abstract. That’s not science communication, that’s massive misuse of brain material.

So I’m an Open Science fan, what should I do instead of writing articles? Maybe I should push my article out in public and wait after that for somebody to review it. That’s called Open Access and it’s very good for the readers. So what? The article is still only Readable or Writable, pick only one option, otherwise it’s bad practice. What about my time? It looks like I have to wait and wait for all the bosses, managers, politicians and my fellow researchers to switch to OA first.

It’s actually much easier to do Open Science, remember: something that you can Read, Write and eXecute. As an author, you don’t have to wait for the whole society to leave the old ways and to embrace the new ones. You can just push what you did: stories, programs, data, everything. Any reader can pull the content and validate it, independently. EXecute what you pushed, Read your research story and Write derivative works.

I tried this! Want to know how to build a molecular computer which is indiscernible from how we are made? Use this playground called chemlambda. It’s a made up, simple chemistry. It works like the real chemistry does, that is locally, randomly, without any externally imposed control. My bet is that chemlambda can be done in real life. Now, or in a few years.

I use everything available to turn this project into Open Science. You name it: old form articles, html and javascript articles, research blog, Github repository, Figshare data repository, Google collection [update: deleted], this 🙂

Funny animations obtained from simulations. Those simulations can be run on your computer, so you can validate my research. Here’s what chemlambda looks like.

[Here come some examples and animations. ]

 

During this project I realized that it went beyond a Read Write Execute thing. What I did was to design many interesting molecules. They work by themselves, without any external control. Each molecule is like a theorem and the chemical evolution is the proof of the theorem, done by a blind, random, stupid, universal algorithm.

Therefore my Open Science attempt was to create molecules, some of them exhibiting a metabolism, some of them alive. Maybe this is the future of Open Science. To create a living organism which embodies in its metabolism the programs and research data. It’s valid if it lives, grows, reproduces, even dies. Let it cross-breed with other living creatures. In time, natural selection will do marvels. Life is not different from Science. Science is not different from life.

Authors: hodl your copyright or be filtered

For me this is the only sane reaction to the EU Copyright Directive. The only thing to do is to keep your copyright. Never give it to another. You can give non-exclusive rights of dissemination, but not the copyright of your work.

So: if you care about your piece of work then hodl the copyright; if you don’t care about it (produced it to satisfy a job demand, for example) then proceed as usual, it is trash anyway.

For my previous comments see this and this.

If you have other ideas then share them.

 

The second Statebox Summit – Category Theory Camp uses my animation

with attribution.

UPDATE: the post was initially written as a reaction to the fact that the Open Science project chemlambda needs attribution when some product related to it is used (in this case an animation obtained from a dodecahedron molecule which produces 4 copies; it works because it is a Petersen graph). As can be seen in the comments, everything was fixed with great speed, thank you Jelle. Here’s the new page look

Screenshot from 2018-09-09 15:18:06.png

Wishing the best to the participants, I’d like to learn more about Holochain in particular.

The rest of the post follows. It may be nice because it made me think about two unrelated little facts: (1) I was notified before about the resemblance between chemlambda molecules and the “vajra chains” (2) well, the I CHING hexagram structure and rewrites are close to the two families of chemlambda rewrites, especially as seen in the “genes” shadow of a molecule. So putting these two things together, stimulated to find an even more hallucinatory application of chemlambda, I arrived at algorithmic divination. Interested? Write to me!

__________________________________________________

I hope they’ll fix this; the animation is probably taken from the slides I prepared for TED Chemlambda for the people (html+js).

Here’s a gif I made from what I see today Saturday 20:20 Bucharest time.

test_s

Otherwise I’m interested in the subject and open to discussions, if any, which are not category theory PR but of substance.

UPDATE: second thoughts

  • the hallucinatory power of chemlambda manifests again 🙂
  • my face is good enough for a TED conference (source), now my animation is good for a CT conference, but not my charming personality and ideas
  • here is a very lucrative idea, contact me if you like it, chemlambda OS research could be financed from it: I was notified about the resemblance between chemlambda molecules and the vajra chains of awareness, so what about making an app which would use chemlambda as a divination tool? Better than a horoscope, if well made, huge market. I can design some molecules and the algorithm for divination.

A stochastic version for hamiltonian inclusions with convex dissipation

Appeared as arXiv:1807.10480 (it was previously available as (draft))

A stochastic version and a Liouville theorem for hamiltonian inclusions with convex dissipation

Abstract: The statistical counterpart of the formalism of hamiltonian systems with convex dissipation arXiv:0810.1419  arXiv:1408.3102 is a completely open subject. Here are described a stochastic version of the SBEN principle and a Liouville type theorem which uses a minimal dissipation cost functional.

just in time for the birthday of my son Matei 🙂

What about arXiv/figshare/zenodo and the EU copyright reform?

UPDATE: I asked again today (Sept 12) after the vote on the EU Copyright Directive.

___________

As a researcher I would very much appreciate answers to the following questions:

  • suppose I put an article in arXiv, then it appears in a journal. Are the uses of the link to the arXiv version affected in any way?
  • continuing, will the choices of licenses, among those used by arXiv, lead to different treatments?
  • does the EU copyright reform apply to articles which are already available on arXiv  (and also in journals)?
  • is there anything in the EU copyright reform which harms the arXiv?
  • what about other repositories, like figshare for example? what about zenodo?

I insist on the arXiv example because in some research fields, like mathematics or physics, the usual way things happen with articles is this: first the researcher submits the article to arXiv, then the article is published in a legacy journal. Sometimes the article is not published in journals, but it is cited in other articles published in journals. Most of the articles are therefore available in two places: arXiv (say) and the journal. From what I read about the EU copyright reform, I can’t understand if the use of the arXiv version of an article will be affected by this reform.

While I can understand that there are many problems concerning open source software repositories, I would like to see a clear discussion about this subject which is close but different from the subject of open source software repositories.

Groups are numbers (3). Axiom (convex)

This post will be updated as the em draft progresses. Don’t just look, ask and contribute.

UPDATE 3: Released: arXiv:1807.02058.

UPDATE 2: Soon to release. I found something so beautiful that I took two days off, just to cool down. I hope to release this first em-convex article in a week, because the new find does not modify the story told in that article. Another useful side-effect of writing this is that I found a wrong proof in arXiv:0804.0135, so I’ll update that too.

UPDATE: Don’t mind too much my rants, I have this problem, that what I am talking about is in the future with respect to what I show. For example I tried to say several times, badly!, that chemlambda may indeed be related to linear logic, because both are too commutative. Chemlambda is as commutative as linear logic because in chemlambda we can do the shuffle. Or the shuffle is equivalent to commutativity; that’s what I tried to explain last time in Groups are numbers (1). There is another, more elaborate point of view, a non-commutative version of chemlambda, in the making. In the process though, I went “oh, shiny thing, what’s that” several times and now I (humbly try to) retrace the correct steps, again, in a form which can be communicated easily. So don’t mind my bad manners, I don’t do it to look smart.

The axiom (convex) is the key of the Groups are numbers (1) (2) thread. Look at this (as it unfolds) as a combination of:

  • the construction of the field of numbers in projective geometry and
  • the Gleason and Montgomery-Zippin solution to the Hilbert 5th problem

I think I’ll leave the (sym) axiom and the construction of coherent projections for another article.

Not in the available draft: there are about 20 pages about the category of conical groups, why it is not compact symmetric monoidal (so goodbye linear logic) but has Hilb as a sub-category. Probably this will make another article.

I sincerely doubt that the article form will be enough. I can already imagine anonymous peer reviews where clueless people will ask me (again and again) why I don’t do linear logic or categorical logic (not that it is useless, but in its present form it is heavily influenced by a commutative point of view; it is a fake generalization from a too particular case).

A validation tool would be great. Will the chemlambda story repeat, i.e. will I have to make, alone, some mesmerizing programs to prove what I say works? I hope not.

But who knows? Very few people deserve to be part of the Invisible College. People who have the programming skills (the easy part) and the lack of prejudices needed to question linear logic (the hard, hacker part).

 

Groups are numbers (2). Pattern matching

As in divination, pattern matching. Continues from Groups are numbers (1).

We start from elementary variables, then we define number terms by two operations: subtraction and multiplication (a small sketch of this grammar in code follows the list below).

accept_0_0

  • Variables are terms.
  • Subtraction (the first line): if a is a variable and b is a term, then a-b is a term.
  • Multiplication (the 2nd line): if a and b are terms, then ab is a term.
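As announced, here is a minimal sketch of the term grammar in Python; the class names are mine, chosen only for illustration.

class Var:                    # elementary variables are terms
    def __init__(self, name): self.name = name
    def __repr__(self): return self.name

class Sub:                    # subtraction: a must be a variable, b any term
    def __init__(self, a, b):
        assert isinstance(a, Var), "left side of a-b must be a variable"
        self.a, self.b = a, b
    def __repr__(self): return "(%s-%s)" % (self.a, self.b)

class Mul:                    # multiplication: a and b are any terms
    def __init__(self, a, b): self.a, self.b = a, b
    def __repr__(self): return "%s%s" % (self.a, self.b)

a, b, c = Var("a"), Var("b"), Var("c")
print(Mul(Sub(a, b), c))      # the term (a-b)c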

 

By pattern matching we can prove for example this:

accept_4

[update: figure replaced, the initial one was wrong by pattern matching alone. The difference is that in the correct figure “(a-b)d” appears instead of the wrong “d(a-b)”]

What does it mean? These are just binary trees. Well, let’s take a typing convention

accept_0_1

where e, x, … are elements of a vector space and the variables are invertible scalars. Moreover take e = 0 for simplicity.

Then the previous pattern matching dream says that

(1-(a-b))c x + (a-b)(c-d) x = (c - (a-b)d)x

which is true, but for all the irrelevant reasons (vector space, associativity and commutativity of addition, distributivity, etc):

(c- ac + bc + ac -ad - bc + bd) x = (c - ad + bd) x = (c - (a-b)d)x 

What about this one, which is also by pattern matching:

accept_5

With the previous typing conventions it reads:

(c-(a-b))x = (1-b)(c-a)x + b(1- a^{-1})(c-a) x + (bc a^{-1})x

which is true because the right hand side is:

((1-b)(c-a) + b(1- a^{-1})(c-a) + bc a^{-1} )x =

= (c-a-bc+ab+bc-ab-bca^{-1} +b+bc a^{-1}) x = (c-a+b) x = (c-(a-b))x

Which is funny because it does not make any sense.
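Both verifications are plain commutative algebra (the vector x drops out, only the scalar coefficients are compared), so they can be checked mechanically. Here is a small sketch using sympy; the choice of tool is mine, not something from the original notes.

from sympy import symbols, simplify

a, b, c, d = symbols('a b c d')

# first identity: (1-(a-b))c + (a-b)(c-d) = c - (a-b)d
lhs1 = (1 - (a - b))*c + (a - b)*(c - d)
rhs1 = c - (a - b)*d
print(simplify(lhs1 - rhs1))    # prints 0

# second identity: c-(a-b) = (1-b)(c-a) + b(1-1/a)(c-a) + bc/a
lhs2 = c - (a - b)
rhs2 = (1 - b)*(c - a) + b*(1 - 1/a)*(c - a) + b*c/a
print(simplify(lhs2 - rhs2))    # prints 0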

Groups are numbers (1), the shuffle trick and brackets

What I call the shuffle trick is this rewrite. It involves, at left, a pattern of 3 decorated nodes, with 5 ports (the root and 1, 2, 3, 4). At the right there is almost the same pattern, only the decorations of the nodes have changed and the ports 2, 3 are shuffled.

shuffle_1

I have not specified what the orientation of the edges is, nor what the decorations (here the letters “a”, “b”) are. As previously with chemlambda or emergent algebras, these graphs are in the family of trivalent oriented ribbon graphs. You can find them everywhere: in physics, topology, knot theory or interaction graphs. Usually they are defined as a pair of two permutations, A and B, over the set of “half-edges” (which are really half edges). The permutation A has the property that AAA=id and the orbits of A are the nodes of the graph. Translated, this gives a circular order of the edges incident to a node. The permutation B is such that BB=id and its orbits are the unoriented edges. Indeed, an edge is made of two half edges. The orientation of the edges is obtained by picking one of the half edges which make an edge, or equivalently by replacing the set of two half edges, say half-edge and B(half-edge), by a list of the two half edges.

I prefer though another description of these graphs, by using sticks and rings, or, just the same, by using a partially defined SUCC function (which defines the sticks or rings) and a GLUE permutation with GLUE GLUE = id. That is what I use behind the description of the chemlambda strings and what you can grasp by looking at the needs repository.
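To make the permutation description above concrete, here is a generic sketch in Python; it is not the chemlambda strings format, just an illustration on the theta graph (two nodes joined by three edges).

def orbits(perm):
    # Orbits of a permutation given as a dict half_edge -> half_edge.
    seen, out = set(), []
    for h in perm:
        if h not in seen:
            orb, x = [], h
            while x not in seen:
                seen.add(x); orb.append(x); x = perm[x]
            out.append(tuple(orb))
    return out

A = {0: 1, 1: 2, 2: 0, 3: 4, 4: 5, 5: 3}   # AAA = id; orbits are the two nodes
B = {0: 3, 3: 0, 1: 4, 4: 1, 2: 5, 5: 2}   # BB = id; orbits are the three edges

assert all(A[A[A[h]]] == h for h in A)
assert all(B[B[h]] == h for h in B)
print("nodes:", orbits(A))   # [(0, 1, 2), (3, 4, 5)]
print("edges:", orbits(B))   # [(0, 3), (1, 4), (2, 5)]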

With the sticks and rings notation, here is an example of the shuffle trick:

shuffle

The shuffle trick is very important in chemlambda. It is this one, in a static diagram. You see that the decorations of the nodes are “FO” and “FOE” and that actually, in chemlambda, this is achieved via two rewrites.

shuffle_2

More about the chemlambda shuffle trick in the all-in-one illustrated shuffle trick post, where it is explained why the shuffle trick is so important for duplication. An animation taken from there is this one, the dynamical version of the previous static picture. [The molecule used is this, you can see it live here.]

shuffle

But the shuffle trick is relevant to emergent algebras as well. This time we play with oriented binary trees, with nodes which can be white or black, decorated with “a”, “b” from a commutative group. To refresh your memory a bit, here are the rules of the game for emergent algebras; look at the first two columns and ignore the third one (it is old notation from graphic lambda calculus). An emergent algebra over a set X is a collection of operations indexed by a parameter in a commutative group. We can represent these operations by using oriented trivalent ribbon graphs (left side) or just binary trees (right side), here with leaves at the right and the root at the left.

 

shuffle_0

(image taken from this post).   (Image changed)

In this post we’ll use Reidemeister moves (they are related to the true Reidemeister moves).

shuffle_5

(Emergent algebras have one more property, but for this we need a uniform structure on X, because we need to take limits wrt the parameter which are uniform wrt the leaves… not needed here for the moment.)

Further I’ll use the representation with binary trees, i.e. I’ll not draw the orientation of the edges, recall: from the leaves, at right, to the root, at left.

By using the Reidemeister 2 move twice, we can make the following version of a shuffle trick (in the figure below the orientation of the edges is from right to left, or from the leaves 1, 2, 3, 4 to the root)

 

shuffle_3

Encircled is a graph which quantifies the difference between the left and right sides of the original shuffle trick. So if we want to have a real shuffle trick in emergent algebras, then we would like this graph to transform to an edge, i.e. the following rewrite

shuffle_4

If this rewrite is possible in an emergent algebra, then we’d have a shuffle trick for it. Conversely, if the shuffle trick would be valid, then the graph from the left would transform to the one from the right by that shuffle trick and two Reidemeister 2 moves.

But look closer at this graph: reading from right to left, it looks like a bracket, or commutator in a group. It has the form b^{-1} a^{-1} b a, or almost!

This is because we can prove that it is indeed an approximate version of the commutator and that the shuffle trick in emergent algebras is possible only in a commutative case. We shall apply this further to chemlambda, to show that it is in a sense valid in a commutative frame. Also, we can work non-commutatively as well, the first example being the Heisenberg group. Lots of fun!

 

Open Access Movies

Dear Netflix, Dear Hollywood, Dear Cannes Competition, etc etc etc

Dear artists,

You face tough times. Your audiences are bigger than you ever imagined. Your movies, your creations are now very easy to access, by anybody, from everywhere. But you can’t monetize this as much as you want. The artists can’t be paid enough, the producers can’t profit enough. There is no respect left for your noble profession. A screaming idiot with a video camera is more popular than a well-thought-out, well-funded movie with a dream cast.

There is a solution which I humbly suggest. Take the example of academic scientists. They are an absolute disaster as concerns communication talent, but, reluctantly, you may grant them some intellectual capacities, even if used in the weirdest way and to their least profit.

They invented Gold Open Access and I believe that it is a great idea for your business.

You have to recognize the problem, which is that you can no longer ensure a good profit from selling your movies. The audience will follow the cheapest path and will not pay you. That’s the sad truth.

But, what about the movie makers? They have money.

The audience is always seeking the best movies. Give them the best movies for free!

You are the gatekeepers. Dissemination of movies is trivial. Make the movie makers pay!

You are the ones which can select the best movies. From the thousands of movies made each year, only a hundred of them are among the best, for a global audience.

Therefore, artists, if you want to work among the best, then you have to be in the cast of one of the best 100 movies of the year. Then you’re good and, with some luck (there are lots and lots of artists, you know), you’ll have the chance to be in the cast of next year’s best movies.

Producers, why don’t you use your connections with politicians and convince them to take money from taxes and give it to you, in a competition akin to the various research grant competitions in the academic world?

Producers can always split the money with the dissemination channels. They will be both part of the juries which decide which movie gets more funding and part of the juries which decide which movie is transmitted by Netflix, which one deserves to be called a Hollywood production or, in the case of Europeans, to be on the list of movies for the next Cannes competition.

In this way the producers make profit before the movie is stolen by the content pirates.

In this way the dissemination channels (Netflix, etc) have the best movies to show, vetted by respected professionals, and already paid by the various competition budgets.

In this way politicians can always massage their message to the populace as they want. And finance future campaigns.

So the great idea, borrowed from the intelligent academic research community, is to make the creators compete and pay for the honor of being among the 100 best creators, with money from taxes, taxes taken from the audience who sees the movies for free.

Groups are numbers (0)

I am very happy because today I managed to finish a thread of work concerning computing and space. In future posts, with great pleasure, I’ll take the time to explain it, with this post serving as an impressionistic introduction.

Eight years ago I was obsessing about approximate symmetric spaces. I had one tool, emergent algebras, but not the right computation side of it. I was making a lot of drawings, using links, claiming that there has to be a computational content which is deeply hidden, not in the topology of space, but in the infinitesimal calculus. I reproduce here one of the drawings, made during a time spent at the IHES, in April 2010. I was talking with people who asked me to explain with words, not drawings; I was starting to feel that category theory does not give me the right tools; I was browsing Kauffman’s “Knots and Physics” in search for hints. (Later we collaborated on chemlambda, but knots are not quite enough either.)

 

link_approx

 

This is a link which describes what I thought was a good definition of an approximate symmetric space. It uses conventions later explained in Computing with space, but the reason I reproduce it here is that, at the moment, I thought it was totally crazy. It is organic, as if it were alive; it looked like a creature to me.

There was an article about approximate symmetric spaces later, but not with such figures. Completely unexpectedly, these days I had to check something from my notes back then and I found the drawing. After the experience with chemlambda I totally understand the organic feeling, and also why it resembles the “ouroboros” molecules, which are related to the Church encoding of numbers and to the predecessor.

bigpred_train.gif

Because, akin to the Church encoding in lambda calculus, there is an encoding in emergent algebras, which makes these indeed universal, so that a group (to simplify) encodes numbers.

And it is also related to the Gleason-Yamabe theorem discussed here previously. That’s a bonus!

Quines in chemlambda (2)

Motivated by this comment I made on HN, reproduced further below, I thought about making an all-in-one page of links concerning various experiments with quines in chemlambda. There are too many for one post though. In the Library of chemlambda molecules about 1/5 of the molecules are, or involve, quines.

[EDIT: see the first post Quines in chemlambda from 2014]

If you want to see some easy to read (I hope) explanations, go to the list of posts of the chemlambda collection and search for “quine”. Mind that there are several other posts which do not have the word “quine” in the title, but do have quine-relevant content, like those about biological immortality, or about senescence, or about “microbes”.

There’s a book to be written about this, with animated pages. Or a movie, with uniformized style simulations. Call me if you want to start a project.

Here is the comment which motivated this post.

“One of the answers from your first link gives a link to this excellent article

https://link.springer.com/chapter/10.1007%2F978-3-540-92273-…

on “autocatalytic quines”. The Introduction section explains very nicely the history of uses of quines in artificial life.

There are some weird parts though in all this, namely that we may think about different life properties in terms of quines:

1) Metabolism, where you take one program, consume it and produce the same program

2) Replication, where you take one program, consume it and produce two copies.

But what about

3) Death

I thought about this a lot during my chemlambda alife project, where I have a notion of a quine which might be interesting, given the turn of these comments.

A chemlambda molecule is a particular trivalent graph (imagine a set of real molecules, the graphs don’t have to be connected); chemical reactions are rewrites, like in reality: if a certain pattern is detected (by an enzyme, say) then the pattern is rewritten.

There are two extremes in the class of possible algorithms. One extreme is the deterministic one, where rewrites are done whenever possible, in the order of preference from a list, so that the possible conflicting patterns are always solved in the same way. The other extreme is the purely random one, where patterns are randomly detected and then executed or not according to a coin toss.

Now, a quine in this world is by definition a graph which has a periodic evolution under the deterministic algorithm.

The interesting thing is that a quine, under the random algorithm, has some nice properties, among them that it has a metabolism, can self-replicate and it can also die.

Here is how a quine dies. Simple situation. Take a chemlambda quine of period 1. Suppose that there are two types of rewrites, the (+) one which turns a pattern of 2 nodes into a pattern of 4 nodes, the other (-) which turns a pattern of 2 nodes into a pattern of 0 nodes (by gluing the 4 remaining dangling links in the graph).

Then each (+) rewrite gives you 4 possible new patterns (one per node) and each (-) rewrite gives you 2 possible new patterns (because you glued two links). Mind that you may get 0 new patterns after a (+) or (-) rewrite, but if you think that a node has an equal chance to be in a (+) pattern or in a (-) pattern, then it is twice as likely that a new pattern comes from a (+) rewrite than from a (-) rewrite.

Suppose that in the list of preferences you always put the (+) type in front of the (-) one. It looks like in this way graphs will tend to grow, right? No!

In a quine of period 1 the number of (+) patterns = number of (-) patterns.

Hence, if you use the random algorithm, the non-execution of a (+) rewrite is twice as likely to affect future available rewrites as the non-execution of a (-) rewrite.

In experiments, I noticed lots of quines which die (there are no more rewrites available after a time), some which seem immortal, and no example of a quine which thrives.”
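The counting argument in the quoted comment invites a toy simulation. The sketch below is my own abstraction, not a chemlambda run: it only tracks the numbers of (+) and (-) patterns, executes a randomly picked pattern after a coin toss, and lets a (+) rewrite create up to 4 new patterns and a (-) rewrite up to 2, each materializing with a probability p_new which is my arbitrary choice.

import random

def toy_run(n_plus, n_minus, p_new=0.25, steps=10000, rng=random):
    # Toy birth-death model of (+)/(-) pattern counts. Returns the step
    # at which no rewrites remain (death), or None if the run survives.
    for t in range(steps):
        total = n_plus + n_minus
        if total == 0:
            return t                        # death: no rewrites available
        if rng.random() < 0.5:
            continue                        # coin toss: rewrite not executed
        if rng.random() < n_plus / total:   # a (+) pattern was picked
            n_plus -= 1
            new = sum(rng.random() < p_new for _ in range(4))
        else:                               # a (-) pattern was picked
            n_minus -= 1
            new = sum(rng.random() < p_new for _ in range(2))
        for _ in range(new):                # each new pattern is (+) or (-)
            if rng.random() < 0.5:
                n_plus += 1
            else:
                n_minus += 1
    return None

deaths = sum(toy_run(5, 5) is not None for _ in range(100))
print(deaths, "out of 100 toy runs died")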

 

 

 

 

I deleted Facebook, Twitter and entered the Invisible College

I’m fine. I still exist and my life is better. I write this after several years of experiments with Open Science in social media. I still keep a presence with Google because I don’t want to delete the chemlambda collection. But in no way am I satisfied with this.

UPDATE 4: In January 2019, my long ago deleted Twitter account appears as suspended. So these liars pretend that my account is not deactivated, but suspended by them.

UPDATE 3: I deleted the chemlambda collection.

UPDATE 1: I deleted my Medium account, it was just another Twitter sht.

UPDATE 2: The fight of legacy media against FB is so stupid. But amusing. May be useful, like an infection with a gut parasite which makes the immune system able to kill a more dangerous viral infection. When this is done, the gut parasite is easy to get rid of…

I explained what I think is wrong with corporate social media, from the point of view of a researcher who wants to share scientific content and discuss about it. For example in the Twitter “moment” which no longer exists. (I think very few people saw it because it was hidden by Twitter 🙂 )

dataexh

Dissatisfied with the careless treatment of scientific data, precious data, by corporate social media, in this “moment” I explain that I tried, probably successfully, to socially hack Google Plus. It worked, reasons here.

The other reason for which social media is not good for Open Science is that successful collaborations via said media are very rare. Most of the interactions have almost no scientific value. It is an endless stream of hot air bubbles coming from a school of bored goldfish.

The reason is not that people [who are willing to interact via social media] are stupid. Don’t believe this shallow explanation. I think this is because of the frame of mind cultivated by social media. People there consume, they don’t build. They are encouraged to click, not to reason. They have to do everything as quick as possible. Why? they don’t know. I imagine though that [some hackers excepted] there is not much rational thought in the brains of a casino client.

There are therefore two reasons which make social media bad for use for Open Science:

  • bad, disrespectful treatment of scientific data, despite low volume and high density
  • bad medium for rational interaction, despite being presented as an enhanced one.

I’ll go into a little bit of detail concerning Facebook and Twitter, because until now I wrote mainly about my experience with Google.

Facebook. I tried several times to use Facebook for the same purpose. But I failed, because of a complete lack of a chance for visibility. Even the 10-second animations from real simulations were badly presented on FB (hence technical reasons). Moreover, the algorithms were clearly not in favor of the kind of posts I made on Google Plus. But I have to admit that there was a matter of chance: Google had this idea of collections, plus the superior technical possibilities, which made the chemlambda collection very visible.

Twitter. I had a presence on Twitter since, I think, 2011. I intentionally kept a low count of people I followed, varying it and keeping only those who posted interesting tweets. However, it was clear to me for a long time that there is heavy censorship, or call it editing, same thing, on what I see and what my followers see.

From time to time I made or consumed political tweets. I am free to do this, as far as I know, and I am a grownup whose youth was spent under heavy thought police, so allow me to be furious to see the new thought police enforced exactly by those whom I admire in principle.

Going back to Google, the same thing happened btw; here is a clear case where I was allowed a rare glimpse over the algorithmic wall, where my interlocutor and I each saw the other’s comments censored by Google in our respective worldviews:

screen-shot-2016-11-11-at-10-47-16

Story here.

People are more and more furious now (i.e. 2 years after), especially about Facebook and Cambridge Analytica. But let’s ignore politics and go back to using social media for science.

Well, corporate social media does not care about this. Moreover, censorship (aka algorithmic editing) can have very bad consequences for scientific communication, like: inhibition of better scientific ideas in order to protect a worse technical solution which brings money, or straight scientific theft, when a big data company obtains for free good scientific ideas.

 

OK, what about the Invisible College?

I think we really are on the brink of a scientific revolution. We do have technical means to interact scientifically. Most of the scientists who ever existed on Earth are alive. Rational, educated thought and brain power, from professionals of many fields, from inquisitive minds, from creative freaks, these are in an unprecedented quantity. Add computing power to that mix, add the holy Internet. Here we are, ready to pass to a new level.

If you look back to the last scientific revolution, then one of the places where it happened was in a precursor group of the Royal Society of London called The Invisible College.

What a great idea!

Look, these people really were like us. In a past post I shared the front page of a famous book by Newton (published posthumously) where you can recognize the same ideas as today.

newton

 

I am sure that there are other members, most of them perhaps future ones, of the Invisible College of the 21st century, where we have to solve these problems: how to treat scientific information fairly, among us, the members, and how to interact rationally and thoughtfully. Long term.

Because social media failed us. Because who cares about politics?

Here, there are lots of good names from that period, like the related “College for the Promoting of Physico-Mathematical Experimental Learning” from 1660, but my liking goes to the Invisible College.

I end with two invitations for more private discussions

Screenshot-8_1    Screenshot-9_1

but I fully encourage you to discuss here as well, if you want.

 

 

 

 

 

Blockchain categoricitis

The following conditions predispose you to categoricitis and this can be very bad for your savings:

  • baby boomer
  • you were a programmer once or you made money from that
  • you are a known researcher but your last original idea was last century
  • interested in pop science
  • you think an explanation by words can be shorter than one by mathematics
  • don’t know mathematics at the researcher level
  • you think you’re smart
  • you are all about internet, decentralization and blockchains
  • you believe in ether or may have pet alternative theories about the universe
  • you are not averse to a slight cheating, if this is backed by solid institutions or people.

The more of these conditions are present, the more you are at risk.

(If you work in the money business then you are immune. For you, those people with categoricitis are a golden opportunity.)

The most dangerous situation is when you feel the need to be blockchain cool, but you missed the Bitcoin train. Your categoricitis then grabs you, making you smell like money. You feel the need to invest. Hey, what’s the problem? It’s backed by math, banks and M$.

You’ll be relieved rather sooner than later 🙂

The “Chemlambda for the people” PM

Let’s have some fun with the release of the original recording of the talk rehearsal “Chemlambda for the people”. I’ve told the story in this post, you can see the slides I used here (needs js!).

The rehearsal took approx 31 min with discussions, so I split the original mp4 into 3 parts, video and sound as they were.

Enjoy:

 

 

 

A question about binary trees

I need help identifying where the following algebra of trees appears.

UPDATE: it seems that it does not appear anywhere else. Thanks for the input, even if it led to this negative result. Please let me know, though, if you recognize this algebra somewhere!

We start with the set of binary trees:

  • the root I is a tree
  • if A and B are trees then AB is the tree which is obtained from A and B by adding to the root I the LEFT child A and the RIGHT child B

We think therefore about oriented binary trees, so that any node which is not a leaf has two children, called the left and the right child. (A minimal code sketch of these trees follows; the two operations come after it.)
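Here is a minimal sketch of this set of trees in Python; the names are mine, and the two operations are left out on purpose, since they are defined only in the picture further below.

class Tree:
    # Binary trees: Tree() is the root-only tree I; Tree(a, b) grafts a as
    # the LEFT child and b as the RIGHT child of a new root.
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right
    def __repr__(self):
        if self.left is None and self.right is None:
            return "I"
        return "(%r %r)" % (self.left, self.right)

I = Tree()                      # the root
print(Tree(I, Tree(I, I)))      # prints (I (I I))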

On the set of these binary trees we define two operations:

  • a 1-ary operation denoted by a *
  • a 2-ary operation denoted by a little circle

The operations are defined recursively, as in the following picture:

question

I am very interested to learn about the appearance of this algebra somewhere. My guess is that this algebra has been studied. For the moment I don’t have any information about this and I kindly ask for help.

If not, what can be said about it? It is easy to see that the root is a neutral element for the operation “small circle”. Probably the operation “small circle” is associative, however this is less clear than I first thought.

If you think this structure is too dry to be interesting, then just try it, take examples and see what it gives. How long does it take to compute the result of an operation, etc…

Thank you for help, if any!

 

 

Creepy places to be

Google is creepy. Facebook, I heard is creepy. Maybe you don’t know yet but Firefox is creepy. Twitter is a creepy bad joke. Hacker News is creepy by omission, if that matters to anybody.

If you want to talk then mail me at one of the addresses at the bottom of the first page of this article. Or open an issue at one of my repositories. Or come see me. Or let me see you.

See you 🙂

 

What I expect from 2018 (updated at the end of 2018)

In the About section I wrote: “This blog contains ideas from the future”. Well let me look into my crystal ball. Then, at the end of 2018 I shall update this post and compare.

This is about stuff I expect to do in 2018, and stuff I expect to happen in 2018, or even things I hope to happen in 2018.

Before that a short review of what I think is significant to remember at the end of 2017.

  • all software and hardware is wrecked beyond any paranoid dream. There is nothing we can trust. Trust does not exist any more, as an effect.
  • in particular there is no trust in deterministic randomness 😉 so boy, how safe are your bitcoins…
  • all of the corporate Net is dead for the few intelligent people, but at the same time many discover the Net today and they love it! It is the new TV; don’t tell me that you expect TV to be interactive. You hold the remote, but otherwise you “Got thirteen channels of shit on the T.V. to choose from“.
  • the corporate Net is hand in hand with the legacy publishers, for a simple reason: science brings authority, so you don’t mess with science dissemination. If you mess with it then you question authority, and in this human world there is nothing else than authority which keeps things going as expected.

Now, what I expect to do in 2018:

  • [true, see arXiv:1807.02058, arXiv:1807.10480, arXiv:1811.04960] write articles in the classic arXiv style, short if possible, with program repositories if possible (projected: NN, space, thermodynamics, engineering, computing)
  • [unexpected things happened] if these articles (on new subjects! or on older subjects but with new techniques) make you want to change the world together, then I exist in the meatspace, 3d version, and I welcome you to visit me or me to visit you; all else is futile
  • shall do mostly mathematics, but you know the thing about mathematics…

What I expect to happen in 2018:

  • [true] the Net will turn in TV completely
  • [not yet true] “content creators”, i.e. those morons who produce the lowest possible quality (lack of) content for TV and cinema, will be fucked by big data. It is enough to get all the raw cinema footage and some NN magic in order to deliver, on individual demand, content better than anything those media professional morons can deliver. And of course much cheaper, therefore…
  • [partially true, see also the blockchain categoricitis forecast] I expect a big fat nice money bubble to burst, because money laundering is a fundamental part of the current economy

What I hope to happen in 2018:

  • [not true] new hardware
  • [only very limited] real meatspace randomness devices
  • [happened, then burst] more distributed automata (like bitcoin) to suck up the economy
  • [not true] diversification, because not everybody can be among the handful of deep state corporate beings.

OK, what do you think? Lay here your forecast, if you wish… or dare.

Open Science: “a complete institution for the use of learners”

The quote is from 1736. You can see it on the front page of the book “The method of fluxions and infinite series” by Newton, “translated from the author’s Latin original not yet made publick” (nobody is perfect, we know now where this secrecy led in the dispute with Leibniz over the invention of the differential calculus).

newton

That should be the goal of any open science research output.

What do we have at the end of 2017?

  • Sci-hub. Pros: not corporate. It does not matter where you output your article, as long as it becomes available to any learner. Cons:  only old style articles, not more. So not a full solution.
  • ArXiv. Pros: simple, proved to be reliable long term. Cons: only articles.
  • Zenodo. Pros: not corporate, lots of space for present needs. Cons: not playable.
  • Github. Pros: good for publicly and visibly sharing and discussing articles and programs. Cons: corporate, not reliable in the long term.
  • Git in general. Pros: excellent tool.
  • Blockchain. Pros: excellent tool.

I have not added anything about BOAI-inspired Open Access because it is something from the past. It was just a trick to delay the demise of the legacy publishing style; it was done over the heads of researchers, basically a deal between publishers and academic managers, so that they could siphon research $ and stifle the true open access movement.

Conclusion: at the moment there are only timid and partial proposals for open science as “a complete institution for the use of learners”. Open science is not a new idea. Open science is the natural way to do science.

There is only one way to do it: share. Let’s do it!

Genocide as a win-win situation

Imagine. A company appears in your town and starts by making the public place more welcoming. Come play, says the company, come talk, come here and have fun. Let us help you with everything: we’ll keep your memories, traditions, we’ll take care to remind you about that friend you lost track of some years ago. We’ll spread your news to everybody, we’ll get customers for your business. Are you alone? No problem, many people are like you; what if you could talk and meet them, whenever you want?

We don’t want anything important, says the company, just let us put some ads in the public place. It’s a win-win situation. Your social life will get better and we’ll make profit from those ads.

Hey, what if you let us manage your photos? All that stuff you want to keep, but there’s too much of it and it’s hard for you to preserve. We’ll put it on a cloud. Clouds are nice, those fluffy things which pass over your head in a sunny morning, while you, or your kids, play together in the public place.

Remember how it was before? The town place was not at all as alive as now. You did not have as many friends as now, your memories were less safe. Let us take care of all your cultural self.

Let us replace the commons. We are the commons of the future. We, the company…

We’ll march together and right all wrongs. Organize yourselves by using the wonderful means we give you. Control the politicians! Keep an eye on those public contracts. Do you have abusive neighbours? Shame their bad habits in the public place.

The public place of the future. Kindly provided by us, the company. A win-win situation.

 

Transparency is superior to trust

I am fascinated by this quote. I think it’s the most beautiful quote, in its terseness, I’ve seen in a long time. I wish I had invented it!

It is not, though, the motto of Wikileaks, it’s taken from the section on Reproducibility of this Open Science manifesto.

To me, this quote means that validation is superior to peer review.

It is also significant that the quote says nothing about the publishing aspects of Open Science. That is because, I believe, we should split publishing from the discussion about Open Science.

Publishing, scientific publishing I mean, is simply irrelevant at this point. The strong part of Open Science, the new, original idea it brings forth is validation.

Sci-Hub acted as the great leveler, as concerns scientific publication. No interested reader cares, at this point, if an article is hostage behind a paywall or if the author of the article paid money for nothing to a Gold OA publisher.

Scientific publishing is finished. You have to be realistic about this thing.

But science communication is a far greater subject of interest. And validation is one major contribution to a superior scientific method.

The Library of Alexandra

“Hint: Sci-Hub was created to open papers that are not available online at all. You cannot find these papers in Google or in open access” [tweet by @Sci_Hub]

“Public Resource will make extracts of the Library of Alexandra available shortly, will present the issues to publishers and governments.” [tweet by Carl Malamud]

 

 

Update the Panton Principles please

There is a big contradiction between the text of The Panton Principles and the List of the Recommended Conformant Licenses. It appears that it is intentional, I’ll explain in a moment why I write this.

This contradiction is very bad for the Open Science movement. That is why, please, update your principles.

Here is the evidence.

1. The second of the Panton Principles is:

“2. Many widely recognized licenses are not intended for, and are not appropriate for, data or collections of data. A variety of waivers and licenses that are designed for and appropriate for the treatment of data are described [here](http://opendefinition.org/licenses#Data). Creative Commons licenses (apart from CCZero), GFDL, GPL, BSD, etc are NOT appropriate for data and their use is STRONGLY discouraged.

*Use a recognized waiver or license that is appropriate for data.* ”

As you can see, the authors clearly state that “Creative Commons licenses (apart from CCZero) … are NOT appropriate for data and their use is STRONGLY discouraged.”

2. However, if you look at the List of Recommended Licenses, surprise:

Creative Commons Attribution Share-Alike 4.0 (CC-BY-SA-4.0) is recommended.

3. The CC-BY-SA-4.0 is important because it has a very clear anti-DRM part:

“You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.” [source CC 4.0 licence: in Section 2/Scope/a. Licence grant/5]

4. The anti-DRM is not a “must” in the Open Definition 2.1. Indeed, the Open Definition clearly uses “must” in some places and “may” in other places. See

“2.2.6 Technical Restriction Prohibition

The license may require that distributions of the work remain free of any technical measures that would restrict the exercise of otherwise allowed rights. ”

5. I asked why this is here. Rufus Pollock, one of the authors of The Panton Principles and of the Open Definition 2.1, answered:

“Hi that’s quite simple: that’s about allowing licenses which have anti-DRM clauses. This is one of the few restrictions that an open license can have.”

My reply:

“Thanks Rufus Pollock but to me this looks like allowing as well any DRM clauses. Why don’t include a statement as clear as the one I quoted?”

Rufus:

“Marius: erm how do you read it that way? “The license may prohibit distribution of the work in a manner where technical measures impose restrictions on the exercise of otherwise allowed rights.”

That’s pretty clear: it allows licenses to prohibit DRM stuff – not to allow it. “[Open] Licenses may prohibit …. technical measures …”

Then:

“Marius: so are you saying your unhappy because the Definition fails to require that all “open licenses” explicitly prohibit DRM? That would seem a bit of a strong thing to require – its one thing to allow people to do that but its another to require it in every license. Remember the Definition is not a license but a set of principles (a standard if you like) that open works (data, content etc) and open licenses for data and content must conform to.”

I gather from this exchange that indeed the anti-DRM is not one of the main concerns!

6. So, until now, what do we have? Principles and definitions which aim to regulate what Open Data means, but which avoid taking an anti-DRM stance. At the same time they strongly discourage the use of an anti-DRM license like CC-BY-SA-4.0. However, on a page which is not as visible, they recommend, among others, CC-BY-SA-4.0.

There is one thing to say: “you may use anti-DRM licenses for Open Data”. It means almost nothing: it’s up to you, not important for them. And they write that all CC licenses excepting CCZero are bad! Notice that CC0 does not have anything anti-DRM.

Conclusion. This ambiguity has to be settled by the authors. Or not, it is up to them. For me this is a strong signal that we witness one more attempt to tweak a well-intended movement for cloudy purposes.

The Open Definition 2.1. ends with:

Richard Stallman was the first to push the ideals of software freedom which we continue.

You don’t say, really? Maybe it is the moment for a less ambiguous Free Science.

The price of publishing with GitHub, Figshare, G+, etc

Three years ago I posted The price of publishing with arXiv. If you look at my arXiv articles then you’ll notice that I have barely posted on arXiv.org since then. Instead I went into territory which is even less recognized as serious by a big part of academia: I used GitHub, Figshare, G+, etc.

The effects of this choice are put in front of my homepage, so go there to read them. (Besides, it is a good exercise to remember how to click on links and use them, that lost art from the age when internet was free.)

In this post I want to explain what is the price I paid for these choices and what I think now about them.

First, it is a very stressful way of living. I am not joking: as you know, stress comes from realizing that there are many choices and one has to choose. The random reward from social media is addictive. The discovery that there is a way (validation) to get out of the situation which keeps us locked into the legacy publishing system. The realization that the problem is not technical but social. A much more cynical view of the undercurrents of the social life of researchers.

The feeling that I can really change the world with my research. The worries that some possible changes might be very dangerous.

The debt I owe concerning the scarcity of my explanations. The effort to show only the aspects I think are relevant, putting aside those which are not. (Btw, if you look at my About page then you’ll read “This blog contains ideas from the future”. It is true because I have already pruned the 99% of paths leading nowhere interesting.)

The desire to go much deeper, the desire to explain once again what and why, to people who seem either to lack long-term attention capability or to have shallow pet theories.

It’s like fishing for Moby Dick.

Google segregation should take blame

Continuing from the last post, here is a concrete example of segregation performed by the corporate social media. The result of the US election is a consequence of this phenomenon.

Yesterday I posted on Google+ the article Donald Trump is moving to the White House, and liberals put him there | Thomas Frank | Opinion | The Guardian    and I received an anti-Trump comment (reproduced at the end of this post). I was OK with the comment and did nothing to suppress it.

Today, after receiving some more comments, this time bent towards Trump, I noticed that the first one disappeared. It was marked as spam by a Google algorithm.

I restored the comment classified as spam.

The problem is, you see, that Google and Facebook and Twitter, etc, all the corporate media, are playing a segregation game with us. They don’t let us form opinions based on facts which we can freely access. They filter our worldview. They don’t provide us with means to validate their content. (They don’t have to, legally.)

The idiots from Google who wrote that piece of algorithm should be near the top of the list of people who decided the result of these US elections.

______________________

UPDATE: Bella Nash, the identity who posted that comment, now replies the following:

“It says the same thing on yours [i.e. that my posts are seen as spam in her worldview] and I couldn’t reply to it. I see comments all over that  google is deleting posts, some guy lost 28 new and old replies in an hour. How the hell can comments be spam? I’m active on other boards so I don’t care what google does, it’s their site and their ambiguous rules.”

Screen Shot 2016-11-11 at 10.47.16.png

Theory of spam relativity 🙂

______________________

To be clear, I’m rather pleased about the results, mainly because I’m pissed beyond limits by these tactics. This should not limit other people’s right to be heard, at least not in my worldview. Let me decide if this comment is spam or not:

“In Chicago roughly a thousand headed for the Trump International Hotel while chanting against racism and white nationalism. Within hours of the election result being announced the hashtag #NotMyPresident spread among half a million Twitter users.”

UPDATE 2: Some people are so desperate that I’m censored even on 4chan 🙂 I tried to share this post there, several times, and I got a timeout. I tried to share this ironical Disclaimer

screen-shot-2016-11-11-at-13-13-14

which should be useful on any corporate media site, and it disappeared.

The truth is that the algorithmic idiocy started with walled garden techniques. If you’re on one social media site, then it is made hard to follow a link to another place. After that, it became hard to know about people with different views. Discussions became almost impossible. This destroys the Internet.

Euclideon Holoverse virtual reality games revealed

Congratulations! Via a comment by roy. If you have any other news then you’re welcome here, as in the old days.

Bruce Dell has his own way of speaking, of choosing colors and music. Nevertheless, to share the key speaker honor with Steve Wozniak is just great.

 

 

It rubs me a bit the wrong way when he says that he has the “world first new virtual lifeforms” at 7:30. Can they replicate? Do they have a metabolism? On their own, in random conditions?

If I sneeze in a Holoverse room, will they cough the next day? If they run into me, shall I dream new ideas about bruises later?

 

Let’s discuss the 3 Sci-Hub ideas

The site http://sci-hub.io/ has a part called “Sci-Hub ideas”. I have not seen any discussion about this in the commercial social networks, where almost everybody is a lawyer, apparently.
What if we look at these ideas, a bit more?

Screenshot from 2016-02-28 01:23:34

 

Further are my opinions about those:

1. Knowledge to all. I totally support this idea. That is why I always supported Green OA and not Gold OA. Open Science, which is a far more general and future oriented concept than OA, proposes the same, because the only scientific knowledge is the one which can be independently validated. This is not possible if there are walls around knowledge.

A more sensible point is the “inequality in knowledge access across the world”. This inequality has to be recognized as such and we should fight it.

2. No copyright for scientific and educational resources. It is very convenient to forget that copyright has been a barrier to progress several times in the past. Aviation and PC hardware are two examples. Some people understand that: “All Our Patent Are Belong To You”.

3. Open access. The most puzzling reaction against Sci-Hub, at least for me, was the one coming from some of the proponents of OA. I agree that Sci-Hub is not a solution for OA publishing of new articles. It is not an OA publishing model. OK. But OA itself is a very murky thing. Is arXiv.org OA? According to many OA advocates, it is not, it is only an open repository. However, arXiv.org was a real solution for publishing, i.e. fast dissemination of knowledge. People used arXiv.org (and they use it now as well) in order to learn and communicate, via scientific articles, open and fast. There was no publishing revolution, just people using a better system than what the legacy publishers proposed. Likewise, Sci-Hub responded to a big need of many researchers, as witnessed by the fact that the site is heavily used. I think the support of Sci-Hub for OA is only lip service; what they really want to say is that they created a solution for a real problem which is not solved by OA.

SciHub and patent wars

The Wright brothers used their patents to block the building of new airplanes. The historical solution was, eventually, a pool of patents. Now everybody can fly.

We all have and use PCs because the patent wars around computer hardware were lost by those who tried to limit the production of it.

Elon Musk announced in 2014 that All Our Patent Are Belong To You.

These days publishers complain that Sci-Hub breaks their paywalls. They hold the copyrights for research works which are mostly publicly funded.

This is a new version of a patent war and I believe it will end as others in the past.

Sci-Hub is not tiny, nor special interest

“Last year, the tiny special-interest academic-paper search-engine Sci-Hub was trundling along in the shadows, unnoticed by almost everyone.” [source: SW-POW!, Barbra Streisand, Elsevier, and Sci-Hub]

According to the info available in the article Meet the Robin Hood of science, by Simon Oxenham:

[Sci-Hub] “works in two stages, firstly by attempting to download a copy from the LibGen database of pirated content, which opened its doors to academic papers in 2012 and now contains over 48 million scientific papers.”

“The ingenious part of the system is that if LibGen does not already have a copy of the paper, Sci-hub bypasses the journal paywall in real time by using access keys donated by academics lucky enough to study at institutions with an adequate range of subscriptions. This allows Sci-Hub to route the user straight to the paper through publishers such as JSTOR, Springer, Sage, and Elsevier. After delivering the paper to the user within seconds, Sci-Hub donates a copy of the paper to LibGen for good measure, where it will be stored forever, accessible by everyone and anyone. ”

“As the number of papers in the LibGen database expands, the frequency with which Sci-Hub has to dip into publishers’ repositories falls and consequently the risk of Sci-Hub triggering its alarm bells becomes ever smaller. Elbakyan explains, “We have already downloaded most paywalled articles to the library … we have almost everything!” This may well be no exaggeration.”

Is that tiny? I don’t think so. I have at hand the comparisons I made in the post ArXiv is 3 times bigger than all megajournals taken together and, if we trust the publicly available numbers, then:

  • Sci-Hub is tiny
  • arXiv.org is minuscule with about 1/40 of what (is declared as) available in Sci-Hub
  • all the gold OA journals have no more than 1/100 of the “tiny” baseline, therefore they are, taken together, infinitesimal

Do I feel a dash of envy? A subtle spin in favor of gold OA? Maybe because Alexandra Elbakyan is from Kazakhstan? More likely it is only an unfortunate formulation, but the thing is that if this info is true, then it’s huge.

UPDATE: Putting aside all legal aspects, where I’m not competent to have an opinion, it appears that the collection of 48 million paywalled articles is the result of the collective behaviour of individuals who “donated” (or whatever the correct word would be) them.

My opinion is that this collective behaviour shows a massive vote against the system. It is not even intended to be a vote; people (i.e. individual researchers) just help one another. Compare this behaviour with that of academic managers and of all kinds of institutions which a) manage public funds and negotiate prices with publishers, b) use metrics based on commercial publishers for distributing public funds as grants and promotions.

On one side there is the reality of individual researchers, who create and want to read what others like them created (from public funds, basically), and on the other side there is this system in academia which rewards compliance with this obsolete medium of dissemination of knowledge (presently turned upside down and replaced with a system which puts paywalls around the research articles, it’s amazing).

Of course, I am not discussing here if Sci-hub is legal, or if commercial publishers are doing anything wrong from a legal point of view.

All this seems to me very close to the disconnection between politicians and regular people. These academic managers are like politicians now: the system ignores the fact that it is possible to gauge the real opinion of people, almost in real time, and instead pretends that everything is OK, on paper.

 

____________________

Open peer review is something others should do, Open science is something you could do

This post follows Peer review is not independent validation, where it is argued that independent validation is one of the pillars of the scientific method. Peer review is only a part of the editorial process. Of course peer review is better than nothing, but it is only a social form of validation, much less rigorous than what the scientific method asks.

If the author follows the path of Open science, then the reader has the means to perform an independent validation. This is great news, here is why.

It is much easier to do Open science than to change the legacy publishing system.

Many interesting alternatives to legacy publishing have been proposed already. There is green OA, there is gold OA (gold is for $), there is arXiv.org. There are many other versions, but the main problem is that research articles are not considered really serious unless they are peer reviewed. Legacy publishing provides this; it is actually the only service it provides. People are used to reviewing for established journals and any alternative publishing system has to be able to compete with that.

So, if you want to make an OA platform, it’s not serious unless you find a way to make other people peer review the articles. This is hard!

People are slowly understanding that peer review is not what we should aim for. We are so used to the idea that peer review is that great thing which is part of the scientific method. It is not! Independent validation is the thing; peer review is an old, unscientific way (very useful, but not useful enough to allow research findings to pass the validation filter).

The alternative, which is Open science, is that the authors of research findings make open all the data, procedures, programs, etc, everything they have. In this way, any other group of researchers, anybody else willing to try can validate those research findings.

The comparison is striking. The reviewers of the legacy publishing system don’t have magical powers, they just read the article, they browse the data provided by the very limited article format and they make an opinion about the credibility of the research findings. In the legacy system, the reviewer does not have the means to validate the article.

In conclusion, it is much simpler to do Open science than to invent a way to convince people to review your legacy articles. It is enough to make open your data, your programs, etc. It is something that you, the author can do.

You don’t have to wait for the others to do a review for you. Release your data, that’s all.

Peer review is not independent validation

People tend to associate peer review with science. As an example, even today there are still many scientists who believe that an arXiv.org article is not a true article unless it has been peer reviewed. They can’t trust the article, without reading it first, unless it passed peer review as a part of the publishing process.

Just because a researcher puts a latex file in the arXiv.org (I continue with the example), it does not mean that the content of the file has been independently validated, as the scientific method demands.

The part which slips from the attention is that peer review is not independent validation.

Which means that a peer reviewed article is not necessarily one which passes the scientific method filter.

This simple observation is, to me, the key for understanding why so many research results communicated in peer reviewed articles can not be reproduced, or validated, independently. The scale of this peer reviewed article rot is amazing. And well known!

Peer review is a part of the publishing process. By itself, it is only a social validation. Here is why: the reviewers don’t try to validate the results from the article because they don’t have the means to do it in the first place. They have access only to a story told by the authors. All the reviewers can do is to read the article and to express an opinion about its credibility, based on the reviewers’ experience, competence (and biases).

From the point of view of legacy publishers, peer review makes sense. It is the equivalent of the criteria used by a journalist in order to decide to publish something or not. Not more!

That is why it is very important for science to pass from peer review to validation. This is possible only in an Open Science frame. Once more (in this Open(x) fight) the medical science editors lead. From “Journal Editors To Researchers: Show Everyone Your Clinical Data” by Harlan Krumholz, a quote:

“[…] last Wednesday, the editors of the leading medical journals around the world made a proposal that could change medical science forever. They said that researchers would have to publicly share the data gathered in their clinical studies as a condition of publishing the results in the journals. This idea is now out for public comment.

As it stands now, medical scientists can publish their findings without ever making available the data upon which their conclusions were based.

Only some of the top journals, such as The BMJ, have tried to make data sharing a condition of publication. But authors who didn’t want to comply could just go elsewhere.”

This is much more than simply saying “peer review is bad” (because is not, only that it is not a part of the scientific method, it is a part of the habits of publishers). It is a right step towards Open Science. I repeat here my opinion about OS, in the shortest way I can:

There are 2 parts involved in a research communication: A (author, creator, the one who has something to disseminate) and R (reader). The legacy publishing process introduces a B (reviewer). A puts something in a public place, B expresses a public opinion about this, and R uses B’s opinion as a proxy for the value of A’s thing, in order to decide if A’s thing is worthy of R’s attention or not. Open Access is about the direct interaction of A with R, Open Peer-Review is about the transparent interaction of A with B, as seen by R, and Validation (as I see it) is improving the format of A’s communication so that R can make a better decision than the social one of counting on B’s opinion.

That’s it! The reader is king and the author should provide everything to the reader, for the reader to be able to independently validate the work. This is the scientific method at work.

 

The replicant

This is a molecular machine designed as a patch which would upgrade biological ribosomes. Once it attaches to a ribosome, it behaves in almost the same way as the synthetic ribosome Ribo-T, recently announced in Nature 524, 119–124 (06 August 2015), doi:10.1038/nature14862 [1]. It thus enables an orthogonal genetic system (i.e., citing from the mentioned Nature letter, “genetic systems that could be evolved for novel functions without interfering with native translation”).

The novel function it is designed for is more ambitious than specialized new protein synthesis. It is, instead, a two-way translation device between real chemistry and programmable artificial chemistry.

It behaves like a bootstrapper in computing. It is itself simulated in chemlambda, an artificial chemistry which was recently proposed as a means towards molecular computers [2].  The animation shows one of the first successful simulations.

 

spiral_boole_construct2_orig_in

 

With this molecular device in place, we can program living matter by using living cells themselves, instead of using, for example, complex, big 3D DNA printers like the ones developed by Craig Venter.

The only missing step, until recently, was the discovery of the basic translation of the building blocks of chemlambda into real chemistry.

I am very happy to make public a breakthrough result by Dr. Eldon Tyrell/Rosen, a genius who left academia some years ago and pursued a private career. It appears that he got interested early in this mix of lambda calculus, geometry and chemistry and he managed to reproduce, with real chemical ingredients, two of the chemlambda graph rewrites: the beta rewrite and one of the DIST rewrites.

He tells me in a message  that he is working on prototypes of replicants already.

He suggested the name “replicant” instead of a synthetic ribosome because a replicant, according to him, is a device which replicates a computer program (in chemlambda molecular form) into a form compatible with the cellular DNA machine, and conversely, it may convert certain (RNA) strands into chemlambda molecules, i.e. back into the synthetic form corresponding to a computer program.

[1] Protein synthesis by ribosomes with tethered subunits,  C. Orelle, E. D. Carlson, T. Szal,  T. Florin,  M. C. Jewett, A. S. Mankin
http://www.nature.com/nature/journal/v524/n7563/full/nature14862.html

[2] Molecular computers, M Buliga
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

[This post is a reply to +Yonatan Zunger  post
https://plus.google.com/u/0/+YonatanZunger/posts/6a3C5Nm5fNS
where he shows that the INCEPT DATE of the Blade Runner replicant Roy Batty appears to be 8 Jan, 2016.
So here is a replicant, in the inception phase 🙂 ]

PS: The post appeared as well in the chemlambda collection:

https://plus.google.com/+MariusBuliga/posts/jQTh741YYdP

Res vs objectus

Objects are evidence. If reality is the territory, then objects are on the map. Objective reality is to be compared with bureaucracy.
If reality is not objective, then how is it? Real, of course. Evidence is a map of the real. Passive, done already, laid before the court, ready to be used in the argumentation.
Who makes the map has the power over reality, in the same way as bureaucrats have power over the people.
The confusion between res and objectus has very concrete effects in our society.
We communicate on the net via evidence. The technical solutions are the ones we have for historical reasons, like wars and analytic philosophy.
We are discontent about the lack of privacy of evidence.
Objects as evidence of happiness are not the same as happiness. We are discontent because objects are not enough, when we are told that they should be.
In this setting, who controls the map making mechanism, who controls the data processing, has the power.
Ultimate bureaucracy presented as the unique way. As the only real way. A lie.

After the IoT comes Gaia

They say that sneakernet does not scale. If you think about the latest product of Amazon, the AWS Import/Export Snowball, this clumsy suitcase holds less data than a grain of pollen.

Reason from these arguments:

  • the Internet of Things is an extension of the internet, where lots of objects in the real world will start to talk and to produce heaps of data
  • so there is a need for a sneakernet solution in order to move these data around,  because the data are only passive evidence and they need to be processed,
  • compared though with biology, this quantity of data is tiny
  • and moreover biology does not function via signal transmission, it functions via signal transduction, a form of sneakernet,

you’ll get to the unavoidable conclusion that the IoT is only a small step towards a global network which works with chemical-like interactions, transports data (which are active themselves) via signal transduction, and extends real world biology.

After the IoT comes Gaia. A technological version, to be clear.

Some time in the future, but not yet when we could say that the Gaia extension has appeared, there will still be a mixture of old-ways IoT and new biological-like ways. Maybe there will be updates, say of the informational/immunity OS, delivered via anycasts issued from tree-like antennas which produce pollen particles. The “user” (what an ugly reductionistic name) breathes them and the update starts to work.

The next scene may be one which describes what happens if somebody finds out that some antennas produce faulty grains. Maybe some users have been alerted by their (future versions of) smartwatches that they inhaled a possibly terminal vector.

The faulty updates have to be identified, tracked (chemically, in the real world) and annihilated.

The users send a notification via the old internet that something is wrong and somewhere, perhaps on the other side of the planet, a mechanical turk identifies the problem, runs some simulations of the real chemistry with his artificial chemistry based system.

His screen may show something like this:

mask_of_anarchy_new_short

Once a solution is identified, the artificial chemistry solution is sent to a Venter printer close to the location of the faulty antenna and turned real. In a matter of hours the problem is solved, before the affected users’ metabolisms go crazy.

Local machines

Suppose there is a deep conjecture which haunts the imagination of a part of the mathematical community. By the common work of many, maybe even spread over several centuries and continents, slowly a solution emerges and the conjecture becomes a theorem. Beautiful, or at least horrendously complex, theoretical machinery is invented and put to the task. Populations of family members experienced extreme boredom when faced with the answers to the question “what are you thinking about?”. Many others expressed a moderate curiosity in the weird preoccupations of those mathematicians, some, say, obsessed with knots or zippers or other childish activities. Finally, a constructive solution is found. This is very very rare and much sought for, mind you, because once we have a constructive solution we may run it on a computer. So we do it, perhaps for the immense benefit of the finance industry.

Now here is the weird part. No matter what programming discipline is used, no matter what the programmers’ preferences and beliefs are, the computer which runs the program is a local machine, which functions without any appeal to meaning.

I stop a bit to explain what a local machine is. Things are well known, but maybe it is better to have them clear in front of the eyes. Whatever happens in a computer is only a physically local modification of its state. If we look at the Turing machine (I’ll not argue about the fact that computers are not exactly TMs; let’s take this as a simplification which does not affect the main point), then we can describe it as well as a stateless Turing machine, simply by putting the states of the machine on the tape, and reformulating the behaviour of the machine as a family of rewrite rules on local portions of the tape. It is fully possible, well known, and it has the advantage of working even if we don’t add one or many moving heads into the story, or indirection, or any ingredient other than the rule that these rewrites are done randomly. Believe it or not (if not then read

Turing machines, chemlambda style
http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

for an example), but that is a computer, regardless of what technological complexities are involved in really making one.

times_only_bb_short_opt

(this is an animation showing a harmonious interaction between a chemical molecule derived from a lambda term, in the upper side of the image, and a Turing machine whose tape is visible in the lower side of the image)
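To make this concrete, here is a minimal sketch in Python of such a stateless machine. It is my own toy example, not taken from the chemlambda repository: a unary-increment machine whose state symbol sits on the tape itself, with the whole behaviour given by two local rewrite rules applied at randomly chosen positions.

```python
import random

# Sketch: a Turing machine with no separate head or state register.
# The state symbol "q0" is an ordinary tape symbol, and the machine is
# just a family of local rewrite rules on two-cell windows, applied at
# randomly chosen positions. Machine and symbols are made up here.
RULES = {
    ("q0", "1"): ("1", "q0"),    # in state q0 over a 1: keep the 1, move right
    ("q0", "_"): ("1", "HALT"),  # in state q0 over a blank: write 1 and halt
}

def step(tape):
    # Collect every position where some rule applies, then fire one at random.
    sites = [i for i in range(len(tape) - 1) if (tape[i], tape[i + 1]) in RULES]
    if not sites:
        return False  # no rewrite possible: the computation is finished
    i = random.choice(sites)
    tape[i], tape[i + 1] = RULES[(tape[i], tape[i + 1])]
    return True

tape = ["q0", "1", "1", "1", "_"]  # "head" at the left end of three 1s
while step(tape):
    pass
print(tape)  # ['1', '1', '1', '1', 'HALT'] : the unary number 3 became 4
```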

Let’s get back to the algorithmic form of the solution of the mathematical problem. On the theoretical side there are lots of high meanings and they were discovered by a vast social collaboration.

But the algorithm run by the computer, in the concrete form it is run, edits out any such meaning. It is a well prepared initial tape (say “intelligently designed”, hope you have taken your daily dose of humour), which is then stupidly, randomly, locally rewritten until no more reaction is possible. That gives the answer.

If it is possible to advance a bit, even with this severe constraint of ignoring global semantics, then maybe we find really new stuff, which is not visible under all these decorations called “intelligent”, or high level.

[Source:

https://plus.google.com/u/0/+MariusBuliga/posts/5z4UBwq4Y7G ]

Life at molecular scale

UPDATE: Chemlambda collection of animations.

Recently there are more and more amazing results in techniques allowing the visualization of life at molecular scale. Instead of the old story about soups of species of molecules, now we can see individual molecules in living cells [1], or that the coiled DNA has a complex chemical configuration [2], or that axons and dendrites interact in a far more complex way than imagined before [3]. Life is based on a complex tangle of evolving individuals, from the molecular scale onwards.

To me, this gives hope that at some point chemists will start to seriously consider the possibility of building such structures, such molecular computers [4], from first principles.

The image is a screencast of a chemlambda computation, done with quiner mist.

bigpred_train_egg_mist_blue

[1] Li et al., “Extended Resolution Structured Illumination Imaging of Endocytic and Cytoskeletal Dynamics,” Science.

[2] Structural diversity of supercoiled DNA, Nature Communications 6, Article number: 8440, doi:10.1038/ncomms9440,
http://www.nature.com/ncomms/2015/151012/ncomms9440/full/ncomms9440.html

[3] Saturated Reconstruction of a Volume of Neocortex, Cell, Volume 162, Issue 3, p648–661, 30 July 2015
http://www.cell.com/cell/abstract/S0092-8674%2815%2900824-7
and video:

[4] Molecular computers
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

Molecular computers in real life

A molecular computer [1] is a single molecule which transforms into another, predictable one, by a cascade of random chemical reactions mediated by a collection of enzymes, without any external control.

composite_2

We could use the artificial chemistry chemlambda to build real molecular computers. There is a github repository [2] where this model is implemented and various demos are available.

By using molecular bricks which can play the role of the basic elements of chemlambda we can study the behaviour of real molecules which suffer hundreds or thousands of random chemical reactions, but without having to model them on supercomputers.
A molecule designed like this will respect the chemlambda predictions for a while… We don’t know for how long, but there might be a window of opportunity which would allow a huge leap in synthetic biology. Imagine, instead of simple computations with a dozen boolean gates, the possibility of chemically computing recursive but not primitive recursive functions.

More interesting, we might search for chemlambda molecules which do whatever we want them to do. We can build arbitrarily complex molecules, called chemlambda quines, which have all the characteristics of living organisms.

We may dream bigger. Chemlambda can unite the virtual and the real worlds… Imagine a chemical lab which takes as input a virtual chemlambda molecule and outputs the real world version, much like Craig Venter’s printers. The converse is a sensor, which takes a real chemical molecule, compatible with chemlambda and translates it into a virtual chemlambda molecule.

Applications are huge, some of them beneficial and others really scary.

For example, you may extend your immune system in order to protect your virtual identity with your own, unique antibodies.

As for using a sensor to make a copy of yourself, at the molecular level, this is out of reach in the near future, because a real living organism works by computations at a scale which dwarfs human technical possibilities.

The converse is possible though. What about having a living computer, of the size of a cup, which performs at the level of the whole collection of computers available now on Earth? [3]

References:

[1] this is the definition which I use here, taken from the articles Molecular computers and Build a molecular computer (2015)

[2] https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

[3] The internet can be your pet

How I hit a wall when I used the open access and open source practices when applying for a job

UPDATE 11.10.2015. What happened since the beginning of the “contest”? Nothing. My guess is that they are going to follow the exact literal sense of their announcement. It is a classic sign of cronyism. They write 3 times that they are going to judge according to the file submitted (the activity of the candidate as it looks from the file), but they give no criteria other than the ones from an old law. In my case I satisfy these criteria, of course, but later on they write about “candidates considered eligible”, which literally means candidates that an anonymous board considers eligible, and not simply candidates eligible according to the mentioned criteria.

Conclusion: this is not news, it is dog bites man.

I may be wrong. But in case I’m right, the main subject (namely what happens in a real situation with open access practices in the case of a job opening) looks like a frivolous, alien complaint.

The split between:
– a healthy, imaginative, looking to the future community of individuals and
– a kafkian old world of bureaucratic cronies
is growing bigger here in my country.

__________

UPDATE 14.10.2015: Suppositions confirmed. The results have been announced today, only verbally, the rest is shrouded in mystery. Absolutely no surprise. Indeed, faced with the reality of local management, my comments about open access and open source practices are like talking about a TV show to cavemen.

Not news.

There is a statement I want to make, for those who read this and have only access to info about Romanians from the media, which is, sadly, almost entirely negative.

It would be misleading to judge the local mathematicians (or other creative people, say) based on these sources. There is nothing wrong with many Romanian people. On the contrary, these practices, which show textbook signs of corruption, are typical for the managers of state institutions from this country. They are to be blamed. What you see in the media is the effect of the usual handshake between bad leadership and poverty.

Which sadly manifests itself everywhere in the state institutions of Romania, in ways far beyond ridicule.

So next time when you shall interact with one such manager, don’t forget who they are and what they are really doing.

I am not going to pursue a crusade against corruption in Romania, because I have better things to do. Maybe I’m wrong and what is missing is more people doing exactly this. But the effect of corrupt practices is that the state institution becomes weaker and weaker. So, for psychohistorical reasons 🙂 there is no need to fight dying institutions.

Let’s look to the future, let’s do interesting stuff!

________________________

This is real: there are job openings at the Institute of Mathematics of the Romanian Academy, announced by the pdf file Concurs-anunt-2015.pdf.

The announcement is in Romanian but you may notice that they refer to a law from 2003, which asks for a CV, research memoir, list of publications and ten documents, from kindergarten to PhD. On paper.

That is only the ridicule of bureaucracy, but the real problems were somewhere else.

There is no mention of selection criteria or of the members of the committee, but it is written 3 times in the announcement that every candidate’s work will be considered only as it appears from looking at the file submitted.

They also ask that the scientific, say, part of the submission be sent by email to two addresses which you can grasp from the announcement.

So I did all the work and I hit a wall when I submitted by email.

I sent them the following links:

– my homepage which has all the info needed (including links to all relevant work)
http://imar.ro/~mbuliga/

– link to my arxiv articles
http://arxiv.org/a/buliga_m_1
because all my published articles (and all my cited articles, published or not) are available at arXiv

– link to the chemlambda repository for the programming, demos, etc part
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

I was satisfied because I had finished this, when I got a message from DanTimotin@imar.ro telling me that I have to send them, as attachments, the pdf files of at least 5 relevant articles.

In the paper file I put 20+ of these articles (selected from 60+), but they wanted the pdf files as well.

I don’t have the pdfs of many legacy published articles because they are useless for open access, you can’t distribute them publicly.
Moreover I keep the relevant work I do as open as possible.

Finally, how could I send the content of the github repository? Or the demos?

So I replied by protesting against the artificial difference he makes between a link and the content available at that link, and I sent a selection of 20 articles with links to their arXiv versions.

He replied with a message announcing that if I want my submission to be considered then I have to send 5 pdfs attached.

I physically visited Dan Timotin to talk and to understand why a link is different from the content available at that link.

He told me that these are the rules.

He told me that he is going to send the pdfs to the members of the committees and that it might happen that they don’t have access to the net when they look at the work of the candidate.

He told me that they can’t be sure that the arXiv version is the same as the published version.

He has nothing to say about the programming/demo/animations part.

He told me that nobody will read the paper file.

I asked if he is OK if I make public this weird practice and he agreed to that.

Going back to my office, I managed to find 9 pdfs of the published articles. In many other cases my institute does not have a subscription to the journals where my articles appeared, so I don’t think it is fair to be asked to buy back my own work, only because of the whims of one person.

Therefore I sent Dan Timotin a last message where I attached these 9 pdfs, explained that I can’t access the others, and firmly demanded that all the links sent previously be forwarded to the (mysterious, anonymous, net-deprived, and lacking public criteria) committee, otherwise I would consider this an abuse.

I wrote that I regret this useless discussion, provoked by the lack of transparency and by the hiding behind an old law, which should not stop a committee of mathematicians from judging the work of a candidate as it is, and not as it appears after an abusive filtering.

After a couple of hours he replied that he will send the files and the links to the members of the committee.

I have to believe his word.

That is what happens, in practice, with open access and open science, at least in some places.

What could be done?

Should I wait for the last bureaucrat to stop passively supporting the publishing industry by actively opposing open access practices?

Should I wait for all politicians to pass fake PhDs under the supervision of a very complacent local Academia?

Should I feel ashamed of being abused?

Deterministic vs random, an example of grandiose shows vs quieter, functioning anarchy

In the following video you can see, at the left, the deterministic evolution and, at the right, the random evolution of the same molecule, duplex.mol from the chemlambda repository. They take about the same time.

The deterministic one is like a ballet, it has a comprehensible development, it has rhythm and drama. Clear steps and synchronization.

The random one is more fluid, less symmetric, more mysterious.

What do you prefer, a grand synchronized show or a functioning, quieter anarchy?

Which one do you think is more resilient?

What is happening here?

The molecule is inspired from lambda calculus. The computation which is encoded is the following. Consider the lambda term for the identity function, i.e. I=Lx.x. It has the property that IA reduces to A for any term A. In the molecule it appears as a red trivalent node with two ports connected, so it looks like a dangling red globe. Now, use a tree of fanouts to multiply (replicate) this identity 8 times, then build the term

(((II)(II))((II)(II)))(((II)(II))((II)(II)))

Then use one more fanout to replicate this term into two copies and reduce all. You’ll get two I terms, eventually.
In the deterministic version the following happens.

– the I term (seen as a red dangling node) is replicated (by a sequence of two rewrites) and gradually the tree of fanouts is destroyed

– simultaneously, the tree of applications (i.e. the syntactic tree of the term, but seen with the I’s as leaves) replicates by the fanout from the end

– because the reduction is deterministic, we’ll get 16 copies of I’s exactly when we’ll get two copies of the application tree, so in the next step there will be a further replication of the 16 I’s into 32 and then there will be two, disconnected, copies of the molecule which represents ((II)(II))((II)(II))

– after that, this term-molecule reduces to (II)(II), then to II, then to I, but recall that there are two copies, therefore you see this twice.

In the random version everything mixes. Anarchy. Some replications of the I’s reach the tree of applications before it has finished replicating itself, then reductions of the kind II –> I happen at the same time as replications of other pieces. And so on.
There is no separation of the stages of this computation.
And it still works!
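For readers who want to see this insensitivity to the order of rewrites on the term itself, here is a minimal sketch in Python. It is plain lambda-calculus bookkeeping, not chemlambda: it ignores the fanout/replication part and only reduces the application tree built from I’s, firing redexes in random order.

```python
import random

# Terms are either the string "I" or a pair (f, x) for the application (f x).
# The only rewrite is (I x) -> x, fired at a randomly chosen position.

def redexes(t, path=()):
    """Yield paths to every reducible subterm, i.e. every (I x)."""
    if isinstance(t, tuple):
        f, x = t
        if f == "I":
            yield path
        yield from redexes(f, path + (0,))
        yield from redexes(x, path + (1,))

def rewrite(t, path):
    """Apply (I x) -> x at the given path."""
    if not path:
        return t[1]
    f, x = t
    return (rewrite(f, path[1:]), x) if path[0] == 0 else (f, rewrite(x, path[1:]))

def reduce_random(t):
    while True:
        rs = list(redexes(t))
        if not rs:
            return t
        t = rewrite(t, random.choice(rs))   # anarchy: any redex may fire next

II = ("I", "I")
X = ((II, II), (II, II))       # the term ((II)(II))((II)(II))
print(reduce_random((X, X)))   # always 'I', whatever the random order
```

Whatever order the redexes fire in, the result is the same I: the textual shadow of the anarchy in the animation.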

I used quiner_experia, with the mol file duplex.mol. The first time, I modified all the weights to 0 (to have deterministic application) and took the rise parameter = 0 too (this is specific to quiner_experia, not present in quiner), because the rise parameter lowers the probabilities of new rewrites, exponentially, during the same step, in order to give fair chances to any subset of all possible rewrites.
Then I made a screencast of the result, without speeding it up, using safari to run the result.
For the random version I took all the weights equal to 1 and the rise parameter equal to 8 (empirically, this gives the most smooth evolution, for a generic molecule from the list of examples). Ran the result with safari and screencasted it.
Then I put the two movies one near the other (deterministic at left, random at right) and made a screencast of them running in parallel. (Almost, there is about 1/2 second difference because I started the deterministic one first, by hand).
That’s it, enjoy!
For chemlambda look at the trailer from the collections of videos I have here on vimeo.

Replication, 4 to 9

In the artificial chemistry chemlambda there exist molecules which can replicate, which have a metabolism and which may even die. They are called chemlambda quines, but a convenient shorter name is: microbes.
In this video you see 4 microbes which replicate in complex ways. They are based on a simpler microbe whose life can be seen live (as a suite of d3.js animations) at [1].
The video was done by screencasting the evolution of the molecule 5_16_quine_bubbles_hyb.mol and with the script quiner_experia, all available at the chemlambda GitHub repository [2].

[1] The birth and metabolism of a chemlambda quine. (browsers recommended: safari, chrome/chromium)
chorasimilarity.github.io/chemlambda-gui/dynamic/A_L_eggshell.html

[2] The chemlambda repository: github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

In the mood for a rant: attention conservation notices

I see attention conservation notices at the beginning of posts  belonging to some rather interesting collections. And I wonder: what is the goal of the author of such announcements?

Should I put one too?

Well, if I would put one, then it would be like this:

Wait, let me first give you some context, in the form of a rant. Then I’ll write down the attention conservation notice.

Context. I am one of those researchers who want to create new things, in new ways, in this new connected world. I fell in love with the Net the first time I caught a glimpse of it.

My position is the following: research needs to pass through a liberating process exactly like the one art went through a hundred years ago. At a much bigger scale, of course, but the idea is the same.

Much like that of a revolutionary impressionist at the time of the Art Academies, this is a thrilling and also ridiculous position.

Besides the mediocre but respectable art channels provided by the exhibitions of art academies, there is only worse. The revolutionary painters did have the street to show their works. On the street, the cute portraits and the boooring visual memes are the rule. Not to mention that, on the street, there are many other revolutionaries who are either too cool to paint, or just looking for relief from the monkey inheritance which pushes all of us to pretend we are really different.

Art academies are full of good, but statistically mediocre painters, who want to advance in their career with great determination. For them painting is not the goal, but the means towards ensuring a comfortable life. They are job oriented, like everybody else on the street. They speak the language of the street: they are professionals who, incidentally, spend their time splatting pigments on rectangular surfaces. The works are then reviewed by other professionals and finally shown (at different heights, the best ones at the eye level) to their peers, mostly.

Also to anybody else willing to spend a free afternoon in a pleasant way, by visiting a reputable exhibition. Going back home, then, acquainted with the latest professional artistic trends, the enlightened art lover may pick, from the street, something which is surely less expensive, but cute enough or modern enough to deserve the eye-level place in the art lover’s home.

These guys are certainly not going to feed a Van Gogh, except by accident. First, because he is on the street. Secondly, because it does not look professional: don’t you see that the guy uses randomly splashed colours, and even worse, you can see the traces of the brushes, instead of the polished, varnished, shitty brown finish. Thirdly, look at that cuute little boy pissing! Or that cat, btw.

You see where I’m going, right? The art lover just wants to spend some pleasant time off work. Just wants to feel he or she has human interests. And to show the Joneses he or she has that special artistic bend.

Now tell me, is an attention conservation notice going to help? Certainly, for somebody who does 5 min portraits for a living. And for those portraits, not for the other stuff. Not for the really good stuff, because the really good stuff takes work to appreciate it.

In conclusion, even if I wish sometimes to put the following attention notice:

This is openly shared work. You have to sweat to get it. If you are an academic looking for promotion please don’t steal it because you’ll be easy to find. If you are just looking for distraction then watch TV, not this post. If you want to discuss then do it after you have spent the time to get acquainted with the content, you clicked, read and understood all sources. Because otherwise you either show disrespect for my work or you look stupid

but I refrain from it.

Tree traversal magic

UPDATE: you can see and play with this online in the salvaged collection of g+ animations.

Also, read the story of the ouroboros here.

This is an artificial creature which destroys a tree only to make it back as it was. Random graph rewrites algorithm!

bigpred_tree

The creature is, of course, a walker, or ouroboros, check out the live demo, with the usual proviso that it works much better in chrome, chromium or safari than in firefox.

Artificial chemistry suggests a hypothesis about real biochemistry

What is the difference between a molecule encoded in a gene and the actual molecule produced by a ribosome from a copy of the encoding?

synthetic_bigpred

Here is a bootstrapping hypothesis based on the artificial chemistry chemlambda.

This is a universal model of computation which is supposed to be very simple, though very close to real chemistry of individual molecules. The model is implemented by an algorithm, which uses as input a data format called a mol file.
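To give an idea of how little structure a mol file has, here is a minimal reading sketch in Python. I am assuming, from memory of the repository, that each line holds a node type followed by the names of the edges attached to its ports; the node names and the two-line sample below are illustrative, not verbatim from the repo.

```python
# Sketch of a mol-style reader. Assumed format: one node per line,
# a node type (e.g. L = lambda, A = application, FO = fanout, T = termination),
# then one edge name per port. Shared edge names connect the nodes.

def parse_mol(text):
    nodes = []
    for line in text.splitlines():
        parts = line.split()
        if parts:
            nodes.append((parts[0], parts[1:]))  # (node type, edge names)
    return nodes

sample = "L 1 1 2\nA 2 3 3"        # hypothetical two-node molecule
for ntype, ports in parse_mol(sample):
    print(ntype, ports)
# The point: this plain list of typed nodes with shared edge names is the
# whole data structure, which is why it can itself be encoded as a molecule.
```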

The language used does not matter, although there may be more elegant versions than mine, like the one by +sreejith s (still work in progress though) [1].

Since the model is universal, it implies that the algorithm and the mol data structure can themselves be turned into an artificial molecule which reacts with the usual invisible enzymes from the model and does the computation of the reduction of the original molecule.

The bootstrapping hypothesis is that the original molecule is like the synthesized molecule and that the mol file format turned into a molecule is the stored version of the molecule.

In the post there is mentioned a previous post [2], where this was tried by hand for a molecule called the 20 quine, but now there are new scripts in the chemlambda repository which allow doing the same for any molecule (limited by the computer and the browser, of course).

The final suggestion is that the hypothesis can be continued along these lines, by saying that the “enzymes” which do the rewrites are (in this bootstrapping sense) the rewriting part of the algorithm.

[1] Chemlambda-py

[2] Invisible puppeteer

https://plus.google.com/+MariusBuliga/posts/2XMSyKJrPPW

Synthetic stuff. Now there is a script which allows synthesizing any chemlambda molecule, like in the previous Invisible puppeteer post.
Look in the chemlambda repository, namely at the pair synthetic.sh and synthetic.awk from this branch of the repository (i.e. the active branch).
In this animation you see the result of the “synthetic” bigpred.mol (which was the subject of the recent hacker news link).

What I did:
– bash synthetic.sh and choose bigpred.mol
– the output is the file synt.mol
– bash quineri.sh and choose synt.mol (I had a random evolution, with cycounter=150 and time_val=5 for safari; for chromium I chose time_val=10 or even 20)
– the output is synt.html
– I opened synt.html with a text editor to change a bit some stuff: at lines 109-110 I chose a bigger window (var w = 800, h = 800;) and smaller charges and gravity (look for the charge and gravity settings and modify them to .charge(-10) .gravity(.08))

Then I opened the synt.html with safari. (It also worked with chromium.) It’s a hard time for the browser because the initial graph has more than 1400 nodes. (The problem is not coming from setTimeout because, compared with other experiments, there are not as many; it comes from an obscure problem of d3.js with prototypes. Anyway, this makes firefox lousy, which is a general trend at firefox, don’t know why; chromium is OK and safari great. I write this as a faithful user of firefox!)
In this case even the safari had to think a bit about life, philosophy, whatnot, before it started.

I made a screencast with Quicktime and then sped it up progressively to 16X, in order to be able to fit it into less than 20s.

Then I converted the .mov screencast to .gif and I proudly present it to you!

It worked!

Now that’s a bit surprising because, recall, what I do is to introduce lots of new nodes and bonds, approx 6X the initial ones, which separate the molecule which I want to synthesize into a list of nodes and a list of edges. The list of edges is transformed into a permutation.

Now, during the evolution of the synt molecule, what happens is that the permutation is gradually applied (because of randomness) and it mixes with the evolution of the active pieces, which already start to rewrite.

But it worked, despite the ugly presence of a T node, the one which may sometimes create problems due to such interferences, if the molecule is not very well designed.

At the end I recall what I believe is the take away message, namely that the mol file format is a data structure which itself can be turned into a molecule.

Summer news, Ackermann related observations and things to come

Light summer post; if you want more, then follow the links.

1. After a chemlambda demo appeared on Hacker News (link), I saw a lot of interest from hacker brains (hopefully). Even if slightly misplaced (the title reads: D3.js visualization of a lambda calculus), it is an entry gate into this fascinating subject.

2. Sreejith S (aka 4lhc) works on a python port for chemlambda, called chemlambda-py. It works already, I look forward to try it when back home. Thank you Sreejith! Follow his effort and, why not, contribute?

3. Herman Bergwerf set out to write a chemlambda IDE, another great initiative which I am very eager to see working, judging by Herman’s beautiful MolView.

4. During discussions with Sreejith, I noticed some funny facts about the way chemlambda computes the Ackermann function. Some preliminaries: without cheats (i.e. closed forms) or without memoization, caching, etc, it is hopeless to try to compute Ack(4,2). The problem is not so much that the function takes huge values, but that even for modest values there are lots and lots of calls. See the rosettacode entry for the Ackermann function about that. Compared with those examples, the implementation of chemlambda in awk does not behave badly at all. There are now several mol files for various Ackermann function values which you may try. The only one which takes lots of time (but not huge memory, if you except the html output, which you can eliminate by commenting out with a # all printf lines in the awk script) is ackermann_4_1. This one works, but I still have to see how much time it takes. The interesting observation is that the number of steps (in the deterministic version) of chemlambda (mind: steps, not rewrites!) is within 1 or 2 of the number of steps of this beautiful stack-based script: ackermann script. It means that somehow the chemlambda version records the intermediary values in space, instead of stacking them for further use. Very strange, to explore!
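
For reference, the “lots and lots of calls” problem is visible already in the textbook recursive definition; a small Python sketch with a call counter:

    import sys
    sys.setrecursionlimit(100000)

    calls = 0

    def ack(m, n):
        # textbook Ackermann: recursive, no memoization, no closed forms
        global calls
        calls += 1
        if m == 0:
            return n + 1
        if n == 0:
            return ack(m - 1, 1)
        return ack(m - 1, ack(m, n - 1))

    print(ack(3, 3), calls)  # the value is only 61, but the calls number in the thousands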

5. There exist, on paper, the chemlambda v3 “enzo”. It’s very very nice, you’ll see!

Permutation-replication-composition all-in-one

This is the permutation cycle 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 1, which is replicated and composed with itself at the same time.

pwheel_8_compo

Done with pwheel_8_compo.mol from the chemlambda repo, and with quiner.sh, in the deterministic variant (all weights set to 0). The result is a pair of cycles 1 -> 3 -> 5 -> 7 -> 1 and 2 -> 4 -> 6 -> 8 -> 2.
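
A quick sanity check of that result, in plain Python (nothing chemlambda-specific here):

    cycle = {i: i % 8 + 1 for i in range(1, 9)}     # 1 -> 2 -> ... -> 8 -> 1
    composed = {i: cycle[cycle[i]] for i in cycle}  # compose the cycle with itself
    print(composed)  # 1 -> 3 -> 5 -> 7 -> 1 and 2 -> 4 -> 6 -> 8 -> 2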

See other plays with permutations in the (now deleted) collection:

https://plus.google.com/collection/UjgbX

_________________________

The mesh is the computer

The article

The mesh is a network of microtubule connectors that stabilizes individual kinetochore fibers of the mitotic spindle

announces the discovery of a new structure in the cell: the mesh.

From the abstract:

“Kinetochore fibers (K-fibers) of the mitotic spindle are force-generating units that power chromosome movement during mitosis. K-fibers are composed of many microtubules that are held together throughout their length.
Here, we show, using 3D electron microscopy, that K-fiber microtubules (MTs) are connected by a network of MT connectors. We term this network ‘the mesh’.
The K-fiber mesh is made of linked multipolar connectors. Each connector has up to four struts, so that a single connector can link up to four MTs. […]
Optimal stabilization of K-fibers by the mesh is required for normal progression through mitosis.
We propose that the mesh stabilizes K-fibers by pulling MTs together and thereby maintaining the integrity of the fiber.”

My speculation is that the mesh has not only the role of a scaffold for the microtubule structure.

Together with the microtubules and with some other (yet undiscovered or, on the contrary, very well known) parts, this is the computer.


F1.large

DNA, which fascinates us, is more like a memory device.

But the computation may be as in chemlambda. The dynamical reorganization of the mesh-microtubules-other proteins structure closely resembles a chemlambda molecule (or even a chemlambda quine).

20_20_hyb


Mentioned this here, because there is an evocative image

https://plus.google.com/+MariusBuliga/posts/V4Z2TmNAfVB

____________________

Inceptionism: an AI built on pragmatic principles turns out to be an artificial Derrida

Not willing to accept this, now they say that the artificial neural network dreams. Name: Google Deep Dream.

The problem is that the Google Deep Dream images can be compared with dreams by us humans, while in the general case of an automatic classifier of big data there is no term of comparison. How can we, the pragmatic scientists, know whether the output obtained from data (by training a neural network on other data) is full of dreamy dog eyes or not?

If we can’t trust the meaning obtained from big data by pragmatic means, then we might as well renounce analytic philosophy and turn towards continental (so-called) philosophy.

That is seen as not serious, of course, which means that the ANN dreams, whatever that means. With this stance we transform a kick in the ass of our most fundamental beliefs into perceived progress.

___________________________________________________________________

The inner artificial life of a cell, a game proposal

The Inner Life of a Cell video is an excellent, but passive, window.

It is also scripted according to usual human expectations: synchronous moves, orchestrated reactions at a large scale. This is of course either something emergent in real life, or totally unrealistic.

As you know, I propose to use the artificial chemistry chemlambda for making real, individual molecular computing devices, as explained in Molecular computers.

But much more can be aimed at, even before the confirmation that molecular computers, as described there,  can be built by us humans.

Of course, Nature builds them everywhere; we are made of them. It all works without any external control, not as a sequence of lab operations, asynchronously, in a random environment. It is very hard to understand whether there is a meaning behind the inner life of a living cell, but it works nevertheless, without the need of a semantics to streamline its workings.

So obvious, and yet so far from the IT way of seeing computation.

Despite the huge and exponential advances in synthetic biology, despite the fact that many of these advances are related to IT, despite there being more and more ways to control biological workings, I believe that there has to be a way to attack the problem of computation in biology from its basis. Empirical understanding is great and will fuel this amazing synthetic biology evolution for some time, but why not think about understanding how life works, instead of using biological parts to make functions, gates and other components of the current IT paradigm?

As a step, I propose to try to build a game-like artificial life cell, based on chemlambda. It should look and feel like the Inner life of a cell video, only that it would be interactive.

There are many bricks already available: some molecules (each with its own story and motivation) are in the chemlambda repository, others are briefly described, with animations, in the chemlambda collection.

For example:
– a centrosome and the generated microtubules like in

https://plus.google.com/+MariusBuliga/posts/1mUDCRRynfH
– kinesins as adapted walkers like in

https://plus.google.com/+MariusBuliga/posts/8agbhCH6L7B

– molecules built from other ones like RNA from DNA

– programmed computations (mixing logic and biologic)

– all in an environment looking somehow like this

https://plus.google.com/+MariusBuliga/posts/DFuT9D7coZL
Like in a game, you would not be able to see the whole universe at once, but you could choose to concentrate on this part or that part.
You could build superstructures from chemlambda quines and other bricks, then you could see what happens either in a random environment or in one where, for example, reductions happen triggered by the neighbourhood of your mouse pointer (as if the mouse pointer is a fountain of invisible enzymes).

Videos like this, about the internal working of a neuron


would become tasks for the gamer.

______________________________________________________________

Artificial life which can be programmed

Artificial life

3_27_quine_huge_short

which can be programmed

ttttt

__________________________________________________________________________________


An apology of molecular computers and answers to critics

This is how a molecular computer would look, if seen with a magically powerful microscope. It is a single molecule which interacts randomly with other molecules, called “enzymes”, invisible in this animation.

molecular_computer_new

There is no control over the order of the chemical reactions. This is the idea, to compute without control.

The way it works is like this: whenever a reaction happens, this creates the conditions for the next reaction to happen.

There is no need to use a supercomputer to model such a molecule, nor is it reasonable to try, because of the big number of atoms.

It is enough instead to find real molecular assemblies for nodes, ports and bonds, figured here by colored circles and lines.

The only computations needed are those for simulating the family of rewrites – the chemical reactions. Every such rewrite involves up to 4 nodes, therefore the computational task is manageable.

Verify once that the rewrites are well done, independently of the situation where you want to apply them, that is all.
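
In simulation, the whole “computing without control” idea is hardly more than the following loop. This is a minimal sketch, where Rewrite objects with find_matches and apply methods are hypothetical stand-ins for the pattern matching of the left-hand sides of the rewrites:

    import random

    def reduce_molecule(graph, rewrites, steps=1000):
        for _ in range(steps):
            # collect every local match of every rewrite (at most 4 nodes each)
            matches = [(rw, m) for rw in rewrites for m in rw.find_matches(graph)]
            if not matches:
                break                       # nothing applies: the molecule is inert
            rw, m = random.choice(matches)  # a random encounter with an invisible "enzyme"
            rw.apply(graph, m)              # a purely local change of the graph
        return graph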

Once such molecular compounds are found, the next task is to figure out how to build (by chemical reactions) such molecules.

But once one succeeds in building one molecule, the rest is left to Nature’s way of computing: random, local, asynchronous.

From this stage there is no need to simulate huge molecules in order to know they work. That is something given by the chemlambda formalism.

It is so simple: translate the rewrites into real chemistry (they are easy), then let go of the unneeded control from that point on.

This animation is a screencast of a part of the article Molecular computers
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html
and everything can be validated (i.e. verified on your own) by using the chemlambda repository
https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic

Now I’ll pass to a list of criticisms which, faced with the evidence, look uninformed:
1. Chemlambda is one of those rewriting systems everybody knows. An ignorant claim: while it is true that some rewrites appear all over the place, from string theory to knot theory to category theory to the geometry of interaction, the family of graphs considered is not the same, because chemlambda graphs are purely combinatorial objects which don’t need a global embedding in a schematic space-time, like all the other formalisms do. Moreover, the choice of the rewrites is such that the system works only by local rewriting, with no global control on the cascade of rewrites. No other formalism from the family does that.

2. It is well known that all this is already done in the category theory treatment of lambda calculus.

False: if one really reads what is done in category theory with lambda calculus, then one quickly figures out that it can’t do much for untyped lambda beta calculus, that is, without eta reduction. This is mentioned explicitly in Barendregt, for example, but the hype around categories and lambda calculus is so pervasive that people believe more than what is actually there.

3.  Chemical computing is old stuff: DNA computing, membrane computing, the chemical abstract machine, algorithmic chemistry.

Just because it is chemical computing, it does not mean that it is in the family mentioned.

The first name of chemlambda was “chemical concrete machine”, and there are comparisons there with the chemical abstract machine
http://arxiv.org/abs/1309.6914
(btw, I see that some people now discover “catalysts”, without credits in the written papers)
The cham is a formalism working with multisets of molecules, not with individual ones, and the computation is done by what corresponds to lab operations (splitting a solution in two, heating, cooling, etc.).
The membrane computing work is done around membranes which enclose containers of multisets of molecules, the membranes themselves being abstract concepts of a global nature, while in reality, as well as in chemlambda, everything is a molecule. Membranes exist in reality, but they are made of many molecular compounds.
DNA computing is an amazing research subject, which may be related to chemlambda if there is a realization of chemlambda nodes, ports and bonds, but not otherwise, because there is not, to my knowledge, any model in DNA computing with the properties: individual molecules, random reactions, no lab operations.
Algorithmic chemistry is indeed very much related to chemlambda, by the fact that it proposes a chemical view on lambda calculus. But from this great insight, the paths are very different. In algorithmic chemistry the application operation from lambda calculus represents a chemical reaction and the lambda abstraction signals a reactive site. In chemlambda the application and the lambda abstraction correspond to atoms of molecules. Besides, chemlambda is not restricted to lambda calculus: only some of the chemlambda molecules can be put in relation with lambda terms, and even for those, the reactions they enter don’t guarantee that the result is a molecule for a lambda term.

Conclusion: if you are a chemist, consider chemlambda; there is nothing like it already proposed. The new idea is to let go of control and instead chain the randomly appearing reactions by their spatial patterns, not by lab operations, nor by impossibly sophisticated simulations.
Even if in reality there would be more constraints (coming from the real spatial shapes of the molecules constructed from these fundamental bricks) this would only influence the weights of the random encounters with the enzymes, thus not modifying the basic formalism.
And if it works in reality, even only for situations where there are cascades of tens of reactions, not hundreds or thousands, even that would be a tremendous advance in chemical computing, when compared with the old idea of copying boolean gates and silicon computer circuits.

______________________________________

Appeared also in the chemlambda collection microblog

https://plus.google.com/+MariusBuliga/posts/DE6mWMbieFk

______________________________________

What if… it can be done? An update of an old fake news post

In May 2014 I made a fake news post (with the tag WHAT IF) called Autodesk releases Seawater. It was about this big name who just released a totally made up product called Seawater.

“SeaWater is a design tool for the artificial life based decentralized Internet of Things.”

In the post it is featured this picture

[source]

scoop-of-water-magnified-990x500

… and I wrote:

“As well, it could be  just a representation of the state of the IoT in a small neighbourhood of you, according to the press release describing SeaWater, the new product of Autodesk.”

Today I want to show you this:

3_27_quine

or better go and look in fullscreen HD this video

The contents is explained in the post from the microblogging collection chemlambda

27 microbes. “This is a glimpse of the life of a community of 27 microbes (aka chemlambda quines). Initially the graph has 1278 nodes (atoms) and 1422 edges (bonds). There are hundreds of atoms refreshed and bonds made and broken at once.”

Recall that all this is done with the simplest algorithm, which turns chemlambda into an asynchronous graph rewrite automaton.

A natural development would be to go further, exactly like described in the Seawater post.

Because it can be done 🙂

_________________________________

The Internet can be your pet

or you could have a pet which embeds and runs a copy of the whole Internet.

The story from this post starts from this exploratory posting on Google plus from June 2nd, 2015, which zooms from sneakernet to delay-tolerant networking to Interplanetary Internet to nanonetworks to DNA digital data storage to molecular computers.

I’ll reproduce the final part, then I’ll pass to the Internet-as-your-pet thing.

“At this stage things start to be interesting. There is this DNA digital data storage technique
http://en.wikipedia.org/wiki/DNA_digital_data_storage
which made the news recently by claiming that the whole content of the Net fits into a spoon of DNA (prepared so to encode it by the technique).

I have not been able to locate the exact source of that claim, but let’s believe it because it sounds reasonable (if you think at the scales involved).

It can’t be the whole content of the net, it must mean the whole passive content of the net. Data. An instant (?) picture of the data, no program execution.

But suppose you have that spoonful of DNA, how do you use it? Or what about also encoding the computers which use this data, at a molecular level.

You know, like in the post about one Turing Machine, Two Turing Machines https://plus.google.com/+MariusBuliga/posts/4T19daNatzt
if you want classical computers running on this huge DNA tape.

Or, in principle, you may be able to design a molecular google search … molecule, which would interact with the DNA data to retrieve some piece of it.

Or you may just translate all programs running on all computers from all over the world into lambda calculus, then turn them into chemlambda molecules, maybe you get how much, a cup of molecular matter?

Which, attention:
– it executes as you look at it
– you can duplicate it into two cups in a short matter of time, in the real world
– which makes the sneakernet simply huge relative to the virtual net!

Which of course brings the molecular computers proposal to the fore
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html  ”

Let’s develop this a bit! (source)

The following are projections about the possible future of biochemical computations with really big data: the whole, global data produced and circulated by humans.

They are based on estimates of the information content in the biosphere from the article [1] and on a proposal for life-like molecular computing.

Here are the facts. In [1] there are given estimates about the information content of DNA in the biosphere, which are used further.

One estimate is that there are about 5 X 10^10 tonnes of DNA, in a biomass of about 2 X 10^12 tonnes, which gives a proportion of DNA in biomass of about 1/40.

This can be interpreted as: in order to run the biochemical computations with 1g of DNA, about 40g of biochemical machinery are needed.

From the estimate that the biomass contains about 5 X 10^30 cells, it follows that 4g of DNA are contained (and thus run in the biochemical computation) in 10^13 cells.

The Internet has about 3 X 10^9 computers, and the whole data stored is equivalent to about 5g of DNA [exact citation not yet identified, please provide a source].

Based on comparisons with the Tianhe-2 supercomputer (which has about 3 X 10^6 cores), it follows that the whole Internet is, as an order of magnitude, equivalent to 10^3 such supercomputers.
From [1] (and from the rather dubious equivalence of FLOPS with NOPS) we get that the whole biosphere has a power of 10^15 X 10^24 NOPS, which gives for 10^13 cells (the equivalent of 4g of DNA) about 10^17 NOPS. This shows that the power of the biochemical computation of 4g of DNA (embedded in the biochemical machinery of about 160g) is approximately of the same order as the power of computation of the whole Internet.

Conclusion until now: the whole Internet could be run in a “pet” living organism of about 200g. (Comparable to a rat.)
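
A back-of-envelope check of these estimates, in Python (the input numbers are only the ones quoted above, so this is bookkeeping, not new data):

    dna_tonnes, biomass_tonnes = 5e10, 2e12
    print(dna_tonnes / biomass_tonnes)                # 0.025, i.e. the 1/40 DNA fraction
    machinery_per_g = biomass_tonnes / dna_tonnes
    print(machinery_per_g)                            # 40 g of machinery per 1 g of DNA
    internet_as_dna_g = 5                             # the "spoonful of DNA" estimate
    print(internet_as_dna_g * (1 + machinery_per_g))  # ~205 g: the roughly 200 g "pet"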

This conclusion holds only if there is a way to map silicon and TM based computers into biochemical computations.

There is a huge difference between these two realms, which comes from the fact that the Internet and our computers are presently built as a hierarchy, with multiple levels of control, while at the same time the biochemical computations in a living cell do not have any external control (and there is no programming).

It is therefore hard to understand how to map the silicon and TM based computations (which run one of the many computation paradigms embedded into the way we conceive programming as a discipline of hierarchical control) into a decentralized, fully asynchronous biochemical computation in a random environment.

But this is exactly the proposal made in [2], which shows that in principle this can be done.

The details: in [2] there is proposed an artificial chemistry (instead of the real-world chemistry) and a model of computation which satisfies all the requirements of biochemical computations.
(See very simple examples of such computations in the chemlambda collection https://plus.google.com/u/0/collection/UjgbX )

The final conclusion, at least for me, is that provided there is a way to map this (very basic) artificial chemistry into real chemical reactions, then one day you might have the whole Internet as a copy which runs in your pet.

[1] An Estimate of the Total DNA in the Biosphere,
Hanna K. E. Landenmark,  Duncan H. Forgan,  Charles S. Cockell,
http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002168

[2] Molecular computers,
Marius Buliga, 2015
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

Crossings as pairs of molecular bonds; boolean computations in chemlambda

I collect here the slightly edited versions of a stream of posts on the subject from the microblogging chemlambda collection. Hopefully this post will give a more clear image about this thread.

Here at the chorasimilarity open notebook, the subject has been touched previously, but perhaps the handmade drawings made it harder to understand (compared with the modern 🙂 technology from the chemlambda repository):

Teaser: B-type neural networks in graphic lambda calculus (II)

(especially the last picture!)

All this comes with validation means. This is a very powerful idea: in the future, validation will replace peer review, because it is more scientific than hearsay from anonymous authority figures (aka old peer review) and because it is simpler to implement than a network of hearsay comments (aka open peer review).

All animations presented here are obtained by using the script quiner.sh and various mol files. See instructions about how you can validate (or create your own animations) here:
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

Here starts the story.

(Source FALSE is the hybrid of TRUE, boole.mol file used)

boole

Church encoding gives a way to define boolean values as terms in the lambda calculus. It is easy:

TRUE= Lx.(Ly.x)

FALSE= Lx.(Ly.y)

So what? When we apply one of these terms to two other, arbitrary terms X and Y, look what happens (arrows are beta reductions: (Lx.A)B -> A[x:=B]):

(TRUE X) Y -> (Ly.X) Y -> X (meaning Y goes to the garbage)

(FALSE X) Y -> (Ly.y) Y -> Y (meaning X goes to the garbage)

It means that TRUE and FALSE select a way for X and Y: one of them survives, the other disappears.
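
If you want to play with this selection mechanism outside chemlambda, the Church booleans fit in two lines of Python (a term is a function, application is a call):

    TRUE  = lambda x: lambda y: x   # Lx.(Ly.x)
    FALSE = lambda x: lambda y: y   # Lx.(Ly.y)

    print(TRUE("X")("Y"))   # X, and "Y" goes to the garbage
    print(FALSE("X")("Y"))  # Y, and "X" goes to the garbage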

Good: this selection is an essential ingredient for computation

Bad: too wasteful! Why send a whole term to the garbage?

Then, we can see it otherwise: there are two outcomes: S(urvival) and G(arbage), there is a pair of terms X and Y.

– TRUE makes X connect with S and Y connect with G

– FALSE makes X connect with G and Y connect with S

The terms TRUE and FALSE appear as molecules in chemlambda, each one made of two red nodes (lambdas) and a T (termination) node. But we may dispense with the T nodes, because they lead to waste, and keep only the lambda nodes. So in chemlambda the TRUE and FALSE molecules are, each, made of two red (lambda) nodes, and they have one FROUT (free out).

They look almost alike, only they are wired differently. We want to see how it looks to apply one term to X and then to Y, where X and Y are arbitrary. In chemlambda, this means we have to add two green (A) application nodes, so TRUE or FALSE applied to some arbitrary X and Y appears, each, as a 4-node molecule, made of two red (lambda) and two green (A) nodes, with two FRIN (free in) nodes corresponding to X and Y and two FROUT (free out) nodes: one corresponding to the deleted termination node, thus the G(arbage) outcome, and the other to the “output” of the lambda terms, thus the S(urvival) outcome.

But the configuration made of two green A nodes and two red L nodes is the familiar zipper which you can look at in this post

betazipper

In the animation you see TRUE (at left) and FALSE (at right), with the magenta FROUT nodes and the yellow FRIN nodes.

The zipper configurations are visible as the two vertical strings made of two green, two red nodes.

What’s more? Zippers, they do only one thing: they unzip.

The wiring of TRUE and FALSE is different. You can see the TRUE and FALSE in the lower part of the animation. I added four Arrow (white) nodes in order to make the wiring more visible. Arrow nodes are eliminated in the COMB cycle, they have only a fleeting existence.

This shows what is really happening: look at each (TRUE-left, FALSE-right) configuration. In the upper side you have 4 nodes, two magenta, two yellow, which are wired together at the end of the computation. In the case of TRUE they end up wired in an X pattern, in the case of FALSE they end up wired in a = pattern.

At the same time, in the lower side, before the start of the computation, you see the 4 white nodes which: in the case of TRUE are wired in an X pattern, in the case of FALSE are wired in a = pattern. So what is happening is that the pattern (X or =) is teleported from the 4 white nodes to the 4 magenta-yellow nodes!

The only difference between the two molecules is in this wire pattern, X vs =. But one is the hybrid of the other; hybridisation is the operation (akin to the product of knots) which has been used and explained in the post about senescence, and used again in more recent posts. You just take a pair of bonds and switch the ends. Therefore TRUE and FALSE are hybrids, one of the other.
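
Seen as pure bookkeeping, such a wiring is just a map from {X, Y} to {S, G}, and hybridisation is the switch of the two ends; a tiny sketch:

    TRUE_WIRING  = {"X": "S", "Y": "G"}   # the crossing ("X") pattern
    FALSE_WIRING = {"X": "G", "Y": "S"}   # the parallel ("=") pattern

    def hybridise(wiring):
        # take the pair of bonds and switch their ends
        (a, pa), (b, pb) = wiring.items()
        return {a: pb, b: pa}

    assert hybridise(TRUE_WIRING) == FALSE_WIRING
    assert hybridise(FALSE_WIRING) == TRUE_WIRING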

(Source Boolean wire, boolewire.mol file used )

If you repeat the pattern which is common to the TRUE and FALSE molecules, then you get a boolean wire, which is a more impressive “crossings teleporter”. This time the boxed crossings have been flattened, but the image is clear:

boolewire

Therefore, TRUE and FALSE represent choices of pairs of chemical bonds! Boolean computation (as seen in chemlambda) can be seen as management of promises of crossings.

(Source Promises of crossings, ifthenelsecross.mol file used )

You see 4 configurations of 4 nodes each, two magenta and two yellow.

In the upper left side corner is the “output” configuration. Below it and slightly to the right is the “control” configuration. In the right side of the animation there are the two other configurations, stacked one over the other, call them “B” (lower one) and “C” (upper one).

Connecting all this there are nodes A (application, green) and L (lambda, red).

ifthenelsecross

You see a string of 4 green nodes, approximately vertical, in the middle of the picture, and a “bag” of nodes, red and green, in the lower side of the picture. This is the molecule for the lambda term

IFTHENELSE = L pqr. pqr

applied to the “control” then to the “B” then to the “C”, then to two unspecified “X” and “Y” which appear only as the two yellow dots in the “output” configuration.

After reductions we see what we get.

Imagine that you put in each 4-node configuration (“control”, “B” and “C”) either a pair of bonds (from the yellow to the magenta nodes) in the form of an “X” (as in the picture), or in the form of a “=”.

“X” is for TRUE and “=” is for FALSE.

Depending on the configuration from “control”, one of the “B” or “C” configurations will form, together with its remaining pair of red nodes, a zipper with the remaining pair of green nodes.

This will have as an effect the “teleportation” of the configuration from “B” or from “C” into the “output”, depending on the crossing from “control”.

You can see this as: based on what “control” senses, the molecule makes a choice between “B” and “C” promises of crossings and teleports the choice to “output”.
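
In lambda calculus terms, the choice itself is one application away; with the Church booleans from the Python sketch above:

    IFTHENELSE = lambda p: lambda q: lambda r: p(q)(r)   # L pqr. pqr

    print(IFTHENELSE(TRUE)("B")("C"))   # B: the control promised an "X" crossing
    print(IFTHENELSE(FALSE)("B")("C"))  # C: the control promised a "="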

(Source: Is there something in the box?, boxempty.mol used)

I start from the lambda calculus term ISZERO and then I transform it into a box-sensing device.

In lambda calculus the term ISZERO has the expression

ISZERO = L a. ((a (Lx.FALSE)) TRUE)

and it has the property that ISZERO N reduces either to TRUE (if N is the number 0) or to FALSE (if N is a number other than 0, expressed in the Church encoding).

The number 0 is
0 = FALSE = Lx.Ly.y

For the purpose of this post I take also the number 2, which is in lambda calculus

2=Lx.Ly. x(xy)

(which means that x is applied twice to y)

Then, look (all reductions are BETA: (Lx.A)B -> A[x:=B]):

ISZERO 0 =
(L a. ((a (Lx.FALSE)) TRUE) ) (Lx.Ly.y) ->
((Lx.Ly.y) (Lx.FALSE)) TRUE ->
(Ly.y) TRUE -> (remark that Lx.FALSE is sent to the garbage)
TRUE (and the number itself is destroyed in the process)

and

ISZERO 2 =
(L a. ((a (Lx.FALSE)) TRUE) ) (Lx.Ly. x(xy)) ->
((Lx.Ly. x(xy)) (Lx.FALSE)) TRUE -> (fanout of Lx.FALSE performed secretly)
(Lx.FALSE) ((Lx.FALSE) TRUE) ->
FALSE (and (Lx.FALSE) TRUE sent to the garbage)

Remark that in the two cases there was the same number of beta reductions.

Also, the use of TRUE and FALSE in the ISZERO term is… none! The same reductions would have been performed with an unspecified “X” as TRUE and an unspecified “Y” as FALSE.

(If I replace TRUE by X and FALSE by Y then I get a term which reduces to X if applied to 0 and reduces to Y if applied to a non zero Church number.)
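
The same computation, as a Python sketch with Church numerals and the booleans from above (and with X instead of TRUE and Y instead of FALSE in the outputs, exactly as remarked):

    ZERO = FALSE                                  # 0 = Lx.Ly.y
    TWO  = lambda x: lambda y: x(x(y))            # 2 = Lx.Ly.x(xy)
    ISZERO = lambda a: a(lambda x: FALSE)(TRUE)   # L a.((a (Lx.FALSE)) TRUE)

    print(ISZERO(ZERO)("X")("Y"))  # X, because ISZERO 0 reduces to TRUE
    print(ISZERO(TWO)("X")("Y"))   # Y, because ISZERO 2 reduces to FALSE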

Of course we can turn all this into chemlambda reductions, but in chemlambda there is no garbage, and moreover I want to make the crossings visible. So where are the crossings, if they don’t come from TRUE and FALSE (because it should work with X instead of TRUE and Y instead of FALSE)?

Alternatively, let’s see (a slight modification of) the ISZERO molecule as a device which senses whether a number is equal to or different from 0, then transforms, according to the two cases, into an X crossing or a = crossing.

Several slight touches are needed for that.

1. Numbers in chemlambda appear as stairs of pairs of nodes FO (fanout, green) and A (application, green), as many stairs as the number which is represented. The stairs are wrapped into two L (lambda, red) nodes and their bonds.
We can slightly modify this representation so that it appears like a box of stairs with two inputs and two outputs, by adding a dangling green (A, application) node with its output connected to one of its inputs (this makes no sense in lambda calculus, but behaves well in the beta reductions as performed in chemlambda).

In the animation you can see, in the lower part of the figure:
– at left, the number 0 with an empty box (there are two Arrow (white) nodes added for clarity)
– at right, the number 2 with a box with 2 stairs
… and in each case there is this dangling A node (code in the mol file of the form A z u u)
boxempty

2. The ISZERO molecule is modified to have two FRIN (free in, yellow) and two FROUT (free out, magenta) nodes, which will be involved in the final crossing(s). This is done by a (hopefully) clever change of the translation of the ISZERO molecule into chemlambda: first, the two yellow FRIN nodes represent the “X” and the “Y” (which replace the FALSE and the TRUE, recall), and then a FOE (other fanout node, yellow) and a FI (fanin node, red) are added in strategic places.

________________________________

ArXiv is 3 times bigger than all megajournals taken together

 How big are the “megajournals” compared to arXiv?
I use data from the article

[1] Have the “mega-journals” reached the limits to growth? by Bo-Christer Björk, https://dx.doi.org/10.7717/peerj.981 , table 3

and the arXiv monthly submission rates

[2] http://arxiv.org/stats/monthly_submissions

To have a clear comparison I shall look at the window 2010-2014.

Before showing the numbers, there are some things to add.

1.  I saw the article [1] via the post by +Mike Taylor

[3] Have we reached Peak Megajournal? http://svpow.com/2015/05/29/have-we-reached-peak-megajournal/

I invite you to read it, it is interesting as usual.

2. Usually, the activity of counting articles is that dumb thing which is used by managers to hide behind, in order to not be accountable for their decisions.
Counting  articles is a very lossy compression technique, which associates to an article a very small number of bits.
I indulged in this activity because of the discussions from the G+ post

[4] https://plus.google.com/+MariusBuliga/posts/efzia2KxVzo

and its clone

[4′] Eisen’ “parasitic green OA” is the apt name for Harnad’ flawed definition of green OA, but all that is old timers disputes, the future is here and different than both green and gold OA https://chorasimilarity.wordpress.com/2015/05/28/eisen-parasitic-green-oa-is-the-apt-name-for-harnad-flawed-definition-of-green-oa-but-all-that-is-old-timers-disputes-the-future-is-here-and-different-than-both-green-and-gold-oa/

These discussions made me realize that the arXiv model is carefully edited out from reality by the creators and core supporters of green OA and gold OA.

[see more about it in the G+ variant of the post https://plus.google.com/+MariusBuliga/posts/RY8wSk3wA3c ]
Now, let’s see those numbers. Just how big is that arXiv thing compared to “megajournals”?

From [1]  the total number of articles per year for “megajournals” is

2010: 6,913
2011: 14,521
2012: 25,923
2013: 37,525
2014: 37,794
2015: 33,872

(for 2015 the number represents  “the articles published in the first quarter of the year multiplied by four” [1])

ArXiv: (based on counting the monthly submissions listed in [2])

2010: 70,131
2011: 76,578
2012: 84,603
2013: 92,641
2014: 97,517
2015: 100,628 (by the same procedure as in [1])
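
Since the whole point is a comparison of counts, here is the per-year ratio, computed from the two lists above:

    mega  = {2010: 6913, 2011: 14521, 2012: 25923, 2013: 37525, 2014: 37794, 2015: 33872}
    arxiv = {2010: 70131, 2011: 76578, 2012: 84603, 2013: 92641, 2014: 97517, 2015: 100628}
    for year in sorted(mega):
        print(year, round(arxiv[year] / mega[year], 1))
    # 2010 10.1, 2011 5.3, 2012 3.3, 2013 2.5, 2014 2.6, 2015 3.0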

This shows that arXiv is about 3 times bigger than all the megajournals taken together, despite the fact that:
– it is not a publisher
– it does not ask for APCs
– it covers fields far less attractive and prolific than the megajournals.

And that is because:
– arXiv answers a real demand from researchers: to communicate their work fast and reliably to their fellows, in a way which respects their authorship
– it is also a reaction of support for what most of them think is “green OA”, namely to put their work where it is away from the publishers’ locks.

_____________________________________

Eisen’ “parasitic green OA” is the apt name for Harnad’ flawed definition of green OA, but all that is old timers disputes, the future is here and different than both green and gold OA

See this post and the replies on G+ at [archived post].

My short description of the situation: the future is here, and it is not gold OA (nor the flawed green OA definition which ignores arXiv). So, visually:

imageedit_34_6157098125

It has never occurred to me that putting an article in a visible place (like arXiv.org) is parasitic green OA. +Michael B. Eisen calls it parasitic because he supposes that this has to come along with the real publication. But what if not?

[Added: Eisen writes in the body of the post that he uses the definition given by Harnad to green OA, which ignores the reality. It is very convenient for gold OA to have a definition of green OA which does not apply to the oldest (1991) and fully functional example of a research communication experiment which is OA and green: the arXiv.org.]
Then, compared to that, gold OA appears as progress.
http://www.michaeleisen.org/blog/?p=1710

I think gold OA, in the best of cases, is a waste of money for nothing.

A more future-oriented reply comes from +Mike Taylor
http://svpow.com/2015/05/26/green-and-gold-the-possible-futures-of-open-access/
who sees two possible futures, green (without the assumption from Eisen’s post) and gold.

I think that the future comes faster. It is already here.

Relax. Try validation instead of peer review. It is more scientific.

Definition. Peer-reviewed article: published by the man who saw the man who claims to have read it, but does not back the claim with his name.

The reviewers are not supermen. They use the information from the traditional article. The only thing they are supposed to do is to read it. This is what they use to give their approval stamp.

Validation means that the article provides enough means so that the readers can reproduce the research by themselves. This is almost impossible with an article in the format inherited from the time when it was printed on paper. But when the article is replaced by a program which runs in the browser, which uses databases, simulations, whatever means facilitate the validation, then the reader can, if he so wishes, form a scientifically motivated opinion about it.

Practically the future has come already and we see it on Github. Today. Non-exclusively. Tomorrow? Who knows?

Going back to the green-gold OA dispute, and to Elsevier’s recent change of its sharing and hosting policy for articles (which of course should have been the real subject of discussion, instead of waxing poetic about OA, only a straw man).

This is not even interesting. The discussion about OA revolves around who has the copyright and who pays (for nothing).

I would be curious to see discussions about DRM, who cares who has the copyright?

But then I realised that, as I wrote at the beginning of the post, the future is here.

Here to invent it. Open for everybody.

I took the image from this post by +Ivan Pierre and modified the text.
https://plus.google.com/+IvanPierreKilroySoft/posts/BiPbePuHxiH

_____________

Don’t forget to read the replies from the G+ post. I archived this G+ post because the platform went down. Read here why I deleted the chemlambda collection from G+.

____________________________________________________

Real or artificial chemistries? Questions about experiments with rings replications

The replication mechanism for circular bacterial chromosomes is known. There are two replication forks which propagate in two directions, until they meet again somewhere and the replication is finished.

Bidirectionalrep2

[source, found starting from the wiki page on circular bacterial chromosomes]

In the artificial chemistry chemlambda something similar can be done. This leads to some interesting questions. But first, here is a short animation which describes the chemlambda simulation.

ringduplication

The animation has two parts, where the same simulation is shown. In the first part some nodes are fixed, in order to ease the observation of the propagation of the replication, which is like the propagation of the replication forks. In the second part no node is fixed, which makes it easy to notice that eventually we get two ring molecules from one.

____________

If the visual evidence convinced you, then it is time to go to the explanations and questions.

But notice that:

  • The replication of circular DNA molecules is done with real chemistry
  • The replication of the circular molecule from the animation is done with an artificial chemistry model.

The natural question to ask is: are these chemistries the same?

The answer may be more subtle than a simple yes or no. As more visual food for thinking, take a look at a picture from the Nature Nanotechnology Letter “Self-replication of DNA rings” http://www.nature.com/nnano/journal/vaop/ncurrent/full/nnano.2015.87.html by Junghoon Kim, Junwye Lee, Shogo Hamada, Satoshi Murata & Sung Ha Park

nnano.2015.87-f1

[this is Figure 1 from the article]

This is a real ring molecule, made of patterns which themselves are made of DNA. The visual similarity with the start of the chemlambda simulation is striking.

But this DNA ring is not a DNA ring as in the first figure. It is made by humans, with real chemistry.

Therefore the boldfaced question can be rephrased as:

Are there real chemical assemblies which react as if they are nodes and bonds of the artificial chemistry?

Like actors in a play, there could be a real chemical assembly which plays the role of a red atom in the artificial chemistry, another real chemical assembly which plays the role of a green atom, another for a small atom (called “port”) and another for a bond between these artificial chemistry atoms.

On one side, this is not surprising: for example, a DNA molecule is figured as a string of letters A, C, T, G, but each letter is a real molecule. Take A (adenine)

800px-Adenine-3D-balls

[source]

Likewise, each atom from the artificial chemistry (like A (application), L (lambda abstraction), etc) could be embodied in real chemistry by a real molecule. (I am not suggesting that the DNA bases are to be put into correspondence with artificial chemistry atoms.)

Similarly, there are real molecules which could play the role of bonds. As an illustration (only), I’ll take the D-deoxyribose, which is involved in the backbone structure of a DNA molecule.

D-deoxyribose_chain-3D-balls

[source]

So it turns out that it is not so easy to answer the question, although for a chemist it may be much easier than for a mathematician.

___________

0. (A few words about validation.) If you have the alternative to validate what you read, then that is preferable to authority statements or hearsay from editors. Most often they use anonymous peer reviews which are unavailable to the readers.

Validation means that the reader can form an opinion about a piece of research by using the means which the author provides. Of course, if the means are not enough for the reader, then it is the author who takes the blame.

The artificial chemistry  animation has been done by screen recording of the result of a computation. As the algorithm is random, you can produce another result by following the instructions from
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md
I used the mol file model1.mol and quiner.sh. The mol file contains the initial molecule, in the mol format. The script quiner.sh calls the program quiner.awk, which produces a html and javascript file (called model1.html), which you can see in a browser.

I  added text to such a result and made a demo some time ago

http://chorasimilarity.github.io/chemlambda-gui/dynamic/model1.html

(when? look at the history in the github repo, for example: https://github.com/chorasimilarity/chemlambda-gui/commits/gh-pages/dynamic/model1.html)

1. Chemlambda is a model of computation based on artificial chemistry which is claimed to be very close to the way Nature computes (chemically), especially when it comes to the chemical basis of computation in living organisms.
This is a claim which can be validated through examples. This is one of the examples. There are many more; the chemlambda collection shows some of them in a way which is hopefully easy to read (and validate).
A stronger claim, made in the article Molecular computers (link further), is that chemlambda is real chemistry in disguise, meaning that there exist real chemicals which react according to the chemlambda rewrites and also according to the reduction algorithm (which does random rewrites, as if produced by random encounters of the parts of the molecule with invisible enzymes, one enzyme type per rewrite type).
This claim can be validated by chemists.
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

2. In the animation the duplication of the ring molecule is achieved, mostly, by the propagation of chemlambda DIST rewrites. A rewrite from the DIST family typically doubles the number of nodes involved (from 2 to 4 nodes), which vaguely suggests that DIST rewrites may be achieved by a DNA replication mechanism.
(List of rewrites here:
http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html )

So, from my point of view, the question I have is: are there DNA assemblies for the nodes, ports and bonds of chemlambda molecules?

3. In the first part of the animation you see the ring molecule with some (free in FRIN and free out FROUT) nodes fixed. Actually you may attach more graphs at the free nodes (yellow and magenta 1-valent nodes in the animation).

You can clearly see the propagation of the DIST rewrites. In the process, if you look closer, there are two nodes which disappear. Indeed, the initial ring has 9 nodes, while the two copies have 7 nodes each. That is because the site where the replication is initiated (made of two nodes) is not replicated itself. You can see the traces of it as a pair of bonds which each connect a free-in with a free-out node.

In the second part of the animation, the same run is repeated, this time without fixing the FRIN and FROUT nodes before the animation starts. Now you can see the two copies, each with 7 nodes, and the remnants of the initiation site, as two pairs of FRIN-FROUT nodes.

_________________________________________

Asynchronous and decentralized teleportation of Turing Machines

The idea is that it is possible to copy a computer which executes a program, during execution, and to produce working clones. All this without any control (hence decentralized) and without any synchronization.

More than that, the multiple copies are made at the same time as the computers compute.

I think it’s crazy; there’s nothing like this on offer. This is a proof of principle. The post is a (hopefully) light introduction to the subject.

If you look for validation, then follow the instructions from
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md
and use the file bbdupli.mol
This animation starts with one Turing Machine and ends with three Turing Machines.
bbdupli
Huh?
Let’s take it slower.
1. Everybody knows what a TM is. Computers are the real thing which most resembles a TM. There is a lot of stuff added, like a screen, keyboard, a designer box, antennas and wires, but essentially a computer has a memory, which is like a tape full of bits (0 and 1), and a processor, which is like a read/write head with an internal state. When the head reads a bit from the tape, the internal state changes, the head writes in the tape and then moves, according to some internal rules.
2. These rules define the TM. Any rule goes, but among them there are rules which make the TM universal. That means that there are well chosen rules such that, if you first put on the tape a string of bits, which is like the OS of the computer, and if you place the head at the beginning of this string, then the TM (which obeys those clever rules) uploads the OS and then it becomes the TM of your choice.
3. From here it becomes clear that these rules are very important. Everything else is built on them. There are many examples of rules for universal TMs; the more or less precise estimate is that, for a tape written with bits (0 and 1), the known universal TMs need about 20 rules.
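
To fix ideas, a TM really is just a rule table plus the read/write/move loop. Here is a minimal Python sketch; the rules below are the classic 2-state busy beaver, not the machine from bbdupli.mol:

    # rules map (state, read bit) -> (bit to write, head move, next state)
    RULES = {("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
             ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H")}

    tape, pos, state = {}, 0, "A"
    while state != "H":                    # "H" is the halting state
        write, move, state = RULES[(state, tape.get(pos, 0))]
        tape[pos] = write                  # blank cells read as 0
        pos += move

    print(sum(tape.values()))  # 4 ones on the tape, after 6 steps
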
4. Just a little bit earlier than the TM invention, lambda calculus was invented as well, by Alonzo Church. Initially the lambda calculus was a monumental formalism, but after the TM invention there appeared a variant of it, called untyped lambda beta calculus, which has been proved to be able to compute the same things a TM can compute.
The untyped lambda beta calculus is famous among the CS nerds and much less known by others (synthetic biologists, I’m looking at you) who think that TMs are the natural thing to try to emulate with biology and chemistry, because a TM is obviously something everybody understands, not like lambda calculus, which goes into higher and higher abstractions.
There are exceptions, the most famous in my opinion is the Algorithmic Chemistry (or Alchemy) of Fontana and Buss. They say that lambda calculus is chemistry and that one operation (called application) of LC is like a chemical reaction and the other operation (called abstraction) is like a reactive site. (Then they change their mind, trying to fit types into the story, speaking about chemicals as functions, then they use pi calculus, all very interesting but outside of the local goal of this post. Will come back later to that.)
Such exceptions aside, the general idea is: TM easy, LC hard. That is false, and I shall prove it. The side bonus is that I can teleport Turing machines.
5. If we want to compare TM with LC then we have to put them on the same footing, right? This same footing is to take the rules of TM and the reductions of LC as rewrites.
But, from the posts about chemlambda, rewrites are chemical reactions!
So let’s put all in the chemlambda frame.

http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

Conclusions:

  • LC is actually much simpler than TM, because it uses only one rewrite which is specific to it (the BETA rewrite), instead of the about 20 rules of the TM
  • LC and TM are both compatible with the chemical approach a la chemlambda, in the sense that chemlambda can be enhanced by the addition of “bits” (red and green 2-valent nodes), of the head move (the Arrow element) and of TM internal states (other 2-valent nodes), and by some “propagation” rewrites, such that chemlambda can now do both TM and LC at the same time!

In the animation you see, at the beginning, something like:
– a tree made of green fanout (FO) nodes (and yellow FRIN and magenta FROUT leaves), and
– a small ring of green (2-valent, bit 0) and pale green (a state of a TM) nodes. That small ring is a TM tape which is “abstracted”, i.e. connected to a lambda abstraction node, which itself is “applied” (by an application A node) to the fanout tree.

What happens? Eventually 3 TMs are produced (you see 3 tapes), which function independently.

They are actually in different states, because the algorithm which does all this is random (it does a rewrite only if a tossed coin happens to land the right way).

The algorithm is random (simulating asynchronous behaviour) and all rewrites are local (simulating decentralization).

This is only a simulation, but it shows that the thing is perfectly possible. The real thing would be to turn this into a protocol.

____________________________________________________________

Screen recording of the reading experience of an article which runs in the browser

The title probably needs parsing:

SCREEN RECORDING {

READING {

PROGRAM EXECUTION {

RESEARCH ARTICLE }}}

An article which runs in the browser is a program (e.g. html and javascript) which is executed by the browser. The reader has access to the article as a program, to the data and other programs which have been used for producing the article, and to all other articles which are cited.

The reader becomes the reviewer. The reader can validate, if he wishes, any piece of research which is communicated in the article.

The reader can see or interact with the research communicated. By having access to the data and programs which have been used, the reader can produce other instances of the same research (i.e. virtual experiments).

In the case of the article presented as an example, embedded in the article are an animation of the Ackermann function computation and another concerning the building of a molecular structure. These are produced by using an algorithm which has randomness built in, therefore the reader may produce OTHER instances of these examples, which may or may not be consistent with the text of the article. The reader may change parameters or produce completely new virtual experiments, or may use the programs as part of the toolbox for another piece of research.

The experience of the reader is therefore:

  • unique, because of the complete freedom to browse, validate, produce, experiment
  • not limited to reading
  • active, not passive
  • leading to trust, in the sense that the reader does not have to rely on hearsay from anonymous reviewers

In the following video there is a screen recording of these possibilities, done for the article

M. Buliga, Molecular computers, 2015, http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

This is the future of research communication.

____________________________________________________________

A neuron-like artificial molecule

This is a neuron-like artificial molecule, simulated in chemlambda.

neuron

It has been described in the older post

https://chorasimilarity.wordpress.com/2015/03/04/how-to-put-a-y-combinator-into-a-neuron-and-lock-it-with-a-quine/

Made with quiner.sh and the file neuron.mol as input.
If you want to make your own, then follow the instructions from here:

https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

It is on the list of demos: http://chorasimilarity.github.io/chemlambda-gui/dynamic/neuron.html

__________
Needs explanations, because this is not exactly the image used for a neuron in a neural network.

_________
1. In neural networks a neuron is a black box with some inputs and some outputs, which computes and sends (the same) signal at the outputs as a function of the signals from the inputs. A connection between an output of one neuron and an input of another has a weight.

The interpretation is that the outputs represent the axon of a neuron, which is like a tree with the root in the neuron’s soma. The inputs of the neuron represent the dendrites. For a neuron in a neural network, a connection between an input (leaf of a dendrite) and an output (leaf of an axon) has a weight because it models an average of many contact points (synapses) between dendrites and axons.

The signals sent represent electrical activity in the neural network.
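
For contrast with the chemical image which follows, the whole black-box neuron of point 1 fits in a few lines; a sketch (the sigmoid is just one common choice of output function):

    import math

    def neuron(inputs, weights, bias=0.0):
        s = sum(w * x for w, x in zip(weights, inputs))   # weighted sum of the inputs
        return 1.0 / (1.0 + math.exp(-(s + bias)))        # squashed into (0, 1)

    print(neuron([1.0, 0.5], [0.3, -0.7]))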

2. Here, the image is different.

Each neuron is seen as a bag of chemicals. The electrical signals are only an (easier to measure) aspect of the cascades of chemical reactions which happen inside the neuron’s soma, in the dendrites, the axon and the synapses.

Therefore a neuron in this image is the chemical reactions which happen.

From here we go a bit abstract.

3. B-type unorganised machines. In his fundamental research “Intelligent Machinery” Turing introduced his  B-type neural networks
http://www.alanturing.net/turing_archive/pages/Reference%20Articles/connectionism/Turing%27s%20neural%20networks.html
or B-type unorganised machines, which are made of more or less identical neurons (each computes a boolean function), connected via connection-modifier boxes.

A connection modifier box has an input and an output, which are part of the connection between two neurons. But it also has some training connections, which can modify the function computed by the connection modifier.

What is great is that Turing explains that the connection modifier can be made of neurons
http://www.alanturing.net/turing_archive/archive/l/l32/L32-007.html
so that actually a B-type unorganised machine is an A-type unorganised machine (same machine without connection modifiers), where we see the connection modifiers as certain, well chosen patterns of neurons.

OK, the B-type machines compute by signals sent and received by neurons nevertheless. They compute boolean values.

4. What would happen  if we pass from Turing to Church?

Let’s imagine the same thing, but in lambda calculus: neurons which reduce lambda terms, in networks which have some recurring patterns which are themselves made of neurons.

Further: replace the signals (which are now lambda terms) by their chemical equivalent — chemlambda molecules — and replace the connection between them by chemical bonds. This is what you see in the animation.

Connected to the magenta dot (which is the output of the axon) is a pair of nodes which is related to the Y combinator, as seen as a molecule in chemlambda. ( Y combinator in chemlambda explained in this article http://arxiv.org/abs/1403.8046 )
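
For reference, the Y combinator is Y = Lf.(Lx.f(xx))(Lx.f(xx)); in an eager language like Python one uses its eta-expanded variant, the Z combinator, which plays the same role. A sketch:

    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # Z manufactures recursion out of nothing: here, the factorial
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    print(fact(5))  # 120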

The soma of this neuron is a single node (green); it represents an application node.

There are two dendrites (strings of red nodes) which each have 3 inputs (yellow dots). Then there are two small molecules at the ends of the dendrites which are chemical signals that the computation stops there.

Now, what happens: the neuron uses the Y combinator to build a tree of application nodes with the leaves being the input nodes of the dendrites.

When there is no more work to do, these signalling molecules interact with the soma and the axon and everything transforms into a chemlambda quine (i.e. an artificial bug of the sort I explained previously) which is short-lived, so it dies after expelling some “bubbles” (closed strings of white nodes).

5. Is that all? You can take “neurons” whose soma is any syntactic tree of a lambda term, for example. You can take neurons which have other axons than the Y combinator. You can take networks of such neurons which build and then reduce any chemlambda molecule you wish.
__________________________________________

Artificial life, standard computation tests and validation

In previous posts from the [update: revived] chemlambda collection  I wrote about the simulation of various behaviours of living organisms by the artificial chemistry called chemlambda.
UPDATE: the links to google+ animations are no longer viable. There are two copies of the collection, which moreover are enhanced:

______________

There is more to show in this direction, but there is already an accumulation of examples:
jellyfish
20_20_hyb
9_9_hyb
As the story is told backwards, from present to the past, there will be more about reproduction later.
Now, that is one side of the story: these artificial microbes or molecules manifest life characteristics, stemming from a universal, dumb, simple algorithm, which does random rewrites as if the molecule encounters invisible rewriting enzymes.


So, the mystery is not in the algorithm. The algorithm is only a sketchy physics.

But originally this formalism has been invented for computation.


It passes very well standard computation tests, as well as nonstandard ones (from the point of view of biologists, who perhaps don’t hold views sophisticated enough to differentiate between boring boolean logic gates and recursive but not primitive recursive functions like the Ackermann function).

In the following animation you see a few seconds’ screenshot of the computation of a factorial function.

facto

Recall that the factorial is something a bit more sophisticated than an AND boolean gate, but it is primitive recursive, so it is less sophisticated than the Ackermann function.
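For reference, here are the two functions in Python (my own illustration): the factorial is primitive recursive, while the Ackermann function is recursive but not primitive recursive; it outgrows every primitive recursive function.

def factorial(n):
    # primitive recursive: a simple structural recursion
    return 1 if n == 0 else n * factorial(n - 1)

def ackermann(m, n):
    # recursive but not primitive recursive
    if m == 0: return n + 1
    if n == 0: return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(factorial(5))      # 120
print(ackermann(2, 2))   # 7, the value computed in the chemlambda demo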


During these few seconds there are about 5-10 rewrites. The whole process has several hundred of them.

How are they done? Who decides the order? How are criteria satisfied or checked, what is incremented, when does the computation stop?

That is why it is wonderful:

  • the rewrites are random
  • nobody decides the order, there’s no plan
  • there are no criteria to check (like equality of values), there are no values to be incremented or otherwise to be passed
  • the computation stops when there are no possible further rewrites (thus, according to the point of view used with the artificial life forms, the computation stops when the organism dies)

Then how does it work? Everything necessary is in the graphs and in the rewrite patterns they show. As a conceptual sketch, the whole algorithm is just the following loop.
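Here it is in Python (a skeleton of my own; find_patterns and apply_rewrite stand for the local pattern matching and the rewrite application which the repository scripts implement concretely):

import random

def reduce(graph, find_patterns, apply_rewrite, p=0.5):
    # repeat until no rewrite is possible anywhere: the organism dies
    while True:
        patterns = find_patterns(graph)    # local pattern matching only
        if not patterns:
            return graph
        pattern = random.choice(patterns)  # nobody decides the order
        if random.random() < p:            # a coin flip gates each rewrite
            graph = apply_rewrite(graph, pattern)

No plan, no values passed around, no stopping criterion other than the absence of patterns to rewrite.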

It is like in Nature, really. In Nature there is no programmer, no function, no input and output, no higher level. All these are in the eyes of the human observer, who then creates a story which has some predictive power if it is a scientific one.

All I am writing can be validated by anybody wishing to do it. Do not believe me, it is not at all necessary to appeal to authority here.

So I conclude: this is a system of a simplified world which manifests life-like behaviours and universal computation power, in the presence of randomness. In this world there is no plan, there is no control and there are no externally imposed goals.

Very unlike the Industrial Revolution thinking!

This thread of posts can be seen at once in the chemlambda collection
https://plus.google.com/u/0/collection/UjgbX

If you want to validate this wordy post yourself then go to the github repository and read the instructions
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

The live computation of the factorial is here
http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_2_mod.html

Which means that if you want to produce another random computation of the factorial, then use the file lisfact_2_mod.mol and follow the instructions.

________________________________________________________

It is time to cast doubt on any peer reviewed but not validated research article

Any peer reviewed article which does not come with at least the reviews has only a social validation. With reviews which contain only value judgements, grammar corrections and impossible-to-validate assertions, not much more trust is added.

As to the recourse to experts… what are we, a guild of wizards? Is it true because somebody says some anonymous experts have been consulted and they say it’s right or wrong?

Would you take a pill based on the opinion of an anonymous expert that it cures your disease?

Would you fly in a plane whose flight characteristics have been validated by the hear-say of unaccountable anonymous experts?

What is more than laughable is that mathematics seems to be the field with the most wizards, full of experts who willingly exchange private value opinions, but who are reluctant to state them in public.

Case by case, building on concrete examples, in an incremental manner, it is possible to write articles which can be validated by using the means they provide (and any other available), by anyone willing to do it.

It is time to renounce this wizardry called peer review and to pass to a more rigorous approach.

Hard, but possible. Of course the wizards will complain. After all, they are in a material conflict of interests, because they are both gatekeepers and arbiters, in both academic and editorial committees.

But again, why should we be happy with “it’s worthy of publication or not because I say so, but do not mention my name” when validation is possible?

The wizardry costs money directed to compliant students, produces no progress anywhere except in management metrics, and kills or stalls research fields, where advance is made harder than it should be by the mediocrity of these high, but oh so shy in public, experts, who are where they are because in their youth the world was more welcoming to researchers.

Enough!

_____________________________________________________________

Bemis and the bull

Bemis said:

“I fell at the foot of the only solitary tree there was in nine counties adjacent (as any creature could see with the naked eye), and the next second I had hold of the bark with four sets of nails and my teeth, and the next second after that I was astraddle of the main limb and blaspheming my luck in a way that made my breath smell of brimstone. I had the bull, now, if he did not think of one thing. But that one thing I dreaded. I dreaded it very seriously. There was a possibility that the bull might not think of it, but there were greater chances that he would. I made up my mind what I would do in case he did. It was a little over forty feet to the ground from where I sat. I cautiously unwound the lariat from the pommel of my saddle——”

“Your saddle? Did you take your saddle up in the tree with you?”

“Take it up in the tree with me? Why, how you talk. Of course I didn’t. No man could do that. It fell in the tree when it came down.”

“Oh—exactly.”

“Certainly. I unwound the lariat, and fastened one end of it to the limb. It was the very best green raw-hide, and capable of sustaining tons. I made a slip-noose in the other end, and then hung it down to see the length. It reached down twenty-two feet—half way to the ground. I then loaded every barrel of the Allen with a double charge. I felt satisfied. I said to myself, if he never thinks of that one thing that I dread, all right—but if he does, all right anyhow—I am fixed for him. But don’t you know that the very thing a man dreads is the thing that always happens? Indeed it is so. I watched the bull, now, with anxiety—anxiety which no one can conceive of who has not been in such a situation and felt that at any moment death might come. Presently a thought came into the bull’s eye. I knew it! said I—if my nerve fails now, I am lost. Sure enough, it was just as I had dreaded, he started in to climb the tree——”

“What, the bull?”

“Of course—who else?””

[ Mark Twain, Roughing It, chapter VII]

Like Bemis, legacy publishers hope you’ll not think the unthinkable.

That we can pass to a new form of research sharing.

In advertising they say that the public is like a bull.

When you read an article you are like a passive couch potato in front of the TV. They (the publishers, hand in hand with academic managers) cast the shows; you have the dubious freedom to tap on the remote control.

Now it is possible to do more, hard but doable, on a case by case basis. Something comparable to the experience you have in a computer game vs the one you have in front of the TV.

You can experience research actively, via research works which run in the browser. I’ll call them “articles” for lack of a better name, but articles they are not.

An article which runs in the browser should have the following features:

  • you, the reader-gamer, can verify the findings by running (playing) the article
  • so there has to be some part, if not all of the content, in a form which is executed during gameplay, not only as an attached library of programs which can be downloaded and run by the interested reader (although such an attachment is already a huge advance over the legacy publishers’ pitiful offer)
  • verification (aka validation) is up to you, and is not limited to a yes/no answer. By playing the game (as well as other related articles) you can, and you’ll be interested to, discover more, or different, or opposing results than the ones present in the passive version of the article, and why not in the mind of the author
  • as validation is an effect of playing the article, peer review becomes an obsolete, much weaker form of validation
  • peer review is anyway a very weird form of validation: the publisher, by the fact that it publishes an article, implies that some anonymous members of the research guild have read the article. So when you read the article in the legacy journal you are not even told, only hinted, that somebody from the editorial staff exchanged messages with somebody who’s a specialist, who perhaps read the article and thought it worthy of publication. This is ridiculous. That is why you’ll find in so many reviews, which you see as an author, irrelevant remarks from the reviewer, like my pet example of the reviewer who’s put off by my use of quotation signs. What the reviewer can do is very limited, so in order to give the impression that he/she did something, some proof that he/she read the article, the review comes with this sort of circumstantial evidence. Actually, for the most honest reviewer, the ideally patient and clever fellow who validates the work of the author, there is not much else to do. The reviewer has to decide if he believes it or not, from the passive form of the article received from the editor, and in the presence of the conflict of interests which comes from extreme specialisation and the low number of experts on a tiny subject. Peer review is not even a bad joke.
  • the licence should be something comparable to CC-BY-4.0, and surely not CC-BY-NC-ND. Something which leaves free both the author and the reader/gamer/author of derivative works, and at the same time allows the propagation of the authorship of the work
  • finally, the article which runs in the browser does not need a publisher, nor a DRM manager. What for?

So, bulls, let’s start to climb the tree!

Related: https://chorasimilarity.wordpress.com/2015/04/28/one-of-the-first-articles-with-means-for-complete-validation-by-reproducibility/

_________________________________________

Visit the chemlambda collection

UPDATE: Chemlambda collection of animations is the version of the collection hosted on github. The original site is under very heavy traffic (in Jan 2020). Small images, about half of the collection, due to memory limitations. But you can play the simulations in js!

__________

You don’t have to possess a Google+ account to visit the new

chemlambda collection

Kind of a micro-blogging place where you can read and see animated gifs about autonomous computing molecules, about the MicrobiomeOS in the making and easy clear intros to details of chemlambda.

If you are on G+ then don’t be shy and add it to one of your circles!

__________________________________________

let x=A in B in chemlambda

The beta reduction from lambda calculus is easily understood as the command
let x=B in A
It corresponds to the term (Lx.A)B, which reduces to A[x=B], i.e. to the term A where all instances of x have been replaced by B
(by a substitution algorithm which is left unspecified; there are many choices!).

In Christopher P. Wadsworth, Semantics and Pragmatics of the Lambda Calculus, DPhil thesis, Oxford, 1971, and later John Lamping, An algorithm for optimal lambda calculus reduction, POPL ’90 Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, p. 16-30, there are proposals to replace the beta rule by a graph rewrite on the syntactic tree of the term.

These proposals opened many research paths, related to call-by-name evaluation strategies, and of course leading to Interaction Nets.

The beta rule, as seen on the syntactic tree of a term, is a graph rewrite which is simple, but its context of application is complex if one tries to stay in the realm of syntactic trees (or of some graphs which are associated to lambda terms).

This is one example of blindness caused by semantics. Of course it is very difficult to conceive strategies of application of this local rule (it involves only two nodes and 5 edges) so that the graph after the rewrite has a global property (i.e. means a lambda term).

But a whole world is missed this way!

In chemlambda the rewrite is called BETA or A-L.
see the list of rewrites
http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html

In mol notation this rewrite is

L 1 2 c , A c 3 4  – – > Arrow 1 4 , Arrow 3 2

(and then it is the turn of the COMB rewrite to eliminate the Arrow elements, if possible)

As 1, 2, 3, 4, c are only port variables in chemlambda, let’s use others:

L A x in , A in B let  – – >  Arrow A let , Arrow B x

So if we interpret Arrow (via the COMB rewrite) as meaning “=”, we get to the conclusion that

let x=B in A

is in chemlambda

L A x in

A in B let

Nice, it is almost verbatim the same thing. Remark that the command “let” appears as a port variable too.
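To see the rewrite as pure text manipulation, here is a minimal sketch in Python (my own helper, not one of the repo’s scripts), with mol lines represented as lists:

def beta(mol):
    # look for the pattern: L 1 2 c , A c 3 4 (the two lines may sit
    # anywhere in the mol file)
    for i, l1 in enumerate(mol):
        if l1[0] != "L":
            continue
        for j, l2 in enumerate(mol):
            if l2[0] == "A" and l2[1] == l1[3]:
                _, p1, p2, _ = l1
                _, _, p3, p4 = l2
                rest = [l for k, l in enumerate(mol) if k not in (i, j)]
                # L 1 2 c , A c 3 4  - - >  Arrow 1 4 , Arrow 3 2
                return rest + [["Arrow", p1, p4], ["Arrow", p3, p2]]
    return mol

print(beta([["L", "A", "x", "in"], ["A", "in", "B", "let"]]))
# - - > [['Arrow', 'A', 'let'], ['Arrow', 'B', 'x']]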

Visually the rewrite/command is this:

beta
As you see this is a Wadsworth-Lamping kind of graph rewrite, with the distinctions that:
(a) x, A, B are only port variables, not terms
(b) there is no constraint for x to be linked to A

The price is that even if we start with a graph which is related to a lambda term, after performing such careless rewrites we get out of the realm of lambda terms.

But there is a more subtle difference: the nodes of the graphs are not gates and the edges are not wires which carry signals.

The reduction works well for many fundamental examples,
http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html
by a simple combination of the beta rewrite and those rewrites from the DIST family I wrote about in the post about duplicating trees.
https://chorasimilarity.wordpress.com/2015/05/04/the-illustrated-shuffle-trick-used-for-tree-duplication/

So, we get out of lambda calculus; what’s wrong with that? Nothing, actually. It turns out that the world outside lambda, but inside chemlambda, has very interesting features. Nobody explored them, and that is why it is so hard to discuss them without falling into one’s preconceptions (it has to be functional programming, it has to be lambda calculus, it has to be a language, it has to have a global semantics).
___________________________________________

Nothing vague in the “no semantics” point of view

I’m a supporter of “no semantics” and I’ll try to convince you that it is nothing vague in it.

Take any formalism. To any term built from this formalism there is an associated syntactic tree. Now, look at the syntactic tree and forget about the formalism. Because it is a tree, no matter how you choose to decorate its leaves, you can progress from the leaves to the root by decorating each edge. At each node of the tree you follow a decoration rule which says: take the decorations of the input edges and use them to decorate the output edge. If you suppose that the formalism is one which uses operations of bounded arity, then you can say the following thing: strictly by following rules of decoration which are local (you need to know only at most N edge decorations in order to decorate another edge) you can arrive to decorate all the tree. All the graph! And the meaning of the graph has something to do with this decoration. Actually the formalism turns out to be not about graphs (trees), but about the static decorations which appear at the root of the syntactic tree.
But, you see, these static decorations are global effects of local rules of decoration. Here enters the semantic police. Thou shalt accept only trees whose roots accept decorations from a given language. Hard problems ensue, which are heavily loaded with semantics.
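Here is the decoration game in a minimal Python sketch (a toy arithmetic formalism of my own choosing; the point is only that every step is local):

RULES = {"plus": lambda xs: xs[0] + xs[1],
         "times": lambda xs: xs[0] * xs[1]}

def decorate(tree):
    # a tree is either a leaf decoration (a number, chosen freely)
    # or a pair (operation, list of subtrees)
    if not isinstance(tree, tuple):
        return tree
    op, kids = tree
    # the local rule: decorations of input edges decorate the output edge
    return RULES[op]([decorate(k) for k in kids])

# the static decoration which appears at the root of (2+3)*4:
print(decorate(("times", [("plus", [2, 3]), 4])))   # 20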
Now, let’s pass from trees to other graphs.
The same phenomenon (a static global decoration emerging from local rules of decoration) holds for any DAG (directed acyclic graph). It is telling that people LOVE DAGs, so much so that they go to the extreme of excluding other graphs from their thinking. These are the ones who put everything in a functional frame.
Nothing wrong with this!
Decorated graphs have a long tradition in mathematics; think for example of knot theory.
In knot theory the knot diagram is a graph (with 4-valent nodes) which surely is not acyclic! However, one of the fundamental objects associated to a knot is the algebraic object called a “quandle”, which is generated from the edges of the graph, with certain relations coming from the nodes (the crossings). It is of course a very hard, fully semantically loaded problem to try to identify the knot from the associated quandle.
The difference from the syntactic trees is that the graph does not admit a static global decoration, generically. That is why the associated algebraic object, the quandle, is generated by (and not equal to) the set of edges.

There are beautiful problems related to the global objects generated by local rules. They are also difficult, because of the global aspect. It is perhaps as difficult to find an algorithm which builds an isomorphism between  two graphs which have the same associated family of decorations, as it is to  find a decentralized algorithm for graph reduction of a distributed syntactic tree.

But these kind of problems do not cover all the interesting problems.

What if this global semantic point of view makes things harder than they really are?

Just suppose you are a genius who found such an algorithm, by amazing, mind bending mathematical insights.

Your brilliant algorithm, because it is an algorithm, can be executed by a Turing Machine.

But Turing machines are purely local. The head of the machine has only local access to the tape at any given moment (forget about indirection, I’ll come back to it in a moment). The number of states of the machine is finite and the number of rules is finite.

This means that the brilliant work served to edit out the global from the problem!

If you are not content with TM because of indirection, then look no further than chemlambda (if you wish, combined with TM, like in
http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html , if you love TM), which is definitely local and Turing universal. It works by the brilliant algorithm: do all the rewrites which you can do, never mind their global meaning.

Oh, wait, what about a living cell: does it have a way to manage the semantics of the correct global chemical reaction networks which ARE the cell?

What about a brain, made of many neural cells, glia cells and whatnot? By the homunculus fallacy, it can’t have static, external, globally selected functions and terms (aka semantic).

On the other side, of course, the researcher who studies the cell, or the brain, and the mathematician who finds the brilliant algorithm, are all using heavy semantic machinery.

TO TELL THE STORY!

Not that the cell or the brain need the story in order for them to live.

In the animated gif there is a chemlambda molecule called the 28 quine, which satisfies the definition of life in the sense that it randomly replenishes its atoms while approximately keeping its global shape (thus it has a metabolism). It does this under the algorithm: do all the rewrites you can do, but a rewrite happens only if a random coin flip accepts it.

semtree
Most of the atoms of the molecule are related to operations (application and abstraction) from lambda calculus.

I modified a script a bit (sorry, this one is not in the repo) so that whenever possible the edges of this graph which MAY be part of a syntactic tree of a lambda term turn GOLD, while the others are dark grey.

They mean nothing, there’s no semantics, because for once the golden graphs are not DAGs, and because the computation consists of rewrites of graphs which don’t preserve well the “correct” decorations from before the rewrite.

There’s no semantics, but there are still some interesting questions to explore, the main one being: how does life work?

http://chorasimilarity.github.io/chemlambda-gui/dynamic/28_syn.html

UPDATES:

Louis Kauffman reply to this:

Dear Marius,
There is no such thing as no-semantics. Every system that YOU deal with is described by you and observed by you with some language that you use. At the very least the system is interpreted in terms of its own actions and this is semantics. But your point is well-taken about not using more semantic overlay than is needed for any given situation. And certainly there are systems like our biology that do not use the higher level descriptions that we have managed to observe. In doing mathematics it is often the case that one must find the least semantics and just the right syntax to explore a given problem. Then work freely and see what comes.
Then describe what happened and as a result see more. The description reenters the syntactic space and becomes ‘uninterpreted’ by which I mean  open to other interactions and interpretations. It is very important! One cannot work at just one level. You will notice that I am arguing both for and against your position!
Best,
Lou Kauffman
My reply:
Dear Louis,
Thanks! Looks that we agree in some respects: “And certainly there are systems like our biology that do not use the higher level descriptions that we have managed to observe.” Not in others; this is the base of any interesting dialogue.
Then I made another post
Related to the “no semantics” earlier g+ post [*], here is a passage from Rodney Brooks “Intelligence without representation”

“It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors.  Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.  […]
… we are not claiming that chaos is a necessary ingredient of intelligent behavior.  Indeed, we advocate careful engineering of all the interactions within the system.  […]
We do claim however, that there need be no  explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.
I claim there is more than this, however. Even at a local  level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological  connections between processes somehow encode.
However we are not happy with calling such things a representation. They differ from standard  representations in too many ways.  There are no variables (e.g. see [1] for a more  thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon  [14] noted that the complexity of behavior of a  system was not necessarily inherent in the complexity of the creature, but Perhaps in the complexity of the environment. He made this  analysis in his description of an Ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of even human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.”

This brings to mind also this quote from the end of the Vehicle 3 section of V. Braitenberg’s book Vehicles: Experiments in Synthetic Psychology:

“But, you will say, this is ridiculous: knowledge implies a flow of information from the environment into a living being or at least into something like a living being. There was no such transmission of information here. We were just playing with sensors, motors and connections: the properties that happened to emerge may look like knowledge but really are not. We should be careful with such words.”

Louis Kauffman reply to this post:
Dear Marius,
It is interesting that some people (yourself it would seem) get comfort from the thought that there is no central pattern.
I think that we might ask Cookie and Parabel about this.
Cookie and Parabel are sentient text strings, always coming in and out of nothing at all.
Well guys what do you think about the statement of Minsky?

Cookie. Well this is an interesting text string. It asserts that there is no central locus of control. I can assert the same thing! In fact I have just done so in these strings of mine.
the strings themselves are just adjacencies of little possible distinctions, and only “add up” under the work of an observer.
Parabel. But Cookie, who or what is this observer?
Cookie. Oh you taught me all about that Parabel. The observer is imaginary, just a reference for our text strings so that things work out grammatically. The observer is a fill-in.
We make all these otherwise empty references.
Parabel. I am not satisfied with that. Are you saying that all this texture of strings of text is occurring without any observation? No interpreter, no observer?
Cookie. Just us Parabel and we are not observers, we are text strings. We are just concatenations of little distinctions falling into possible patterns that could be interpreted by an observer if there were such an entity as an observer?
Parabel. Are you saying that we observe ourselves without there being an observer? Are you saying that there is observation without observation?
Cookie. Sure. We are just these strings. Any notion that we can actually read or observe is just a literary fantasy.
Parabel. You mean that while there may be an illusion of a ‘reader of this page’ it can be seen that the ‘reader’ is just more text string, more construction from nothing?
Cookie. Exactly. The reader is an illusion and we are illusory as well.
Parabel. I am not!
Cookie. Precisely, you are not!
Parabel. This goes too far. I think that Minsky is saying that observers can observe, yes. But they do not have control.
Cookie. Observers seem to have a little control. They can look here or here or here …
Parabel. Yes, but no ultimate control. An observer is just a kind of reference that points to its own processes. This sentence observes itself.
Cookie. So you say that observation is just self-reference occurring in the text strings?
Parabel. That is all it amounts to. Of course the illusion is generated by a peculiar distinction that occurs where part of the text string is divided away and named the “observer” and “it” seems to be ‘reading’ the other part of the text. The part that reads often has a complex description that makes it ‘look’ like it is not just another text string.
Cookie. Even text strings is just a way of putting it. We are expressions in imaginary distinctions emanated from nothing at all and returning to nothing at all. We are what distinctions would be if there could be distinctions.
Parabel. Well that says very little.
Cookie. Actually there is very little to say.
Parabel. I don’t get this ‘local chaos’ stuff. Minsky is just talking about the inchoate realm before distinctions are drawn.
Cookie. lakfdjl
Parabel. Are you becoming inchoate?
Cookie. &Y*
Parabel. Y
Cookie.
Parabel.

Best,
Lou

My reply:
Dear Louis, I see that the Minsky reference in the beginning of the quote triggered a reaction. But recall that Minsky appears in a quote by Brooks, which itself appears in a post by Marius, which is a follow up of an older post. That’s where my interest is. This post only gathers evidence that what I call “no semantics” is an idea which is not new, essentially.
So let me go back to the main idea, which is that there are positive advances which can be made under the constraint to never use global notions, semantics being one of them.
As for the story about Cookie and Parabel, why is it framed in a text-strings universe, and why does it discuss a “central locus of control”? I can easily imagine Cookie and Parabel having a discussion before writing was invented, say for example in a cave which much later will be discovered by modern humans at Lascaux.
I don’t believe that there is a central locus of control. I do believe that semantics is a means to tell the story, any story, as if there is a central locus of control. There is no “central” and there is very little “control”.
This is not a negative stance, it is a call for understanding life phenomena from points of view which are not ideologically loaded by “control” and “central”. I am amazed by the variety, beauty and vastness of life, and I feel limited by the semantics point of view. I see in a string of text thousands of years of cultural conventions taken for granted. I can’t forget that a string of text becomes so to me only after a massive processing which “semantics” people also take for granted, and that during this discussion most of me is doing far less trivial stuff, like collaborating and fighting with billions of other beings in my gut, breathing, seeing, hearing, moving my fingers. I don’t forget that the string of text is recreated by my brain 5 times per second.
And what is an “illusion”?
A third post
In the last post https://plus.google.com/+MariusBuliga/posts/K28auYf69iy I gave two quotes, one from Brooks “Intelligence without representation” (where he quotes Minsky en passage, but contains much more than this brief Minsky quote) and the other from Braitenberg “Vehicles: Experiments in Synthetic Psychology”.
Here is another quote, from a reputed cognitive science specialist, who convinced me about the need for a no semantics point of view with his article “Brain a geometry engine”.
The following quote is by Jan Koenderink “Visual awareness”
http://www.gestaltrevision.be/pdfs/koenderink/Awareness.pdf

“What does it mean to be “visually aware”? One thing, due to Franz Brentano (1838-1917), is that all awareness is awareness of something. […]
The mainstream account of what happens in such a generic case is this: the scene in front of you really exists (as a physical object) even in the absence of awareness. Moreover, it causes your awareness. In this (currently dominant) view the awareness is a visual representation of the scene in front of you. To the degree that this representation happens to be isomorphic with the scene in front of you the awareness is veridical. The goal of visual awareness is to present you with veridical representations. Biological evolution optimizes veridicality, because veridicality implies fitness.  Human visual awareness is generally close to veridical. Animals (perhaps with exception of the higher primates) do not approach this level, as shown by ethological studies.
JUST FOR THE RECORD these silly and incoherent notions are not something I ascribe to!
But it neatly sums up the mainstream view of the matter as I read it.
The mainstream account is incoherent, and may actually be regarded as unscientific. Notice that it implies an externalist and objectivist God’s Eye view (the scene really exists and physics tells how), that it evidently misinterprets evolution (for fitness does not imply veridicality at all), and that it is embarrassing in its anthropocentricity. All this should appear to you as in the worst of taste if you call yourself a scientist.”  [p. 2-3]

[Remark: all these quotes appear in previous posts at chorasimilarity]

_______________________________________________________________

The illustrated shuffle trick, used for tree duplication

 Duplication of trees of fanouts is done in chemlambda by the shuffle trick.
The actors are the nodes FO (fanout), FOE (the other fanout) and FI (fanin). They are related by the rewrites FI-FOE (which resembles the beta rewrite, but is a bit different), FO-FOE (which is a distributivity rewrite of the FO node wrt the FOE node) and FI-FO (which is a distributivity rewrite of the FI node wrt the FO node).
In the shuffle trick there are used the rewrites FO-FOE and FI-FOE.
Properly speaking there is no trick at all, in the sense that it is unstaged. It happens whenever there is a FO-FOE pattern for a rewrite, which in turn, after application, creates a FI-FOE pattern.
The effects are several:
  • FO nodes migrate from the root to the leaves
  • FOE nodes migrate from the leaves to the root
  • and there is a shuffle of the leaves which untangles correctly the copies of the tree.

All this is achieved not by setting duplication as a goal! As I wrote, there is no scenario behind it; it is just an emergent effect of the local rewrites. A minimal worked instance, in mol notation, follows.
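(The port names a, …, e below are mine; i, j, k, l are the newly created ports; the FO-FOE rule is the one listed in full in the complete description further down this page.) Start with a single FO node whose right output feeds a FOE:

FO a b c , FOE c d e

The FO-FOE rewrite replaces this pattern by

FI j i b , FO k i d , FO l j e , FOE a k l

Already at this scale the effects are visible: the FOE node now sits towards the root (it is fed by a), two FO nodes have migrated towards the leaves (they feed d and e), and the new FI node shuffles the connections between the two copies. In a larger tree the FI created here can land in a FI-FOE pattern, which triggers the FI-FOE rewrite and propagates the shuffle.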

Here is the shuffle trick illustrated

shuffle

For explanatory reasons the free out ports (i.e. the FROUT nodes) are coloured with red and blue.

Watch carefully to see the 3 effects of the shuffle trick!
Before: there are two pairs of free out ports, each pair made of a blue and a red out ports. After the shuffle trick there is a pair of blue ports and a pair of red ports.

Before: there is a green node (FO fanout) and two pale yellow nodes (FOE fanouts).
After: there is one pale yellow node instead (FOE fanout) and two green nodes (FO fanouts)

OK, let’s see how this trick induces the duplication of a tree made of FO nodes.
First, we have to add a FI node at the root and FOE nodes at the leaves. In the following illustrations I coloured the free in ports (of the FI node) and the free out ports (of the FROUT nodes) with red and blue, for explanatory purposes.
There are two extremal cases of a tree duplication.
The first is the duplication of a tree made of FO nodes, such that all right ports are leaves (thus the tree extends only in the left direction).
In this case the shuffle trick is applied (unstaged!) all over the tree at once!
fotreeleft
In the opposite extremal case we want to duplicate a tree made of FO nodes, such that all LEFT ports are leaves (thus the tree extends only in the RIGHT direction).
In this case the shuffle trick propagates towards the root.
fotreeright
______________________________________________________________

Lambda calculus and other “secret alien technologies” translated into real chemistry

That is the shortest description.
A project in chemical computing: lambda calculus and other “secret alien technologies” translated into real chemistry.
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html
factor4
Question:
  • which programming language is based on “secret alien technology”?

The gif is a detail from the story of the factorial
http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_2_mod.html

Join me on erasing the distinction between virtual and real!

And do not believe me, nor trust authority (because it has its own social agenda). Use instead a validation technique:

  1. download the gh-pages branch of the repo from this link https://github.com/chorasimilarity/chemlambda-gui/archive/gh-pages.zip
  2. unzip it and go the “dynamic” folder
  3. edit the copy you have of the  latest main script  https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/check_1_mov2_rand_metabo_bb.awk to change the parameters (number of cycles, weights of moves, visualisation parameters)
  4. use the command “bash moving_random_metabo_bb.sh”   (see what it contains https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/moving_random_metabo_bb.sh )
  5. you shall see the list of all .mol files from the “dynamic” folder. If you want to reproduce a demo, or better a new random run of a computation shown in a demo, then choose the file.mol which corresponds to the file.html name of the demo page http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html

An explanation of the algorithm embedded in the main script is here

https://chorasimilarity.wordpress.com/2015/04/27/a-short-complete-description-of-chemlambda-as-a-universal-model-of-computation/

but in the latest main script new moves for a busy beaver Turing machine are added; see the explanations in the work in progress

http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

On the other side, if you are a lazy person who wishes to rely on anonymous people who declare they read the work, then go play elsewhere. Here the highest standards for validity checks are used.

___________________________________________________

One of the first articles with means for complete validation by reproducibility

I have not stressed enough this aspect. The article

M. Buliga, Molecular computers

is one of the first articles which comes with complete means of validation by reproducibility.

This means that along with the content of the article, which contains animations and links to demonstrations, comes a github repository with the scripts which can be used to validate (or invalidate, of course) this work.

I can’t show you here what the article looks like, but I can show you a gif created from this video of a demonstration which appears also in the article (however, with simpler settings, in order not to punish the browser too much).

KebzRb

This is a chemistry-like computation of the Ackermann(2,2) function.

In itself, it is intended to show that if autonomous computing molecules can be created by the means proposed in the article, then impressive feats can be achieved.

This is part of the discussion about peer review and the need to pass to a more evolved way of communicating science. There are several efforts in this direction, like for example PeerJ’s paper-now, commented in this post. See also the post Fascinating: micropublications, hypothes.is for more!

Presently one of the most important pieces of this is peer review, which is the social practice consisting in declarations by one, two, four, etc. anonymous professionals that they have checked the work and consider it valid.

Instead, an ideal should be the article which runs in the browser, i.e. one which comes with means which would allow anybody to validate it up to external resources, like the works by other authors.

(For example, if I write in my article that “According to the work [1], A is true. Here we prove that B follows from A.”, then I should provide means to validate the proof that A implies B, but it would be unrealistic to ask me to provide means to validate A.)

This is explained in more detail in Reproducibility vs peer review.

Therefore, if you care about evolving the form of the scientific article, then you have a concrete, short example of what can be done in this direction.

Mind that I am stubborn enough to cling to this form of publication, not because I am afraid to submit these beautiful ideas to legacy journals, but because I want to promote new ways of sharing research by using the best content I can make.

_________________________________________

A short complete description of chemlambda as a universal model of computation

A mol file is any finite list of items of the following kind, which
satisfies the conditions written further below. Take a countable collection of
variables, denoted by a, b, c, … . There is a list of symbols:
L, A, FI, FO, FOE, Arrow, T, FRIN, FROUT. A mol file is a list made of
lines of the form:
–  L a b c    (called abstraction)
–  A a b c    (called application)
– FI a b c    (called fanin)
– FO a b c   (called fanout)
– FOE a b c  (called other fanout)
– Arrow a b   (called arrow)
– T a            (called terminal)
– FRIN a      (called free in)
– FROUT a    (called free out)

The variables which appear in a mol file are called port names.

Condition 1: any port name appears exactly twice

Depending on the symbol (L, A, FI, FO, FOE, Arrow, T, FRIN, FROUT),
every port name has two types, the first from the list (left, right,
middle), the second from the list (in,out). The convention is to write
this pair of types as middle.in, or left.out, for example.

Further I repeat the list of possible lines in a mol file, but I shall
replace a, b, c, … by their types:
– L middle.in left.out right.out
– A left.in right.in middle.out
– FI left.in right.in middle.out
– FO middle.in left.out right.out
– FOE middle.in left.out right.out
– Arrow middle.in middle.out
– T middle.in
– FRIN middle.out
– FROUT middle.in

Condition 2: each port name (which appears in exactly two places
according to condition 1) appears in a place with the type *.in and in
the other place with the type *.out
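These two conditions are mechanical to check. Here is a minimal checker in Python (my own helper, not part of the repo), with the port types listed above encoded as strings of “i”/“o”:

DIR = {"L": "ioo", "A": "iio", "FI": "iio", "FO": "ioo", "FOE": "ioo",
       "Arrow": "io", "T": "i", "FRIN": "o", "FROUT": "i"}

def check_mol(lines):
    seen = {}                  # port name -> list of directions
    for line in lines:
        sym, *ports = line.split()
        assert sym in DIR and len(ports) == len(DIR[sym]), line
        for port, d in zip(ports, DIR[sym]):
            seen.setdefault(port, []).append(d)
    for port, ds in seen.items():
        # condition 1: every port name appears exactly twice
        assert len(ds) == 2, "port %s appears %d times" % (port, len(ds))
        # condition 2: once with type *.in and once with type *.out
        assert sorted(ds) == ["i", "o"], "port %s is %s" % (port, ds)
    return True

# example: a beta pattern with its free ports closed by FRIN/FROUT nodes
print(check_mol(["L a b c", "A c d e",
                 "FRIN a", "FROUT b", "FRIN d", "FROUT e"]))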

Two mol files define the same molecule up to the following:
– there is a renaming of port names from one mol file to the other
– any permutation of the lines in a mol file gives an equivalent mol file

(The reason is that a mol file is a notation for an oriented graph
with trivalent, 2-valent or 1-valent nodes, made of the locally planar
trivalent nodes L, A, FI, FO, FOE, the 2-valent node Arrow and the
1-valent nodes T, FRIN, FROUT. In this notation the port names come
from an arbitrary naming of the arrows; the mol file is then a list of
nodes, in arbitrary order, along with port names coming from the names
of the arrows.)

(In the visualisations these graphs-molecules are turned into
undirected graphs, made of nodes of various radii and colours. To
any line from a mol file, thus to any node of the oriented graph,
correspond up to 4 nodes in the graphs from the visualisations.
Indeed, the symbols L, A, FI, FO appear as nodes of radius
main_const and colour red_col or green_col; FOE, T and Arrow have different colours. Their respective ports
appear as nodes of colour in_col or out_col, for the types “in” or
“out”, and radius “left”, “right”, “middle” for the corresponding
types. FRIN has the colour in_col, FROUT has the colour out_col.)

The chemlambda rewrites are of the following form: replace a left
pattern (LT) consisting of 2 lines from a mol file, by a right pattern
(RT) which may consist of one, two, three or four lines. This is
written as LT – – > RT

The rewrites are (with the understanding that port names 1, 2, …, c
from LT represent port names which exist in the mol file before the
rewrite, while j, k, … from RT, not appearing in LT, represent new
port names):

http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html

– A-L (or beta):
L 1 2 c , A c 4 3 – – > Arrow 1 3 , Arrow 4 2

– FI-FOE (or fan-in):
FI 1 4 c , FOE c 2 3 – – > Arrow 1 3 , Arrow 4 2

– FO-FOE :
FO 1 2 c , FOE c 3 4 – – > FI j i 2 , FO k i 3 , FO l j 4 , FOE 1 k l

– FI-FO:
FI 1 4 c , FO c 2 3 – – > FO 1 i j , FI i k 2 , FI j l 3 , FO 4 k l

– L-FOE:
L 1 2 c , FOE c 3 4 – – > FI j i 2, L k i 3 , L l j 4 , FOE 1 k l

– L-FO:
L 1 2 c , FO c 3 4 – – > FI j i 2 , L k i 3 , L l j 4 , FOE 1 k l

– A-FOE:
A 1 4 c , FOE c 2 3 – – > FOE 1 i j , A i k 2 , A j l 3 , FOE 4 k l

– A-FO:
A 1 4 c , FO c 2 3 – – > FOE 1 i j , A i k 2 , A j l 3 , FOE 4 k l

– A-T:                                 A 1 2 3 , T 3 – – > T 1 , T 2

– FI-T:                                FI 1 2 3 , T 3 – – > T 1 , T 2

– L-T:                                 L 1 2 3 , T 3 – – > T 1 , T c , FRIN c

– FO2-T:                            FO 1 2 3 , T 2 – – > Arrow 1 3

– FOE2-T:                          FOE 1 2 3 , T 2 – – > Arrow 1 3

– FO3-T:                            FO 1 2 3 , T 3 – – > Arrow 1 2

– FOE3-T:                          FOE 1 2 3 , T 3 – – > Arrow 1 2

– COMB:                           any node M (excepting Arrow) with an out
port c  , Arrow c d  – – >  M with the port c renamed to d

Each of these rewrites are seen as a chemical interaction of the mol
file (molecule) with an invisible enzyme, which rewrites the LT into
RT.

(Actually one can group the rewrites into families, so that enzymes
are needed for (A-L, FI-FOE), for (FO-FOE, L-FOE, L-FO, A-FOE,
A-FO, FI-FOE, FI-FO), for (A-T, FI-T, L-T) and for (FO2-T, FO3-T,
FOE2-T, FOE3-T). COMB rewrites, as well as the Arrow elements, have a
less clear chemical interpretation and they may not be needed, or may
even be eliminated; see further how they appear in the reduction
algorithm.)

The algorithm (called “stupid”) for applying the rewrites has
two variants: deterministic and random. Further I explain both. For
the random version some weights are needed, denoted by wei_*
where * is the name of the rewrite.

(The algorithms actually used in the demos have a supplementary family
of weights, which are used in relation with the count of the enzymes
used; I’ll skip this.)

The algorithm takes as input a mol file. Then there is a cycle which
repeats (indefinitely, or a prescribed number of times specified at
the beginning of the computation, or it stops if there are no rewrites
possible).

Then it marks all lines of the mol file as unblocked.
Then it creates an empty file of replacement proposals.

Then the  cycle is:

1- identify all LT which do not contain blocked lines for the move
FO-FOE and mark the lines from the identified LT as “blocked”. Propose
to replace the LT’s by RT’s. In the random version flip a coin with
weight wei-FOFOE for each instance of the LT identified and, according
to the coin, block or ignore the instance.

2- identify all LT which do not contain blocked lines for the moves
(L-FOE, L-FO, A-FOE, A-FO, FI-FOE, FI-FO) and mark the lines from
these LT’s as “blocked” and propose to replace them by the respective
RTs. In the random version flip a coin with the respective weight
before deciding to block and propose replacements.

3 – identify all LT which do not contain blocked lines for the moves
(A-L, FI-FOE) and mark the lines from these LT’s as “blocked” and
propose to replace them by the respective RTs. In the random version
flip a coin with the respective weight before deciding to block and
propose replacements.

4 – identify all LT which do not contain blocked lines for the moves
(A-T, FI-T, L-T) and for (FO2-T, FO3-T, FOE2-T, FOE3-T) and mark the
lines from these LT’s as “blocked” and propose to replace them by the
respective RTs. In the random version flip a coin with the respective
weight before deciding to block and propose replacements.

5- erase all LT which have been blocked and replace them by the
proposals. Empty the proposals file.

6- the COMB cycle: repeat  the application of COMB moves, in the same
style (blocks and proposals and updates) until no COMB move is
possible.

7- mark all lines as unblocked

The main cycle ends here.

The algorithm ends if (the number of cycles is attained, or there are
no rewrites performed in the last cycle, in the deterministic version,
or there were no rewrites in the last N cycles, with N predetermined,
in the random version).
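For concreteness, here is the skeleton of this algorithm in Python (a condensed sketch of mine, not the repo’s awk script; only one representative rule from step 1 and one from step 3 are included, the full rule table above extends it in the obvious way):

import random, itertools

# port directions per symbol, in the order the ports appear on a line
DIR = {"L": "ioo", "A": "iio", "FI": "iio", "FO": "ioo", "FOE": "ioo",
       "Arrow": "io", "T": "i", "FRIN": "o", "FROUT": "i"}

fresh = itertools.count()
def newp(): return "n%d" % next(fresh)

def beta(l1, l2):    # A-L:  L 1 2 c , A c 4 3  - - >  Arrow 1 3 , Arrow 4 2
    (_, p1, p2, _), (_, _, p4, p3) = l1, l2
    return [["Arrow", p1, p3], ["Arrow", p4, p2]]

def fofoe(l1, l2):   # FO-FOE, with new ports i, j, k, l
    (_, p1, p2, _), (_, _, p3, p4) = l1, l2
    i, j, k, l = newp(), newp(), newp(), newp()
    return [["FI", j, i, p2], ["FO", k, i, p3],
            ["FO", l, j, p4], ["FOE", p1, k, l]]

# rule families, in the order of steps 1-4; one representative each
FAMILIES = [[("FO", "FOE", fofoe, 0.5)], [("L", "A", beta, 0.5)]]

def cycle(mol, rnd=False):
    blocked, proposals = set(), []
    for family in FAMILIES:
        for (s1, s2, rhs, wei) in family:
            for a, b in itertools.permutations(range(len(mol)), 2):
                if a in blocked or b in blocked:
                    continue
                la, lb = mol[a], mol[b]
                # the LT pattern: last port of la is the first port of lb
                if la[0] == s1 and lb[0] == s2 and la[-1] == lb[1]:
                    if rnd and random.random() > wei:
                        continue          # the coin flip ignores this LT
                    blocked |= {a, b}
                    proposals.append(rhs(la, lb))
    # step 5: erase the blocked lines and add the proposed RT lines
    mol = [l for i, l in enumerate(mol) if i not in blocked] \
        + [l for rt in proposals for l in rt]
    return comb(mol), bool(proposals)

def comb(mol):   # step 6: eliminate Arrow c d by renaming out port c to d
    changed = True
    while changed:
        changed = False
        for i, arrow in enumerate(mol):
            if arrow[0] != "Arrow":
                continue
            c, d = arrow[1], arrow[2]
            for m in mol:
                if m is arrow or m[0] == "Arrow":
                    continue
                for k in range(1, len(m)):
                    if m[k] == c and DIR[m[0]][k - 1] == "o":
                        m[k] = d
                        changed = True
                if changed:
                    break
            if changed:
                del mol[i]
                break
    return mol

Repeating cycle until its second returned value stays False gives the deterministic version; calling it with rnd=True for a prescribed number of cycles gives the random version.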

___________________________________

Three models of chemical computing

+Lucius Meredith pointed to stochastic pi-calculus and SPiM in a discussion about chemlambda. I take this as reference: http://research.microsoft.com/en-us/projects/spim/ssb.pdf (say pages 8-13, to have an anchor for the discussion).

Analysis: SPiM side
– The nodes of the graphs are molecules, the arrows are channels.
– As described there, the procedure is to take a (perhaps huge) CRN and to reformulate it more economically, as a collection of graphs where nodes are molecules and arrows are channels.
– There is a physical chemistry model behind which tells you which probability has each reaction.
– During the computation the reactions are known, all the molecules are known, the graphs don’t change and the variables are concentrations of different molecules.
– During the computation one may interpret the messages passed by the channels as decorations of a static graph.

The big advantage is that indeed, when compared with a Chemical Reaction Networks approach, the stochastic pi calculus transforms the CRN into a much more realistic model. And a much more economical one.

chemlambda side:

Take the pet example with the Ackermann(2,2)=7 from the beginning of http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

(or go to the demos page for more http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html )

– You have one molecule (it does not matter if it is a connected graph or not, so you may think about it as being a collection of molecules instead).
– The nodes of the molecule are atoms (or perhaps codes for simpler molecular bricks). The nodes are not species of molecules, like in the SPiM.
– The arrows are bonds between atoms (or perhaps codes for other simple molecular bricks). The arrows are not channels.
– I don’t know which are all the intermediary molecules in the computation. To know that would mean to know the result of the computation beforehand. Also, there may be thousands of possible intermediary molecules. There may be an infinity of possible intermediary molecules, in principle, for some initial molecules.
– During the computation the graph (i.e. the molecule) changes. The rewrites are done on the molecule, and they can be interpreted as chemical reactions where invisible enzymes identify a very small part of the big molecule and rewrite it, randomly.

Conclusion: there is no relation between those two models.

Now, chemlambda may be related to +Tim Hutton’s artificial chemistry.
Here is a link to a beautiful javascript chemistry
http://htmlpreview.github.io/?https://github.com/ModelingOriginsofLife/ParticleArtificialChemistry/blob/master/index.html
where you see that
– nodes of molecules are atoms
– arrows of molecules are bonds
– the computation proceeds by rewrites which are random
– but, distinctly from chemlambda, the chemical reactions (rewrites) happen in a physical 2D space (i.e. based on the spatial proximity of the reactants)

As something in the middle between SPiM and jschemistry, there is a button which shows you a kind of instantaneous reaction graph!

Improve chemical computing complexity by a 1000 times factor, easily

That is, if autonomous computing molecules are possible, as described in the model shown in the Molecular computers.

To be exactly sure about the factor, I need to know the answer to the following question:

What is the most complex chemical computation done without external intervention, from the moment when the (solution, DNA molecule, whatnot) is prepared, up to the moment when the result is measured?

Attention: I know that there are several Turing complete chemical models of computation, but they all involve some manipulations (heating a solution, splitting one in two, adding something, etc.).

I believe, but I may be wrong, depending on the answer to this question, that the said complexity is not bigger than a handful of boolean gates, or perhaps some simple Turing Machines, or a simple CA.

If I am right, then compare with my pet example: the Ackermann function. How many instructions does a TM or a CA need, or how big does a circuit have to be, to do this? A factor of 1000 is a clement estimate. This can be done easily in my proposal.

So, instead of trying to convince you that my model is interesting because it is related to lambda calculus, maybe I can make you more interested if I tell you that for the same material input, the computational output is way bigger than in the best model you have.

Thank you for answering the question, and possibly for showing me wrong.

___________________________

A second opinion on “Slay peer review” article

“It is no good just finding particular instances where peer review has failed because I can point you to specific instances where peer review has been very successful,” she said.
She feared that abandoning peer review would make scientific literature no more reliable than the blogosphere, consisting of an unnavigable mass of articles, most of which were “wrong or misleading”.
This is a quote from one of the most interesting articles I read these days: “Slay peer review ‘sacred cow’, says former BMJ chief” by Paul Jump.
http://www.timeshighereducation.co.uk/news/slay-peer-review-sacred-cow-says-former-bmj-chief/2019812.article#.VTZxYhJAwW8.twitter
I commented previously about replacing peer-review with validation by reproducibility
but now I want to concentrate on this quote, which, according to the author of the article,  has been made by “Georgina Mace, professor of biodiversity and ecosystems at University College London”.This is the pro argument in favour of the actual peer review system. Opposed to it, and main subject of the article, is”Richard Smith, who edited the BMJ between 1991 and 2004, told the Royal Society’s Future of Scholarly Scientific Communication conference on 20 April that there was no evidence that pre-publication peer review improved papers or detected errors or fraud.”

I am very much convinced by this, but let’s think coldly.

Pro peer review is that a majority of peer reviewed articles are correct, while a majority of the blogosphere is “an unnavigable mass of articles, most of which were ‘wrong or misleading’”.

Contrary to peer review is that, according to “Richard Smith, who edited the BMJ between 1991 and 2004” :

“there was no evidence that pre-publication peer review improved papers or detected errors or fraud.”
“Referring to John Ioannidis’ famous 2005 paper “Why most published research findings are false”, Dr Smith said “most of what is published in journals is just plain wrong or nonsense”. […]
“If peer review was a drug it would never get on the market because we have lots of evidence of its adverse effects and don’t have evidence of its benefit.””
and moreover:

“peer review was too slow, expensive and burdensome on reviewers’ time. It was also biased against innovative papers and was open to abuse by the unscrupulous. He said science would be better off if it abandoned pre-publication peer review entirely and left it to online readers to determine “what matters and what doesn’t”.”

Which I interpret as confidence in the blogosphere-like medium.

Where is the truth? In the middle, as usual.

Here is my opinion, please form yours.

The new medium comes with new, relatively better means to do research. An important part of the research involves communication, and it is clear that the old system is already obsolete. It is kept artificially alive by authority and business interests.

However, it is also true that a majority of productions which are accessible via the new medium are of a very bad quality and unreliable.

To make another comparison, in the continuation of the one about the fall of academic painters and the rise of impressionists
https://chorasimilarity.wordpress.com/2013/02/16/another-parable-of-academic-publishing-the-fall-of-19th-century-academic-art/
a majority of the work of academic painters was good but not brilliant (reliable but not innovative enough), while a majority of non-academic painters produced crappy, cute paintings which average people LOVE to see and comment on.
You can't accuse a non-affiliated painter of showing his work in the same venue where you find all the cats, kids, wrinkled old people and cute places.

On the science side, we live in a sea of crappy content which is loved by average people.

The so-called attention economy consists mainly in shuffling this content from one place to another. This is because liking and sharing content is a different activity from creating content. Some new thinking is needed here as well, in order to move past the old idea of scarce resources which are made available by sharing them.

It is difficult for a researcher, who is a particular species of creator, to find other people willing to spend time not only to share original ideas (which, being strange, are not liked by default), but also to invest work in understanding and validating them, which is akin to an act of creation.

That is why I believe that:
- there have to be social incentives for these researchers (and attention-economy thinking is not helping this, being instead a vector of propagation for big-budget PR, lolcats and life-wisdom quotes)
- and the creators of new scientific content have to provide as many means as possible for the self-validation of their work.

_________________________________

A comparison of two models of computation: chemlambda and Turing machines

The purpose is to understand clearly what this story is about. The simplest stuff, OK? In order to feel it in familiar situations.

Proceed.
Chemlambda is a collection of rules about rewritings done on pieces of files in a certain format. Without an algorithm which tells which rewrite to use, where and when,  chemlambda does nothing.

In the sophisticated version of the Distributed GLC proposal, this algorithmic part uses the Actor Model idea. Too complicated! Let's go simpler!

The simplest algorithm for using the collection of rewrites from chemlambda is the following:
  1. take a file (in the format called “mol”, see later)
  2. look for all patterns in the file which can be used for rewrites
  3. if there are different patterns which overlap, then pick a side (by using an ordering of graph rewrites, like the precedence rules in arithmetic)
  4. apply all the rewrites at once
  5. repeat (either until there is no rewrite possible, or a given number of times, or forever)
 To spice things just a bit, consider the next simple algorithm, which is like the one described, only that we add at step 2:
  •  for every identified pattern flip a coin to decide to keep it or ignore it further in the algorithm
The reason is that randomness is the simplest way to say: who knows if I can do this rewrite when I want? Maybe I have only a part of the file on my computer, or maybe a friend has one half of the pattern and I have the other, so I have to talk with him first and agree to make the rewrite together. Who knows? Flip a coin then. A minimal sketch of the whole loop follows.
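Here is a minimal sketch of this loop, written in Python only for illustration (the project's actual scripts are in awk and sh). The pattern representation (a set of node ids plus a rewrite function) and the find_patterns argument are hypothetical placeholders, not part of chemlambda itself.

```python
import random

def resolve_conflicts(patterns):
    # step 3: when two patterns overlap (share a node), keep the one which
    # comes first in a fixed precedence order (here: the order of discovery)
    chosen, used = [], set()
    for nodes, rewrite in patterns:
        if used.isdisjoint(nodes):
            chosen.append((nodes, rewrite))
            used.update(nodes)
    return chosen

def reduce_molecule(graph, find_patterns, cycles=100, coin=True):
    for _ in range(cycles):
        patterns = find_patterns(graph)          # step 2: all candidate rewrites
        if coin:                                 # the coin-flip variant of step 2
            patterns = [p for p in patterns if random.random() < 0.5]
        patterns = resolve_conflicts(patterns)   # step 3: pick a side on overlaps
        if not patterns:                         # step 5: stop when nothing applies
            break
        for nodes, rewrite in patterns:          # step 4: apply all at once
            graph = rewrite(graph)
    return graph
```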

Now, proven facts.

Chemlambda with the stupid deterministic algorithm is Turing universal. This means that implicitly it is a model of computation. Everything is prescribed from top to bottom. It is on a par with a Turing machine, or with a RAM model.

Chemlambda with the stupid random algorithm seems to also be Turing universal, but I don't have a proof of this yet. There is a reason why it should be as powerful as the stupid deterministic version, but I won't go there.

So the right image to have is that chemlambda with the  described algorithm can do anything any computer can.

The first question is: how? For example, how does chemlambda compare with a Turing machine? If it sits at this basic level, then it is incomprehensible, because we humans can't make sense of a scroll of bytecode unless we are highly trained in that very specialised task.

All computers do the same thing: they crunch machine code. No matter which high language you use to write a program, it is then compiled and eventually there is a machine code which is executed, and that is the level we speak.

It does not matter which language you use, eventually all is machine code. There is a huge architectural tower and we are on top of it, but in the basement all looks the same. The tower is there to make the superb machine easy for us to use. It is not needed otherwise; it is only for our comfort.

This is very puzzling when we look at chemlambda, because it is claimed that chemlambda has something to do with lambda calculus, and lambda calculus is the prototype of a functional programming language. So it appears that chemlambda should be associated with higher meaning and clever thinking, and abstraction of the abstraction of the abstraction.

No, from the point of view of the programmer.

Yes, from the point of view of the machine.

In order to compare chemlambda with a TM, we have to put them in the same terms. You can easily recast a TM as a rewrite system, such that it works with the same stupid deterministic algorithm: http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

It is not yet put there, but the conclusion is obvious: chemlambda can do lambda calculus with one rewrite, while a Universal Turing Machine needs about 20 rewrites to do what TMs do.

Unbelievable!
Wait, what about distributivity, propagation, the fanin, all the other rewrites?
They are common, they just form a mechanism for signal transduction and duplication!
Chemlambda is much simpler than TM.

So you can use chemlambda directly, at this metal level, to perform lambda calculus. It is explained here:
https://chorasimilarity.wordpress.com/2015/04/21/all-successful-computation-experiments-with-lambda-calculus-in-one-list/

And I highly recommend trying to play with it by following the instructions.

You need a linux system, or any system where you have sh and awk.

Then

1. download the gh-pages branch of the repo as a zip archive (the "fork me on github" link on the demo pages leads to it)
2. unzip it and go to the directory "dynamic"
3. open a shell and write:  bash moving_random_metabo.sh
4. you will get a prompt and a list of files with the extension .mol , write the name of one of them, in the form file.mol
5. you get file.html. Open it with a browser with js enabled. For reasons I don't understand, it works much better in Safari, Chromium or Chrome than in Firefox.

When you look at the result of the computation you see an animation, which is the equivalent of seeing a TM head running here and there on a tape. It does not make much sense at first, but you can convince yourself that it works and get a feeling of how it does it.

Once you get this feeling I will be very glad to discuss more!

Recall that all this is related to the most stupid algorithm. But I believe it helps a lot in understanding how to build on it.
____________________________________________________

Yes, “slay peer review” and replace it by reproducibility

Via Graham Steel, the awesome article Slay peer review 'sacred cow', says former BMJ chief.

“Richard Smith, who edited the BMJ between 1991 and 2004, told the Royal Society’s Future of Scholarly Scientific Communication conference on 20 April that there was no evidence that pre-publication peer review improved papers or detected errors or fraud. […]

“He said science would be better off if it abandoned pre-publication peer review entirely and left it to online readers to determine “what matters and what doesn’t”.

“That is the real peer review: not all these silly processes that go on before and immediately after publication,” he said.”

That’s just a part of the article, go read the counter arguments by Georgina Mace.

Make your opinion about this.

Here is mine.

In the post Reproducibility vs peer review I write

“The only purpose of peer review is to signal that at least one, two, three or four members of the professional community (peers) declare that they believe that the said work is valid. Validation by reproducibility is much more than this peer review practice. […]

Compared to peer review, which is only a social claim that somebody from the guild checked it, validation through reproducibility is much more, even if it does not provide means to absolute truths.”

There are several points to mention:

  • the role of journals is irrelevant to anybody other than publishers and their fellow academic bureaucrats, who work together to maintain this crooked system for their own $ advantage.
  • indeed, an article should give by itself the means to validate its content
  • which means that the form of the article has to change from the paper version to a document which contains data, programs, everything which may help to validate the content written with words
  • and the validation process (aka post review) has to be put on a par with the activity of writing articles. Even if an article comes with all the means to validate it (like the process described in Reproducibility vs peer review), the validation supposes work, and in itself it is an activity akin to the one reported in the article. More than this, the validation may or may not function according to what the author of the work supposes, but in any case it leads to new scientific content.

In theory this sounds great, but in practice it may be very difficult to provide a work with the means of validation (up to, of course, the external resources used in the work, like other works).

My answer is that concretely it is possible to do this, and I offer as an example my article Molecular computers, which is published on github.io and comes with a repository that contains all the means to confirm or refute what is written in the article.

The real problem is social. In such a system the bored researcher has to spend more than 10 minutes, tops, reading an article he or she intends to use.

It is then much easier, socially, to use the current, unscientific system of replacing validation by authority arguments.

As well, the monkey system — you scratch my back and I'll scratch yours — which is behind most peer reviews (just think of the extreme specialisation of research, which makes it almost certain that a reviewer competes or collaborates with the author), well, that monkey system will no longer function.

This is an even bigger problem than the fact that publishing and academic bean counting will soon be obsolete.

So my forecast is that we shall keep a mix of authority based (read “peer review”) and reproducibility (by validation), for some time.

The authority, though, will take another blow.

Which is in favour of research. It is also economically sound, if you think that today a majority of research funding probably goes to researchers whose work passes peer review, but not validation.

______________________________________________

All successful computation experiments with lambda calculus in one list

What you see in these links: I take a lambda term, transform it into an artificial molecule, then let it reduce randomly, with no evaluation strategy. That is what I call the most stupid algorithm. The amazing thing is that it works.
You don’t have to believe me, because you can check it independently, by using the programs available in the github repo.

Here is the list of demos where lambda terms are used.

Now, details of the process:
– I take a lambda term and I draw the syntactic tree
- this tree has as leaves the variables, bound and free. These are eliminated by two tricks, one for the bound variables, the other for the free ones. The bound variables are eliminated by replacing them with new arrows in the graph, going from one leg of a lambda abstraction node to the leaf where the variable appears. If there are several places where the same bound variable appears, then insert some fanout (FO) nodes. For the free variables do the same, by adding for each free variable a tree of FO nodes. If a bound variable does not appear anywhere else, then add a termination (T) node.
- in this way the graph which is obtained is no longer a tree. It is a mostly trivalent, oriented graph with some free ends. There is one free end corresponding to an "in" arrow for each free variable, and there is only one free end corresponding to an "out" arrow, coming from the root of the syntactic tree.
– I give a unique name to each arrow in the graph
- then I write the "mol file" which represents the graph, as a list of nodes and the names of the arrows connected to the nodes (thus an application node A which has the left leg connected to the arrow "a", the right leg connected to the arrow "b" and the out leg connected to "c" is described by one line "A a b c", for example).

OK, now I have the mol file, I run the scripts on it and then I look at the output.

What is the output?

The scripts take the mol file and transform it into a collection of associative arrays (that’s why I’m using awk) which describe the graph.
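For illustration, here is a hypothetical Python stand-in for that parsing step (the real scripts are awk; the structure and names below are mine):

```python
def parse_mol(text):
    # one dict per node: its type and the ordered list of its arrow names
    nodes = []
    # index from each arrow name to the (node, port position) pairs using it;
    # in a well-formed mol file every arrow name appears exactly twice
    arrows = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        nodes.append({'type': parts[0], 'ports': parts[1:]})
        for k, arrow in enumerate(parts[1:]):
            arrows.setdefault(arrow, []).append((len(nodes) - 1, k))
    return nodes, arrows

# the application-node example from above
nodes, arrows = parse_mol("A a b c")
```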

Then they apply the algorithm which I call "stupid", because it really is stupidly simplistic: do a predetermined number of cycles, where in each cycle do the following:
– identify the places (called patterns) where a chemlambda rewrite is possible (these are pairs of lines in the mol file, so pairs of nodes in the graph)
– then, as you identify a pattern, flip a coin, if the coin gives “0” then block the pattern and propose a change in the graph
– when you finish all this, update the graph
- some rewrites involve the introduction of some 2-valent nodes, called "Arrow". Eliminate them in an inner cycle called the "COMB cycle", i.e. comb the arrows (a sketch of this follows below)
- repeat
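A guessed Python sketch of the COMB cycle, assuming an Arrow node is a mol line "Arrow a b" with "in" arrow a and "out" arrow b, and reusing the node structure from the parsing sketch above:

```python
def comb(nodes):
    # repeatedly delete Arrow nodes, splicing their two arrows into one
    changed = True
    while changed:
        changed = False
        for i, node in enumerate(nodes):
            if node is not None and node['type'] == 'Arrow':
                a, b = node['ports']    # "in" arrow a, "out" arrow b
                nodes[i] = None         # delete the Arrow node
                for other in nodes:     # rename arrow a to b everywhere else
                    if other is not None:
                        other['ports'] = [b if p == a else p for p in other['ports']]
                changed = True
    return [n for n in nodes if n is not None]
```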

As you see, there is absolutely no care about the correctness of the intermediary graphs. Do they represent lambda terms? Generically no!
Are there any variables which are passed, or evaluations of terms done in some clever order (eager, lazy, etc.)? Not at all. There are no variables other than the names of the arrows of the graph, and these have the property that each name appears exactly twice in the mol file (once in an "in" port, once in an "out" port). When a pattern is replaced these names disappear, and the names of the arrows from the new pattern are generated on the fly, for example by a counter of arrows.

The scripts do the computation and they stop. There is a choice made over the way of seeing the computation and the results.
One obvious choice would be to see the computation as a sequence of mol files, corresponding to the sequence of graphs. Then one could use another script to transform each mol file into a graph (via, say, a json file) and use some graph visualiser to see the graph. This was the choice made in the first scripts.
Another choice is to make an animation of the computation, by using d3.js. Nodes which are eliminated are first freed of links and then they vanish, while new nodes appear, are linked with their ports, then linked with the rest of the graph.

This is what you see in the demos. The scripts produce a html file, which has inside a js script which uses d3.js. So the output of the scripts is the visualisation of the computation.

Recall that the algorithm of computation is random, therefore it is highly likely that different runs of the algorithm give different animations. In the demos you see one such animation, but you can take the scripts from the repo and make your own.

What is amazing is that they give the right results!

It is perhaps bizarre to look at the computation and not make any sense of it. What happens? Where is the evaluation of this term? Who calls whom?

Nothing of this happens. The algorithm just does what I explained. And since there are no calls, no evaluations, no variables passed from here to there, that means that you won’t see them.

That is because the computation does not work by the IT paradigm of signals sent through wires and gates, but by what chemists call signal transduction. This is a pervasive phenomenon: a molecule enters into chemical reactions with others and there is a cascade of further chemical reactions which propagate and produce the result.

About what you see in the visualisations.
Because the graph is oriented, and because the trivalent nodes have differentiated legs (for example there might be a left.in leg, a right.in leg and an out leg, which for symmetry is described as a middle.out leg), I want to turn it into an unoriented graph.
This is done by replacing each trivalent node by 4 nodes, and each free end or termination node by 2 nodes.
For trivalent nodes there will be one main node and 3 other nodes which represent the legs. These are called "ports". There is a colour-coded notation: the choice made is to represent the nodes A (application) and FO with the main node coloured green, and L (lambda) and FI (fanin, which exists in chemlambda only) with red (actually in the demos this is a purple),
and so on. The port nodes are coloured yellow for the "in" ports and blue for the "out" ports. The "left", "right", "middle" types are encoded by the radius of the ports. A sketch of this expansion follows.
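A hypothetical sketch of this display step (the real demos generate d3.js input; the dirs argument, giving the in/out direction of each leg, is an assumption made here for brevity):

```python
def expand_node(node_id, node_type, arrows, dirs):
    # one center node, colour-coded by node type as described above
    center = {'id': node_id,
              'color': 'green' if node_type in ('A', 'FO') else 'red'}
    # three port nodes: yellow for "in" ports, blue for "out" ports
    ports = [{'id': '%s.%d' % (node_id, k),
              'color': 'yellow' if d == 'in' else 'blue',
              'arrow': a}
             for k, (a, d) in enumerate(zip(arrows, dirs))]
    # links from the center node to each of its ports; links between
    # different nodes then follow the shared arrow names
    links = [{'source': center['id'], 'target': p['id']} for p in ports]
    return center, ports, links

# an application node A with legs a (left.in), b (right.in), c (middle.out)
expand_node('n0', 'A', ['a', 'b', 'c'], ['in', 'in', 'out'])
```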

__________________________________________________

OMG the busy beaver goes into abstract thinking

So-called higher capacities, like abstraction, are highly hyped. Look:
What happens when you apply the Church number 3 to a busy beaver? You get 3 busy beavers on the same tape.

Details will be added into the article Turing machines, chemlambda style.

If you want to experiment, then click on "fork me on github" and copy the gh-pages branch of the repo. Then look in the dynamic folder for the script moving_random_metabo_bb.sh. In a terminal type "bash moving_random_metabo_bb.sh", then type "church3bb.mol". You will get the file church3bb.html, which you can see by using a js-enabled browser.
The sh script calls an awk script, which produces the html file. The awk script is check_1_mov2_rand_metabo_bb.awk. Open it with a text editor and you shall see at the beginning all kinds of parameters which you can change (before calling the sh script), so that you may alter the duration, the speed, change between deterministic and random algorithms.
Finally, you also need a mol file to play with. This demo used the mol file church3bb.mol. You can also open it with a text editor and play with it.

UPDATE: I will tweak it more in the next days, but the idea I want to communicate is that a TM can be seen as chemistry, like in chemlambda, and it can interact very well with the rest of the formalism. So you have these two pillars of computation on the same footing, together, despite the impression that they are somehow very different, one like hardware and the other like software.

______________________________________

Turing Machines, chemlambda style (I)

UPDATE: First animation and a more detailed version here. There will be many additions, as well as programs available from the gh-pages of the repo.

Once again: why do chemists try to reproduce silicon computers? Because that’s the movie.

    At first sight they look very different. Therefore, if we want to compare them, we have to reformulate them in ways which are similar. Rewrite systems are appropriate here.
    The Turing Machine (TM) is a model of computation which is well known as a formalisation of what a computer is. It is a machine which has some internal states (from a set S) and a head which reads/writes symbols (from a set A) on a tape. The tape is seen as an infinite word made of letters from A. The set A has a special letter (call it "b", from blank), and the infinite words which describe tapes have to be such that all but a finite number of their letters are equal to "b". Imagine a tape which is infinite in both directions and has symbols written on it, where "b" represents an empty space. Only a finite part of the tape is filled with letters from A other than the blank letter.
    The action of the machine depends on the internal state and on the symbol read by the head. It is therefore a function of the internal state of the machine (an element of S) and of the letter read from the tape (an element of A). The machine writes a letter from the alphabet A in the cell where the read letter was, changes its internal state, and the head moves one step along the tape, to the left (L) or to the right (R), or does not move at all (N).
    The TM can be seen as a rewrite system.
    For example, one could see a TM as follows. (Pedantically, this is a multi-headed TM without internal states; the only interest in this distinction is that it raises the question of whether there is really any meaning in distinguishing internal states from tape symbols.) We start from a set (or type) of internal states S. Such states are denoted by A, B, C (thus exhibiting their type by the type of the font used). There are 3 special symbols: < is the symbol of the beginning of a list (i.e. word), > is the symbol of the end of a list (word) and M is the special symbol for the head move (or of the type associated to head moves). There is an alphabet A of external states (i.e. tape symbols), with b (the blank) being in A.
    A tape is then a finite word (list) of one of the forms < w A w’ > , < w M A w’ > , < w A M w’ >, where A is an internal state and w and w’ are finite words written with the alphabet A, which can be empty words as well.
    A rewrite replaces a left pattern (LT) by a right pattern (RT), and is denoted LT – – > RT. Here LT and RT are sub-words of the tape word. It is supposed that all rewrites are context independent, i.e. LT is replaced by RT regardless of the place where LT appears in the tape word. The rewrite is called "local" if the lengths (i.e. numbers of letters) of LT and RT are bounded a priori.
    A TM is given as a list of Turing instructions, which have the form (current internal state, tape letter read, new internal state, tape letter written, move of the head). In the terms explained here, all of this can be expressed via local rewrites.

  • Rewrites which introduce blanks at the extremities of the written tape:
    • < A   – – >   < b A
    • A >   – – >   A b >
  • Rewrites which describe how the head moves:
    • A M a   – – >   a A
    • a M A   – – >   A a
  • Turing instructions rewrites:
    • a A c   – – >   d B M c   , for the Turing instruction (A, a, B, d, R)
    • a A c   – – >   d M B c   , for the Turing instruction (A, a, B, d, L)
    • a A c   – – >   d B c   , for the Turing instruction (A, a, B, d, N)
    Together with the algorithm "at each step apply all the rewrites which are possible, else stop" we obtain the deterministic TM model of computation. For any initial tape word, the algorithm explains what the TM does to that tape. < don't forget to link that to the part of the Cartesian method "to be sure that I made an exhaustive enumeration", which is clearly going down today > Other algorithms are of course possible. Before mentioning some very simple variants of the basic algorithm, let's see when it works.
    If we start from a tape word as defined here, there is never a conflict of rewrites. This means that there is never the case that two LT from two different rewrites overlap. It might be the case though, if we formulate some rewrites a bit differently. For example, suppose that the Turing rewrites are modified to:
  • a A   – – >   d B M   , for the Turing instruction (A, a, B, d, R)
  • a A   – – >   d M B   , for the Turing instruction (A, a, B, d, L)
    Therefore the LT of the Turing rewrites is no longer of the form "a A c", but of the form "a A". Then it may enter into conflict with the other rewrites, as in the cases:

  • a A M c where two overlapping rewrites are possible
    • Turing rewrite: a A M c   – – >   d M B M c   which will later produce two possibilities for the head moves rewrites, due to the string M B M
    • head moves rewrite: a A M c   – – >   a c A   which then produces a LT for a Turing rewrite for c A, instead of the previous Turing rewrite for a A
  • a A > where one may apply a Turing rewrite on a A, or a blank rewrite on A >
    The list is not exhaustive. Let's turn back to the initial formulation of the Turing rewrites and instead change the definition of a tape word. For example, suppose we allow multiple TM heads on the same tape; more precisely, suppose that we accept initial tape words of the form < w1 A w2 B w3 C … wN >. Then we shall surely encounter conflicts between head moves rewrites, for patterns such as a A M B c.
    The most simple solution for solving these conflicts is to introduce a priority of rewrites. For example we may impose that blank rewrites take precedence over head moves rewrites, which take precedence over Turing rewrites. More such structure can be imposed (like some head moves rewrites have precedence over others). Even new rewrites may be introduced, for example rewrites which allow multiple TMs on the same tape to switch place.
    Let’s see an example: the 2-symbols, 3-states

busy beaver machine

    . Following the conventions from this work, the tape letters (i.e. the alphabet A) are “b” and “1”, the internal states are A, B, C, HALT. (The state HALT may get special treatment, but this is not mandatory). The rewrites are:

  • Rewrites which introduce blanks at the extremities of the written tape:
    • < X   – – >   < b X   for every internal state X
    • X >   – – >   X b >   for every internal state X
  • Rewrites which describe how the head moves:
    • X M a   – – >   a X   , for every internal state X and every tape letter a
    • a M X   – – >   X a   , for every internal state X and every tape letter a
  • Turing instructions rewrites:
    • b A c   – – >   1 B M c   , for every tape letter c
    • b B c   – – >   b C M c   , for every tape letter c
    • b C c   – – >   1 M C c   , for every tape letter c
    • 1 A c   – – >   1 HALT M c   , for every tape letter c
    • 1 B c   – – >   1 B M c   , for every tape letter c
    • 1 C c   – – >   1 M A c   , for every tape letter c
    We can enhance this by adding the priority of rewrites; for example, in the previous list any rewrite has priority over the rewrites written below it. In this way we may relax the definition of the initial tape word and allow for multiple heads on the same tape. Or for multiple tapes. A runnable sketch of these rewrites, with this priority convention, is given right below; its printed trace is the step-by-step computation which follows.
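Here is a small runnable Python sketch of exactly these rewrites (Python used only for illustration; the names are mine). Rules earlier in the list have priority, the leftmost match of the highest-priority rule is applied, and the program stops when nothing matches. With a single head this reproduces the computation listed below.

```python
STATES = ['A', 'B', 'C', 'HALT']
LETTERS = ['b', '1']

def make_rules():
    rules = []                                   # list order = priority order
    for X in STATES:                             # blanks at the extremities
        rules.append((['<', X], ['<', 'b', X]))
        rules.append(([X, '>'], [X, 'b', '>']))
    for X in STATES:                             # head moves
        for a in LETTERS:
            rules.append(([X, 'M', a], [a, X]))
            rules.append(([a, 'M', X], [X, a]))
    for c in LETTERS:                            # Turing instruction rewrites
        rules.append((['b', 'A', c], ['1', 'B', 'M', c]))
        rules.append((['b', 'B', c], ['b', 'C', 'M', c]))
        rules.append((['b', 'C', c], ['1', 'M', 'C', c]))
        rules.append((['1', 'A', c], ['1', 'HALT', 'M', c]))
        rules.append((['1', 'B', c], ['1', 'B', 'M', c]))
        rules.append((['1', 'C', c], ['1', 'M', 'A', c]))
    return rules

def step(tape, rules):
    for lt, rt in rules:                         # try rules in priority order
        for i in range(len(tape) - len(lt) + 1):
            if tape[i:i + len(lt)] == lt:        # leftmost match wins
                return tape[:i] + rt + tape[i + len(lt):]
    return None                                  # no rewrite possible: stop

tape, rules = ['<', 'A', '>'], make_rules()
while tape is not None:
    print(' '.join(tape))
    tape = step(tape, rules)
```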
    Suppose we put the machine to work with an infinite tape with all symbols being blanks. This corresponds to the tape word < A >. Further are the steps of the computation:
  • < A >   – – >   < b A >
  • < b A >   – – >   < b A b >
  • < b A b >   – – >   < 1 B M b >
  • < 1 B M b >   – – >   < 1 b B >
  • < 1 b B >   – – >   < 1 b B b >
  • < 1 b B b >   – – >   < 1 b C M b >
  • < 1 b C M b >   – – >   < 1 b b C >
  • < 1 b b C >   – – >   < 1 b b C b >
  • < 1 b b C b >   – – >   < 1 b 1 M C b >
  • < 1 b 1 M C b >   – – >   < 1 b C 1 b >
  • < 1 b C 1 b >   – – >   < 1 1 M C 1 b >
  • < 1 1 M C 1 b >   – – >   < 1 C 1 1 b >
  • < 1 C 1 1 b >   – – >   < 1 M A 1 1 b >
  • < 1 M A 1 1 b >   – – >   < A 1 1 1 b >
  • < A 1 1 1 b >   – – >   < b A 1 1 1 b >
  • < b A 1 1 1 b >   – – >   < 1 B M 1 1 1 b >
  • < 1 B M 1 1 1 b >   – – >   < 1 1 B 1 1 b >
  • < 1 1 B 1 1 b >   – – >   < 1 1 B M 1 1 b >
  • < 1 1 B M 1 1 b >   – – >   < 1 1 1 B 1 b >
  • < 1 1 1 B 1 b >   – – >   < 1 1 1 B M 1 b >
  • < 1 1 1 B M 1 b >   – – >   < 1 1 1 1 B b >
  • < 1 1 1 1 B b >   – – >   < 1 1 1 1 B M b >
  • < 1 1 1 1 B M b >   – – >   < 1 1 1 1 b B >
  • < 1 1 1 1 b B >   – – >   < 1 1 1 1 b B b >
  • < 1 1 1 1 b B b >   – – >   < 1 1 1 1 b C M b >
  • < 1 1 1 1 b C M b >   – – >   < 1 1 1 1 b b C >
  • < 1 1 1 1 b b C >   – – >   < 1 1 1 1 b b C b >
  • < 1 1 1 1 b b C b >   – – >   < 1 1 1 1 b 1 M C b >
  • < 1 1 1 1 b 1 M C b >   – – >   < 1 1 1 1 b C 1 b >
  • < 1 1 1 1 b C 1 b >   – – >   < 1 1 1 1 1 M C 1 b >
  • < 1 1 1 1 1 M C 1 b >   – – >   < 1 1 1 1 C 1 1 b >
  • < 1 1 1 1 C 1 1 b >   – – >   < 1 1 1 1 M A 1 1 b >
  • < 1 1 1 1 M A 1 1 b >   – – >   < 1 1 1 A 1 1 1 b >
  • < 1 1 1 A 1 1 1 b >   – – >   < 1 1 1 HALT M 1 1 1 b >
  • < 1 1 1 HALT M 1 1 1 b >   – – >   < 1 1 1 1 HALT 1 1 b >
    At this stage there are no possible rewrites. Otherwise said, the computation stops. Remark that the priority of rewrites imposed a path of rewrite applications. Also, at each step there was only one possible rewrite, even though the algorithm does not ask for this.
    More possibilities appear if we see the tape words as graphs. In this case we pass from rewrites to graph rewrites. Here is a proposal for this.
    I shall use the same kind of notation as in chemlambda: the mol format. It goes like this, explained for the busy beaver TM example. We have 9 symbols, which can be seen as nodes in a graph:

  • < which is a node with one “out” port. Use the notation FRIN out
  • > which is a node with one “in” port. Use the notation FROUT in
  • b, 1, A, B, C, HALT, M, which are nodes with one "in" and one "out" port. Use the notation: name of the node, followed by the names of its "in" and "out" arrows
    The rule is to connect "in" ports with "out" ports, in order to obtain a tape word. Or a tape graph, with many busy beavers on it. A small example follows. (TO BE CONTINUED…)
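For instance, with these conventions and with made-up arrow names 1, 2, 3, the tape word < b A > becomes the mol file:

```
FRIN 1
b 1 2
A 2 3
FROUT 3
```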

Reproducibility vs peer review

Here are my thoughts about replacing peer review by validation. Peer review is the practice where the work of a researcher is commented on by peers. The content of the commentaries (reviews) is clearly not important. The social practice is to not make them public, nor to keep a public record of them. The only purpose of peer review is to signal that at least one, two, three or four members of the professional community (peers) declare that they believe that the said work is valid. Validation by reproducibility is much more than this peer review practice. Validation means the following:

  • a researcher makes public (i.e. “publishes”) a body of work, call it W. The work contains text, links, video, databases, experiments, anything. By making it public, the work is claimed to be valid, provided that the external resources used (as other works, for example) are valid. In itself, validation has no meaning.
  • a second party (anybody) can also publish a validation assessment of the work W. The validation assessment is a body of work as well, and thus is potentially subject to the same validation practices described here. In particular, by publishing the validation assessment, call it W1, it is also claimed to be valid, provided the external resources (other works used, excepting W) are valid.
  • the validation assessment W1 makes claims of the following kind: provided that the external works A, B, C are valid, then this piece D of the work W is valid, because it has been reproduced in the work W1. Alternatively, under the same hypothesis about the external works, the work W1 may claim that another piece E of the work W cannot be reproduced in it.
  • the means for reproducibility have to be provided by each work. They can be proofs, programs, experimental data.

As you can see, the validation can only be relative, not absolute. I am sure that scientific results are never amenable to an acyclic graph of validations by reproducibility. Compared to peer review, which is only a social claim that somebody from the guild checked it, validation through reproducibility is much more, even if it does not provide means to absolute truths. What is preferable: to have a social claim that something is true, or to have a body of works where "relative truth" dependencies are exposed? This is, moreover, technically possible in principle. However, it is not easy to do, at least because:

  • traditional means of publication and its practices are based on social validation (peer review)
  • there is this illusion that there is somehow an absolute semantical categorification of knowledge, pushed forward by those who are technically able to implement a validation reproducibility scheme at a large scale.

UPDATE: The mentioned illusion is related to outdated parts of the cartesian method. It is therefore a manifestation of the “cartesian disease”.

I use further the post More on the cartesian method and its associated disease. In that post the cartesian method is parsed like this:

  • (1a) “never to accept anything for true which I did not clearly know to be such”
  • (1b) “to comprise nothing more in my judgement than what was presented to my mind”
  • (1c) “so clearly and distinctly as to exclude all ground of doubt”
  • (2a) “to divide each of the difficulties under examination into as many parts as possible”
  • (2b) “and as might be necessary for its adequate solution”
  • (3a) “to conduct my thoughts in such order that”
  • (3b) “by commencing with objects the simplest and easiest to know, I might ascend […] to the knowledge of the more complex”
  • (3c) “little and little, and, as it were, step by step”
  • (3d) “assigning in thought a certain order even to those objects which in their own nature do not stand in a relation of antecedence and sequence”

Let’s take several researchers who produce works, some works related to others, as explained in the validation procedure.

Differently from the time of Descartes, there are plenty of researchers who think at the same time, and moreover the body of works they produce is huge.

Every piece of the cartesian method has to be considered relative to each researcher and this is what causes many problems.

Parts (1a), (1b), (1c) can be seen as part of the validation technique, but with the condition to see "true" and "exclude all ground of doubt" as relative to the reproducibility of work W1 by a reader who tries to validate it, up to external resources.

Parts (2a), (2b) are clearly researcher dependent; in an interconnected world these parts may introduce far more complexity than the original research work W1.

Combined with (1c), this leads to the illusion that the algorithm which embodies the cartesian method, when run in a decentralized and asynchronous world of users, HALTS.

There is no ground for that.

But the most damaging is (3d). First, every researcher embeds a piece of work into a narrative in order to explain the work. There is nothing "objective" about that. In a connected world, with the help of Google and the like, which impose or seek global coherence, the parts (3d), (2a) and (2b) transform the cartesian method into a global echo chamber. The management of work bloats and spills over the work itself, and at the same time the cartesian method always HALTs, but for no scientific reason at all.

__________________________________

Inflation created by the Curry’s paradox

Curry’s paradox expressed in lambda calculus.

I took the lambda term from there and slightly modified the part which describes the IFTHEN (figured by an arrow in the wiki explanation).

IFTHEN a b  appears in chemlambda as

A 1 a2 out
A a1 b 1
FO a a1 a2

which, if you think about it a little bit, behaves like IFTHENELSE a b a.

Once I built a term like the “r” from the wiki explanation, instead of  using rr, I made a graph by the following procedure:

– take the graph of r applied to something (i.e. suppose that the free out port of r is “1” then add A 1 in out)

- make a copy of that graph (i.e. in mol notation duplicate the mol file of the previous graph and change the port variables, here by adding the "a" postfix; a small sketch of this renaming follows after the procedure)

– then apply one to the other (i.e. modify

A 1 in out
A 1a ina outa

into

A 1 outa out,
A 1a out outa)
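For the duplication step, here is a tiny hypothetical Python helper (not the project's scripts) which does the renaming:

```python
def duplicate_with_postfix(mol, postfix='a'):
    # copy a mol file, renaming every port by adding the postfix
    out = []
    for line in mol.splitlines():
        parts = line.split()
        if parts:
            out.append(' '.join([parts[0]] + [p + postfix for p in parts[1:]]))
    return '\n'.join(out)

print(duplicate_with_postfix("A 1 in out"))   # prints: A 1a ina outa
```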

The initial mol file is:

A 1 outa out
A 1a out outa

L 2 3 1
A 4 7 2
A 6 5 4
FO 8 6 7

FO 3 9 10
A 9 10 8

L 2a 3a 1a
A 4a 7a 2a
A 6a 5a 4a
FO 8a 6a 7a

FO 3a 9a 10a
A 9a 10a 8a

The parameters are: cycounter=8; wei_FOFOE=0; wei_LFOELFO=0; wei_AFOAFOE=0; wei_FIFO=0; wei_FIFOE=0; wei_AL=0;

i.e. it is a deterministic run for 8 steps.

Done in chemlambda.

______________________________________________________________

A citizen science project on autonomous computing molecules

Wanted: chemists, or people who work(ed) with chemical molecule databases!
[update: github.io version]
The chemlambda project proposes the following. Chemlambda is a model of computation based on individual molecules, which compute alone, by themselves (in a certain well-defined sense). Everything is formulated from the point of view of ONE molecule which interacts randomly with a family of enzymes.
So what?
Bad detail: chemlambda is not a real chemistry, it’s artificial.
Good detail: it is Turing universal in a very powerful sense. It does not rely on the boolean-gate kind of computation, but on the other pillar of computation, the one which led to functional programming: lambda calculus.
So instead of molecular assemblies which mimic a silicon computer's hardware, chemlambda can do sophisticated programming stuff with chemical reactions. (The idea that lambda calculus is a sort of chemistry appeared in the ALCHEMY (i.e. algorithmic chemistry) proposal by Fontana and Buss. Chemlambda is far more concrete and simple than Alchemy, and different in principle, but it nevertheless owes to Alchemy the idea that lambda calculus can be done chemically.)
From here,  the following reasoning.
(a) Suppose we can make this chemistry real, as explained in the article Molecular computers.  This looks reasonable, based on the extreme simplicity of chemlambda reactions. The citizen science part is essential for this step.
(b) Then it is possible to take Craig Venter's Digital Biological Converters (which already exist) further and enhance them to the point of being able to "print" autonomous computing molecules. Which can do anything (amenable to a computation, so literally anything). Anything, in the sense that they can do it alone, once printed.
The first step of such an ambitious project is a very modest one: identify the ingredients in real chemistry.
The second step would be to recreate with real chemistry some of the examples which have been already shown as working, such as the factorial, or the Ackermann function.
Already this second step would be a huge advance over the current state of the art in molecular computing. Indeed, compare a handful of boolean gates with a functional-programming-like computation.
If it is, for example, a big deal to build with DNA some simple assemblies of boolean gates, then surely it is a bigger deal to be able to compute the Ackermann function (which is not primitive recursive, like the factorial) as the result of a random chemical process acting on individual molecules.
It looks perfect for a citizen science project, because what is missing is a humanly distributed search in existing databases, combined with a call for the realization of possibly simple proof-of-principle chemical experiments, based on an existing simple and rigorous formalism.
Once these two steps are realized, then the proof of principle part ends and more practical directions open.
Nobody wants to compute factorials with chemistry, silicon computers are much better for this task. Instead, chemical tiny computers as described here are good for something else.
If you examine what happens in this chemical computation, then you realize that it is in fact a means towards the self-building of chemical or geometrical structure at the molecular level. The chemlambda computations are done not with numbers or bits, but by structure processing. And this structure processing is the real goal!
Universal structure processing!
In the chemlambda vision page this is taken even further, towards the interaction with the future Internet of Things.

Biological teleportation taken further

The idea is simple. Put these together: chemlambda molecular computers, and Craig Venter's Digital Biological Converter, which could then print autonomous computing molecules.

That’s more than just teleportation.

Suppose you want to do anything in the real world. You can design a microscopic molecular computer for that, by using chemlambda. Send it by the web to a DBC like device. Print it. Plant it. Watch it doing the job.

Anything.

 

___________________________________________________________

 

 

Molecular computers proposal

This is a short and hopefully clear explanation of the molecular computer idea, targeted at chemists (or bio-hackers).
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

I would be grateful for any input, including reviews, suggestions for improvements, demands of clarifications, etc.

(updated to fit phones formats)

____________________________________________

The moves of chemlambda v2 in mol format

UPDATE: see (Landing page for all chemlambda experiments) (local version)

UPDATE: There is now a page dedicated to chemlambda v2.

________

The mol file format for a chemlambda molecule is a list of lines. Each line represents a graphical element, as described at the chemlambda project index page.

The moves, or graph rewrites, are visualised at the moves page.

The expression of the moves in mol format can be inferred from the main script, but perhaps this is tedious, therefore I shall give them here directly.

The graphical elements are L, A, FI, FO, FOE, Arrow, T, FRIN, FROUT, each with a certain number of ports, as explained in the mentioned index page. Each port has two types:

  • one can be “in” or “out”
  • the other can be “middle”, “left” or “right”

and there is a convention of writing the ports of any graphical element in a given order.

Here I shall write a graphical element in the following form. Instead of the line "L a b c" from the mol file I shall write "L[a,b,c]", and the same for all other graphical elements (a one-line sketch of this change of notation is given below). Then the mol file will be seen as a commutative and associative product of the graphical elements.
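As a sketch (hypothetical Python, just to show the change of notation):

```python
def to_term(mol_line):
    # "L a b c"  ->  "L[a,b,c]"
    name, *ports = mol_line.split()
    return name + '[' + ','.join(ports) + ']'

print(to_term('L a b c'))   # prints: L[a,b,c]
```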

This goes back to the initial p