The name of the rose and the smell of the article

“A rose by any other name would smell as sweet,” wrote Shakespeare. Yes, but an article about the rose’s smell wouldn’t smell as sweet at all.

The form of the article as a means for disseminating research is more and more questioned. As an example, I liked “Idiot things that we do in our papers out of sheer habit” by Mike Taylor.

An article is only the tip of an iceberg of results, proofs, experiments, software and hardware. There are more and more platforms for publication, or better said dissemination, where articles come together with auxiliary data.

In math there is the example of the HoTT book, the result of a wonderful collaboration on GitHub, which provides not only data but also programs.

Let’s think about a hypothetical article about the smell of the rose. In reality that smell is a manifestation of a host of chemical reactions in the rose, in the nose, in the brain, etc. Taking the HoTT book as a model, in this hypothetical article we would write about these reactions and other phenomena, we would add data and methodology explanations, and, why not, the “smell program” itself: not only a static description of the chemical reaction networks involved in the smell process, but a simulation of it as well.

That would be great: instead of talking about it, we could experience it, tweak it, comment on it!

It is technically possible, but is anybody actually doing it?

I am motivated to ask this question because of a concrete need I have.

I’m preparing a web document which is something in between an article and a (say) remark.js slide show, and which uses the demos from http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html. Now, how could I submit something like this for peer review? That’s the question. Just the text, without the dynamic explanations, is too bland. Just the demos, even with as many textual explanations as possible, are not in the article ballpark. Just the programs from the GitHub repository, that’s not inviting.

But if it is possible to make it, why not try it?

_________________________________________________________________________

“They are artificial microbes that can be created to make …”

My brother Dragos Buliga, a sharp film director and media producer, asked me to describe to him in lay terms what he sees on these demo pages.

I came up with this. (See more at the chemlambda vision page.)

They are artificial microbes that can be created to make a computer do anything without knowing what they are doing, without needing supervision while doing it.

They are not viruses, because viruses need a host. Computer viruses have the OS as the host.

They are the host. Together they form a Microbiome OS, which is as unique as your own biological microbiome, shaped by the interactions you had with the world.

Because they don’t know what they are doing, it is as hard for an external viewer to understand the meaning of their activity as it is for a researcher looking through a microscope to understand the inner workings of real microbes.

Because they don’t need supervision to do it, they are ideal tools for the truly decentralized Internet of Things.

They are the means towards a cloudless future.

_____________________________________________________________

New pages of demos and dynamic visualizations of chemlambda moves

The following are artificial chemistry visualizations, made in d3.js. There are two main pages: the first with demos of #chemlambda reductions, the second with dynamic visualizations of the graph rewrites.
Bookmark these pages if you are interested, because you will find new material there on a day-by-day basis.

______________________________________________________

Living computations

What’s better than a movie? A live performance.

I just started new pages where you can see the latest living computations with chemlambda:

  • a 20-node creature, which I previously qualified as a quine but is not, struggling to survive in a random environment (random reduction method), here
  • the reduction of the predecessor function from lambda calculus, turned into a chemlambda reduction (also random), here
  • the self-multiplication of the S combinator in random conditions, here
  • the reduction of Ackermann(2,2) in the random model, here (this is the one used for the video from the last post)
  • a complex reduction in chemlambda. Here is the recipe:
    – you can write the Y combinator as an expression in the S, K, I combinators: Y = S (K (S I I)) (S (S (K S) K) (K (S I I)))
    – so take this expression and apply it to the identity I. In combinatory logic this should reduce to something equivalent to YI, which then reduces forever, because it does not have a normal form (a small sanity check of this claim follows the list)
    – but we do something funnier, namely all this long string of combinators is transformed into a chemlambda molecule, and on top of it we add a FO node which makes the whole big thing self-reproduce.
    So we have a bunch of reductions (from the long expression to YI) in parallel with the self-reproduction of the whole thing.
    Now, what you see is exactly this, in a model of computation which uses a random reduction strategy!
    See it live here.
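
Since the recipe leans on the combinatory logic claim above, here is a minimal sanity check in Python, a sketch of plain SKI reduction under my own encoding, not of the chemlambda molecule or the scripts in the repository. It reduces the expression in leftmost-outermost order and confirms that Y applied to I never reaches a normal form within the step budget.

# Terms are nested tuples: ('S',), ('K',), ('I',) are combinators,
# (f, x) is the application of f to x.
S, K, I = ('S',), ('K',), ('I',)

def app(*ts):
    # left-associated application: app(a, b, c) = ((a b) c)
    t = ts[0]
    for u in ts[1:]:
        t = (t, u)
    return t

def rebuild(head, args):
    for a in args:
        head = (head, a)
    return head

def step(t):
    # one leftmost-outermost reduction step; returns (term, changed)
    spine = []
    while len(t) == 2:          # unwind applications to find the head
        spine.append(t[1])
        t = t[0]
    head, args = t, spine[::-1]
    if head == I and len(args) >= 1:        # I x -> x
        return rebuild(args[0], args[1:]), True
    if head == K and len(args) >= 2:        # K x y -> x
        return rebuild(args[0], args[2:]), True
    if head == S and len(args) >= 3:        # S x y z -> x z (y z)
        x, y, z = args[:3]
        return rebuild(((x, z), (y, z)), args[3:]), True
    for i, a in enumerate(args):            # otherwise reduce inside, left to right
        a2, changed = step(a)
        if changed:
            args[i] = a2
            return rebuild(head, args), True
    return rebuild(head, args), False

SII = app(S, I, I)
Y = app(S, app(K, SII), app(S, app(S, app(K, S), K), app(K, SII)))

# sanity check of the machinery: S K K x -> x
t = app(S, K, K, ('x',))
for _ in range(10):
    t, changed = step(t)
    if not changed:
        break
assert t == ('x',)

# Y applied to I: keeps reducing, no normal form
t, n = app(Y, I), 0
while n < 100:
    t, changed = step(t)
    if not changed:
        break
    n += 1
print("Y I still reducing after", n, "steps")   # prints 100

This checks only the SKI side of the recipe; the chemlambda molecule with the extra FO node on top is a different object, reduced by graph rewrites.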

The sources are in this GitHub repository.

_______________________________________________________________________


Dynamic rendering of the Ackermann function computation, deterministic and random

Source code here.

This video contains two examples of the computation of the Ackermann function using an artificial chemistry. The (graph rewriting) rules are those of chemlambda, and there are two models of computation using them.


In the first one the rules are applied deterministically, in the order of their importance. The rules which increase the number of nodes are considered more important than those which decrease it. The rules are applied in parallel, as long as there is no conflict (i.e. as long as they don’t apply to the same node). When there is a conflict, the rule with higher importance takes priority.
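
To make the scheduling concrete, here is a hedged Python sketch of one deterministic step; the names (matches, nodes, rewrite) are mine, for illustration, and this is not the actual chemlambda code.

def deterministic_schedule(matches):
    # matches: list of (importance, nodes, rewrite), where `nodes` is the
    # set of graph nodes the rewrite would touch; higher importance means
    # a rule which increases the number of nodes
    chosen, used = [], set()
    for importance, nodes, rewrite in sorted(matches, key=lambda m: -m[0]):
        if used.isdisjoint(nodes):   # no conflict with rewrites already chosen
            chosen.append(rewrite)
            used |= nodes
    return chosen                    # all applied together, one parallel step

The point of the sketch is only the stated policy: conflicts are resolved by importance, and everything non-conflicting fires in the same parallel step.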
In the first part of the video you see what happens when this model is applied to a graph which represents (according to the rules of chemlambda) the Ackermann function applied to (2,2). The expected result is 7 (as it appears in the Church encoding, which is then translated into the chemlambda convention). There are no tricks, like pre-computing the expression of the function; everything goes on at a very basic level. The application of the rules does not parallel the application of lambda calculus reduction rules to the lambda term which represents Ack(2,2), under any reduction strategy. However, the result is obtained correctly, even if many of the intermediary steps are not graphs which represent a lambda term.
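
For reference, the expected value is easy to confirm with the usual recursive definition (plain Python, independent of the graph reduction):

def ack(m, n):
    # the standard Ackermann–Péter recursion
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

assert ack(2, 2) == 7   # the number the chemlambda graph should encode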


Moreover, the model does not use any variable passing, nor any evaluation strategy!

The second example uses a different model of computation. The rules are applied randomly, which means the following: for any configuration in the graph which may be subject to a rule, a coin is flipped and the rule is applied with probability 50%. All rules are equally important. The rules are still applied in parallel, in the sense that an update of the graph is done only after all edges have been visited.
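
In terms of the earlier deterministic sketch, the random model would change only two things: a coin flip per candidate, and no importance ordering. Again a hypothetical Python sketch, not the actual code:

import random

def random_schedule(matches, rng=None):
    # matches: list of (nodes, rewrite); all rules are equally important
    rng = rng or random.Random()
    survivors = [m for m in matches if rng.random() < 0.5]  # coin flip each
    chosen, used = [], set()
    for nodes, rewrite in survivors:
        if used.isdisjoint(nodes):   # still parallel: one update per sweep
            chosen.append(rewrite)
            used |= nodes
    return chosen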

As you see in the second part of the video, the process takes longer, essentially because at each step fewer rules are applied across the whole graph. The comparison is not very accurate, because the reduction process may depend on the particular run of the program. Even if lambda beta calculus (with some model of reduction) is confluent, chemlambda is surely not. It is an open problem whether, starting in chemlambda from a graph which represents a lambda term that has a normal form in lambda calculus, the random model of computation always eventually arrives at the graph which represents the normal form of that term.
At least for the term I use here for Ack(2,2), it looks like it does. This is of course not a proof.


UPDATE: A quine is reduced in chemlambda, first using a deterministic model of computation, then using a model which has a random ingredient.


These two models are the same as the ones used for the Ackermann function video.
The quine is called the 9_quine; it was introduced in https://chorasimilarity.wordpress.com/…

In the deterministic reduction you see that at each step the graph reduces and reconstructs itself. It goes on forever like that.

In the random reduction the process is different. In fact, if you look at the list of reductions suffered by the 9_quine, you see that after each cycle of reduction (in the deterministic version) the graph is isomorphic to the one before, because there is an equilibrium between the rules which add nodes and the rules which destroy nodes.
In the random version this equilibrium is broken, therefore you see the graph grow, either by having more and more red nodes, or by having more and more green nodes.
However, because each rule is applied with equal probability, in the long term the graph veers between dominantly green and dominantly red states, from one to the other, endlessly.
This is proof that the reductions in chemlambda vary according to the order of application of moves. On the other hand, this is evidence (but not proof) that there is a sort of fair effort towards eventual confluence. I use “confluence” in a vague manner, not related to lambda calculus (because the 9_quine does not come from a lambda term), but more related to the graph rewriting world.
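
The endless veering is reminiscent of a symmetric random walk: both directions are equally likely, yet the walk makes ever larger excursions instead of settling at zero. A tiny Python illustration of the analogy only, not of the chemlambda dynamics:

import random

rng = random.Random(1)
imbalance, record = 0, 0   # think: green nodes minus red nodes
for step in range(1, 100001):
    imbalance += 1 if rng.random() < 0.5 else -1   # equally probable rules
    if abs(imbalance) > record:
        record = abs(imbalance)
print("largest imbalance over 100000 fair steps:", record)  # keeps growing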

_________________________________________________________________________

You can build pretty much anything with that

Progressing; enjoy that; details in a few days.

UPDATE: and a nice video about the Omega combinator.


… and there is more.

I took two different reduction strategies for the same artificial chemistry (#chemlambda) and looked at what they give in two cases:
– the Ackermann function
– the self-multiplication and reduction of (the ensuing two copies of) the Omega combinator.
The visualization is done in d3.js.
The first reduction strategy is the one I used previously in several demonstrations, the one I call “stupid”, also because it is the simplest one can imagine.
The second reduction strategy is a random variant of the stupid one, namely a coin is flipped for each edge of the graph, before any consideration of performing a graph rewrite there.
The results can be seen in the following pages:
– Ackermann classic: http://imar.ro/~mbuliga/ackermann_2_2.html
– Ackermann random: http://imar.ro/~mbuliga/random_ackermann_2_2.html
– Omega classic: http://imar.ro/~mbuliga/omegafo.html
– Omega random: http://imar.ro/~mbuliga/random_omegafo.html
I’ve always said that one can take any reduction strategy with chemlambda and that all of them are interesting.
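
In terms of the scheduling sketches from the Ackermann post, the whole difference between the two strategies is one coin flip per edge; a hedged Python sketch, with names that are mine and not from the repository:

import random

def reduction_step(graph, edges, find_matches, apply_all,
                   strategy="stupid", rng=None):
    # "stupid": consider every edge; "random": flip a coin for each edge
    # before any rewrite there is even considered
    rng = rng or random.Random()
    if strategy == "random":
        edges = [e for e in edges if rng.random() < 0.5]
    return apply_all(graph, find_matches(graph, edges))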


___________________________________