Of course, this is also an apologia for the no-semantics idea.
“A rose by any other name would smell as sweet” wrote Shakespeare. Yes, but an article about the rose’s smell wouldn’t smell as sweet at all.
The form of the article as a means of disseminating research is more and more questioned. As an example, I liked “Idiot things that we do in our papers out of sheer habit” by Mike Taylor.
An article is only the tip of an iceberg of results, proofs, experiments, software and hardware. There are more and more platforms of publication, or, better said, dissemination, where articles come together with auxiliary data.
In math there is the example of the HoTT book, the result of a wonderful collaboration on GitHub, which gives not only data but also programs.
Let’s think about a hypothetical article about the smell of the rose. In reality that smell is a manifestation of a host of chemical reactions in the rose, in the nose, in the brain, etc. Taking our cue from the HoTT book, in the hypothetical article we would write about these reactions and other phenomena, we would add data, methodology explanations, and … why not the “smell program” itself: not only a static description of the chemical reaction networks involved in the smell process, but a simulation of it as well.
That would be great: instead of talking about it, we could experience it, tweak it, comment on it!
It is technically possible, but is there somebody who does it?
I am motivated to ask this question because of a concrete need I have.
I’m preparing a web document which is something in between an article and a (say) remark.js slide show, which uses the demos from http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html Now, how could I submit something like this for peer review? That’s the question. Just the text, without the dynamic explanations, is too bland. Just the demos, with as much text explanation as possible, are not in the article ballpark. Just the programs from the GitHub repository? That’s not inviting.
But if it is possible to make it, why not try it?
My brother Dragos Buliga, a sharp film director and media producer, asked me to describe to him in lay terms what he sees in these demos pages.
I came up with this. (See more at chemlambda vision page.)
They are artificial microbes that can be created to make a computer do anything, without knowing what they are doing and without needing supervision while doing it.
They are not viruses, because viruses need a host. Computer viruses have the OS as the host.
They are the host. Together they form a Microbiome OS, which is as unique as your own biological microbiome, shaped by the interactions you had with the world.
Because they don’t know what they are doing, it is as hard for an external viewer to understand the meaning of their activity as it is for a researcher looking through a microscope to understand the inner workings of real microbes.
Because they don’t need supervision to do it, they are ideal tools for the truly decentralized Internet of Things.
They are the means towards a cloudless future.
I need a hard, objective, and harsh assessment of the demos, the moves pages, all this effort I make. I am looking for funding and don’t have any at present, so there might be something I’m doing wrong.
Please be as harsh as possible. Thank you!
I am waiting for your comments. If you want to make a private comment, then add the following string in your message
and the comment will go to the moderation queue.
If you have not made any comments here before, then by default the comment goes to moderation.
So, please mention in the comment if you want to keep it private.
Assessment for what?
For the demos, the moves pages, or anything about chemlambda.
This is a big project, I see people are interested in more advanced stuff, like distributed computing, but they usually fail to understand the basics.
On the other hand, I am a mathematician learning to program. So I’m lousy at it (for the moment), but I hope these demos and help pages make my point about the basics.
The following are artificial chemistry visualizations, made with d3.js. There are two main pages: the first has demos for #chemlambda reductions, the second has dynamic visualizations of the graph rewrites.
Bookmark these pages if you are interested, because you will find new stuff there on a day-by-day basis.
What’s better than a movie? A live performance.
I just started new pages where you can see the latest live computations with chemlambda:
The sources are in this github repository.
Source code here.
This video contains two examples of the computation of the Ackermann function by using artificial chemistry. The (graph rewriting) rules are those of chemlambda and there are two models of computation using them.
In the first one the rules are applied deterministically, in the order of their importance. The rules which increase the number of nodes are considered more important than those which decrease it. The rules are applied in parallel, as long as there is no conflict (i.e. as long as they don’t apply to the same node). When there is a conflict, the rule with higher importance takes priority.
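A minimal sketch of this priority-driven parallel step, in Python. The rule names, the match format, and the toy instance below are my own illustration, not the actual chemlambda implementation:

```python
def step_deterministic(matches, priority, apply_rule):
    """One parallel reduction step, as described above (illustrative sketch).

    matches:    list of (rule_name, frozenset_of_node_ids) candidate rewrites
    priority:   dict mapping rule_name -> importance (higher wins conflicts)
    apply_rule: callback invoked for every rewrite that survives conflicts
    """
    # Visit candidates from most to least important.
    ordered = sorted(matches, key=lambda m: -priority[m[0]])
    used, chosen = set(), []
    for rule, nodes in ordered:
        # A conflict means sharing a node with an already-chosen rewrite;
        # the higher-priority match was chosen first, so it wins.
        if used.isdisjoint(nodes):
            chosen.append((rule, nodes))
            used |= nodes
    for rule, nodes in chosen:          # apply the survivors "simultaneously"
        apply_rule(rule, nodes)
    return chosen

# Hypothetical toy run: 'grow' (adds nodes) outranks 'shrink' (removes nodes).
priority = {"grow": 2, "shrink": 1}
matches = [("shrink", frozenset({1, 2})),
           ("grow",   frozenset({2, 3})),
           ("grow",   frozenset({4, 5}))]
log = []
step_deterministic(matches, priority, lambda rule, nodes: log.append(rule))
# Both 'grow' matches fire; 'shrink' is dropped because it shares node 2.
```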
In the first part of the video you see what happens if this model is applied to a graph which represents (according to the rules of chemlambda) the Ackermann function applied to (2,2). The expected result is 7 (as it appears in the Church encoding, which is then translated into the chemlambda convention). There are no tricks, like pre-computing the expression of the function; everything goes on at a very basic level. The application of the rules does not parallel the application of lambda calculus reduction rules to the lambda term which represents Ack(2,2), under any reduction strategy. However, the result is obtained correctly, even if many of the intermediary steps are not graphs which represent a lambda term.
Moreover, the model does not use any variable passing, nor any evaluation strategy!
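For reference, the expected value can be checked with the usual Ackermann–Péter recursion. This is ordinary recursive code, of course, not the chemlambda graph reduction:

```python
def ack(m, n):
    """Ackermann-Peter function; grows explosively, but ack(2, 2) is tiny."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# The value read off the reduced chemlambda graph should match:
# ack(2, 2) == 7
```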
In the second example a different model of computation is used. The rules are applied randomly. That means the following: for any configuration in the graph which may be subject to a rule, a coin is flipped and the rule is applied with probability 50%. The rules are all equally important. The rules are still applied in parallel, in the sense that the graph is updated only after all edges are visited.
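The coin-flip step can be sketched the same way. Again, this is my own illustration with a seeded generator so the run is reproducible; the real programs may organize this differently:

```python
import random

def step_random(matches, apply_rule, rng):
    """One random parallel step: each candidate rewrite fires with
    probability 1/2, and conflicting rewrites (sharing a node) are skipped."""
    used, chosen = set(), []
    for rule, nodes in matches:
        if rng.random() < 0.5 and used.isdisjoint(nodes):
            chosen.append((rule, nodes))
            used |= nodes
    for rule, nodes in chosen:
        apply_rule(rule, nodes)
    return chosen

# On average only about half the non-conflicting candidates fire per step,
# which is why the random run in the video takes longer.
rng = random.Random(42)
matches = [("rule", frozenset({i})) for i in range(1000)]
chosen = step_random(matches, lambda rule, nodes: None, rng)
```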
As you see in the second part of the video, the process takes longer, essentially because at each step fewer rules are applied across the whole graph. The comparison is not very accurate, because the reduction process may depend on the particular run of the program. Even if lambda beta calculus (with some model of reduction) is confluent, chemlambda surely is not. It is an open problem whether, starting in chemlambda from a graph which represents a lambda term that has a normal form in lambda calculus, the random model of computation always eventually arrives at the graph which represents the normal form of that term.
At least for the term I use here for Ack(2,2), it looks like it does. This is of course not a proof.
UPDATE: A quine is reduced in chemlambda, first using a deterministic model of computation then using a model which has a random ingredient.
These two models are the same as the ones used for the Ackermann function video.
The quine is called the 9_quine; it was introduced in https://chorasimilarity.wordpress.com/…
In the deterministic reduction you see that at each step the graph reduces and reconstructs itself. It goes on like that forever.
In the random reduction the process is different. In the deterministic version, if you look at the list of reductions undergone by the 9_quine, you see that after each cycle of reduction the graph is isomorphic to the one before, because there is an equilibrium between the rules which add nodes and the rules which destroy nodes.
In the random version this equilibrium is broken, therefore you see the graph grow, either with more and more red nodes or with more and more green nodes.
However, because each rule is applied with equal probability, in the long run the graph veers towards a dominantly green state or towards a dominantly red one, from one to the other, endlessly.
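This endless green/red tug-of-war can be pictured as an unbiased random walk on the difference between green and red node counts. This is a toy model, not chemlambda itself, but it shows how equal probabilities still allow long excursions to either side:

```python
import random

def walk_extremes(steps, seed=0):
    """Unbiased +/-1 random walk; returns the lowest and highest positions
    visited. A fair coin does not keep the walk pinned near zero."""
    rng = random.Random(seed)
    pos, lo, hi = 0, 0, 0
    for _ in range(steps):
        pos += 1 if rng.random() < 0.5 else -1
        lo = min(lo, pos)
        hi = max(hi, pos)
    return lo, hi

lo, hi = walk_extremes(10_000)
# The walk typically drifts far from zero despite the fair coin,
# like the swings between red-dominated and green-dominated states.
```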
This is proof that the reductions in chemlambda vary according to the order of application of the moves. On the other hand, this is evidence (but not proof) that there is a sort of fair effort towards eventual confluence. I use “confluence” in a vague manner, not related to lambda calculus (because the 9_quine does not come from a lambda term), but more related to the graph-rewriting world.