Category Archives: distributed GLC

Summer report 2018, part 2

Continues from Summer report 2018, part 1.

On the evolution of the chemlambda project and social context. 

Stories about the molecular computer. The chemlambda project evolved in a highly unexpected way, from a scientific quest done completely in the open to a frantic exploration of a new territory. It became a story-generating machine. I was “in the zone” for almost two years. Instead of pursuing the initial goal of understanding the computational content of emergent algebras, the minimalistic chemlambda artificial chemistry concentrated on the molecular computer idea.

This idea can be stated as: identify molecules and chemical reactions which work in the style of the interaction net rewrites of chemlambda. See the article Chemlambda strings for a simple explanation, as well as a recent presentation of the newest (available) version of chemlambda: v3. (It is conservative in the numbers of nodes and links, and the presentation is aimed at a larger audience.)
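To give a flavour of what such a rewrite is, here is a toy sketch of a beta-like rewrite acting on a mol-style list of nodes. The port ordering assumed below, (var, body, term) for an L node and (fun, arg, out) for an A node, is a simplification of mine, not the official chemlambda convention; the point is only that the rewrite is local, touching two nodes and rewiring their edges.

```python
# Toy sketch, not the official chemlambda scripts. A "molecule" is a
# list of nodes; each node is a type plus a list of named ports (edges).
# Assumed port roles: L = (var, body, term), A = (fun, arg, out).
from typing import List, Tuple

Node = Tuple[str, List[str]]

def beta_like(mol: List[Node]) -> List[Node]:
    """Find an L node whose term edge meets an A node's fun edge and
    replace the pair by two Arrow (pure rewiring) nodes."""
    for i, (t1, p1) in enumerate(mol):
        if t1 != "L":
            continue
        for j, (t2, p2) in enumerate(mol):
            if t2 == "A" and p1[2] == p2[0]:
                rest = [n for k, n in enumerate(mol) if k not in (i, j)]
                # the body of L is wired to the output of A,
                # the argument of A is wired to the bound variable of L
                return rest + [("Arrow", [p1[1], p2[2]]),
                               ("Arrow", [p2[1], p1[0]])]
    return mol  # no redex found

# (\x.x) applied to a free edge y
mol = [("L", ["x", "x", "e"]), ("A", ["e", "y", "out"])]
print(beta_like(mol))
# -> [('Arrow', ['x', 'out']), ('Arrow', ['y', 'x'])]
# after collapsing the Arrows, y is wired to out: (\x.x) y reduces to y
```

Note that nothing here is an evaluation: the rewrite only cuts and reglues edges, which is what a chemical reaction on a real molecule would have to do.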

This idea is new. Indeed, there are many other efforts towards molecular computing. There is the old ALCHEMY (algorithmic chemistry), where lambda calculus serves as inspiration, taking the application operation as a chemical reaction and lambda abstraction as reactive sites in a molecule. There is the field of DNA and RNA computing, where computations are embodied as molecular machines made of DNA or RNA building blocks. There is the pi calculus formalism, as pure in a sense as lambda calculus, based exclusively on the names of communication channels, which can be applied to chemistry. There is the idea of metabolic networks based on graph grammars.

But nowhere is there the idea to embed interaction net rewrites into real chemical reactions. So not arbitrary graph grammars, but a highly selected class. Not metabolic networks in general, but molecules designed so that they individually compute. Not solutions well stirred in a lab. Not static or barely dynamic lego-like molecules. Not Boolean-gate computing, but functional-programming-like computing.

From the CS side this is also new, because instead of concentrating on these rewrites as a tool for understanding lambda calculus reductions, we go far outside the realm of lambda calculus terms, into a purely random calculus with graphs.

But it has to be tried, right? Somebody has to try to identify this chemistry. Somebody has to try to use the basic concepts of functional programming from the point of view of the machine, not of the programmer.

For the mathematical and computing aspects see this mathoverflow question and answers.

For the general idea of the molecular computer see these html/js slides. They were prepared for a TED talk with, in my opinion, a very weird story behind it.

For the story side and the ethical concerns see, for example, these two short stories posted at telegra.ph: Internet of smells and Home remodeling (a reuse of Proust).

In order to advance, there is the need to find either, or rather both, funding and brain time from a team dedicated to this. Otherwise the project is stalled.

I tried very hard to find the funding and I have not succeeded (more weird stories; maybe some day I will tell them).

I was stalled and I had to go back to my initial purpose: emergent algebras. However, being so close to reverse engineering nature’s OS gives new ideas.

After a year of efforts I understood that it all comes down to stochastic luck, which can be groomed and used (somehow). This brings me to the stories of the present, for another post.


Summer report 2018, part 1

In this report I intend to present explanations about the scientific evolution of the chemlambda project, about the social context, and about new projects (em and stochastic evolution). I shall also write about my motivations and future intentions.

On the evolution of the chemlambda project and social context. 

Inception and initial motivations. This open notebook contains many posts witnessing the inception of chemlambda. I started to learn about and understand some aspects of the theory of computation as a geometer. My goal was to understand the computational content of working with the formalism of emergent algebras. I thought that basically any differential geometric computation reduces to a graph rewrite automaton, without passing through the usual, non-geometrical road, i.e. reduction to cartesian numerical manipulations. The interest is obvious to me, although I soon discovered that it is not obvious to many. A preferred analogy which I used was that of the fly and the researcher who tries to understand the visual system of the fly. The fly’s brain, a marvel of nature, does not work with, nor does it contain a priori knowledge of, cartesian geometry, while at the same time the explanations of the researcher are almost completely based on cartesian geometry considerations. What, then, is the way of the fly’s brain? And why can it be explained by recourse to abstractions which are sophisticated (for a fly)?

That is why I advanced the idea that there is a mechanism embedded in nature which makes abstractions concrete and runs them by the dumbest algorithm ever: random and local. In this sense, if all differential geometric computations can be executed by a graph rewrite automaton, using only random rewrites, applied only locally (i.e. using only a small number of nodes and arrows), then the fly’s brain way and the researcher’s brain way are simply the same; only the semantics (which the researcher has) is different, being just a historical construction based on centuries of reverse engineering techniques called geometry, physics, mathematics.

The emergent algebras formalism actually has two parts: the first can easily be reduced to graph rewrites, while the second concerns passing to the limit in a precise sense, thereby obtaining new, “emergent” rewrites and equivalences of rewrites. At that initial point I had nothing, neither the pure graph rewrites formalism nor the passing-to-the-limit formalism, except some particular results (in metric geometry and, intriguingly, in some problems related to approximate groups, then a hot topic in the work of Tao and collaborators).

That is how GLC (graphic lambda calculus) appeared. It was, in retrospect, a particular formulation analogous to interaction graphs, with the new idea that it is applicable to geometry, via the fact that the emergent algebra rewrites are of the same kind as the interaction graph rewrites. Interaction graphs are an old subject in CS, only my point of view was completely different from the classical one. Where the functional programming wizards were interested in semantics, global concepts and the power of humanly designed abstractions, I was interested in the minimal, machine-like (or fly’s-brain-like), random and automatic aspects.

Because approximate groups were a hot subject then, I embedded a little part of what I was thinking about into a locally financed grant collaboration. Recall that I was always an Open Science researcher, therefore I concentrated on openly (i.e., back then, via arXiv) constructing the fundamentals from which particular applications to approximate groups would have been low hanging fruit. However, for what I believe are political reasons (I publicly expressed, as usual, my strong feelings against academic and political corruption, which debases me as a citizen of my country, a country I have always loved very much), my grant funding was cancelled even though I did a lot of relevant work, by far the most publicly visible and original. Oh well, that’s life; I was never much interested in these political aspects. I learned a truth the hard way: my country has a great pool of talents which makes me proud, but at the same time talent here is choked by a group of mediocre and opportunistic managers. They thrive not because of their scientific talent, which is not nonexistent but only modest; they thrive because of their political choices. This state of affairs created an inverted pyramid of power (as seen from the talent’s point of view).

I therefore filed in my notebooks the problem of understanding how a linear dilation structure emerges from an approximate group. There was nothing to stop me from going full OS.

I therefore wrote the Chemical concrete machine paper because I meant it: there should be a way to make a machine, the dumbest of all, which works like Nature. This was an advance over GLC, because almost all of its rewrites were local (except the global fan-out) and because it advanced the idea of the dumbest algorithm, and that the dumbest algorithm is the way Nature works.

Moreover, the interest in GLC soared and I had the occasion to talk a lot with Louis Kauffman, a wonderful researcher whom I have always admired, the king of knot theory. There were also lots of CS people interested in GLC who tried to convince me that maybe GLC holds the key to true decentralized computing. A project with some of them and with Louis (contained in this arXiv paper) was submitted to an American agency. Unfortunately, even if the theoretical basis was appreciated, the IT part was not well done; actually it was almost nonexistent. My problem was that the ideas I advanced were not always accepted (sometimes not even by Louis); I needed somebody (I am a mathematician, not a programmer, see?) to write some pretty simple programs and let them run, to see whether I am right and semantics is just human BS or not.

For an artificial life conference, after the GLC project was not accepted for US funding, I wrote with Louis another presentation of chemlambda. The formalism was still not purely local. There, Louis presented his older and very interesting points of view about computation and knot theory. These were actually different from mine, because for me knot theory is yet another graph rewriting automaton (without a defined algorithm for functioning). Moreover, recalling emergent algebras, I did not manage to interest Louis in my point of view that the Reidemeister 3 move is emergent, not fundamental.

Louis Kauffman is the first programmer of chemlambda. Indeed, he succeeded in making some reductions in chemlambda using Mathematica. I don’t have Mathematica, as I never use anything on my computers which is not open. I longed for somebody, a real programmer, to write those darned simple programs for chemlambda.

I was interested back then in understanding chemlambda quines and complex reductions. On paper it was very, very hard to make progress.

Also, I did not succeed in gathering interest for the emergent algebras aspect. Chemlambda simplified the emergent algebra side by choosing a minimal set of nodes, some of which had an emergent algebra interpretation, but nobody cared. It is hard, though, to find anybody familiar both with modern metric geometry and analysis and with interaction nets.

After some depressing months I wrote the programs myself in two weeks and got the first chemlambda reduction made with a combination of awk programs and d3.js. The final repository is here.

The version of chemlambda used there (call it v2) is explained in the article Molecular computers. It is purely local.

From there my choice was to make chemlambda a flagship of Open Science. You know much of the story, but you may not know how and why I built more than 400 chemlambda molecules. The truth is that, behind the pretty animations, almost every molecule deserves a separate article; otherwise stated, when you look at a chemlambda molecule in action you see a visual version of a mathematical proof.

The chemlambda formalism has been externally validated, first by chemlambda-py (which has one rewrite wrongly implemented, but is otherwise OK), then by chemlambda-hask, which is much more ambitious, being a platform for a Haskell version.

As for the connection with knot theory, there is the Zipper Logic article (though, like chemlambda v1, it is not a purely local algorithm; it can easily be made so by the same techniques as chemlambda v2).

I also used figshare for the chemlambda collection of simulations (which covers the animations shown in the chemlambda collection on G+; see them starting from an independent list).

As far as the social communication aspects of this OS project are concerned, it was a huge success.

Busy beaver’s random church

The demo linked here surely looks impossible.

One computer, WHILE IT WORKS, is multiplied into 3 computers, each of which continues to work. This feat is done without any higher control, and no conflicts ever appear. All the computers work asynchronously (mimicked here by randomness). Moreover, they eventually come to read and write on the same tape.

There are no calls, nothing you have seen before. Everything is local, no matter how you slice it into separate parts.

_________________________________________

As a handful of slime mold. That’s how smart the whole net is.

I record here, to be sure that it’s read, a message of explanations I wrote today.

Before that, or after, you may want to check the new chemlambda demos, or to look at the older ones down the page, in case you have not noticed them. There are also additions on the moves page. Most related to this post is the vision page.

“First, I have lots of examples which are hand drawn. Some of them appeared for theoretical reasons, but the bad thing used to be that it was too complex to follow on paper what was happening (or to be sure of it). From the moment I started to use the mol file notation (aka g-patterns), it was as if I suddenly became able to follow in much more depth. For example, that is how I saw the “walker”, the “ouroboros” and finally the first quine, by reducing the graph associated to the predecessor (as written in lambda calculus). Finally, I am now able to see what happens dynamically.
However, all these examples are far from where I would like to be now; they correspond only to the introduction to the matter. But with every “technical” improvement new things appear. For example, I was able to write a program which generates, one by one, each of the roughly 1000 initial graphs made of 2 nodes and follows their reduction behaviour for a while, so that I can then study the interesting specimens. That is how I found all kinds of strange guys: left or right propagators, switchers, back propagators, all kinds of periodic (with period > 1) graphs.
So I mainly use as input mol files of graphs which I draw by hand or which I identify as interesting inside bigger mol files. This is a limitation.
Another input would be a program which turns a lambda term into a mol file. The algorithm is here.
Open problem: pick a simplified programming language based on lambda calculus and write a “compiler”, better said a parser, which turns it into a mol file. Then run the reduction with the favourite algorithm, for the moment the “stupid” deterministic one or the “stupid” random one.
While this seems feasible, it is a hard project for my programming capabilities (I’d say more because of my bad character: if I don’t succeed fast, I procrastinate in that direction).
It is tricky to see how a program written in that hypothetical simple language reduces as a graph, for two reasons:
  1. it has to be pure lambda beta, but not eta (so the functions do not behave properly, because of the lack of eta);
  2. the reductions in chemlambda do not parallel any reduction strategy I know of for lambda calculus.
Point (2) makes things interesting to explore as a side project; let me explain. So there is an algorithm for turning a lambda term into a mol file. But from that point on, the reductions of the graph represented by the mol file give other graphs which typically do not correspond to lambda terms. However, magically, if the initial lambda term can be reduced to a lambda term where no further reductions are possible, then the final graph of the chemlambda reduction does correspond to that term. (I don’t have a proof of that: even if I can prove that, restricted to lambda terms written only as combinators, chemlambda can parallel the beta reduction and the reduction of the basis combinators, and is able to fan-out a term, it does not mean that what I wrote previously is true for any term, exactly because during chemlambda reductions one gets out of the realm of lambda terms.)
In other cases, like that of the Y combinator, a different phenomenon happens. Recall that in chemlambda there is no variable or term which is transmitted from here to there; there is nothing properly corresponding to evaluation. Within chemlambda, though, the behaviour of the Y combinator is crystal clear. The graph obtained from Y as a lambda term, connected to an A node (application node), starts to reduce even if there is nothing connected to the other leg of the A node. This graph reduces in chemlambda to just a pair of nodes, A and FO, which then starts to shoot out pairs of A and FOE nodes (there are two fan-out nodes, FO and FOE). The Y is just a gun.
Then, if something else is connected to the other leg of the application node I mentioned, the following phenomenon happens: the other graph starts to self-multiply (due to the FOE node of the pair shot by Y) and, simultaneously, the part of it which has already self-multiplied starts to react with the application node A of the pair.
This makes the use of Y spectacular but also problematic, due to its excessive exuberance. That is why, for example, even if the lambda term for the Ackermann function reduces correctly in chemlambda (or see this 90 min film), I have not yet found a lambda term for the factorial which does the same (the behaviour of Y has to be tamed: it produces too many opportunities to self-multiply and it does not stop, albeit there are solutions for that, by using quines).
So it is not straightforward that mol files obtained from lambda terms reduce as expected, because of the mystery of the reduction strategy, because the intermediary steps go outside lambda calculus, and because graphs which correspond to terms that would simply be trashed in lambda calculus continue to reduce in chemlambda.
On the other side, one can always modify the mol file, or pre-reduce it and use it as such, or even change the strategy of the algorithm represented by the lambda term. For example, recall that, because of the lack of eta, the strategy based on defining functions and then calling them, or in general just currying stuff (for any reason other than future easy self-multiplication), is a limitation (of lambda calculus).
These lambda terms are written by smart humans who streamline everything according to the principle: turn everything into a function, then use the function when needed. Looking at what happens in chemlambda (and presumably in a processor), most of this functional book-keeping is beautiful for the programmer, but a limitation from the point of view of the possible alternative graphs which do the same as the functionally designed one, only more easily.
This is of course connected to chemistry. We may stare at biochemical reactions, well known in detail, without being able to make sense of them, because things happen there by shortcuts discovered by evolution.
Finally, the main advantage of a mol file is that any collection of lines from a mol file is also a mol file. The free edges (those which are capped by the script with FRIN (i.e. free in) and FROUT (i.e. free out) nodes) are free as a property of the mol file in which they reside, so if you split a mol file into several pieces and send them to several users, the pieces are still able to communicate, by knowing that this FRIN of user A is connected (by a channel? by an ID?) to that FROUT of user B. But this is for a future step which would take things closer to real distributed computing.
And to really close: if we put a molecule on every computer linked to the internet, then the whole net would be as complex (counting the number of molecules) as a mouse. But that’s not fair, because the net is not as structured as a mouse. Say, rather, as a handful of slime mold. That’s how smart the whole net is. Now, if we could make it smarter than that, by something like a very small API and a mol file as big as a cookie… “
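As an aside to the quoted message, the property that any subset of mol file lines is again a mol file, with its dangling edges capped by FRIN and FROUT nodes, is easy to sketch in code. The polarity table below (which port of each node is an input and which an output) is a toy assumption of mine standing in for the real chemlambda node signatures, so this illustrates the mechanism rather than reproducing the actual script.

```python
# Sketch: cap the free edges of a mol file fragment with FRIN / FROUT.
# Assumed (toy) port polarities per node type: '-' marks an in port,
# '+' marks an out port. Not the official chemlambda convention.
from collections import Counter

POLARITY = {"L": "+-+", "A": "--+", "FO": "-++", "FOE": "-++",
            "FI": "--+", "Arrow": "-+", "T": "-"}

def cap_free_edges(mol_lines):
    """mol_lines: strings like 'TYPE e1 e2 ...'. Returns the same lines
    plus a FRIN or FROUT cap for every edge name used only once."""
    counts = Counter()
    polarity_of = {}   # edge name -> polarity of its occurrence
    for line in mol_lines:
        typ, *edges = line.split()
        for pol, e in zip(POLARITY[typ], edges):
            counts[e] += 1
            polarity_of[e] = pol
    caps = []
    for e, n in counts.items():
        if n == 1:
            # an unmatched in port expects input (FRIN, free in);
            # an unmatched out port dangles outward (FROUT, free out)
            caps.append("FRIN " + e if polarity_of[e] == "-" else "FROUT " + e)
    return mol_lines + caps

fragment = ["L x x e", "A e y out"]
print(cap_free_edges(fragment))
# -> ['L x x e', 'A e y out', 'FRIN y', 'FROUT out']
```

Splitting a big mol file into fragments and capping each of them in this way is exactly the step which would let the FRIN of user A be paired with the FROUT of user B in the distributed scenario above.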
_________________________________________________________________________

You can build pretty much anything with that

Progressing; enjoy it; details in a few days.

UPDATE: and a nice video about the Omega combinator.


… and there is more.

I took two different reduction strategies for the same artificial chemistry (#chemlambda) and looked at what they give in two cases:
– the Ackermann function;
– the self-multiplication and reduction of (the ensuing two copies of) the Omega combinator.
The visualization is done in d3.js.
The first reduction strategy is the one which I used previously in several demonstrations, the one I call “stupid”, also because it is the simplest one can imagine.
The second reduction strategy is a random variant of the stupid one: a coin is flipped for each edge of the graph before any consideration of performing a graph rewrite there.
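For concreteness, here is the difference between the two strategies as control flow, written against an abstract rewrite interface. find_redex, apply_rewrite and graph.edges are hypothetical stand-ins for the actual chemlambda pattern matching and graph representation; only the sweep logic is the point.

```python
# Sketch of the two "stupid" strategies; the helpers below are
# placeholders, not the real chemlambda scripts.
import random

def find_redex(graph, edge):
    # placeholder: would test whether the nodes joined by `edge`
    # match the left hand side of some chemlambda rewrite
    return False

def apply_rewrite(graph, edge):
    # placeholder: would replace the matched pattern by its right hand side
    pass

def reduce_deterministic(graph, steps):
    """The 'stupid' deterministic strategy: sweep all edges in a fixed
    order and rewrite wherever a redex is found."""
    for _ in range(steps):
        for edge in list(graph.edges):
            if find_redex(graph, edge):
                apply_rewrite(graph, edge)

def reduce_random(graph, steps):
    """The random variant: flip a coin for each edge before even
    looking for a redex there, so rewrites fire in no fixed order."""
    for _ in range(steps):
        for edge in list(graph.edges):
            if random.random() < 0.5 and find_redex(graph, edge):
                apply_rewrite(graph, edge)
```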
The results can be seen in the following pages:
– Ackermann classic: http://imar.ro/~mbuliga/ackermann_2_2.html
– Ackermann random: http://imar.ro/~mbuliga/random_ackermann_2_2.html
– Omega classic: http://imar.ro/~mbuliga/omegafo.html
– Omega random: http://imar.ro/~mbuliga/random_omegafo.html
I have always said that one can take any reduction strategy with chemlambda and that all of them are interesting.


___________________________________