Tag Archives: artificial chemistry

The life of a 10-node quine, short video

I’m working on an article on the 10-node quine (see the previous posts here and here) and I have to prepare some figures, or rather js simulations of relevant phenomena.

I thought I’d make a screencast of this work in progress and put it in the Internet Archive:

https://archive.org/details/the-life-of-a-10-nodes-quine

UPDATE: I found, by playing, an even more, apparently, asymmetrical variant of chemlambda, but I also found a bug in the js version, which, fortunately, does not manifest in any of the cases from the menu of this page. For the moment it is not completely clear to me whether the more, apparently, asymmetrical variant behaves so much better when I reduce lambda terms because of the bug. I think not. I’ll tell after I fix the thing. It is something unexpected, not in my current program, but exciting, at least because either way is beneficial: I found a bug or I found a bounty.

10-node quine can duplicate

Only recently did I become aware that sometimes the 10-node quine duplicates. Here is a screen recording, at real speed, of one such duplication.

10-n0de-quine-duplicates

You can see for yourself by playing find-the-quine, either here or here.

Pick “10_nodes quine” or “original 10 nodes quine” from the menu (depending on where you play the game), then use “start”. When the quine dies, just hit the “reload” button. From time to time you’ll see that it duplicates. Most of the time this quine is short-lived; sometimes it lives long enough to make you want to hit “reload” before it dies.

Find the quine: who ordered this?

I put a version of Find the Quine on github. You may help (and have fun) by finding new chemlambda quines.

The page lets you generate random 10-node graphs (molecules), which are variants of a graph called “10_quine”. There are more than 9 billion different variants, so the space of all possibilities is vast.

So far, 15 new quine candidates have been found. You can see them on that page too.

New phenomena were discovered, to the point that now I believe that chemlambda quines are a dot in a sea of “who ordered this?”.

Who ordered this? Just look at “new quine? 3” and “10 nodes quine 5”. They display an amazing range of outcomes. One of them is that the graph dies fast, but others appear rather frequently. The graph blooms not so much into a living creature as into a whole ecology.

You can see several interesting pieces there.

There is a “growing blue tip” which keeps the graph alive.

There are “red spiders” who try to reach for the growing blue tip and eat it. But the red spiders sometimes return to the rest of the graph and rearrange it. They live and die.

There is a “blue wave” which helps the growing blue tip by fattening it.

There is a “bones structure” which appears while the red spiders try to eat the growing blue tip. It looks like the bones structure is dead, except that sometimes the red spiders travel back and modify the bones into new structures.

And there are also graphs which clearly are not quines, but they are extremely sensitive to the order of rewrites. See for example “!quine, sensitive 1”. There seems to be a boundary between the small realm of quines and the rest of the graphs. “new quine? 3” is on one side of that boundary and “!quine, sensitive 1” is on the other side.

So, play “Find the Quine” and mail me if you find something interesting!

At the top of the page there is a link to my pages to play and learn. Mind that the versions of Find the Quine there and at github are slightly different, because I update them all the time, so they are not synchronized. I use github in order to have a copy, just in case. In some places I can update the github pages, in others I can update my homepage…

How to use the chemlambda collection of simulations

The chemlambda_casting folder (1GB) of simulations is now available on Figshare [1].

How to use the chemlambda collection of simulations? Here’s an example. The synthesis from a tape video [2] is reproduced here with a cheap animated gif. The movie records the simulation file 3_tape_long_5346.html which is available for download at [1].

That simple.

If you want to run it on your computer then all you have to do is to download 3_tape_long_5346.html from [1], and download from the same place d3.min.js and jquery.min.js (which are there for your convenience). Put the js libs in the same folder as the html file. Open the html file with a browser; I strongly recommend Safari or Chrome (not Firefox, which chokes on these d3.js animations, for reasons related to d3). In case your computer has problems with the simulation (I used a macbook pro with safari), slow it down like this: edit the html file (with any editor) and look for the line starting with

return 3000 + (4*(step+(Math.random()*

and replace the “4” with “150”; that should be enough.
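For orientation, here is the general shape of such a delay function; this is a hypothetical sketch, not the verbatim content of the html file (the line above is quoted truncated on purpose), with the multiplier as the only knob you need to turn:

// hypothetical sketch, not the actual line from 3_tape_long_5346.html:
// the per-step delay is scaled by a multiplier; raising it from 4 to 150
// slows the whole d3 animation down
function stepDelay(step, multiplier) {
  return 3000 + multiplier * (step + Math.random());
}
stepDelay(10, 4);   // fast run, as shipped
stepDelay(10, 150); // slow run, for weaker machines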

Here is a longer explanation. The best would be to read the README [4] carefully.
“Advanced”: if you want to make another simulation for the same molecule, then follow these steps.

1. The molecule used is 3_tape_long_5346.mol which is available at the library of chemlambda molecules [3].

2. So download the content of the gh-pages branch of the chemlambda repository at github [4], as explained at that link.

3. Then follow the steps explained there and you’ll get a shiny new 3_tape_long_5346.html, which of course may differ in details from the initial one (it depends on the script used; if you use the random rewrites scripts then of course the order of rewrites may be different).

[1] The Chemlambda collection of simulations
https://doi.org/10.6084/m9.figshare.4747390.v1

[2] Synthesis from a tape
https://plus.google.com/+MariusBuliga/posts/Kv5EUz4Mdyp

[3] The library of chemlambda molecules
https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic/mol

[4] Chemlambda repository (readme) https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

More about chemical transactions

There is much more about these chemical transactions and their proofs. First, transactions are partially independent of the molecules. The blockchain may be useful only for having a distributed database of transactions and proofs, available for further use. But there’s more.

Think about this database as one of valid computations, which can then be reused in any combination or degree of parallelism. That, then, is the field of several competitions.

The same transaction can have several proofs, shorter or longer. It can have a big left pattern, therefore being costly to use in another computation. Maybe a transaction takes too long and is therefore not useful in combination with others.

When there is a molecule to reduce, the application of a transaction means:
– identify a subgraph isomorphic with the left pattern and pick one such subgraph
– apply the transaction to this particular subgraph (which is equivalent to: reduce only that subgraph of the molecule and freeze the rest, but do it in one step, because the sequence of reductions is already pre-computed)
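As a sketch, with hypothetical helper names (this is not code from the repository), the two steps could look like this:

// hypothetical sketch of applying a precomputed transaction;
// a molecule is a list of mol-style lines, findMatch and renamePorts
// are assumed helpers (subgraph matching is the costly part)
function applyTransaction(molecule, transaction) {
  // step 1: find a subgraph isomorphic with the left pattern
  const match = findMatch(molecule, transaction.left);
  if (match === null) return null; // the transaction does not fit
  // step 2: replace it by the right pattern in one step; the rest of
  // the molecule is frozen, since the cascade of rewrites from left
  // to right was already precomputed (the transaction's proof)
  const rest = molecule.filter(line => !match.lines.includes(line));
  return rest.concat(renamePorts(transaction.right, match));
}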

Now, which is more convenient: to reduce the molecule by using the random algorithm and the available graph rewrites, or to use some transactions which fit? The latter is fast (as concerns step 2) but costly (as concerns step 1); moreover, it may be that there is a transaction with a shorter proof for that particular molecule, one which mixes parts of several available precomputed transactions.

Therefore the addition of transactions and their proofs (needed to be able to validate them) to the database should be made in a way which profits from this competition.

If I see the reduction of a molecule (which may itself be distributed) as a service, then besides the competition for making available the most useful transactions with the shortest proofs, there is another competition, between reducing the molecule by brute force and using the available transactions, with all the time costs they involve.

If well designed, these competitions should lead to the emergence of clusters of useful transactions (call such a cluster a “chemlisp”) and also to the emergence of better strategies for reducing molecules.

This will lead to more and more complex computations becoming feasible with this system; probably they will soon become very hard to understand by a human mind, or even by using IT tools on a limited part of the users of the system.

Chemical transactions and their proofs

By definition a transaction is either a rewrite from the list of accepted rewrites (say of chemlambda) or a composition of two transactions which match. A transaction has a left and a right pattern and a proof (which is the transaction expressed as a cascade of accepted rewrites).

When you reduce a molecule, the output is a proof of a transaction. The transaction proof itself is more important than the molecule from the start. Indeed, if you think that the transaction proof looks like a list

rm leftpattern1
add rightpattern1

where leftpattern1 is a list of lines of a mol file, and the same for rightpattern1,

then you can deduce from the transaction proof only the following:
– the minimal initial molecule needed to apply this transaction; call it the left pattern of the transaction
– the minimal final molecule appearing after the transaction; call it the right pattern of the transaction

and therefore any transaction has:
– a left pattern
– a right pattern
– a proof made of a chain of other transactions which match (the right pattern of transaction N contains the left pattern of transaction N+1)

It would be useful to think in terms of transactions and their proofs as the basic objects, not molecules.
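As a rough illustration (hypothetical field names, not a specification), such a basic object could be stored as:

// hypothetical record for a transaction, as described above
const transaction = {
  left:  [],  // minimal initial molecule, as mol-style lines
  right: [],  // minimal final molecule, as mol-style lines
  proof: []   // chain of matching transactions: the right pattern of
              // transaction N contains the left pattern of transaction N+1
};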

The replicant

This is a molecular machine designed as a patch which would upgrade biological ribosomes. Once it attaches to a ribosome, it behaves in a way almost similar to the synthetic ribosome Ribo-T, recently announced in Nature 524, 119–124 (06 August 2015), doi:10.1038/nature14862 [1]. It thus enables an orthogonal genetic system (i.e., citing from the mentioned Nature letter, “genetic systems that could be evolved for novel functions without interfering with native translation”).

The novel function it is designed for is more ambitious than the synthesis of specialized new proteins. It is, instead, a two-way translation device between real chemistry and programmable artificial chemistry.

It behaves like a bootstrapper in computing. It is itself simulated in chemlambda, an artificial chemistry which was recently proposed as a means towards molecular computers [2].  The animation shows one of the first successful simulations.

spiral_boole_construct2_orig_in

With this molecular device in place, we can program living matter by using living cells themselves, instead of using, for example, complex, big 3D DNA printers like the ones developed by Craig Venter.

The only missing step, until recently, was the discovery of the basic translation of the building blocks of chemlambda into real chemistry.

I am very happy to make public a breakthrough result by Dr. Eldon Tyrell/Rosen, a genius who left academia some years ago and pursued a private career. It appears that he got interested early in this mix of lambda calculus, geometry and chemistry, and he managed to reproduce, with real chemical ingredients, two of the chemlambda graph rewrites: the beta rewrite and one of the DIST rewrites.

He tells me in a message that he is already working on prototypes of replicants.

He suggested the name “replicant” instead of “synthetic ribosome” because a replicant, according to him, is a device which replicates a computer program (in chemlambda molecular form) into a form compatible with the cellular DNA machinery; conversely, it may convert certain (RNA) strands into chemlambda molecules, i.e. back into a synthetic form corresponding to a computer program.

[1] Protein synthesis by ribosomes with tethered subunits,  C. Orelle, E. D. Carlson, T. Szal,  T. Florin,  M. C. Jewett, A. S. Mankin
http://www.nature.com/nature/journal/v524/n7563/full/nature14862.html

[2] Molecular computers, M Buliga
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

[This post is a reply to +Yonatan Zunger  post
https://plus.google.com/u/0/+YonatanZunger/posts/6a3C5Nm5fNS
where he shows that the INCEPT DATE of the Blade Runner replicant Roy Batty appears to be 8 Jan, 2016.
So here is a replicant, in the inception phase 🙂 ]

PS: The post appeared as well in the chemlambda collection:

https://plus.google.com/+MariusBuliga/posts/jQTh741YYdP

Mind tricks

One of my goals is to uncover the geometry in the computations. Most people see visualizations as cute, but unnecessary  additions.  Maybe with some very limited pedagogical value. Not the real thing.

The really funny thing is that, on average, people tend to take a visualization too seriously. Some animations trigger all sorts of reflexes which mislead the viewers into seeing too much.

The animal is there, skin deep. Eye deep.

 

A recent example of using visualizations for research is how I arrived at building a kinesin-like molecule by looking at the Y combinator and permutations.

This is a phenomenon which appeared previously in the artificial chemistry chemlambda. Recall how the analysis of the predecessor lambda term led to the introduction of chemlambda quines?

Same here.

Chemical computation is a sort of combinatorial movement, if this makes any sense. Lambda calculus and other means towards rigorous notions of computation clash with reality: chemical computations, in living organisms say, are horrendously complex movements of atoms and rearrangements of bonds. There is no input, nor output, written with numbers. That’s all there is: movements and rearrangements.

Chemlambda marks some points by showing how to take lambda calculus as inspiration, then how we can see something interesting, related to movements and rearrangements, in the behaviour of the lambda term, then how we can exploit this for designing a pure (artificial) chemistry tour de force of unsupervised cascades of reactions which achieve some goal. Unsupervised, random!

OK, so here is a kinesin built in chemlambda. I see it works and I want to play a bit with it and also to show it.

The following animation has stirred some attention on 4chan, and less attention on google+, of course, compared with others from the amazing chemlambda collection 🙂 (which is why I deleted it)

kinesinX2

It makes sense: you can relate to the two kinesins which meet, salute each other, then go their way. One of them detaches from the microtubule (a particularly designed one, which allows kinesins to go in both directions, hm, because I can). The other roams a bit, quick, concerned.

It’s the result of randomness, but it conveys the right info, without introducing too much unrelated stuff.

The next one is about 4 kinesins on a circular microtubule.

kinesin_round

This is a bit too much. They look like suspiciously quick-moving spiders… Not something to relate to.

But still, there is no false suggestion in it.

People love the following one more, where there are 8 kinesins.

kinesin_round_2

It looks like a creature which tries to feel the boundary of the frame. Cool, but misleading, because:

  • the coordinates of nodes of the graph in this representation are irrelevant
  • the boundary of the frame is not part of the model, it means nothing for the molecule.

In chemlambda there is a choice made: chemistry is separated from physics. The chemistry part (so to say), i.e. the graph rewrites and the algorithm of their application, is independent of the d3.js rendering of the evolution of the graph.
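In code terms, the separation amounts to something like this (a schematic, not the actual scripts):

// schematic of the chemistry/physics split described above
// "chemistry": pick and apply graph rewrites; no coordinates involved
function chemistryStep(graph, rewrites) {
  const candidates = findRewriteSites(graph, rewrites); // assumed helper
  return applyRandomSubset(graph, candidates);          // assumed helper
}
// "physics": a d3 force layout merely draws whatever graph exists now;
// the positions it invents mean nothing in the model
function render(graph) {
  d3.layout.force().nodes(graph.nodes).links(graph.links).start();
}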

But people love to see graphs in space, they love to see boundaries, and they relate to things which have an appearance of life (or some meaning).

That’s how we are made, no problem, but it plays mind tricks on us.

A clever influencer would play these tricks in favor of the model…

The viewers, if asked to support the research, would be less willing to do it after seeing the fast moving spiders…

I find this very entertaining!

For the record, here is another direction of thinking, inspired by the same permutations which led me to kinesins.

https://plus.google.com/u/0/+MariusBuliga/posts/1PpXctAdbnE

Deterministic vs random, an example of grandiose shows vs quieter, functioning anarchy

In the following video you can see, at the left, the deterministic and, at the right, the random evolution of the same molecule, duplex.mol from the chemlambda repository. They take about the same time.

The deterministic one is like a ballet; it has a comprehensible development, it has rhythm and drama. Clear steps and synchronization.

The random one is more fluid, less symmetric, more mysterious.

What do you prefer, a grand synchronized show or a functioning, quieter anarchy?

Which one do you think is more resilient?

What is happening here?

The molecule is inspired by lambda calculus. The computation which is encoded is the following. Consider the lambda term for the identity function, i.e. I=Lx.x. It has the property that IA reduces to A for any term A. In the molecule it appears as a red trivalent node with two ports connected, so it looks like a dangling red globe. Now, use a tree of fanouts to multiply (replicate) this identity 8 times, then build the term

(((II)(II))((II)(II)))(((II)(II))((II)(II)))

Then use one more fanout to replicate this term into two copies and reduce all. You’ll get two I terms, eventually.
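On the lambda calculus side alone (leaving the graphs aside), the reduction can be checked directly in a js console:

// I = λx.x as a javascript function
const I = x => x;
// every application of I returns its argument, so each level of the
// tree collapses back to I
const a = I(I);   // II -> I
const b = a(a);   // (II)(II) -> I
const c = b(b);   // ((II)(II))((II)(II)) -> I
const d = c(c);   // the full term above -> I
console.log(d === I); // true
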
In the deterministic version the following happens.

– the I term (seen as a red dangling node) is replicated (by a sequence of two rewrites, a detail) and gradually the tree of fanouts is destroyed

– simultaneously, the tree of applications (i.e. the syntactic tree of the term, but seen with the I’s as leaves) is replicated by the fanout from the end

– because the reduction is deterministic, we get 16 copies of the I’s exactly when we get two copies of the application tree, so in the next step there will be a further replication of the 16 I’s into 32, and then there will be two disconnected copies of the molecule which represents ((II)(II))((II)(II))

– after that, this term-molecule reduces to (II)(II), then to II, then to I, but recall that there are two copies, therefore you see this twice.

In the random version everything mixes. Anarchy. Some replications of the I’s reach the tree of applications before it has finished replicating itself, then reductions of the kind II –> I happen at the same time as replications of other pieces. And so on.
There is no separation of the stages of this computation.
And it still works!

I used quiner_experia, with the mol file duplex.mol. The first time, I modified all the weights to 0 (to have deterministic application) and took the rise parameter = 0 too (this is specific to quiner_experia, not present in quiner), because the rise parameter lowers the probabilities of new rewrites, exponentially, during the same step, in order to give fair chances to any subset of all the possible rewrites.
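I won’t paste quiner_experia here; as a hedged sketch, the effect of a rise parameter on the k-th candidate rewrite within one step is of this general shape:

// hypothetical sketch, not the actual quiner_experia code: within a
// single step each further candidate rewrite is accepted with an
// exponentially damped probability, so no fixed subset of the possible
// rewrites dominates the step; rise = 0 means no damping
function acceptProbability(weight, rise, k) {
  return weight * Math.pow(2, -rise * k);
}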
Then I made a screencast of the result, without speeding it, and by using safari to run the result.
For the random version I took all the weights equal to 1 and the rise parameter equal to 8 (empirically, this gives the smoothest evolution for a generic molecule from the list of examples). I ran the result with safari and screencasted it.
Then I put the two movies side by side (deterministic at left, random at right) and made a screencast of them running in parallel. (Almost in parallel: there is about a 1/2 second difference, because I started the deterministic one first, by hand.)
That’s it, enjoy!
For chemlambda look at the trailer from the collections of videos I have here on vimeo.

Replication, 4 to 9

In the artificial chemistry chemlambda there exist molecules which can replicate; they have a metabolism and they may even die. They are called chemlambda quines, but a convenient shorter name is: microbes.
In this video you see 4 microbes which replicate in complex ways. They are based on a simpler microbe whose life can be seen live (as a suite of d3.js animations) at [1].
The video was done by screencasting the evolution of the molecule 5_16_quine_bubbles_hyb.mol and with the script quiner_experia, all available at the chemlambda GitHub repository [2].

[1] The birth and metabolism of a chemlambda quine. (browsers recommended: safari, chrome/chromium)
chorasimilarity.github.io/chemlambda-gui/dynamic/A_L_eggshell.html

[2] The chemlambda repository: github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

Artificial chemistry suggests a hypothesis about real biochemistry

What is the difference between a molecule encoded in a gene and the actual molecule produced by a ribosome from a copy of the encoding?

synthetic_bigpred

Here is a bootstrapping hypothesis based on the artificial chemistry chemlambda.

This is a universal model of computation which is supposed to be very simple, though very close to the real chemistry of individual molecules. The model is implemented by an algorithm which uses as input a data format called a mol file.

The language used does not matter, although there may be more elegant versions than mine, like the one by +sreejith s (still work in progress though) [1].

Since the model is universal, the algorithm and the mol data structure can themselves be turned into an artificial molecule which reacts with the usual invisible enzymes from the model and does the computation of the reduction of the original molecule.

The bootstrapping hypothesis is that the original molecule is like the synthesized molecule, and that the mol file format turned into a molecule is the stored version of the molecule.

In the post a previous post [2] is mentioned, where this was tried by hand for a molecule called the 20 quine; but now there are new scripts in the chemlambda repository which allow one to do the same for any molecule (limited by the computer and the browser, of course).

The final suggestion is that the hypothesis can be continued along these lines, by saying that the “enzymes” which do the rewrites are (in this bootstrapping sense) the rewriting part of the algorithm.

[1] Chemlambda-py

[2] Invisible puppeteer

https://plus.google.com/+MariusBuliga/posts/2XMSyKJrPPW

Synthetic stuff. Now there is a script which allows one to synthesize any chemlambda molecule, like in the previous Invisible puppeteer post.
Look in the chemlambda repository, namely at the pair synthetic.sh and synthetic.awk from this branch of the repository (i.e. the active branch).
In this animation you see the result of the “synthetic” bigpred.mol (which was the subject of the recent hacker news link).

What I did:
– bash synthetic.sh and choose bigpred.mol
– the output is the file synt.mol
– bash quineri.sh and choose synt.mol (I had a random evolution, with cycounter=150 and time_val=5 for safari; for chromium I chose time_val=10 or even 20)
– the output is synt.html
– I opened synt.html with a text editor to change a few things: at lines 109-110 I chose a bigger window (var w = 800, h = 800;) and smaller charges and gravity (look for and modify to .charge(-10) .gravity(.08)).
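Put together, the edited fragments of synt.html look roughly like this (a sketch of the d3 v3 force layout settings, not the verbatim file):

// sketch of the edited settings in synt.html
var w = 800, h = 800;            // a bigger drawing window
var force = d3.layout.force()    // d3 v3 API, as used by these pages
    .charge(-10)                 // weaker node repulsion
    .gravity(.08)                // weaker pull towards the center
    .size([w, h]);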

Then I opened synt.html with safari. (It also worked with chromium.) It’s a hard time for the browser because the initial graph has more than 1400 nodes. (The problem is not coming from setTimeout because, compared with other experiments, there are not as many; it comes from an obscure problem of d3.js with prototypes. Anyway, this makes firefox lousy, which is a general trend at firefox, I don’t know why; chromium is OK and safari great. I write this as a faithful user of firefox!)
In this case even safari had to think a bit about life, philosophy, whatnot, before it started.

I made a screencast with Quicktime and then sped it up progressively to 16X, in order to be able to fit it into less than 20s.

Then I converted the .mov screencast to .gif and I proudly present it to you!

It worked!

Now that’s a bit surprising because, recall, what I do is to introduce lots of new nodes and bonds, approx 6X the initial ones, which separate the molecule which I want to synthesize into a list of nodes and a list of edges. The list of edges is transformed into a permutation.
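Schematically (this is not the synthetic.awk code), the edges-to-permutation step can be pictured like this:

// hypothetical sketch: read the edge list as a pairing of ports, i.e. a
// permutation sending each port to its partner; applying it gradually
// reconnects the separated nodes into the target molecule
function edgesToPermutation(edges) {
  const perm = new Map();
  for (const [from, to] of edges) perm.set(from, to);
  return perm;
}
edgesToPermutation([["a1", "b2"], ["b3", "c1"]]); // Map { a1 => b2, b3 => c1 }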

Now, during the evolution of the synt molecule, what happens is that the permutation is gradually applied (because of randomness), and it mixes with the evolution of the active pieces, which already start to rewrite.

But it worked, despite the ugly presence of a T node, which is the one which may sometimes create problems due to such interferences, if the molecule is not very well designed.

At the end I recall what I believe is the take-away message, namely that the mol file format is a data structure which can itself be turned into a molecule.

The inner artificial life of a cell, a game proposal

The Inner Life of a Cell is an excellent, but passive, window.

It is also scripted according to usual human expectations: synchronous moves, orchestrated reactions at a large scale. This is of course either something emergent in real life, or totally unrealistic.

As you know, I propose to use the artificial chemistry chemlambda for making real, individual molecular computing devices, as explained in Molecular computers.

But much more can be aimed at, even before the confirmation that molecular computers, as described there,  can be built by us humans.

Of course Nature builds them everywhere; we are made of them. It works without any external control, not as a sequence of lab operations, asynchronously, in a random environment, and it is very hard to understand if there is a meaning behind the inner life of a living cell; but it works nevertheless, without the need of a semantics to streamline its workings.

So obvious, yet so far from the IT way of seeing computation.

Despite the huge and exponential advances in synthetic biology, despite the fact that many of these advances are related to IT, despite more and more ways to control biological workings, I believe that there has to be a way to attack the problem of computations in biology from the basics. Empirical understanding is great and will fuel this amazing synthetic biology evolution for some time, but why not think about understanding how life works, instead of using biological parts to make functions, gates and other components of the actual IT paradigm?

As a step, I propose to try to build a game-like artificial life cell, based on chemlambda. It should look and feel like the Inner life of a cell video, only that it would be interactive.

There are many bricks already available; some molecules (each with its own story and motivation) are in the chemlambda repository, others are briefly described, with animations, in the chemlambda collection.

For example:
– a centrosome and the generated microtubules like in

https://plus.google.com/+MariusBuliga/posts/1mUDCRRynfH
– kinesins as adapted walkers, like in

https://plus.google.com/+MariusBuliga/posts/8agbhCH6L7B

– molecules built from other ones like RNA from DNA

– programmed computations (mixing logic and biologic)

– all in an environment looking somehow like this

https://plus.google.com/+MariusBuliga/posts/DFuT9D7coZL
Like in a game, you would not be able to see the whole universe at once, but you could choose to concentrate on this part or that part.
You could build superstructures from chemlambda quines and other bricks, then you could see what happens either in a random environment or in one where, for example, reductions happen when triggered by the neighbourhood of your mouse pointer (as if the mouse pointer were a fountain of invisible enzymes).

Videos like this one, about the internal workings of a neuron, would become tasks for the gamer.

______________________________________________________________

An apology of molecular computers and answers to critics

This is how a molecular computer would look, if seen with a magically powerful microscope. It is a single molecule which interacts randomly with other molecules, called “enzymes”, invisible in this animation.

molecular_computer_new

There is no control over the order of the chemical reactions. This is the idea, to compute without control.

The way it works is like this: whenever a reaction happens, this creates the conditions for the next reaction to happen.

There is no need to use a supercomputer to model such a molecule, nor is it reasonable to try, because of the big number of atoms.

It is enough instead to find real molecular assemblies for nodes, ports and bonds, figured here by colored circles and lines.

The only computations needed are those for simulating the family of rewrites, i.e. the chemical reactions. Every such rewrite involves up to 4 nodes, so the computational task is manageable.

Verify once that the rewrites are well done, independently of the situation where you want to apply them, that is all.

Once such molecular compounds are found, the next task is to figure out how to build (by chemical reactions) such molecules.

But once one succeeds in building one molecule, the rest is left to Nature’s way of computing: random, local, asynchronous.

From this stage there is no need to simulate huge molecules in order to know they work. That is something given by the chemlambda formalism.

It is so simple: translate the rewrites into real chemistry (they are easy), then let go of the unneeded control from that point on.

This animation is a screencast of a part of the article Molecular computers
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html
and everything can be validated (i.e. verified on your own) by using the chemlambda repository
https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic

Now I’ll pass to a list of criticisms which, faced with the evidence, look uninformed:
1. Chemlambda is one of those rewriting systems everybody knows. An ignorant claim: while it is true that some rewrites appear all over the place, from string theory to knot theory to category theory to the geometry of interaction, the family of graphs considered is not the same, because these graphs are purely combinatorial objects which don’t need a global embedding in a schematic space-time, as all the other formalisms do. Moreover, the choice of the rewrites is such that the system works by local rewriting only, with no global control over the cascade of rewrites. No other formalism from the family does that.

2. It is well known that all this is already done in the category theory treatment of lambda calculus.

False. If one really reads what they do in category theory with lambda calculus, one quickly figures out that they can’t do much for untyped lambda-beta calculus, that is, without eta reduction. This is mentioned explicitly in Barendregt, for example, but the hype around categories and lambda calculus is so pervasive that people believe more than what is actually there.

3.  Chemical computing is old stuff: DNA computing, membrane computing, the chemical abstract machine, algorithmic chemistry.

Just because it is chemical computing, it does not mean that it is in the family mentioned.

The first name of chemlambda was “chemical concrete machine”, and there are comparisons with the chemical abstract machine in
http://arxiv.org/abs/1309.6914
(btw, I see that some people now discover “catalysts”, without credits in the written papers)
The cham is a formalism working with multisets of molecules, not with individual ones, and the computation is done by what corresponds to lab operations (splitting a solution in two, heating, cooling, etc.).
The membrane computing work is done around membranes which enclose containers of multisets of molecules, the membranes themselves being abstract concepts of a global nature, while in reality, as well as in chemlambda, everything is a molecule. Membranes exist in reality, but they are made of many molecular compounds.
DNA computing is an amazing research subject, which may be related to chemlambda if there is a realization of chemlambda nodes, ports and bonds, but not otherwise, because there is not, to my knowledge, any model in DNA computing with the properties: individual molecules, random reactions, no lab operations.
Algorithmic chemistry is indeed very much related to chemlambda, by the fact that it proposes a chemical view of lambda calculus. But from this great insight the paths are very different. In algorithmic chemistry the application operation from lambda calculus represents a chemical reaction and the lambda abstraction signals a reactive site. In chemlambda, application and lambda abstraction correspond to atoms of molecules. Besides, chemlambda is not restricted to lambda calculus: only some chemlambda molecules can be put in relation with lambda terms, and even for those, the reactions they enter don’t guarantee that the result is a molecule for a lambda term.

Conclusion: if you are a chemist, consider chemlambda; there is nothing like it already proposed. The new idea is to let control go and instead chain the randomly appearing reactions by their spatial patterns, not by lab operations, nor by impossibly sophisticated simulations.
Even if in reality there were more constraints (coming from the real spatial shapes of the molecules constructed from these fundamental bricks), this would only influence the weights of the random encounters with the enzymes, thus not modifying the basic formalism.
And if it works in reality, even only in situations where there are cascades of tens of reactions, not hundreds or thousands, even that would be a tremendous advance in chemical computing, compared with the old idea of copying boolean gates and silicon computer circuits.

______________________________________

Appeared also in the chemlambda collection microblog

https://plus.google.com/+MariusBuliga/posts/DE6mWMbieFk

______________________________________

What if… it can be done? An update of an old fake news post

In May 2014 I made a fake news post (with the tag WHAT IF) called Autodesk releases Seawater. It was about this big name which had just released a totally made-up product called Seawater.

“SeaWater is a design tool for the artificial life based decentralized Internet of Things.”

In the post this picture is featured

[source]

scoop-of-water-magnified-990x500

… and I wrote:

“As well, it could be  just a representation of the state of the IoT in a small neighbourhood of you, according to the press release describing SeaWater, the new product of Autodesk.”

Today I want to show you this:

3_27_quine

or, better, go and look at this video in fullscreen HD

The contents are explained in the post from the microblogging collection chemlambda:

27 microbes. “This is a glimpse of the life of a community of 27 microbes (aka chemlambda quines). Initially the graph has 1278 nodes (atoms) and 1422 edges (bonds). There are hundreds of atoms refreshed and bonds made and broken at once.”

Recall that all this is done with the simplest of algorithms, which turns chemlambda into an asynchronous graph rewrite automaton.

A natural development would be to go further, exactly like described in the Seawater post.

Because it can be done 🙂

_________________________________

The Internet can be your pet

or you could have a pet which embeds and runs a copy of the whole Internet.

The story in this post starts from this exploratory posting on Google plus from June 2nd, 2015, which zooms from sneakernet to delay-tolerant networking to Interplanetary Internet to Nanonetworks to DNA digital data storage to molecular computers.

I’ll reproduce the final part, then I’ll pass to the Internet-as-your-pet thing.

“At this stage things start to be interesting. There is this DNA digital data storage technique
http://en.wikipedia.org/wiki/DNA_digital_data_storage
which made the news recently by claiming that the whole content of the Net fits into a spoon of DNA (prepared so to encode it by the technique).

I have not been able to locate the exact source of that claim, but let’s believe it because it sounds reasonable (if you think at the scales involved).

It can’t be the whole content of the net; it must mean the whole passive content of the net. Data. An instant (?) picture of the data, no program execution.

But suppose you have that spoonful of DNA, how do you use it? Or what about also encoding the computers which use this data, at a molecular level.

You know, like in the post about one Turing Machine, Two Turing Machines https://plus.google.com/+MariusBuliga/posts/4T19daNatzt
if you want classical computers running on this huge DNA tape.

Or, in principle, you may be able to design a molecular google search … molecule, which would interact with the DNA data to retrieve some piece of it.

Or you may just translate all programs running on all computers from all over the world into lambda calculus, then turn them into chemlambda molecules, maybe you get how much, a cup of molecular matter?

Which, attention:
– it executes as you look at it
– you can duplicate it into two cups in a short matter of time, in the real world
– which makes the sneakernet simply huge compared to the virtual net!

Which brings of course molecular computers proposal to the fore
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html  ”

Let’s develop this a bit! (source)

The following are projections about the possible future of biochemical computations with really big data: the whole, global data produced and circulated by humans.

They are based on estimates of the information content of the biosphere from the article [1], and on a proposal for life-like molecular computing.

Here are the facts. In [1] there are given estimates about the information content of DNA in the biosphere, which are used further.

One estimate is that there are about 5 X 10^10 tonnes of DNA, in a biomass of about 2 X 10^12 tonnes, which gives a proportion of DNA in biomass of about 1/40.

This can be interpreted as: in order to run the biochemical computations with 1g of DNA, about 40g of biochemical machinery are needed.
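A quick order-of-magnitude check with the figures quoted above:

// DNA fraction of the biosphere biomass, from the estimates in [1]
const dnaTonnes = 5e10, biomassTonnes = 2e12;
console.log(dnaTonnes / biomassTonnes); // 0.025, i.e. about 1/40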

From the estimate that the biomass contains about 5 X 10^30 cells, it follows that 4g of DNA are contained (and thus run in the biochemical computation) in about 10^13 cells.

The Internet has about 3 X 10^9 computers and the whole data stored is equivalent to about 5g of DNA [exact citation not yet identified, please provide a source].

Based on comparisons with the Tianhe-2 supercomputer (which has about 3 X 10^6 cores), it follows that the whole Internet processes the equivalent, as an order of magnitude, of 10^3 such supercomputers.

From [1] (and from the rather dubious equivalence of FLOPS with NOPS) we get that the whole biosphere has a power of 10^15 X 10^24 NOPS, which gives for 10^13 cells (the equivalent of 4g of DNA) about 10^17 NOPS. This shows that the power of the biochemical computation of 4g of DNA (embedded in the biochemical machinery of about 160g) is approximately of the same order as the power of computation of the whole Internet.

Conclusion until now: the whole Internet could be run in a “pet” living organism of about 200g. (Comparable to a rat.)

This conclusion holds only if there is a way to map silicon and TM based computers into biochemical computations.

There is a huge difference between these two realms, which comes from the fact that the Internet and our computers are presently built as a hierarchy, with multiple levels of control, while at the same time the biochemical computations in a living cell do not have any external control (and there is no programming).

It is therefore hard to understand how to map the silicon and TM based computations (which run one of the many computation paradigms embedded in the way we conceive programming as a discipline of hierarchical control) into a decentralized, fully asynchronous biochemical computation in a random environment.

But this is exactly the proposal made in [2], which shows that in principle this can be done.

The details: in [2] an artificial chemistry is proposed (instead of real-world chemistry), together with a model of computation which satisfies all the requirements of biochemical computations.
(See very simple examples of such computations in the chemlambda collection https://plus.google.com/u/0/collection/UjgbX )

The final conclusion, at least for me: provided there is a way to map this (very basic) artificial chemistry into real chemical reactions, one day you might have a copy of the whole Internet running in your pet.

[1] An Estimate of the Total DNA in the Biosphere,
Hanna K. E. Landenmark,  Duncan H. Forgan,  Charles S. Cockell,
http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002168

[2] Molecular computers,
Marius Buliga, 2015
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

Real or artificial chemistries? Questions about experiments with ring replications

The replication mechanism for circular bacterial chromosomes is known. There are two replication forks which propagate in two directions, until they meet again somewhere and the replication is finished.

Bidirectionalrep2

[source, found starting from the wiki page on circular bacterial chromosomes]

In the artificial chemistry chemlambda something similar can be done. This leads to some interesting questions. But first, here is a short animation which describes the chemlambda simulation.

ringduplication

The animation has two parts, in which the same simulation is shown. In the first part some nodes are fixed, in order to ease the observation of the propagation of the replication, which is like the propagation of the replication forks. In the second part no node is fixed, which makes it easy to notice that eventually we get two ring molecules from one.

____________

If the visual evidence convinced you, then it is time to go to the explanations and questions.

But notice that:

  • The replication of circular DNA molecules is done with real chemistry
  • The replication of the circular molecule from the animation is done with an artificial chemistry model.

The natural question to ask is: are these chemistries the same?

The answer may be more subtle than a simple yes or no. As more visual food for thought, take a look at a picture from the Nature Nanotechnology Letter “Self-replication of DNA rings” http://www.nature.com/nnano/journal/vaop/ncurrent/full/nnano.2015.87.html by Junghoon Kim, Junwye Lee, Shogo Hamada, Satoshi Murata & Sung Ha Park

nnano.2015.87-f1

[this is Figure 1 from the article]

This is a real ring molecule, made of patterns which themselves are made of DNA. The visual similarity with the start of the chemlambda simulation is striking.

But this DNA ring is not a DNA ring as in the first figure. It is made by humans, with real chemistry.

Therefore the boldfaced question can be rephrased as:

Are there real chemical assemblies which react as if they were nodes and bonds of the artificial chemistry?

Like actors in a play, there could be a real chemical assembly which plays the role of a red atom in the artificial chemistry, another real chemical assembly which plays the role of a green atom, another for a small atom (called “port”) and another for a bond between these artificial chemistry atoms.

From one side, this is not surprising; for example, a DNA molecule is figured as a string of letters A, C, T, G, but each letter is a real molecule. Take A (adenine)

800px-Adenine-3D-balls

[source]

Likewise, each atom from the artificial chemistry (like A (application), L (lambda abstraction), etc) could be embodied in real chemistry by a real molecule. (I am not suggesting that the DNA bases are to be put into correspondence with artificial chemistry atoms.)

Similarly, there are real molecules which could play the role of bonds. As an illustration (only), I’ll take D-deoxyribose, which is involved in the backbone structure of a DNA molecule.

D-deoxyribose_chain-3D-balls

[source]

So it turns out that it is not so easy to answer the question, although it may be much easier for a chemist than for a mathematician.

___________

0. (A few words about validation.) If you have the alternative of validating what you read, then that is preferable to authority statements or hearsay from editors. Most often they use anonymous peer reviews which are unavailable to the readers.

Validation means that the reader can form an opinion about a piece of research by using the means which the author provides. Of course, if the means are not enough for the reader, then it is the author who takes the blame.

The artificial chemistry  animation has been done by screen recording of the result of a computation. As the algorithm is random, you can produce another result by following the instructions from
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md
I used the mol file model1.mol and quiner.sh. The mol file contains the initial molecule, in the mol format. The script quiner.sh calls the program quiner.awk, which produces an html and javascript file (called model1.html), which you can see in a browser.
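For readers who have not seen the format: each line of a mol file is a node type followed by its port labels, and two ports bond when they share a label. A tiny made-up fragment (not model1.mol itself) looks like

L 1 2 3
A 3 1 4
FRIN 2
FROUT 4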

I  added text to such a result and made a demo some time ago

http://chorasimilarity.github.io/chemlambda-gui/dynamic/model1.html

(when? look at the history in the github repo, for example: https://github.com/chorasimilarity/chemlambda-gui/commits/gh-pages/dynamic/model1.html)

1. Chemlambda is a model of computation based on artificial chemistry which is claimed to be very close to the way Nature computes (chemically), especially when it comes to the chemical basis of computation in living organisms.
This is a claim which can be validated through examples. This is one of the examples. There are many more; the chemlambda collection shows some of them in a way which is hopefully easy to read (and validate).
A stronger claim, made in the article Molecular computers (link below), is that chemlambda is real chemistry in disguise, meaning that there exist real chemicals which react according to the chemlambda rewrites and also according to the reduction algorithm (which does random rewrites, as if produced by random encounters of the parts of the molecule with invisible enzymes, one enzyme type per rewrite type).
This claim can be validated by chemists.
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

2. In the animation the duplication of the ring molecule is achieved, mostly, by the propagation of chemlambda DIST rewrites. A rewrite from the DIST family typically doubles the number of nodes involved (from 2 to 4 nodes), which vaguely suggests that DIST rewrites may be achieved by a DNA replication mechanism.
(List of rewrites here:
http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html )

So, from my point of view, the question I have is: are there DNA assemblies for the nodes, ports and bonds of chemlambda molecules?

3. In the first part of the animation you see the ring molecule with some (free in, FRIN, and free out, FROUT) nodes fixed. Actually you may attach more graphs at the free nodes (the yellow and magenta 1-valent nodes in the animation).

You can clearly see the propagation of the DIST rewrites. In the process, if you look closely, there are two nodes which disappear. Indeed, the initial ring has 9 nodes, and the two copies have 7 nodes each. That is because the site where the replication is initiated (made of two nodes) is not replicated itself. You can see its traces as a pair of two bonds which connect, each, a free in with a free out node.

In the second part of the animation the same run is repeated, this time without fixing the FRIN and FROUT nodes before the animation starts. Now you can see the two copies, each with 7 nodes, and the remnants of the initiation site, as two pairs of FRIN-FROUT nodes.

_________________________________________

Screen recording of the reading experience of an article which runs in the browser

The title probably needs parsing:

SCREEN RECORDING {

READING {

PROGRAM EXECUTION {

RESEARCH ARTICLE }}}

An article which runs in the browser is a program (e.g. html and javascript) which is executed by the browser. The reader has access to the article as a program, to the data and other programs which have been used for producing the article, and to all other articles which are cited.

The reader becomes the reviewer. The reader can validate, if he wishes, any piece of research which is communicated in the article.

The reader can see or interact with the research communicated. By having access to the data and programs which have been used, the reader can produce other instances of the same research (i.e virtual experiments).

In the case of the article presented as an example, embedded in the article are animations of the Ackermann function computation and another one concerning the building of a molecular structure. These are produced by using an algorithm which has randomness in its composition, therefore the reader may produce OTHER instances of these examples, which may or may not be consistent with the text of the article. The reader may change parameters or produce completely new virtual experiments, or use the programs as part of the toolbox for another piece of research.

The experience of the reader is therefore:

  • unique, because of the complete freedom to browse, validate, produce, experiment
  • not limited to reading
  • active, not passive
  • leading to trust, in the sense that the reader does not have to rely on hearsay from anonymous reviewers

In the following video there is a screen recording of these possibilities, done for the article

M. Buliga, Molecular computers, 2015, http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

This is the future of research communication.

____________________________________________________________

A neuron-like artificial molecule

This is a neuron-like artificial molecule, simulated in chemlambda.

neuron

It has been described in the older post

https://chorasimilarity.wordpress.com/2015/03/04/how-to-put-a-y-combinator-into-a-neuron-and-lock-it-with-a-quine/

Made with quiner.sh and the file neuron.mol as input.
If you want to make your own, then follow the instructions from here:

https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

It is on the list of demos: http://chorasimilarity.github.io/chemlambda-gui/dynamic/neuron.html

__________
This needs explanation, because it is not exactly the image used for a neuron in a neural network.

_________
1. In neural networks a neuron is a black box with some inputs and some outputs, which computes and sends (the same) signal at the outputs as a function of the signals from the inputs. A connection between an output of one neuron and an input of another has a weight.

The interpretation is that the outputs represent the axon of a neuron, which is like a tree with the root in the neuron’s soma. The inputs of the neuron represent the dendrites. For a neuron in a neural network, a connection between an input (leaf of a dendrite) and an output (leaf of an axon) has a weight because it models an average over many contact points (synapses) between dendrites and axons.

The signals sent represent electrical activity in the neural network.

2. Here, the image is different.

Each neuron is seen as a bag of chemicals. The electrical signals are only a (more easily measured) aspect of the cascades of chemical reactions which happen inside the neuron’s soma, in the dendrites, in the axon and at the synapses.

Therefore, in this image, a neuron is the chemical reactions which happen in it.

From here we go a bit abstract.

3. B-type unorganised machines. In his fundamental research “Intelligent Machinery” Turing introduced his  B-type neural networks
http://www.alanturing.net/turing_archive/pages/Reference%20Articles/connectionism/Turing%27s%20neural%20networks.html
or B-type unorganised machines, which are made of more or less identical neurons (they compute a boolean function), connected via connection-modifier boxes.

A connection-modifier box has an input and an output, which are part of the connection between two neurons. But it also has some training connections, which can modify the function computed by the connection modifier.

What is great is that Turing explains that the connection modifier can be made of neurons
http://www.alanturing.net/turing_archive/archive/l/l32/L32-007.html
so that actually a B-type unorganised machine is an A-type unorganised machine (the same machine without connection modifiers), where we see the connection modifiers as certain, well-chosen patterns of neurons.

OK, the B-type machines compute by signals sent and received by neurons nevertheless. They compute boolean values.

4. What would happen  if we pass from Turing to Church?

Let’s imagine the same thing, but in lambda calculus: neurons which reduce lambda terms, in networks which have some recurring patterns which are themselves made by neurons.

Further: replace the signals (which are now lambda terms) by their chemical equivalent — chemlambda molecules — and replace the connections between them by chemical bonds. This is what you see in the animation.

Connected to the magenta dot (which is the output of the axon) is a pair of nodes related to the Y combinator, seen as a molecule in chemlambda. (The Y combinator in chemlambda is explained in this article: http://arxiv.org/abs/1403.8046 )

The soma of this neuron is a single node (green); it represents an application node.

There are two dendrites (strings of red nodes) which each have 3 inputs (yellow dots). Then there are two small molecules at the ends of the dendrites which are chemical signals that the computation stops there.

Now, what happens: the neuron uses the Y combinator to build a tree of application nodes whose leaves are the input nodes of the dendrites.

When there is no more work to do, the molecules which signal this interact with the soma and the axon and transform everything into a chemlambda quine (i.e. an artificial bug of the sort I explained previously) which is short-lived, so it dies after expelling some “bubbles” (closed strings of white nodes).

5. Is that all? You can take “neurons” whose soma is any syntactic tree of a lambda term, for example. You can take neurons which have axons other than the Y combinator. You can take networks of such neurons which build and then reduce any chemlambda molecule you wish.
__________________________________________

Artificial life, standard computation tests and validation

In previous posts from the [update: revived] chemlambda collection I wrote about the simulation of various behaviours of living organisms by the artificial chemistry called chemlambda.
UPDATE: the links to google+ animations are no longer viable. There are two copies of the collection, which moreover are enhanced:

______________

There is more to show in this direction, but there is already an accumulation of examples:
jellyfish
20_20_hyb
9_9_hyb
As the story is told backwards, from the present to the past, there will be more about reproduction later.
Now, that is one side of the story: these artificial microbes or molecules manifest life characteristics, stemming from a universal, dumb, simple algorithm which does random rewrites, as if the molecule encountered invisible rewriting enzymes.

So, the mystery is not in the algorithm. The algorithm is only a sketchy physics.

But originally this formalism has been invented for computation.

It passes standard computation tests very well, as well as nonstandard ones (from the point of view of biologists, who perhaps don’t hold sophisticated enough views to differentiate between boring boolean logic gates and recursive but not primitive recursive functions like the Ackermann function).

In the following animation you see a few seconds of the computation of a factorial function.

facto

Recall that the factorial is a bit more sophisticated than an AND boolean gate, but it is primitive recursive, so less sophisticated than the Ackermann function.

During these few seconds there are about 5-10 rewrites. The whole process has several hundred of them.

How are they done? Who decides the order? How are criteria satisfied or checked, what is incremented, when does the computation stop?

That is why it is wonderful:

  • the rewrites are random
  • nobody decides the order, there’s no plan
  • there are no criteria to check (like equality of values), there are no values to be incremented or otherwise to be passed
  • the computation stops when there are no possible further rewrites (thus, according to the point of view used with the artificial life forms, the computation stops when the organism dies)

Then, how does it work? Everything necessary is in the graphs and in the rewrite patterns they show.

It is like in Nature, really. In Nature there is no programmer, no function, no input and output, no higher level. All these are in the eyes of the human observer, who then creates a story which has some predictive power if it is a scientific one.

All I am writing can be validated by anybody wishing to do it. Do not believe me, it is not at all necessary to appeal to authority here.

So I conclude: this is a system of a simplified world which manifests life-like behaviours and universal computation power, in the presence of randomness. In this world there is no plan, there is no control and there are no externally imposed goals.

Very unlike the Industrial Revolution thinking!

This thread of posts can be seen at once in the chemlambda collection
https://plus.google.com/u/0/collection/UjgbX

If you want to validate this wordy post yourself then go to the github repository and read the instructions
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

The live computation of the factorial is here
http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_2_mod.html

Which means that if you want to produce another random computation of the factorial then you have to use the file
lisfact_2_mod.mol and to follow the instructions.

________________________________________________________

Lambda calculus and other “secret alien technologies” translated into real chemistry

That is the shortest description.
A project in chemical computing: lambda calculus and other “secret alien technologies” translated into real chemistry.
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html
factor4
Question:
  • which programming language is based on “secret alien technology”?

The gif is a detail from the story of the factorial
http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_2_mod.html

Join me on erasing the distinction between virtual and real!

And do not believe me, nor trust authority (because it has its own social agenda). Use instead a validation technique:

  1. download the gh-pages branch of the repo from this link https://github.com/chorasimilarity/chemlambda-gui/archive/gh-pages.zip
  2. unzip it and go to the “dynamic” folder
  3. edit the copy you have of the  latest main script  https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/check_1_mov2_rand_metabo_bb.awk to change the parameters (number of cycles, weights of moves, visualisation parameters)
  4. use the command “bash moving_random_metabo_bb.sh”   (see what it contains https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/moving_random_metabo_bb.sh )
  5. you shall see the list of all .mol files from the “dynamic” folder. If you want to reproduce a demo, or better, a new random run of a computation shown in a demo, then choose the file.mol which corresponds to the file.html name of the demo page http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html (the steps are condensed into a shell session right after this list)
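
For convenience, the steps above condense to something like the following shell session. This is a sketch: it assumes wget, unzip and gawk are installed, and that the zip unpacks to the usual GitHub folder name (chemlambda-gui-gh-pages); check yours.

    wget https://github.com/chorasimilarity/chemlambda-gui/archive/gh-pages.zip
    unzip gh-pages.zip
    cd chemlambda-gui-gh-pages/dynamic
    # optionally edit check_1_mov2_rand_metabo_bb.awk (number of cycles, weights of moves, visualisation parameters)
    bash moving_random_metabo_bb.sh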

An explanation of the algorithm embedded in the main script is here

https://chorasimilarity.wordpress.com/2015/04/27/a-short-complete-description-of-chemlambda-as-a-universal-model-of-computation/

but the latest main script also contains new moves added for a busy beaver Turing machine; see the explanations in the work in progress

http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

On the other side, if you are a lazy person who wishes to rely on anonymous people who declare they read the work, then go play elsewhere. The highest standards for validity checks are used here.

___________________________________________________

Three models of chemical computing

+Lucius Meredith  pointed to stochastic pi-calculus and SPiM in a discussion about chemlambda. I take this as the reference http://research.microsoft.com/en-us/projects/spim/ssb.pdf (say pages 8-13, to have an anchor for the discussion).

Analysis: SPiM side
– The nodes of the graphs are molecules, the arrows are channels.
– As described there the procedure is to take a (huge perhaps) CRN and to reformulate it more economically, as a collection of graphs where nodes are molecules, arrows are channels.
– There is a physical chemistry model behind which tells you which probability has each reaction.
– During the computation the reactions are known, all the molecules are known, the graphs don’t change and the variables are concentrations of different molecules.
– During the computation one may interpret the messages passed by the channels as decorations of a static graph.

The big advantage is that indeed, when compared with a Chemical Reaction Network approach, the stochastic pi calculus transforms the CRN into a much more realistic model. And a much more economical one.

chemlambda side:

Take the pet example with the Ackermann(2,2)=7 from the beginning of http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

(or go to the demos page for more http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html )

– You have one molecule (it does not matter if it is a connected graph or not, so you may think about it as being a collection of molecules instead).
– The nodes of the molecule are atoms (or perhaps codes for simpler molecular bricks). The nodes are not species of molecules, like in the SPiM.
– The arrows are bonds between atoms (or perhaps codes for other simple molecular bricks). The arrows are not channels.
– I don’t know all the intermediary molecules in the computation. To know them would mean to know the result of the computation beforehand. Also, there may be thousands of possible intermediary molecules. There may even be an infinity of possible intermediary molecules, in principle, for some initial molecules.
– During the computation the graph (i.e. the molecule) changes. The rewrites are done on the molecule, and they can be interpreted as chemical reactions where invisible enzymes identify a very small part of the big molecule and rewrite it, randomly.

Conclusion: there is no relation between those two models.

Now, chemlambda may be related to +Tim Hutton  ‘s artificial chemistry.
Here is a link to a beautiful javascript chemistry
http://htmlpreview.github.io/?https://github.com/ModelingOriginsofLife/ParticleArtificialChemistry/blob/master/index.html
where you see that
– nodes of molecules are atoms
– arrows of molecules are bonds
– the computation proceeds by rewrites which are random
– but, distinctly from chemlambda, the chemical reactions (rewrites) happen in a physical 2D space (i.e. based on the spatial proximity of the reactants)

As something in the middle between SPiM and the js chemistry, there is a button which shows you a kind of instantaneous reaction graph!

All successful computation experiments with lambda calculus in one list

What you see in these links: I take a lambda term, transform it into an artificial molecule, then let it reduce randomly, with no evaluation strategy. That is what I call the most stupid algorithm. Amazing is that it works.
You don’t have to believe me, because you can check it independently, by using the programs available in the github repo.

Here is the list of demos where lambda terms are used.

Now, details of the process:
– I take a lambda term and I draw the syntactic tree
– this tree has as leaves the variables, bound and free. These are eliminated by two tricks, one for the bound variables, the other for the free ones. The bound variables are eliminated by replacing them with new arrows in the graph, going from one leg of a lambda abstraction node to the leaf where the variable appears. If there are more places where the same bound variable appears, then insert some fanout (FO) nodes. For the free variables do the same, by adding for each free variable a tree of FO nodes. If a bound variable does not appear anywhere else, then add a termination (T) node.
– in this way the graph which is obtained is no longer a tree. It is a mostly trivalent, oriented graph, with some free ends. There is a free end corresponding to an “in” arrow for each free variable. There is only one free end corresponding to an “out” arrow, coming from the root of the syntactic tree.
– I give a unique name to each arrow in the graph
– then I write the “mol file” which represents the graph, as a list of nodes and the names of the arrows connected to the nodes (thus an application node A which has the left leg connected to the arrow “a”, the right leg connected to the arrow “b” and the out leg connected to “c” is described by the line “A a b c”, for example).
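
For example, here is a plausible mol file for the term λx.(x x), under the assumption (mine, for illustration; the repo README gives the definitive port order conventions) that an L node lists its ports in the order body-in, variable-out, root-out, and an FO node in the order in, left-out, right-out:

    L a b r
    A c d a
    FO b c d

Here “r” is the free out arrow of the root; every other arrow name appears exactly twice, once in an “in” port and once in an “out” port, exactly as described below.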

OK, now I have the mol file, I run the scripts on it and then I look at the output.

What is the output?

The scripts take the mol file and transform it into a collection of associative arrays (that’s why I’m using awk) which describe the graph.

Then they apply the algorithm which I call “stupid”, because it really is stupidly simplistic: do a predetermined number of cycles, where in each cycle do the following (a toy version is sketched right after this list)
– identify the places (called patterns) where a chemlambda rewrite is possible (these are pairs of lines in the mol file, so pairs of nodes in the graph)
– then, as you identify a pattern, flip a coin; if the coin gives “0” then block the pattern and propose a change in the graph
– when you finish all this, update the graph
– some rewrites involve the introduction of some 2-valent nodes, called “Arrow”. Eliminate them in an inner cycle called the “COMB cycle”, i.e. comb the arrows
– repeat
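
To fix ideas, here is a toy, runnable JavaScript version of one such cycle. It implements a single rewrite only (the beta pattern: an L node whose root arrow feeds the left leg of an A node) plus the COMB cycle. The mol lines are arrays, and the port orders (L: body-in, variable-out, root-out; A: left-in, right-in, out) are my illustration convention, not necessarily the one used by the awk scripts.

    // one reduction cycle: find beta patterns, flip a coin per pattern,
    // apply the accepted rewrites all at once, then comb the Arrow nodes
    function cycle(mol) {
      const next = [];
      const used = new Set();
      for (let i = 0; i < mol.length; i++) {
        if (mol[i][0] !== "L" || used.has(i)) continue;
        const j = mol.findIndex((l, k) => !used.has(k) && l[0] === "A" && l[1] === mol[i][3]);
        if (j < 0) continue;
        if (Math.random() < 0.5) continue;          // the coin flip blocks or accepts the pattern
        used.add(i); used.add(j);
        next.push(["Arrow", mol[i][1], mol[j][3]]); // body arrow now flows to the result arrow
        next.push(["Arrow", mol[j][2], mol[i][2]]); // argument arrow flows to the variable arrow
      }
      for (let i = 0; i < mol.length; i++) if (!used.has(i)) next.push(mol[i]);
      return comb(next);
    }

    // COMB cycle: eliminate each 2-valent Arrow node by renaming the port
    // which produces its "in" arrow to its "out" arrow
    function comb(mol) {
      const OUT = { L: [2, 3], A: [3], FO: [2, 3], Arrow: [2] }; // positions of "out" ports
      const arrows = mol.filter(l => l[0] === "Arrow");
      const rest = mol.filter(l => l[0] !== "Arrow");
      while (arrows.length) {
        const [, a, o] = arrows.pop();
        if (a === o) continue;                      // a closed loop ("bubble") just vanishes
        for (const line of rest.concat(arrows))
          for (const p of OUT[line[0]] || [])
            if (line[p] === a) line[p] = o;
      }
      return rest;
    }

    // (λx.x)(λy.y): after enough cycles it reduces to λy.y, i.e. [["L","b","b","out"]]
    let mol = [["L", "a", "a", "r1"], ["L", "b", "b", "r2"], ["A", "r1", "r2", "out"]];
    for (let c = 0; c < 10; c++) mol = cycle(mol);
    console.log(mol);

Different runs may block the pattern during different cycles (the coin flip), but since nothing competes with it here, the end result is the same.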

As you see, there is absolutely no care about the correctness of the intermediary graphs. Do they represent lambda terms? Generically no!
Are there any variables which are passed, or evaluations of terms which are done in some clever order (eager, lazy, etc.)? Not at all. There are no other variables than the names of the arrows of the graph, and these have the property that they appear exactly twice in the mol file (once in an “in” port, once in an “out” port). When the pattern is replaced these names disappear, and the names of the arrows from the new pattern are generated on the fly, for example by a counter of arrows.

The scripts do the computation and then they stop. There is a choice to be made about the way of seeing the computation and the results.
One obvious choice would be to see the computation as a sequence of mol files, corresponding to the sequence of graphs. Then one could use another script to transform each mol file into a graph (via, say, a json file) and use some graph visualiser to see the graph. This was the choice made in the first scripts.
Another choice is to make an animation of the computation, by using d3.js. Nodes which are eliminated are first freed of links and then they vanish, while new nodes appear, are linked with their ports, then linked with the rest of the graph.

This is what you see in the demos. The scripts produce a html file, which has inside a js script which uses d3.js. So the output of the scripts is the visualisation of the computation.

Recall that the algorithm of computation is random, therefore it is highly likely that different runs of the algorithm give different animations. In the demos you see one such animation, but you can take the scripts from the repo and make your own.

What is amazing is that they give the right results!

It is perhaps bizarre to look at the computation and not make any sense of it. What happens? Where is the evaluation of this term? Who calls whom?

Nothing of this happens. The algorithm just does what I explained. And since there are no calls, no evaluations, no variables passed from here to there, that means that you won’t see them.

That is because the computation does not work by the IT paradigm of signals sent by wires, through gates, but it works by what chemists call signal transduction. This is a pervasive phenomenon: a molecule enters in chemical reactions with others and there is a cascade of other chemical reactions which propagate and produce the result.

About what you see in the visualisations.
Because the graph is oriented, and because the trivalent nodes have their legs differentiated (for example there might be a left.in leg, a right.in leg and an out leg, which for symmetry is described as a middle.out leg), I want to turn it into an unoriented graph.
This is done by replacing each trivalent node by 4 nodes, and each free end or termination node by 2 nodes.
For trivalent nodes there will be one main node and 3 other nodes which represent the legs. These are called “ports”. There is a color-coded notation; the choice made is to represent the nodes A (application) and FO by a green main node, the L (lambda) and FI (fanin, which exists in chemlambda only) by a red one (actually in the demos this is a purple), and so on. The port nodes are coloured yellow for the “in” ports and blue for the “out” ports. The “left”, “right”, “middle” types are encoded by the radius of the ports.
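
As an illustration of this encoding, here is a sketch of how one mol line could be expanded into d3.js-ready nodes and links. The port orders, radii and exact colors are my assumptions for the example, not a transcript of the scripts:

    // expand one trivalent mol line into a main node plus three port nodes
    const MAIN = { A: "green", FO: "green", L: "purple", FI: "purple" };
    const DIRS = { A: ["in", "in", "out"], L: ["in", "out", "out"],
                   FO: ["in", "out", "out"], FI: ["in", "in", "out"] };
    const RADII = [3, 5, 4]; // left, right, middle ports told apart by their radius

    function expand(line, id, nodes, links) {
      const [type, ...arrows] = line;
      nodes.push({ id: id, color: MAIN[type] || "gray", r: 7 });
      arrows.forEach((arrow, k) => {
        const portId = id + "." + k;
        nodes.push({ id: portId, arrow: arrow, r: RADII[k],
                     color: (DIRS[type] || [])[k] === "in" ? "yellow" : "blue" });
        links.push({ source: id, target: portId }); // main node linked with its ports
      });
      // ports sharing the same arrow name are then linked pairwise,
      // which reconstitutes the unoriented version of the molecule
    }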

__________________________________________________

Turing Machines, chemlambda style (I)

UPDATE: First animation and a more detailed version here. There will be many additions, as well as programs available from the gh-pages of the repo.

Once again: why do chemists try to reproduce silicon computers? Because that’s the movie.

    At first sight they look very different. Therefore, if we want to compare them, we have to reformulate them in ways which are similar. Rewrite systems are appropriate here.
    The Turing Machine (TM) is a model of computation which is well-known as a formalisation of what a computer is. It is a machine which has some internal states (from a set S) and a head which reads/writes symbols (from a set A) on a tape. The tape is seen as an infinite word made of letters from A. The set A has a special letter (call it “b”, from blank) and the infinite words which describe tapes have to be such that all but a finite number of letters of the word are equal to “b”. Imagine a tape, infinite in both directions, which has symbols written on it, such that “b” represents an empty space. The tape has only a finite part of it filled with letters from A other than the blank letter.
    The action of the machine depends on the internal state and on the symbol read by the head. It is therefore a function of the internal state of the machine (an element of S) and of the letter read from the tape (an element of A); it outputs a letter from the alphabet A (which is written in the cell where the read letter was), changes the internal state, and moves the head one step along the tape, to the left (L) or to the right (R), or not at all (N).
    The TM can be seen as a rewrite system.
    For example, one could see a TM as follows. (Pedantically this is seen as a multi-headed TM without internal states; the only interest of this distinction is that it raises the question whether there is really any meaning in distinguishing internal states from tape symbols.) We start from a set (or type) of internal states S. Such states are denoted by A, B, C (thus exhibiting their type by the type of the font used). There are 3 special symbols: < is the symbol of the beginning of a list (i.e. word), > is the symbol of the end of a list (word), and M is the special symbol for the head move (or of the type associated to head moves). There is an alphabet A of external states (i.e. tape symbols), with b (the blank) being in A.
    A tape is then a finite word (list) of one of the forms < w A w’ > , < w M A w’ > , < w A M w’ >, where A is an internal state and w and w’ are finite words written with the alphabet A, which can be empty words as well.
    A rewrite replaces a left pattern (LT) by a right pattern (RT); rewrites are denoted LT – – > RT. Here LT and RT are sub-words of the tape word. It is supposed that all rewrites are context independent, i.e. LT is replaced by RT regardless of the place where LT appears in the tape word. The rewrite is called “local” if the lengths (i.e. numbers of letters) of LT and RT are bounded a priori.
    A TM is given as a list of Turing instructions, which have the form (current internal state, tape letter read, new internal state, tape letter written, move of the head). In the terms explained here, all this can be expressed via local rewrites.

  • Rewrites which introduce blanks at the extremities of the written tape:
    • < A   – – >   < b A
    • A >   – – >   A b >
  • Rewrites which describe how the head moves:
    • A M a   – – >   a A
    • a M A   – – >   A a
  • Turing instructions rewrites:
    • a A c   – – >   d B M c   , for the Turing instruction (A, a, B, d, R)
    • a A c   – – >   d M B c   , for the Turing instruction (A, a, B, d, L)
    • a A c   – – >   d B c   , for the Turing instruction (A, a, B, d, N)
    Together with the algorithm “at each step apply all the rewrites which are possible, else stop” we obtain the deterministic TM model of computation. For any initial tape word, the algorithm explains what the TM does to that tape. < don’t forget to link that to a part of the Cartesian method “to be sure that I made an exhaustive enumeration” which is clearly going down today > Other algorithms are of course possible. Before mentioning some very simple variants of the basic algorithm, let’s see when it works.
    If we start from a tape word as defined here, there is never a conflict of rewrites. This means that there is never the case that two LT from two different rewrites overlap. It might be the case though, if we formulate some rewrites a bit differently. For example, suppose that the Turing rewrites are modified to:
  • a A   – – >   d B M   , for the Turing instruction (A, a, B, d, R)
  • a A   – – >   d M B   , for the Turing instruction (A, a, B, d, L)
    Therefore, the LT of the Turing rewrites is no longer of the form “a A c”, but of the form “a A”. Then it may enter in conflict with the other rewrites, like in the cases:

  • a A M c, where two overlapping rewrites are possible:
    • Turing rewrite: a A M c   – – >   d M B M c , which will later produce two possibilities for the head move rewrites, due to the string M B M
    • head move rewrite: a A M c   – – >   a c A , which then produces a LT for a Turing rewrite on c A, instead of the previous Turing rewrite on a A
  • a A > where one may apply a Turing rewrite on a A, or a blank rewrite on A >
    The list is non-exhaustive. Let’s turn back to the initial formulation of the Turing rewrites and instead change the definition of a tape word. For example, suppose we allow multiple TM heads on the same tape; more precisely, suppose that we accept initial tape words of the form < w1 A w2 B w3 C … wN >. Then we shall surely encounter conflicts between head move rewrites for patterns such as a A M B c.
    The simplest solution for resolving these conflicts is to introduce a priority of rewrites. For example we may impose that blank rewrites take precedence over head move rewrites, which take precedence over Turing rewrites. More such structure can be imposed (for example, some head move rewrites may have precedence over others). Even new rewrites may be introduced, for example rewrites which allow multiple TMs on the same tape to switch places.
    Let’s see an example: the 2-symbol, 3-state busy beaver machine. Following the conventions from this work, the tape letters (i.e. the alphabet A) are “b” and “1”, and the internal states are A, B, C, HALT. (The state HALT may get special treatment, but this is not mandatory.) The rewrites are:

  • Rewrites which introduce blanks at the extremities of the written tape:
    • < X   – – >   < b X   for every internal state X
    • X >   – – >   X b >   for every internal state X
  • Rewrites which describe how the head moves:
    • X M a   – – >   a X   , for every internal state X and every tape letter a
    • a M X   – – >   X a   , for every internal state X and every tape letter a
  • Turing instructions rewrites:
    • b A c   – – >   1 B M c   , for every tape letter c
    • b B c   – – >   b C M c   , for every tape letter c
    • b C c   – – >   1 M C c   , for every tape letter c
    • 1 A c   – – >   1 HALT M c   , for every tape letter c
    • 1 B c   – – >   1 B M c   , for every tape letter c
    • 1 C c   – – >   1 M A c   , for every tape letter c
    We can enhance this by adding a priority of rewrites: for example, in the previous list, any rewrite has priority over the rewrites written below it. In this way we may relax the definition of the initial tape word and allow multiple heads on the same tape. Or multiple tapes.
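    Before following the computation by hand, here is a minimal, runnable JavaScript sketch of exactly this rewrite system, with the priority just described (blank rewrites, then head move rewrites, then Turing instruction rewrites, each applied at the leftmost possible position). This is an illustration of mine, not one of the repo scripts; running it with node prints the sequence of tape words listed below.

        // the 2-symbol, 3-state busy beaver as a priority rewrite system on tape words
        const STATES = new Set(["A", "B", "C", "HALT"]);
        const LETTERS = new Set(["b", "1"]);
        // Turing instruction rewrites, keyed by "read letter,current state"
        const TURING = { "b,A": ["1", "B", "M"], "b,B": ["b", "C", "M"], "b,C": ["1", "M", "C"],
                         "1,A": ["1", "HALT", "M"], "1,B": ["1", "B", "M"], "1,C": ["1", "M", "A"] };

        // apply one rewrite to the tape word t (an array of symbols), or return null
        function step(t) {
          for (let i = 0; i < t.length - 1; i++)   // blank rewrites: < X -> < b X ; X > -> X b >
            if ((t[i] === "<" && STATES.has(t[i + 1])) || (STATES.has(t[i]) && t[i + 1] === ">"))
              return [...t.slice(0, i + 1), "b", ...t.slice(i + 1)];
          for (let i = 0; i < t.length - 2; i++)   // head move rewrites: X M a -> a X ; a M X -> X a
            if (t[i + 1] === "M" && ((STATES.has(t[i]) && LETTERS.has(t[i + 2])) ||
                                     (LETTERS.has(t[i]) && STATES.has(t[i + 2]))))
              return [...t.slice(0, i), t[i + 2], t[i], ...t.slice(i + 3)];
          for (let i = 0; i < t.length - 2; i++) { // Turing instruction rewrites: a A c -> ... c
            const r = TURING[t[i] + "," + t[i + 1]];
            if (r && LETTERS.has(t[i + 2])) return [...t.slice(0, i), ...r, ...t.slice(i + 2)];
          }
          return null;                             // no rewrite possible: the computation stops
        }

        let tape = ["<", "A", ">"];
        for (let next = step(tape); next !== null; next = step(tape)) {
          tape = next;
          console.log(tape.join(" "));
        }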
    Suppose we put the machine to work on an infinite tape with all symbols being blanks. This corresponds to the tape word < A >. Here are the steps of the computation:
  • < A >   – – >   < b A >
  • < b A >   – – >   < b A b >
  • < b A b >   – – >   < 1 B M b >
  • < 1 B M b >   – – >   < 1 b B >
  • < 1 b B >   – – >   < 1 b B b >
  • < 1 b B b >   – – >   < 1 b C M b >
  • < 1 b C M b >   – – >   < 1 b b C >
  • < 1 b b C >   – – >   < 1 b b C b >
  • < 1 b b C b >   – – >   < 1 b 1 M C b >
  • < 1 b 1 M C b >   – – >   < 1 b C 1 b >
  • < 1 b C 1 b >   – – >   < 1 1 M C 1 b >
  • < 1 1 M C 1 b >   – – >   < 1 C 1 1 b >
  • < 1 C 1 1 b >   – – >   < 1 M A 1 1 b >
  • < 1 M A 1 1 b >   – – >   < A 1 1 1 b >
  • < A 1 1 1 b >   – – >   < b A 1 1 1 b >
  • < b A 1 1 1 b >   – – >   < 1 B M 1 1 1 b >
  • < 1 B M 1 1 1 b >   – – >   < 1 1 B 1 1 b >
  • < 1 1 B 1 1 b >   – – >   < 1 1 B M 1 1 b >
  • < 1 1 B M 1 1 b >   – – >   < 1 1 1 B 1 b >
  • < 1 1 1 B 1 b >   – – >   < 1 1 1 B M 1 b >
  • < 1 1 1 B M 1 b >   – – >   < 1 1 1 1 B b >
  • < 1 1 1 1 B b >   – – >   < 1 1 1 1 B M b >
  • < 1 1 1 1 B M b >   – – >   < 1 1 1 1 b B >
  • < 1 1 1 1 b B >   – – >   < 1 1 1 1 b B b >
  • < 1 1 1 1 b B b >   – – >   < 1 1 1 1 b C M b >
  • < 1 1 1 1 b C M b >   – – >   < 1 1 1 1 b b C >
  • < 1 1 1 1 b b C >   – – >   < 1 1 1 1 b b C b >
  • < 1 1 1 1 b b C b >   – – >   < 1 1 1 1 b 1 M C b >
  • < 1 1 1 1 b 1 M C b >   – – >   < 1 1 1 1 b C 1 b >
  • < 1 1 1 1 b C 1 b >   – – >   < 1 1 1 1 1 M C 1 b >
  • < 1 1 1 1 1 M C 1 b >   – – >   < 1 1 1 1 C 1 1 b >
  • < 1 1 1 1 C 1 1 b >   – – >   < 1 1 1 1 M A 1 1 b >
  • < 1 1 1 1 M A 1 1 b >   – – >   < 1 1 1 A 1 1 1 b >
  • < 1 1 1 A 1 1 1 b >   – – >   < 1 1 1 HALT M 1 1 1 b >
  • < 1 1 1 HALT M 1 1 1 b >   – – >   < 1 1 1 1 HALT 1 1 b >
    At this stage there are no possible rewrites. Otherwise said, the computation stops. Remark that the priority of rewrites imposed a path of rewrite applications. Also, at each step there was only one possible rewrite, even if the algorithm does not ask for this.
    More possibilities appear if we see the tape words as graphs. In this case we pass from rewrites to graph rewrites. Here is a proposal for this.
    I shall use the same kind of notation as in chemlambda: the mol format. It goes like this, explained on the busy beaver TM example. We have 9 symbols, which can be seen as nodes in a graph:

  • < which is a node with one “out” port. Use the notation FRIN out
  • > which is a node with one “in” port. Use the notation FROUT in
  • b, 1, A, B, C, HALT, M, which are nodes with one “in” and one “out” port. Use the notation: name of the node, in, out
    The rule is to connect “in” ports with “out” ports, in order to obtain a tape word. Or a tape graph, with many busy beavers on it.
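    For instance, the tape word < b A b > becomes the following mol file (a sketch; the arrow names x1, …, x4 are arbitrary):

        FRIN x1
        b x1 x2
        A x2 x3
        b x3 x4
        FROUT x4

    Each arrow name appears once in an “out” port and once in an “in” port, so the whole tape word is recovered by following the arrows. (TO BE CONTINUED…)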

Molecular computers proposal

This is a short and hopefully clear explanation, targeted at chemists (or bio-hackers), of the molecular computer idea.
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

I would be grateful for any input, including reviews, suggestions for improvements, demands of clarifications, etc.

(updated to fit phones formats)

____________________________________________

As a handful of slime mold. That’s how smart is the whole net.

I record here, to be sure that it’s read, a message of explanations I wrote today.

Before that, or after that you may want to go check the new chemlambda demos, or to look at the older ones down the page, in case you have not noticed them. There are also additions in the moves page. Most related to this post is the vision page.

“First, I have lots of examples which are hand drawn. Some of them appeared from theoretical reasons, but the bad thing used to be that it was too complex to follow (or to be sure of that) on paper what’s happening. From the moment I started to use the mol files notation (aka g-patterns) it has been like I suddenly became able to follow in much more depth. For example that’s how I saw the “walker”, “ouroboros” and finally the first quine by reducing the graph associated to the predecessor (as written in lambda calculus). Finally, I am able now to see what happens dynamically.
However, all these examples are far from where I would like to be now; they correspond only to the introduction into the matter. But with every “technical” improvement there are new things which appear. For example, I was able to write a program which generates, one by one, each of about 1000 initial graphs made of 2 nodes, and follows their reduction behaviour for a while, so I could then study the interesting exemplars. That is how I found all kinds of strange guys: left or right propagators, switchers, back propagators, all kinds of periodic (with period > 1) graphs.
So I use mainly as input mol files of graphs which I draw by hand or which I identify in bigger mol files as interesting. This is a limitation.
Another input would be a program which turns a lambda term into a mol file. The algorithm is here.
Open problem: pick a simplified programming language based on lambda calculus and then write a “compiler”, better said a parser, which turns it into a mol file. Then run the reduction with the favourite algorithm, for the moment the “stupid” determinist and the “stupid” random ones.
While this seems feasible, this is a hard project for my programming capabilities (I’d say more because of my bad character, if I don’t succeed fast then I procrastinate in that direction).
It is tricky to see how a program written in that hypothetical simple language reduces as a graph, for two reasons:
  1. it has to be pure lambda beta, but not eta (so the functions are not behaving properly, because of the lack of eta)
  2. the reductions in chemlambda do not parallel any reduction strategy I know about for lambda calculus.
Point (2) makes things interesting to explore as a side project; let me explain. So there is an algorithm for turning a lambda term into a mol file. But from that point on, the reductions of the graph represented by the mol file give other graphs which typically do not correspond to lambda terms. However, magically, if the initial lambda term can be reduced to a lambda term where no further reductions are possible, then the final graph from the chemlambda reduction does correspond to that term. (I don’t have a proof for that: even if I can prove that, when restricted to lambda terms written only as combinators, chemlambda can parallel the beta reduction, can reduce the basis combinators and is able to fan-out a term, it does not follow that what I wrote previously is true for any term, exactly because during chemlambda reductions one gets out of the realm of lambda terms.)
In other cases, like for the Y combinator, there is a different phenomenon happening. Recall that in chemlambda there is no variable or term which is transmitted from here to there, there is nothing corresponding to evaluation properly. In the frame of chemlambda the behaviour of the Y combinator is crystal clear though. The graph obtained from Y as lambda term, connected to an A node (application node) starts to reduce even if there is nothing else connected to the other leg of the A node. This graph reduces in chemlambda to just a pair of nodes, A and FO, which then start to shoot pairs of nodes A and FOE (there are two fanout nodes, FO and FOE). The Y is just a gun.
Then, if something else is connected to the other leg of the application node I mentioned, the following phenomenon happens. The other graph starts to self-multiply (due to the FOE node of the pair shot by Y) and, simultaneously, the part of it which is self-multiplied already starts to enter in reaction with the application node A of the pair.
This makes the use of Y spectacular but also problematic, due to its exuberance. That is why, for example, even if the lambda term for the Ackermann function reduces correctly in chemlambda (or see this 90 min film), I have not yet found a lambda term for the factorial which does the same (because the behaviour of Y has to be tamed: it produces too many opportunities to self-multiply and it does not stop, albeit there are solutions for that, by using quines).
So it is not straightforward that mol files obtained from lambda terms reduce as expected, because of the mystery of the reduction strategy, because of the fact that intermediary steps go outside lambda calculus, and because the graphs which correspond to terms which are simply trashed in lambda calculus continue to reduce in chemlambda.
On the other side, one can always modify the mol file, or pre-reduce it and use it as such, or even change the strategy of the algorithm represented by the lambda term. For example, recall that because of the lack of eta, the strategy based on defining functions and then calling them, or in general just currying stuff (for any other reason than future easy self-multiplication), is a limitation (of lambda calculus).
These lambda terms are written by smart humans who stream everything according to the principle: turn everything into a function, then use the function when needed. By looking at what happens in chemlambda (and presumably in a processor), most of this functional book-keeping is beautiful for the programmer, but a limitation from the point of view of the possible alternative graphs which do the same as the functionally designed one, but easier.
This is of course connected to chemistry. We may stare at biochemical reactions, well known in detail, without being able to make sense of them, because things happen by shortcuts discovered by evolution.
Finally, the main advantage of a mol file is that any collection of lines from the mol file is also a mol file. The free edges (those which are capped by the script with FRIN (i.e. free in) and FROUT (i.e. free out) nodes) are free as a property of the mol file where they reside, so if you split a mol file into several pieces and send them to several users then they are still able to communicate by knowing that this FRIN of user A is connected (by a channel? by an ID?) to that FROUT of user B. But this is for a future step which would take things closer to real distributed computing.
And to really close it: if we put on every computer linked to the internet a molecule, then the whole net would be as complex (counting the number of molecules) as a mouse. But that’s not fair, because the net is not as structured as a mouse. Say, as a handful of slime mold. That’s how smart the whole net is. Now, what if we could make it smarter than that, by something like a very small API and a mol file as big as a cookie… “
_________________________________________________________________________

New pages of demos and dynamic visualizations of chemlambda moves

The following are artificial chemistry visualizations, made in d3.js. There are two main pages: the first is with demos for #chemlambda   reductions, the second is with dynamic visualizations of the graph rewrites.
Bookmark these pages if you are interested, because there you shall find new stuff on a day-by-day basis.

______________________________________________________

You can build pretty much anything with that

Progressing,  enjoy that,  details in few days.

UPDATE: and a nice video about the Omega combinator.

… and there is more.

I took two different reduction strategies for the same artificial chemistry (#chemlambda) and looked what they give in two cases:
– the Ackermann function
– the self-multiplication and reduction of (the ensuing two copies of) the Omega combinator.
The visualization is done in d3.js.
The first reduction strategy is the one which I used previously in several demonstrations, the one which I call “stupid”, also because it is the simplest one can imagine.
The second reduction strategy is a random variant of the stupid one, namely a coin is flipped for each edge of the graph, before any consideration of performing a graph rewrite there.
The results can be seen in the following pages:
– Ackermann classic  http://imar.ro/~mbuliga/ackermann_2_2.html
-Ackermann random http://imar.ro/~mbuliga/random_ackermann_2_2.html
-Omega classic http://imar.ro/~mbuliga/omegafo.html
-Omega random http://imar.ro/~mbuliga/random_omegafo.html
I’ve always said that one can take any reduction strategy with chemlambda and that all of them are interesting.

___________________________________

Rant about Jeff Hawkins “the sensory-motor model of the world is a learned representation of the thing itself”

I enjoyed very much the presentation given by Jeff Hawkins “Computing like the brain: the path to machine intelligence”

 

Around 8:40

Hawkins_1

This is something which could be read in parallel with the passage I commented in the post   The front end visual system performs like a distributed GLC computation.

I reproduce some parts

In the article by Kappers, A.M.L.; Koenderink, J.J.; Doorn, A.J. van, Basic Research Series (1992), pp. 1-23, Local Operations: The Embodiment of Geometry, the authors introduce the notion of the “Front End Visual System”.

Let’s pass to the main part of interest: what does the front end do? Quotes from section 1, indexed by me with (a), …, (e):

  • (a) the front end is a “machine” in the sense of a syntactical transformer (or “signal processor”)
  • (b) there is no semantics (reference to the environment of the agent). The front end merely processes structure
  • (c) the front end is precategorical,  thus – in a way – the front end does not compute anything
  • (d) the front end operates in a bottom up fashion. Top down commands based upon semantical interpretations are not considered to be part of the front end proper
  • (e) the front end is a deterministic machine […]  all output depends causally on the (total) input from the immediate past.

Of course, today I would say “performs like a distributed chemlambda computation”, according to one of the strategies described here.

Around 17:00 (Sparse distributed representation) . ” You have to think about a neuron being a bit (active:  a one, non active: a zero).  You have to have many thousands before you have anything interesting [what about C. elegans?]

Each bit has semantic meaning. It has to be learned, this is not something that you assign to it, …

… so the representation of the thing itself is its semantic representation. It tells you what the thing is. ”

That is exactly what I call “no semantics”! But is much better formulated as a positive thing.

Why is this a form of “no semantics”? Because as you can see the representation of the thing itself edits out the semantics, in the sense that “semantics” is redundant, appears only at the level of the explanation about how the brain works, not in the brain workings.

But what is the representation  of the thing itself? A chemical blizzard in the brain.

Let’s put together the two ingredients into one sentence:

  • the sensory-motor model of the world is a learned representation of the thing itself.

Remark that as previously, there is too much here: “model” and “representation” sort of cancel one another, being just superfluous additions of the discourse. Not needed.

What is left: a local, decentralized, asynchronous, chemically based, never ending “computation”, which is as concrete as the thing (i.e. the world, the brain) itself.

I put “computation” in quote marks because this is one of the sensible points: there should be a rigorous definition of what that “computation” means. Of course, the first step would be a fully rigorous mathematical proof of principle that such a “computation”, which satisfies the requirements listed in the previous paragraph, exists.

Then, it could be refined.

I claim that chemlambda is such a proof of principle. It satisfies the requirements.

I don’t claim that brains work based on a real chemistry instance of chemlambda.

Just a proof of principle.

But how much I would like to test this with people from the frontier mentioned by Jeff Hawkins at the beginning of the talk!

In the following some short thoughts from my point of view.

While playing with chemlambda with the strategy of reduction called “stupid” (i.e. the simplest one), I tested how it works on the very small part (of chemlambda) which simulates lambda calculus.

Lambda calculus, recall, is one of the two pillars of computation, along with the Turing machine.

In chemlambda, the lambda calculus appears as a sector, a small class of molecules and their reactions. Contrary to the Alchemy of Fontana and Buss, abstraction and application (operations from lambda calculus) are both concrete (atoms of molecules). The chemlambda artificial chemistry defines some very general, but very concrete local chemical interactions (local graph rewrites on the molecules) and some (but not all) can be interpreted as lambda calculus reductions.

Contrary to Alchemy, the part which models lambda calculus is concerned only with untyped lambda calculus without extensionality, therefore chemical molecules are not identified with their function, nor do they have definite functions.

Moreover, the “no semantics” means concretely that most of the chemlambda molecules can’t be associated to a global meaning.

Finally, there are no “correct” molecules, everything resulted from the chemlambda reactions goes, there is no semantics police.

So from this point of view, this is very nature like!

Amazingly,  the chemical reductions of molecules which represent lambda terms reproduce lambda calculus computations! It is amazing because with no semantics control, with no variable passing or evaluation strategies, even if the intermediary molecules don’t represent lambda calculus terms, the computation goes well.

For example the famous Y combinator first reduces to a small molecule (two nodes and 6 port nodes), which does not have any meaning in lambda calculus, and then becomes just a gun shooting “application” and “fanout” atoms (a pair which I call a “bit”). The functioning of the Y combinator is not at all sophisticated and mysterious, being instead fueled by the self-multiplication of the molecules (realized by unsupervised local chemical reactions) which then react with the bits and have as effect exactly what the Y combinator does.

The best example I have is the illustration of the computation of the Ackermann function (recall: a recursive but not primitive recursive function!)

What is nice in this example is that it works without the Y combinator, even if it’s a game of recursion.

But this is a choice, because actually, for many computations which try to reproduce lambda calculus reductions, the “stupid” strategy used with chemlambda is a bit too exuberant if the Y combinator is used as in lambda calculus (or functional programming).

The main reason is the lack of extensionality: there are no functions, so the usual functional programming techniques and designs are not the best idea. There are shorter ways in chemlambda, ways which employ the “representation of the thing itself is its own semantic interpretation” better than FP does.

One of those techniques is to use, instead of long linear and sequential lambda terms (designed as a composition of functions), another architecture: one of neurons.

For me, when I think about a neural net and neural computation, I tend to see the neurons and synapses as loci of chemical activity. Then I just forget about these bags of chemicals and I see a chemical connectome sort of thing; actually I see a huge molecule suffering chemical reactions with itself, but in such a way that its spatial extension (in the neural net), physically embodied by neurons and synapses and perhaps glial cells and whatnot, is manifested in the reductions themselves.

In this sense, the neural architecture way of using the Y combinator efficiently in chemlambda is to embed it into a neuron (bag of chemical reactions), like sketched in the following simple experiment

Now, instead of a sequential call of duplication and application (which is the way the Y combinator is used in lambda calculus), imagine a well designed network of neurons which in very few steps builds a (huge, distributed) molecule (instead of taking a perhaps very big number of sequential steps), a molecule which in turn reduces itself in very few steps as well; and then this chemical connectome ends in a quine state, i.e. in a sort of dynamic equilibrium (reactions are happening all the time but they combine in such a way that the reductions compensate each other into a static image).

Notice that the end of the short movie about the neuron is a quine.

For chemlambda quines see this post.

In conclusion: there are chances that, with this massively parallel (a bad name, actually, for decentralized and local) architecture of a neural net seen through its chemical working, chemlambda really can do not only any computer science computation, but also anything a neural net can do.

_________________________________________________________________

Where’s the ship? (lots of questions part II)

I explain in Lots of questions, part I how Plato and Brazil made me want to switch from math to biology.  Eventually it seems I ended up in fundamentals of computing, but there is this strange phenomenon. I can’t figure out how it works, or why, or even if it is widespread or rare. I think it is widespread, but I don’t have clear evidence about it other than the old saying that people don’t change.

So Plato+Rio gives geometry+biology gives artificial chemistry+distributed computing.

Obvious.

I don’t get how this functions.

Makes no sense.

Now I have a hint that we are the computation: we execute ourselves during our lifetime, and our brains are just part of the seed, part of the program. We don’t really have billions of neurons and cells, everything is just the state of a computation. Part of the seed is our genetic inheritance, another part of the seed is our geographical and, more largely, cultural inheritance. We are not separated from the external medium; there is no external medium, exactly like there is no me and the Net, only many actors interacting asynchronously and locally according to some protocols. In the case of real life the protocols are cast in real molecules, at a finer scale only emerging phenomena of a much faster and wider computation going on, of a geometrical nature. But the principle is scale independent; that is how we manage space (perception and interaction) in our brains.

So we don’t change.

Take this blog: I make some counts from time to time. For the last 3 months it gives this. There are 491 posts on chorasimilarity. In the last week 78 of them have been read, in the last month 219, in the last quarter 331. This series makes no sense unless there are very long range relations between the posts, relations which are perceived by enough readers of this blog.

Oh, great!

Two mysteries. The first is that I have no idea why exactly these long time correlations appear in my writing. The second mystery is why you perceive them too.

So there is this strange phenomenon, which I can’t explain.

I remark though that there has to be something that starts the new computation cycle, the new turn of the spiral, the new chamber of the snail shell.

It is stimulation.

Last time was Plato and Rio.

I feel that I lack something in order to tell you more and for me to learn more in the absence of enough external stimuli.

I know I can build really new and also classical stuff, but I lose interest in time without stimuli. That is why I change what I do every few years. It is not rewarding for me to see that after I leave a field somebody picks an idea and makes it stronger; it is not rewarding to see that I was right when nobody believed.  Maybe I just have a nose for good ideas which float in the air and I detect them before many others, but I don’t have the right space and cultural position to make them grow really big. You know, just an explorer who comes back home after a lonely expedition and tells you about blue seas and wide skies with strange constellations. Yeah, OK creep. But then, after some years the trend is to go bathe in those blue seas. And where is the creep? Just coming home, telling about that new jungle and the road from there to the clouds.

Stimulation. Trust. New worlds await. Need my ship, now.

_________________________________________________________________

More detailed argument for the nature of space built from artificial chemistry

There are some things which were left uncleared. For example, I have never suggested to use networks of computers as a substitute for space, with computers as nodes, etc. This is one of the ideas which are too trivial. In the GLC actors article is proposed a different thing.

First, associate to an initial partition of the graph (molecule) another graph, with nodes being the partition pieces (thus each node, called an actor, holds a piece of the graph) and edges being those edges of the original whole molecule which link nodes from different partition pieces. This is the actors diagram.
Then interpret the existence of an edge between two actor nodes as a replacement for a spatial statement like “these two actors are close”. Then remark that the partition can be made such that the edges of the actor diagram correspond to active edges of the original graph (an active edge is one which connects two nodes of the molecule which form a left pattern), so that a graph rewrite applied to a left pattern consisting of a pair of nodes, each in a different actor part, produces not only a change of the state of each actor (i.e. a change of the piece of the graph held by each actor), but also a change of the actor diagram itself. Thus, this very simple mechanism produces, by graph rewrites, two effects:

  • “chemical” where two molecules (i.e. the states of two actors) enter in reaction “when they are close” and produce two other molecules (the result of the graph rewrite as seen on the two pieces hold by the actors), and
  • “spatial” where the two molecules, after chemical interaction, change their spatial relation with the neighboring molecules because the actors diagram itself has changed.

This was the proposal from the GLC actors article.
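
A minimal sketch of the first step, in JavaScript (the molecule is given as a list of edges between node ids, and the partition as a map from node id to actor name; all the names here are hypothetical, for illustration only):

    // build the actors diagram of a partitioned molecule: one node per actor,
    // one edge per molecule edge whose endpoints sit in different actors
    function actorsDiagram(edges, partition) {
      const diagramEdges = new Set();
      for (const [m, n] of edges) {
        const a = partition[m], b = partition[n];
        if (a !== b) diagramEdges.add(a < b ? a + "--" + b : b + "--" + a);
      }
      return [...diagramEdges]; // each entry reads as "these two actors are close"
    }

    // example: a 4-node molecule split between two actors
    const edges = [["n1", "n2"], ["n2", "n3"], ["n3", "n4"]];
    const partition = { n1: "actor1", n2: "actor1", n3: "actor2", n4: "actor2" };
    console.log(actorsDiagram(edges, partition)); // [ "actor1--actor2" ]

When a rewrite consumes the pair n2, n3 across the cut, both actor states change and so does the diagram, which is the “spatial” effect described above.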

Now, the first remark is that this explanation has a global side, namely that we look at a global big molecule which is partitioned; but obviously there is no global state of the system, if we think that each actor resides in a computer and each edge of an actor diagram describes the fact that each actor knows the mail address of the other, which is used as a port name. But for explanatory purposes this is OK, with the condition to know well what to expect from this kind of computation: nothing more than the states of a finite number of actors, say up to 10, known in advance, a priori bounded, as is usual in the local-global philosophy used here.

The second remark is that this mechanism is of course only a very simplistic version of what the right mechanism should be. And here enter the emergent algebras, i.e. the abstract nonsense formalism with trees and nodes and graph rewrites which I found while trying to understand sub-riemannian geometry (and noticing that it does not apply only to sub-riemannian, but seems to be something more general, of a computational nature — but which computation, etc.). The closeness, i.e. the neighbourhood relations themselves, are a global, a posteriori view, a static view of the space.

In the Quick and dirty argument for space from chemlambda I propose the following. Because chemlambda is universal, for any program there is a molecule such that the reductions of this molecule simulate the execution of the program. Or, think about the chemlambda gui, and suppose even that I have as much computational power as needed. The gui has two sides, one which processes mol files and outputs mol files of reduced molecules, and the other (based on d3.js) which visualizes each step. “Visualizes” means that there is a physics simulation of the molecule graphs as particles with bonds which move in the space or plane of the screen. Imagine that with enough computing power and time we can visualize things in as much detail as we need, of course according to some physics principles which are implemented in the visualization program. Take now a molecule (i.e. a mol file) and run the program with the two sides, reduction/visualization. Then, because of chemlambda universality, we know that there exists another molecule which admits chemlambda reductions which simulate the reductions of the first molecule AND the running of the visualization program.

So there is no need to have a spatial side different from the chemical side!

But of course, this is an argument which shows something which can be done in principle but maybe is not feasible in practice.

That is why I propose to concentrate a bit on the pure spatial part. Let’s do a simple thought experiment: take a system with a finite number of degrees of freedom, see its state as a point in a space (typically a symplectic manifold) and its evolution as described by a 1st order equation. Then discretize this correctly (w.r.t. the symplectic structure) and you get a recipe which describes the evolution of the system, which has roughly the following form:

  • starting from an initial position (i.e. state), interpret each step as a computation of the new position based on a given algorithm (the equation of evolution), which is always an algebraic expression giving the new position as a function of the older one,
  • throw out the initial position and keep only the algorithm for passing from a position to the next,
  • use the same treatment as in chemlambda or GLC, where all the variables are eliminated, therefore renouncing in this way at any reference to coordinates, points of the manifold, etc,
  • remark that the algebraic expressions which are used always consist of affine (or projective) combinations of points (and notice that the combinations themselves can be expressed as trees or other graphs made of dilation nodes, as in the emergent algebras formalism; a worked one-line example follows this list),
  • indeed, that is because the differential operators of the evolution equation are always limits of conjugations of dilations, and because of the algebraic structure of the space, which is also described as a limit of combinations of dilations (notice that I speak about the vector addition operation and its properties, like associativity, etc., not about the points in the space), and finally because of an a priori assumption that functions like the hamiltonian are computable themselves.
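
To make the last two points concrete, here is a one-line worked example (an illustration of mine, using the standard dilation notation of emergent algebras, δ^x_ε(y) = x + ε(y − x) in a vector space). The explicit Euler discretization of the evolution equation dx/dt = F(x) reads

    x_{n+1} = x_n + h F(x_n) = δ^{x_n}_h ( x_n + F(x_n) )

so one step of the discretized evolution is a dilation of coefficient h (the time step) applied to the affine expression x_n + F(x_n), which is itself expressible by dilation nodes; no coordinates are needed, only the dilation operations.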

This recipe itself is like a chemlambda molecule, but consisting not only of A, L, FI, FO, FOE nodes but also of some (two, perhaps) dilation nodes, with moves, i.e. graph rewrites, which allow passing from one step to another. The symplectic structure itself is only a shadow of a Heisenberg group structure, i.e. of a contact structure on a circle bundle over the symplectic manifold, as geometric prequantization proposes (but this is a mathematical fact which is, in itself, independent of any interpretation or speculation). I know what is to be added (i.e. which graph rewrites particularize this structure among all possible ones), because it connects precisely to sub-riemannian geometry. You may want to browse the old series on Gromov-Hausdorff distances and the Heisenberg group part 0, part I, part II, part III, or to start from the other end, The graphical moves of projective conical spaces (II).

Hence my proposal, which consists in thinking about space properties as embodied into graph rewriting systems, inspired from the abstract nonsense of emergent algebras, combining the pure computational side of A, L, etc. with the spatial computational side of dilation nodes into one whole.

In this sense space as an absolute or relative vessel does not exist, any more than the Marius creature does (what does exist is a twirl of atoms, some going in, some out, but too complex to understand with my human brain); instead, the fact that all beings and inanimate objects seem to agree collectively when it comes to moving spatially is in reality a manifestation of the universality of this graph rewrite system.

Finally, I’ll go to the main point, which is that I don’t believe that it is that simple. It may be, but it may as well be something which only contains these ideas as a small part, the tip of the nose of a monumental statue. What I believe is that it is possible to make the argument, by example, that it is possible that nature works like this. I mean that chemlambda shows that there exists a formalism which can do this, albeit perhaps in a very primitive way.

The second belief I have is that, regardless of whether nature functions like this or not, at least chemlambda is a proof of principle that it is possible that brains process spatial information in this chemical way.

__________________________________________________________________

Lots of questions, part I

There is a button for “publish”. So what?

I started this open notebook  with the goal to disseminate some of my work and ideas.  There are always many subjects to write about, this open notebook has almost 500 posts. Lately the rhythm of about a post every 3 days slowed down to a post a week.  I have not run out of ideas, or opinions. It is only that I don’t get anything in return.

I explain what I mean by getting something in return. I don’t believe that one should expect a compensation for the time and energy converted into a post. There are always a million posts to read.  There is not a lot of time to read them. It is costly in brain time to understand them and probably, from the point of view of the reader, the result of this investment is not worth the effort.

So it’s completely unreasonable to think that my posts should have any treatment out of the usual.

Then, what can be the motivation to have an open notebook, instead of just a notebook? Besides vanity, there is not much.

But vanity was not my motivation, although it feels very good to have a site like this one. Here is why:   from the hits I can see that people read old posts as frequently as new posts. You have to agree that this is highly unusual for a blog. So, incidentally,  perhaps this is not a blog, doh.

I put vanity aside and I am now closer to the real motivations for maintaining this open notebook.

Say you have a new idea, product, anything which behaves like a happy virus who’s looking for hosts to multiply. This is pretty much the problem of any creator: to find hosts.  OK, what is available for a creator who is not a behemoth selling sugar solutions or other BRILLIANT really simple viruses like phones, political ideas, contents for lazy thinking trolls and stuff like this?

What if I don’t want to sell ideas, but instead I want to find those rare people with similar interests?

I don’t want to entertain anybody, instead that’s a small fishing net in the big sea.

OK, this was the initial idea. That compared to the regular ways, meaning writing academic articles, going to conferences, etc, there might be more chances to talk with interesting people if I go fishing in the high seas, so to say.

These are my expectations. That I might find interesting people to work with, based on common passions, and to avoid the big latency of the academic world, so that we can do really fast really good things now.

I know that it helps a lot to write simple. To dilute the message. To appeal to authority, popularity, etc.

But I expect that there is a small number of you guys who really think as fast as I do. And then reply to me, simultaneously to Marius.Buliga@imar.ro and Marius.Buliga@gmail.com .

Now that my expectations are explained, let’s look at the results. I have to put things in context a bit.

This site was called initially lifeinrio.wordpress.com. I wanted to start a blog about how it is to live in Rio with a wife and two small kids. Not a bad subject, but I have not found the time for that side project, because I was just in the middle of an epiphany. I wanted to switch fields, I wanted to move from pure and applied mathematics to somewhere as close as possible to biology and neuroscience. But mind you that I wanted also to bring the math with me. Not to make a show of it, but to use the state of mind of a mathematician in these great emerging fields. So, instead of writing about my everyday life experiences, I started to write to everybody I found on the Net who was not (apparently) averse to mathematics and who was also somebody in neuroscience. You can imagine that my choices were not very well informed, because these fields were so far from what I knew before. Nevertheless I found interesting people, and I told them why I wanted to switch. Yes, why? Because of the following reasons: (1) I am passionate about making models of reality, (2) I’m really good at finding unexpected points of view, (3) I learn very fast, (4) I understood that pure or applied math needs a challenge beyond the Cold War ones (i.e. theories of everything, rocket science, engineering).  OK, I’ll stop here with the list, but there were about 100 more reasons, among them being to understand what space is from the point of view of a brain.

I got fast into pretty weird stuff. I started to read philosophy, getting hooked by Plato. Not in the way the usual American thinker does. They believe that they are Platonists but they are empiricists, which is exactly the poor (brain) version of Platonism. I shall stop giving kicks to empiricists, because they have advanced science in many ways in the last century. Anyway, empiricism looks more and more like black magic these days. Btw, have you read anything by Plato? If you do, then try to go to the source. Look for several sources, since you are not a good reader of ancient Greek. Take your time, compare versions, spell out the originals (so to say), and discover the usual phenomenon: the more something is appreciated, the more shit inside.

Wow, so here is a mathematician who wants to move to biology, and he uses Plato as a vehicle. That’s perhaps remarkabl…y stupid to do, Marius. What happened, have you run out of the capacity to do math? Are you out in the field where people go when they can no longer stand the beauty and hardness of mathematics? Everybody knows, since that guy who worked with Ramanujan and later, after R was dead, told us that mathematics is for young people. (And probably white wealthy ones.)

No, what happened was that the air of Rio gave me back the guts I had lost during the education process. Plato’s Timaeus, in particular, spoke to me in nontrivial ways. I understood that I am really on the side of the geometers, not on the side of the language people. And that there is a better chance to understand brains if we try to model what the language people assume works by itself: the low level, non rational processes of the brain. Those which need no names, no language, the highly parallel ones. For those, I discovered, there was no math to apply. You may say that, for example, vision is one of the most studied subjects and that there is really a lot of math already used for that. But if you say so then you are wrong. There is no model of vision up to now which explains how biological vision works without falling into the internal or external homunculus fallacies. As for computer vision: you know, you can do anything with computers, provided you have enough of them and enough time. There is a huge gap between computer vision and biological vision, a fundamental one.

OK, when I returned home to Bucharest I thought: what if I reuse lifeinrio.wordpress.com and transform it into chorasimilarity.wordpress.com? The word chorasimilarity is made of “chora”, which is the feminine version of “choros”, which means place or space. Plato invented “chora” as a term he used in his writings. “Similarity” was because of my background in math: I was playing with “emergent algebras”, which I invented before going on the biology tangent. In fact these emergent algebras made me think first that a new math is needed, and that maybe they are relevant for biological vision.

I stop a bit to point to the post Scale is a place in the brain, which is about research on grid cells and place cells (research which just got a Nobel in medicine in 2014).

Emergent algebras are about similarity. They make visible the abstract graph rewrite system hidden behind similarity. Which in turn can be made concrete by transforming it into chemistry. An artificial chemistry. But also, perhaps, a real one. For the brain is mostly chemistry. Do you see how everything falls into place? Chora is just chemistry in the brain. Being universal, it is not surprising that we humans distilled a notion of space from that.

There is a lot of infrastructure to build in order to link all these in a coherent way.

Walker eating bits and a comment on the social side of research

This post has two parts: the first part presents an experiment and the second part is a comment on the social side of research today.

Part 1: walker eating bits. In this post I introduced the walker, which was also mentioned in the previous post.

I made several experiments with the walker. I shall start by describing the most recent one, and then I shall recall (i.e. link to posts and videos) the earlier ones.

I use the chemlambda gui which is available for download from here.

What I did: first I took the walker and made it walk on a trail which is generated on the fly by a pair A-FOE of nodes. I knew previously that such an A-FOE pair generates a trail of A and FO nodes, because this is the basic behaviour of the Y combinator in chemlambda. See the illustration of this (but in an older version, which uses only one type of fanout node, the FO) here. Part of it was described in the pen-and-paper version in the ALIFE14 article with Louis Kauffman.

OK, if you want to see how the walker walks on the trail then you have to first download the gui and then use it with the file walker.mol.

Then I modified the experiment in order to feed the walker with a bit.

A bit is a pair of A-FO nodes, which has the property that it is a propagator. See here the illustration of this fact.

For this I had to modify the mol file, which I did. The new mol file is walker_eating_bit.mol .
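If you have never looked inside a mol file: each line names a node type, followed by the edge names attached to its ports, and two ports sharing an edge name are bonded. Here is a minimal JavaScript sketch of this idea (the simplified syntax and the two sample lines are my illustration, not an excerpt from walker_eating_bit.mol):

```javascript
// Parse a simplified mol format: one node per line, "TYPE edge1 edge2 ...".
// Two ports that carry the same edge name are connected by a bond.
function parseMol(text) {
  const nodes = [];
  const edges = {}; // edge name -> occurrences as { node, port }
  text.trim().split("\n").forEach(line => {
    const [type, ...ports] = line.trim().split(/\s+/);
    const id = nodes.length;
    nodes.push({ id, type, ports });
    ports.forEach((name, port) => {
      (edges[name] = edges[name] || []).push({ node: id, port });
    });
  });
  return { nodes, edges };
}

// Illustrative fragment only: an A node bonded to a FOE node through edge "c".
const mol = parseMol("A a b c\nFOE c d e");
console.log(mol.edges.c); // [ { node: 0, port: 2 }, { node: 1, port: 0 } ]
```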

The purpose of the experiment is to see what happens when the walker is fed with a bit. Will it preserve its shape and spill out a residue on the trail? Or will it change and degenerate to a molecule which is no longer able to walk?

The answer is shown in the following two screenshots. The first one presents the initial molecule described by walker_eating_bit.mol.

walker_eating_bit_step0
At the extreme right you see the pair A-FOE which is generating the trail (A is the big green node with two smaller yellow ones and a small blue one, and the FOE is the big yellow node with two smaller blue ones and a small yellow one). If you feel lost in the notation, then look a bit at the description in the visual tutorial.

In the middle you see the walker molecule. At the right there is the end of the trail. The walker walks from left to right, but because the trail is generated from right to left, this is seen as if the walker stays in place and the trail at its left gets longer and longer.

OK. Now, I added the bit, I said. The bit is that other pair of two green nodes, at the right of the figure, immediately to the left of the A-FOE pair at the extreme right.

The walker is going to eat this pair. What happens?

I spare you the details and I show you the result of 8 reduction steps in the next figure.

walker_eating_bit_step8

You see that the walker has already passed over the bit, processed it and spat it out as an A-FOE pair. Then the walker continued to walk some more steps, keeping its initial shape.

GREAT! The walker has a metabolism.

Previous experiments.  If you take the walker on the trail and you glue the ends of the trail then you get a walker tchoo-tchoo going on a circular trail. But wait, due to symmetries, this looks like a molecule which stays the same after each reduction step. Meaning this is a chemlambda quine. I called such a quine an ouroboros. In the post Quines in chemlambda you see such an ouroboros obtained from a walker which walks on a circular train track made of only one pair.
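Here is a cartoon of why this is a quine, leaving all actual graph rewrites aside (a toy which only keeps the bookkeeping, under the assumption that one reduction step amounts to the walker advancing one cell on the circular trail):

```javascript
// Cartoon of an ouroboros: a walker on a circular trail.
// One reduction step advances the walker by one cell; as an unlabelled
// cyclic structure the molecule is unchanged, which is the quine property.
const trail = ["W", "A", "FO", "A", "FO"]; // W marks the walker's position
const step = t => t.slice(1).concat(t[0]); // rotate the cycle by one

console.log(trail.join("-"));       // W-A-FO-A-FO
console.log(step(trail).join("-")); // A-FO-A-FO-W : same cycle, rotated
```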

I previously fed the walker with a node L and a termination node T; see this post for a pen-and-paper description and this video for a more visual one, where the train track is made circular as described previously.

That’s it with the first part.

Part 2: the telling silence of the expert. The expert is no lamb in academia. He or she fiercely protects the small field of expertise where he or she is king or queen. As you know if you read this open notebook, I have the habit of changing research fields from time to time. This time, I entered the radar of artificial chemistry and graph rewriting systems, with an interest in computation. Naturally I tried to consult as many experts in these fields as possible. Here is the one and only contribution from the category theory church. Yes, simply having a better theory does not trump racism and turf protection. But fortunately not everything can be solved by good PR only. As it becomes more and more clear, the promotion of mediocrity in academia, consistently followed since the ’70s, has devastating effects on academia itself. Now we have become producers of standardised units of research, and the rest is just the monkey business about who’s on top. Gone is the trust in science, gone are the funds, but hey, for some the establishment will still provide a good retirement.

The positive side of this big story, where I only offer concrete, punctual examples, is that the avalanche facilitated by the open science movement (due to the existence of the net) will change academia forever. Not into a perfect realm, because there are no such items in the real world catalogue. But the production of scientific research in the old ways of churches and “you scratch my back and I’ll scratch yours” is now exposed to more eyes than before, and soon enough we shall witness a phenomenon similar to the one that happened more than 100 years ago in art, when ossified academic art sank into oblivion and an explosion of creativity ensued, simply because academic painting was exposed alongside alternative works (mixed with garbage, but much more creative) in the historical impressionist revolution.

______________________________________________

 

Chemlambda artificial chemistry at ALIFE14: article and supplementary material

Louis Kauffman presents today our article at the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York

Chemlambda, universality and self-multiplication

Marius Buliga and Louis H. Kauffman

Abstract (Excerpt)

We present chemlambda (or the chemical concrete machine), an artificial chemistry with the following properties: (a) is Turing complete, (b) has a model of decentralized, distributed computing associated to it, (c) works at the level of individual (artificial) molecules, subject of reversible, but otherwise deterministic interactions with a small number of enzymes, (d) encodes information in the geometrical structure of the molecules and not in their numbers, (e) all interactions are purely local in space and time. This is part of a larger project to create computing, artificial chemistry and artificial life in a distributed context, using topological and graphical languages.

DOI: http://dx.doi.org/10.7551/978-0-262-32621-6-ch079
Pages 490-497

Supplementary  material:

____________________________________________________________

 

Experimental alife IoT with Tessel

Here is an idea for testing the mix between the IoT and chemlambda. This post almost qualifies as a What if? one, but not quite, because in principle it might be done right now, not in the future.

Experiments have to start from somewhere in order to arrive eventually at something like a Microbiome OS.

Take Tessel.

Tessel is a microcontroller that runs JavaScript.
It’s Node-compatible and ships with Wifi built in.

Imagine that there is one GLC actor per Tessel device. The interactions between GLC actors may be done partially via the Wifi.

The advantage is that one may overlap the locality of graph rewrites of chemlambda with spatial locality.

 

Each GLC actor has as data a chemlambda molecule, with its in and out free arrows (or free chemical bonds) tagged with the names of other actors.

A bond between two actors forms, in the chemlambda+Tessel world, if the actors are close enough to communicate via Wifi.

Look for example at the beta move, done via behaviour 1 of GLC actors. This may involve up to 6 actors, namely the two actors which communicate via the graphic beta move and at most 4 other actors which have to update the tags naming their neighbouring actors as a result of the move. If all are locally connected by Wifi then this becomes straightforward.
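A minimal sketch of the data such an experiment would need, in plain Node-style JavaScript (the names, the message format and the stub transport are all my inventions for illustration; none of this is Tessel API or existing GLC actor code):

```javascript
// One GLC actor per Tessel device. The actor owns a chemlambda molecule
// (a list of nodes with named ports) and, for each free bond, a tag naming
// the neighbouring actor on the other end.
const actor = {
  name: "kitchen-tessel",
  molecule: [
    { type: "A", ports: ["a", "b", "c"] },
    { type: "L", ports: ["c", "d", "e"] }
  ],
  // free edges tagged with the actors they lead to
  neighbours: { a: "hallway-tessel", b: "hallway-tessel",
                d: "garden-tessel", e: "garden-tessel" }
};

// Stub transport: on real hardware this message would travel over Wifi.
function send(toActor, message) {
  console.log(`-> ${toActor}:`, JSON.stringify(message));
}

// Propose a rewrite across a shared bond; the neighbour must agree, and
// afterwards both sides retag the actors adjacent to the rewritten edges.
function proposeMove(edge, move) {
  send(actor.neighbours[edge], { from: actor.name, edge, move });
}

proposeMove("a", "beta");
```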

What would be the experiment then? To perform distributed, decentralized computations with chemlambda (in particular, to do functional programming in a decentralized way) which are also sprawled over the physical world. The Tessel devices involved in the computation don’t all have to be within Wifi reach of all the others; on the contrary, local connections would be enough.

Moreover, the results of the computations could have physical effects (in the sense that the states of the actors could produce effects in the real world), and the physical world could be used as input for the computation (i.e. the sensors connected to Tessel devices could modify the state of an actor via a core-mask mechanism).

That would play the role of a very primitive, but functional, experimental ancestor of a Microbiome OS.

 

_______________________________________________


FAQ: chemlambda in real and virtual worlds

Q1. Is chemlambda different from previous models, like the algorithmic chemistry of Fontana and Buss, or the CHAM of Berry and Boudol?

A1. Yes. In chemlambda we work with certain graphs made of nodes (atoms) and bonds (arrows); call such a graph a molecule. Then:

  • (A) We work with individual molecules, not with populations of molecules. The molecules encode information in their shape, not in their number.
  • (B) Unlike algorithmic chemistry, application and abstraction are atoms of the molecules.
  • (C) There are no variables in chemlambda and there is no need to introduce one species of molecules per variable, like in the previous models.
  • (D) chemlambda and its associated computing model (distributed GLC) work well in a decentralized world; there is no need for a global space or a global time for the computation.

There are a number of more technical differences, like (non-exhaustively):

  • (E) molecules are not identified with their functions. Technically, chemlambda rejects eta reduction, so even those molecules which represent lambda terms are not identified with their function (as happens when we use eta reduction). This calls for an “extreme” functional programming style.
  • (F) only a small part of the chemlambda molecules correspond to lambda terms (there is a lambda calculus “sector”).
  • (G) there is no  global semantics.

__________________________________________________
Q2. Is chemlambda a kind of computing with chemical reaction networks (CRNs)?

A2. No. Superficially, there are resemblances, and really one may imagine CRNs based on chemlambda, but this is not what we are doing.

__________________________________________________

Q3. Why do you think chemlambda has something to tell about the real or even about the living world?

A3. Because the real world, in its fundamental workings, does not seem to care about the 1D language-based constructs which we cherish in our explanations. The real and especially the living world seems to be based on local, asynchronous interactions which are much more like signal transductions and much less like information passing. (See How is different signal transduction from information theory? )
Everything happens locally, in nontrivial but physical ways, leading to emergent complex behaviours. Nature does not care about coordinates and names of things or abstractions, unless they are somehow physically embodied. This is the way chemlambda functions.

__________________________________________________
Q4. Why do you think chemlambda has something to say about the virtual  world of the Net?

A4. Because it puts the accent on alife instead of AI, on decentralization instead of pyramidal constructs. A microbial-ecology-like internet is much more realistic to hope for than one based on benevolent pyramidal AI constructs (be they clouds or other corporate constructs). Because real and virtual worlds are consistent only locally.

__________________________________________________
Q5. What about the Internet of Things?

A5. We hope to give the IoT the role of the bridge which unites two kinds of computation, real and virtual, under the same chemistry.

__________________________________________________
Q6. What would happen in your dream world?

A6. There are already some (fake) news about it here: what if

__________________________________________________


Zipper logic and RNA pseudoknots

UPDATE: compare with the recent Chemlambda strings (working version).

Zipper logic is a variant of chemlambda, enhanced a bit with a pattern identification move called CLICK.

Or, chemlambda is an artificial chemistry.  Does it have a natural, real counterpart?

It should. Finding one is part of the other arm of the research thread presented here, which aims to unite (the computational part of) the real world with the virtual world, moved by the same artificial life/real life basic bricks. The other arm is distributed GLC.

A lot of help is needed for this, from specialists! Thank you for pointing to the relevant references, which I shall add as I receive them.

Further on is a speculation, showing that zipper graphs can be turned into something which very much resembles RNA pseudoknots.

(The fact that one can do universal computation with RNA pseudoknots is not at all surprising, maybe with the exception that one can do functional style programming with these.)

It is a matter of notation changes. Instead of using oriented crossings for denoting zippers, like in the thread called “curious crossings” — the last  post is Distributivity move as a transposition (curious crossings II) — let’s use another, RNA inspired one.

rna_2

So:

  • arrows (1st row) are single-stranded RNA pieces. They have a natural 5′-3′ orientation
  • termination node (2nd row) is a single-stranded RNA with a “termination word” on it, figured by a magenta rectangle
  • half-zippers are double strands of RNA (3rd and 4th rows). The red rectangle is (a pair of antiparallel conjugate) word(s) and the yellow rectangle is a tag word.

Remark that any (n) half-zipper can be transformed into a sequence of (1) half-zippers, by repeated applications of the TOWER moves; therefore (n) half-zippers appear in this notation as strings of repeated words.

Another interesting thing is that on the 3rd row of the figure we have a notation for the lambda abstraction node and on the 4th row one for the application node from chemlambda. This invites us to transform the other two nodes — fanin and fanout — of chemlambda into double-stranded RNA. It can be done like this.

rna_3

On the first two rows of this picture we see the encoding of the (1) half-zippers, i.e. the encoding of the lambda abstraction and application, as previously.

On the 3rd row is the encoding of the fanout node and on the 4th row the one of the fanin.

In this way, we have arrived into the world of RNA. The moves translate as well into manipulations of RNA strings, under the action of (invisible and yet to be discovered) enzymes.
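To make the notation change concrete, here is a toy encoding in JavaScript. The words are arbitrary placeholders, with no chemical meaning; finding realistic ones is exactly the kind of help asked from the experts below:

```javascript
// Purely illustrative: each chemlambda node type gets a placeholder "word"
// standing for a double-stranded segment, plus a tag word.
const encoding = {
  L:  { word: "GCAU", tag: "AAA" }, // lambda abstraction
  A:  { word: "CGUA", tag: "UUU" }, // application
  FO: { word: "GGCC", tag: "ACA" }, // fanout
  FI: { word: "CCGG", tag: "UGU" }  // fanin
};

// Encode a sequence of nodes as a concatenation of words; after TOWER
// moves, an (n) half-zipper reads as one word repeated n times.
const encode = nodeTypes =>
  nodeTypes.map(t => encoding[t].word + encoding[t].tag).join("-");

console.log(encode(["L", "L", "L"])); // GCAUAAA-GCAUAAA-GCAUAAA
```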

But why RNA pseudoknots? It is obvious: as an exercise, look at how the zipper I and K combinators appear in this notation.

rna_4

We see hairpins.

___________________________________

Conclusion:  help needed from chemistry experts. Low hanging fruits here.

___________________________________

Autodesk releases SeaWater (another WHAT IF post)

[ This is another  WHAT IF  post  which  responds to the challenge formulated in  Alife vs AGI.  You are welcome to suggest another one or to make your own.]

The following is a picture of a random splash of sea water, magnified 25 times [source]

scoop-of-water-magnified-990x500

As well, it could be just a representation of the state of the IoT in a small neighbourhood of yours, according to the press release describing SeaWater, the new product of Autodesk.

“SeaWater is a design tool for the artificial life based decentralized Internet of Things. Each of the tiny plankton beings which appear in the picture is actually a program, technically called a GLC actor. Each plankton being has its own umwelt, its own representation of the medium which surrounds it. Spatially close beings in the picture share the same surroundings and thus they can interact. Likewise, the tiny GLC actors interact locally one with another, not in real space, but on the Net. There is no real space in the Net; instead, SeaWater represents them closer when they do interact.

SeaWater is a tool for Net designers. We humans are visual beings. A lot of our primate brain’s powers can be harnessed for designing the alife decentralized computing which forms the basis of the Internet of Things.

It improves very much the primitive tools which give things like this picture [source]

ack_6


Context. Recall that the IoT is only a bridge between two worlds: the real one, where life is ruled by real chemistry, and the artificial one, based on some variant of an artificial chemistry, aka chemlambda.

As Andrew Hessel points out, life is a programming language (chemically based), as is the virtual world. They are the same, sharing the same principles of computation. The IoT is a translation tool which unites these worlds and lets them be one.

This is the far reaching goal. But in the meantime we have to learn how to design this. Imagine that we may import real beings, say microbes, into our own unique Microbiome OS. There is no fundamental difference between synthetic life, artificial life and real life, at least at this bottom level.

Instead of aiming for human or superhuman artificial intelligence, the alife decentralized computing community wants to build a world where humans are not treated like bayesian units by  pyramidal centralized constructs.  There is an immense computing power already at the bottom of alife, where synthetic biology offers many valuable lessons.

______________________________

UPDATE.  This is real: Autodesk Builds Its Own Virus, as the Software Giant Develops Design Tools for Life Itself.

Chemlambda, universality and self-multiplication

Together with Louis Kauffman, I submitted the following article:

M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,   arXiv:1403.8046

to the ALIFE 14: THE FOURTEENTH INTERNATIONAL CONFERENCE ON THE SYNTHESIS AND SIMULATION OF LIVING SYSTEMS .

The article abstract is:

We present chemlambda (or the chemical concrete machine), an artificial chemistry with the following properties: (a) is Turing complete, (b) has a model of decentralized, distributed computing associated to it, (c) works at the level of individual (artificial) molecules, subject of reversible, but otherwise deterministic interactions with a small number of enzymes, (d) encodes information in the geometrical structure of the molecules and not in their numbers, (e) all interactions are purely local in space and time.

This is part of a larger project to create computing, artificial chemistry and artificial life in a distributed context, using topological and graphical languages.

Please enjoy the nice text and the 21 figures!

In this post I want to explain in a few words what this larger project is, because it is something which is open to anybody to contribute to and play with.

We look at the real living world as something ruled by chemistry. Everywhere in the real world there are local chemical interactions, local in space and time. There is no global, absolute point of view needed to give meaning to this living world. Viruses, bacteria and even prebiotic chemical entities formed the scaffold of this world until very recently, when the intelligent armchair philosophers appeared and invented what they call “semantics”. Before the meaning of objects, there was life.

Likewise, we may imagine a near future where the virtual world of the Internet is seamlessly interlinked with the real world, by means of the Internet of Things and artificial chemistry.

Usually we are presented with a future where artificial intelligences and rational machines give expert advice or make decisions based on statistics of real life interactions between us humans, or between us and the objects we manipulate. This future is one of gadgets, useful objects and virtual bubbles for the bayesian generic human. Marginally, in this future, we humans may chit chat and ask corporations for better gadgets or for more useful objects. This is the future of cloud computing, that is, of centralized distributed computing.

This future world does not look at all like the real world.

Because the real world is not centralized. Because individual entities which participate in the real world do live individual lives and have individual interactions.

Because we humans want to discuss and interact with others more than we want better gadgets.

We think instead about a future of a virtual world based on decentralized computing with an artificial chemistry, a world where individual entities, real or virtual, interact by means of an artificial chemistry, instead of being baby-sat by statistically benevolent artificial intelligences.

Moreover, the Internet of Things, the bridge between the real and the virtual world, should be designed as a translation tool between real chemistry and artificial chemistry. Translation of what? Of  decentralized purely local computations.

This is the goal of the project, to see if such a future is possible.

It is a fun goal and there is much to learn and play with. It is by no means something which appeared from nowhere, instead it is a natural project, based on lots of bits of past and present research.

_________________________

Why Distributed GLC is different from Programming with Chemical Reaction Networks

I use the occasion to bookmark the post at Azimuth Programming with Chemical Reaction Networks, most of all because of the beautiful bibliography which contains links to great articles which can be freely downloaded. Thank you John Baez for putting in one place such an essential list of articles.

Also, I want to explain very briefly why CRNs are not used in Distributed GLC.

Recall that Distributed GLC  is a distributed computing model which is based on an artificial chemistry called chemlambda, itself a variant (slightly different) of graphic lambda calculus, or GLC.

There are two stages of the computation:

  1. define the initial participants in the computation, each one called an “actor”. Each actor is in charge of a chemlambda molecule. Molecules of different actors may be connected, each such connection being interpreted as a connection between actors. If we put together all molecules of all actors then we can glue them into one big molecule. Imagine this big molecule as a map of the world and the actors as countries, each coloured with a different colour. Neighbouring countries correspond to connected actors. This big molecule is a graph in the chemlambda formalism. The graph which has the actors as nodes and the neighbouring relations as edges is called the “actors diagram” and is a different graph than the big molecule graph (see the sketch after this list).
  2. Each actor has a name (like a mail address, or like the colour of a country). Each actor knows only the names of neighbouring actors. Moreover, each actor will behave only as a function of the molecule it has and according to the knowledge of its neighbouring actors’ behaviour. From this point, the proper part of the computation, each actor is by itself. So, from this point on we use the way of seeing of the Actor Model of Carl Hewitt, not the point of view of Process Algebra. (See Actor model and process calculi.) OK, each actor has 5 behaviours, most of them consisting in graph rewrites of its own molecule or between molecules of neighbouring actors. These graph rewrites are like chemical reactions between molecules and enzymes, one enzyme per graph rewrite. Finally, the connections between actors (may) change as a result of these graph rewrites.
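The distinction between the big molecule graph and the actors diagram fits in a few lines of JavaScript (the data layout is mine, for illustration only): colour each molecule node by its actor, then collapse each colour class to one node:

```javascript
// Big molecule: each node id is mapped to the actor in charge of it;
// bonds are pairs of node ids.
const owner = { 1: "alice", 2: "alice", 3: "bob", 4: "carol" };
const bonds = [[1, 2], [2, 3], [3, 4]];

// Actors diagram: one node per actor, one edge whenever two different
// actors hold bonded molecule nodes. A different, smaller graph.
function actorsDiagram(owner, bonds) {
  const links = new Set();
  for (const [a, b] of bonds) {
    const [x, y] = [owner[a], owner[b]].sort();
    if (x !== y) links.add(`${x} -- ${y}`);
  }
  return [...links];
}

console.log(actorsDiagram(owner, bonds)); // [ 'alice -- bob', 'bob -- carol' ]
```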

That is the Distributed GLC model, very briefly.

It is different from Programming with CRNs for several reasons.

1.  Participants in the computation are individual molecules. This may be unrealistic for real chemistry and lab measurements of chemical reactions, but that is not the point, because the artificial chemistry chemlambda is designed to be used on the Internet. (However, see the research project on a single molecule quantum computer.)

2. There is no explicit stochastic behaviour. Each actor in charge of its molecule behaves deterministically. (Or not; nothing stops the model from being modified by introducing some randomness into the behaviour of each actor, but that is not the point here.) There are no huge numbers of actors, nor any average behaviour of theirs.

That is because of point 1. (we stay at the level of individual molecules and their puppeteers, their actors) and also because we use the Actor Model style, and not Process Algebra.

So, there is an implicit randomness, coming from the fact that the computation is designed Actor Model style, i.e. such that it may work differently, depending on the particular physical  timing of messages which are sent between actors.
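A toy demonstration of this implicit randomness, with nothing chemlambda-specific in it: two deterministic rewrites are delivered as messages with random delays, and the arrival order alone decides the final result:

```javascript
// Each rewrite is deterministic, but message delivery order is not;
// the final state depends only on that order.
let state = "x";
const rewrites = [
  () => { state = `A(${state})`; },
  () => { state = `L(${state})`; }
];

// Deliver the two "messages" with random network delays.
for (const apply of rewrites) {
  setTimeout(apply, Math.random() * 10);
}
setTimeout(() => console.log(state), 50); // prints A(L(x)) or L(A(x))
```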

3.  The computation is purely local. It is also decentralized. There is no corporate point of view of counting the number of identical molecules, or their proportion in a global space – global time solution.  This is something reasonable from the point of view of a distributed computation over the Internet.

__________________________________

All this being said, of course it would be interesting to see what happens with CRNs of reactions of chemlambda molecules. That may be very instructive, but it would be a different model.

That is why Distributed GLC does not use the CRN point of view.

__________________________________

The first thread of the Moirai

I just realized that maybe +Carl Vilbrandt  put the artificial life thread of ideas in my head, with this old comment at the post Ancient Turing machines (I): the three Moirai:

Love the level of free association  between computation and Greek philosophy. Very creative.
In this myth computation = looping cycles of life. by the Greek goddess of Mnemosyne (one of the least rememberer of the gods requires the myth of the Moirai to recall how the logic of life / now formalized as Lisp works.
As to the vague questions:
1. Yes they seem to be the primary hackers of necessity.
2. Yes The emergent the time space of spindle of necessity can only be by the necessary computational facts of matter.
3. Of course at this scale of myth of wisdom it a was discreet and causal issue. Replication of them selfs would have been no problem.
Lisp broken can’t bring my self write in any other domain. So lisp is the language of life. With artifical computation resources science can at last define life and design creatures.

Thank you!

When I wrote that post, things were not as clear to me as now. Basically I was just trying to generate all graphs of GLC (in the newer version all molecules of the artificial chemistry called “chemlambda“) by using the three Moirai as a machine.

This thread is now a big dream and a project in the making, to unify the meatspace with the virtual space by using the IoT as a bridge between the real chemistry of the real world and the artificial chemistry of the virtual world.

The true Internet of Things, decentralized computing and artificial chemistry

A thing is a discussion between several participants.  From the point of view of each participant, the discussion manifests as an interaction between the participant with the other participants, or with itself.

There is no need for a global timing of the interactions between participants involved in the discussion, therefore we talk about an asynchronous discussion.

Each participant is an autonomous entity. Therefore we talk about a decentralized discussion.

The thing is the discussion and the discussion is the thing.

When the discussion reaches an agreement, the agreement is an object. Objects are frozen discussions, frozen things.

In the true Internet of Things, the participants can be humans or virtual entities. The true Internet of Things is the thing of all things, the discussion of all discussions. Therefore the true Internet of Things has to be asynchronous and decentralized.

The objects of the true Internet of Things are the objects of discussions. For example a cat.

Concentrating exclusively on objects is only a manifestation of the modern aversion to having a conversation. This aversion manifests in many ways (some of them extremely useful):

  • as a preference towards analysis, one of the tools of the scientific method
  • as the belief in semantics, as if there is a meaning which can be attached to an object, excluding any discussion about it
  • as externalization of discussions, like property rights which are protected by laws, like the use of the commons
  • as the belief in objective reality, which claims that the world is made by objects, thus neglecting the nature of objects as agreements reached (by humans) about some selected aspects of reality
  • as the preference towards using bottlenecks and pyramidal organization as a mean to avoid discussions
  • as various philosophical currents, like pragmatism, which subordinate things (discussions) to their objects (although pragmatism recognizes the importance of the discussion itself, as long as it is carefully crippled so that it does not overthrow the object’s importance).

Though we need agreements, and we need to rely on objects (as evidence), there is no need to limit the future true Internet of Things to an Internet of Objects.

______________________________________

We already have something  called Internet of Things, or at least something which will become an Internet of Things, but it seems to be designed as an Internet of Objects. What is the difference? Read Notes for “Internet of things not Internet of objects”.

Besides humans, there will be  the other participants in the  IoT,  in fact the underlying connective mesh which should support the true Internet of Things.  My proposal is to use an artificial chemistry model mixed with the actor model, in order to have only the strengths of both models:

  1.   decentralized,
  2. does not need an overlooking controller,
  3. it works without needing to have a meaning, a purpose, or in any other way being oriented to problem solving
  4. does not need to halt
  5. inputs, processing and output have the same nature (i.e. just chemical molecules and their proximity-based interactions).

without having the weaknesses:

  1.  the global view of Chemical Reaction Networks,
  2. the generality of behaviours of the actors in the actor model, which forces the model to be seen as a high-level way of organizing thinking about particular computing tasks, instead of a very low-level, simple and concrete model.

______________________________________

With these explanations, please go and read again three older posts and a page, if you are interested in understanding more:

______________________________________

What is new in distributed GLC?

We have seen that several parts or principles of distributed GLC are well anchored in previous, classical research.  There are three such ingredients:

There are several new things, which I shall try to list.

1.  It is a clear, mathematically well formulated model of computation. There is a preparation stage and a computation stage. In the preparation stage we define the “GLC actors”; in the computation stage we let them interact. Each GLC actor interacts with others, or with itself, according to 5 behaviours. (Not part of the model is the choice among behaviours, if several are possible at the same moment. The default is to impose on the actors to first interact with others (i.e. behaviours 1, 2, in this order) and, if no interaction is possible, then to proceed with the internal behaviours 3, 4, in this order; see the sketch after this list. As for behaviour 5, the interaction with external constructs, this is left to particular implementations.)

2.  It is compatible with the Church-Turing notion of computation. Indeed,  chemlambda (and GLC) are universal.

3. Evaluation is not needed during computation (i.e. in stage 2). This is the embodiment of the “no semantics” principle. The “no semantics” principle actually means something precise; it is a positive thing, not a negative one. Moreover, the dissociation between computation and evaluation is new in many ways.

4. It can be used for doing functional programming without the eta reduction. This is a more general form of functional programming, which in fact is so general that it does not use functions. That is because the notion of a function makes sense only in the presence of eta reduction.

5. It has no problem going, at least apparently, outside the Church-Turing notion of computation. This is not a vague statement, it is a fact, meaning that GLC and chemlambda have sectors (i.e. parts) which are used to represent lambda terms, but also sectors which represent other formalisms, like tangle diagrams, or, in the case of GLC, also emergent algebras (which are the most general embodiment of a space which has a very basic notion of differential calculus).
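The default policy from point 1 is easy to state as code. A sketch (the behaviour bodies are stubs of my own; only the ordering is the point):

```javascript
// Default behaviour selection for a GLC actor: try interactions with
// other actors first (behaviours 1, then 2), then internal moves
// (behaviours 3, then 4). Behaviour 5 is left to the implementation.
const behaviours = [
  actor => false, // behaviour 1: interaction with a neighbour (stub)
  actor => false, // behaviour 2: another kind of interaction (stub)
  actor => true,  // behaviour 3: internal graph rewrite (stub)
  actor => false  // behaviour 4: another internal move (stub)
];

function step(actor) {
  for (const applicable of behaviours) {
    if (applicable(actor)) return true; // first applicable behaviour wins
  }
  return false; // nothing applicable: the actor waits
}

console.log(step({})); // true: behaviour 3 fired after 1 and 2 failed
```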

__________________________________________

All these new things are also weaknesses of distributed GLC because they are, apparently at least, against some ideology.

But the very concrete formalism of distributed GLC should counter this.

I shall use the same numbering for enumerating the ideologies.

1.  Actors a la Hewitt vs Process Calculi. The GLC actors are like the Hewitt actors in this respect. But they are not as general as Hewitt actors, because they can’t behave just anyhow. On the other side, it is not very clear if they are Hewitt actors, because there is no clear correspondence between what an actor can do and what a GLC actor can do.

This is an evolving discussion. It seems that people have very big problems coping with distributed, purely local computing, without jumping to the use of global notions of space and time. But, on the other side, biologists may have an intuitive grasp of this (unfortunately, they are not very much in love with mathematics, but this is changing very fast).

2.   Distributed GLC is a programming language vs it is a machine. Is it a computer architecture or a software architecture? Neither. Both. Here the biologists are almost surely lost, because many of them (excepting those who believe that chemistry can be used for lambda calculus computation) think in terms of logic gates when they consider computation.

The preparation stage, when the actors are defined, is essential. It resembles choosing the right initial condition in a computation using automata. But it is not the same, because there is no lattice, grid, or preferred topology of cells where the automaton performs.

The computation stage does not involve any collision-between-molecules mechanism, be it stochastic or deterministic. That is because the computation is purely local, which means in particular that (if well designed in the first stage) it evolves without needing this stochastic or lattice support. During the computation the states of the actors change, and the graph of their interactions changes, in a way which is compatible with being asynchronous and distributed.

That is why here the ones which are working in artificial chemistry may feel lost, because the model is not stochastic.

There is no chemical reaction network which orchestrates the computation, simply because a CRN is a GLOBAL notion, so it is not really needed. This computation is concurrent, not parallel (because parallel needs a global simultaneity relation to make sense).

In fact there is only one molecule which is reduced, therefore distributed GLC looks more like an artificial one-molecule computer (see C. Joachim, Bonding more atoms together for a single molecule computer). Only it is not a computer, but a program which reduces itself.

3.  The no semantics principle is against a strong ideology, of course. The fact that evaluation may not be needed for computation is outrageous (although it might cure the cognitive dissonance from functional programming concerning “side effects”, see Another discussion about math, artificial chemistry and computation ).

4.  Here we clash with functional programming, apparently. But I hope only superficially, because actually functional programming is the best ally, see Extreme functional programming done with biological computers.

5.  Claims about going outside the Church-Turing notion of computation are very badly received. But when it comes to distributed, asynchronous computation, it’s much less clear. My position here is simply that there are very concrete ways to do geometric or differential-like “operations” without having to convert them first into a classical computational frame (and the onus is on the classical computation guys to prove that they can do it, which, as a geometer, I highly doubt, because they don’t understand, or they neglect, space; and then the distributed asynchronous aspect comes and hits them when they expect it the least).

______________________________________________

Conclusion:  distributed GLC is great and it has a big potential; come and use it. Everybody interested knows where to find us. Internet of Things? Decentralized computing? Maybe cyber-security? You name it.

Moreover, there is a distinct possibility to use it not on the Internet, but in the real physical world.

______________________________________________

Is the Seed possible?

Is the Seed possible? Neal Stephenson, in the book The Diamond Age, presents the idea of the Seed, as opposed to the Feed.

The Feed is a hierarchical network of pipes and matter compilers (much like  an Internet of Things done not with electronics, but with nanotechnology, I’d say).

The Seed is a different technology. I selected some  paragraphs from the book, which describe the Seed idea.

“I’ve been working on something,” Hackworth said. Images of a nanotechnological system, something admirably compact and elegant, were flashing over his mind’s eye. It seemed to be very nice work, the kind of thing he could produce only when he was concentrating very hard for a long time. As, for example, a prisoner might do.
“What sort of thing exactly?” Napier asked, suddenly sounding rather tense.
“Can’t get a grip on it,” Hackworth finally said, shaking his head helplessly. The detailed images of atoms and bonds had been replaced, in his mind’s eye, by a fat brown seed hanging in space, like something in a Magritte painting. A lush bifurcated curve on one end, like buttocks, converging to a nipplelike point on the other.

CryptNet’s true desire is the Seed—a technology that, in their diabolical scheme, will one day supplant the Feed, upon which our society and many others are founded. Protocol, to us, has brought prosperity and peace—to CryptNet, however, it is a contemptible system of oppression. They believe that information has an almost mystical power of free flow and self-replication, as water seeks its own level or sparks fly upward— and lacking any moral code, they confuse inevitability with Right. It is their view that one day, instead of Feeds terminating in matter compilers, we will have Seeds that, sown on the earth, will sprout up into houses, hamburgers, spaceships, and books—that the Seed will develop inevitably from the Feed, and that upon it will be founded a more highly evolved society.

… her dreams had been filled with seeds for the last several years, and that every story she had seen in her Primer had been replete with them: seeds that grew up into castles; dragon’s teeth that grew up into soldiers; seeds that sprouted into giant beanstalks leading to alternate universes in the clouds; and seeds, given to hospitable, barren couples by itinerant crones, that grew up into plants with bulging pods that contained happy, kicking babies.

Arriving at the center of the building site, he reached into his bag and drew out a great seed the size of an apple and pitched it into the soil. By the time this man had walked back to the spiral road, a tall shaft of gleaming crystal had arisen from the soil and grown far above their heads, gleaming in the sunlight, and branched out like a tree. By the time Princess Nell lost sight of it around the corner, the builder was puffing contentedly and looking at a crystalline vault that nearly covered the lot.

All you required to initiate the Seed project was the rational, analytical mind of a nanotechnological engineer. I fit the bill perfectly. You dropped me into the society of the Drummers like a seed into fertile soil, and my knowledge spread through them and permeated their collective mind—as their thoughts spread into my own unconscious. They became like an extension of my own brain.

_________________________

Now, suppose the following.

We already have an Internet of Things, which would serve as an interface between the virtual world and the real world, so there is really not much difference between the two, in the specific sense that something in the former could easily produce effects in the latter.

Moreover, instead of nanotechnology, suppose that we are content with having, on the Net, an artificial chemistry which would mirror the real chemistry of the world, at least in its functioning principles:

  1.   it works in a decentralized, distributed way
  2. does not need an overlooking controller, because all interactions are possible only when there is spatial and temporal proximity
  3. it works without needing to have a meaning, a purpose, or in any other way being oriented to problem solving
  4. does not need to halt
  5. inputs, processing and output have the same nature (i.e. just chemical molecules and their proximity-based interactions).

In this  world, I see a Seed as a dormant, inactive artificial chemical molecule.  When the Seed is planted (on the Net),

  1. it first grows into a decentralized, autonomous network (i.e. it starts to multiply, to create specialized parts, like a real seed which grows into a tree),
  2. then it starts computing (in the chemical sense: it starts to self-process its structure)
  3. and interacts with the real world (via the sensors and effectors available via the IoT) until it creates something in the real world.

_____________________

Notes:

  •  clearly, the artificial chemistry I am thinking about is chemlambda
  •  the working principles of this artificial chemistry are those of Distributed GLC

___________________________

How to plant a Seed (I)

In The Diamond Age there is the Feed and, towards the end, the Seed appears.

There are, not as many as I expected, but many places where this Seed idea of Neal Stephenson is discussed. Most of them discuss it in relation to the Chinese  Ti yong  way of life, following the context where the author embeds the idea.

Some compare the Seed idea with open source.

For me, the Seed idea becomes interesting when it is put together with distributed, decentralized computing. How to make a distributed Seed?

If you start thinking about this, it makes even more sense if you add one more ingredient: the Internet of Things.

Imagine a small, inactive, dormant, virtual thing (a Seed) which is planted somewhere in the fertile ground of the IoT. After that it becomes active, it grows, it becomes a distributed, decentralized computation. Because it lives in the IoT it can have effects in the physical world; it can interact with all kinds of devices connected to the IoT, thus it can become a Seed in the sense of Neal Stephenson.

Chemlambda is a new kind of  artificial chemistry, which is intended to be used in distributed computing, more specifically in decentralized computing.  As a formalism it is a variant of graphic lambda calculus, aka GLC.  See the page  Distributed GLC for details of this project.

So, I am thinking about how to plant a chemlambda Seed. Concretely, what could pass for a Seed in chemlambda, and in what precise sense can it be planted?

In the next post I shall give technical details.

Another discussion about math, artificial chemistry and computation

I have to record this ongoing discussion from G+, it’s too interesting. I shall do it from my subjective viewpoint; feel free to comment on this, either here or in the original place.

(I did almost the same thing before, i.e. I saved here some of my comments from an older discussion, in the post Model of computation vs programming language in molecular computing. That recording was significant, for me at least, because I made those comments while thinking about the work on the GLC actors article, which was then in preparation.)

Further I shall only lightly edit the content of the discussion (for example by adding links).

It started from this post:

 “I will argue that it is a category mistake to regard mathematics as “physically real”.”  Very interesting post by Louis Kauffman: Mathematics and the Real.
Discussed about it here: Mathematics, things, objects and brains.
Then followed the comments.
Greg Egan‘s “Permutation City” argues that a mathematical process itself is identical in each and any instance that runs it. If we presume we can model consciousness mathematically it then means: Two simulated minds are identical in experience, development, everything when run with the same parameters on any machine (anywhere in spacetime).

Also, it shouldn’t matter how a state came to be; the instantaneous experience for the simulated mind is independent of its history (of course the mind’s memory ought to be identical, too). He then levitates further and proposes that it’s not relevant whether the simulation is run at all, because we may find all states of such a mind’s being represented scattered in our universe… If I remember correctly, Egan later contemplated being embarrassed about this bold ontological proposal. You should be able to find him reasoning about the dust hypothesis in the accompanying material on his website. Update: I just saw that wikipedia’s take on the book has the connection to Max Tegmark in the first paragraph:
> […] cited in a 2003 Scientific American article on multiverses by Max Tegmark.
http://en.wikipedia.org/wiki/Permutation_City

___________________________
+Refurio Anachro  thanks for the reference to Permutation city and to the dust hypothesis, will read and comment later. For the moment I have to state my working hypothesis: in order to understand basic workings of the brain (math processing included), one should pass any concept it is using through the filter
  • local not global
  • distributed not sequential
  • no external controller
  • no use of evaluation.

From this hypothesis, I believe that notions like “state”, “information”, “signal”, “bit”, are concepts which don’t pass this filter, which is why they are part of an ideology which impedes the understanding of many wonderful things discovered lately, somehow against this ideology. Again, Nature is a bitch, not a bit 🙂

That is why, instead of boasting against this ideology and jumping to consciousness (which I think is something whose understanding will have to wait until sometime very far in the future), I prefer to offer first an alternative (that’s GLC, chemlambda) which shows that it is indeed possible to do anything which can be done with these ways of thinking coming from the age of the invention of the telephone. And then more.

Afterwards, I prefer to wonder not about consciousness and its simulation, but instead about vision and other much more fundamental processes related to awareness. These are usually taken for granted, and they have the bad habit of contradicting any “bit”-based explanation given to date.

___________________________

the past lives in a conceptual domain
would one argue then that the past
is not real
___________________________
+Peter Waaben  that’s easy to reply to, by way of analogy with The short history of the rhino thing .
Is the past real? Is the rhinoceros horn on the back real? Durer put a horn on the rhino’s back because of past (ancient) descriptions of rhinos as elephant killers. The modus operandi of that horn was that the rhino, as massive as the elephant but much shorter, would go under the elephant’s belly and rip it open with the dedicated horn. For centuries, in the minds of people, rhinos really had a horn on the back. (This has real implications, much like, for example, the idea that if you stay in the cold then you will catch a cold.) Moreover, and even more real, there are now real rhinos with a horn on the back, like for example Dali’s rhino.
I think we can safely say that the past is a thing, and any of this thing’s reifications are very real.
___________________________
+Peter Waaben, that seems to be what the dust hypothesis suggests.
+Marius Buliga, i’m still digesting, could you rephrase “- no use of evaluation” for me? But yes, practical is good!
___________________________
[My comment added here: see the posts

]

___________________________

+Marius Buliga :

+Refurio Anachro  concretely, in the model of computation based on lambda calculus you have to add an evaluation strategy to make it work (for example, lazy, eager, etc.). The other model of computation, the Turing machine, is just a machine, in the sense that you have to build a whole architecture around it in order to use it. For the TM you use states, for example, and things work by knowing the state (and what’s under the reading head). Even in pure functional programming, besides the need for an evaluation strategy, they live with this cognitive dissonance: on one side they rightfully say that they avoid the use of the states of imperative programming, and on the other side they base their computation on the evaluation of functions! That’s funny, especially if you think about the dual feeling which hides behind “pure functional programming has no side effects” (how elegant, but how do we get real effects from this?).
In distinction from that, in distributed GLC there is no evaluation needed for computation. There are several causes of this. First, there are no values in this computation. Second, everything is local and distributed. Third, you don’t have eta reduction (thus no functions!). Otherwise, it resembles pure functional programming if you see the core-mask construction as the equivalent of the input-output monad (only you don’t have to bend over backwards to keep both functions and no side effects in the model).
[My comment added here: see behaviour 5 of a GLC actor explained in this post.
]
Among the effects is that it goes outside the lambda calculus (the condition to be a lambda graph is global), which simplifies a lot of things, like for example the elimination of currying and uncurrying. Another effect is that it is also very much like an automaton kind of computation, only it does not rely on a predefined grid, nor on an extra, heavy handbook of how to use it as a computer.
On a more philosophical side, it shows that it is possible to do what the lambda calculus and the TM can do, but that it can also do things without needing signals, bits and states as primitives. Coming back a bit to the comparison with pure functional programming, it solves the mentioned cognitive dissonance by taking into account the change of shape (pattern? like in Kauffman’s post) of the term during reduction (program execution), even if its evaluation is an invariant during the computation (the no side effects of functional programming). Moreover, it does this without working with functions.
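[My comment added here: a textbook JavaScript illustration of what an evaluation strategy is; this is standard lambda calculus folklore, not chemlambda code. Eager evaluation computes an argument before the call, lazy evaluation wraps it in a thunk which may never be forced:

```javascript
// An argument with a visible side effect, so we can see *when* it runs.
const loudArg = () => { console.log("argument evaluated"); return 42; };

// Eager: the argument is evaluated first, even though f ignores it.
const fEager = (x) => "done";
fEager(loudArg());   // prints "argument evaluated"

// Lazy: pass a thunk; it runs only if the body forces it, which never happens here.
const fLazy = (thunk) => "done";
fLazy(loudArg);      // prints nothing
```

]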
___________________________
+Marius Buliga “there are no values in this computation” Not to disagree, but is there a distinction between GLC graphs that can be represented by a collection of possible values? For example, topological objects can differ in their chromatic, Betti, genus, etc. numbers. These are not values like those we see in the states, signals and bits of a TM, but they are a type of value nonetheless.
___________________________
+Stephen Paul King  yes, of course, you can stick values to them, but  the fact is that you can do without, you don’t need them for the sake of computation. The comparison you make with the invariants of topological objects is good! +Louis Kauffman  made this analogy between the normal form of a lambda term and such kinds of “values”.
I look forward for his comments about this!
___________________________
+Refurio Anachro  thanks again for the Permutation City reference. Yes, it is clearly related to the budding Artificial Connectomes idea of GLC and chemlambda!
It is also related to interests in Unlimited Detail 🙂 , great!
[My comment added here: see the following quote from the Permutation city wiki page

The Autoverse is an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. It is deterministic, internally consistent and vaguely resembles real chemistry. Tiny environments, simulated in the Autoverse and filled with populations of a simple, designed lifeform, Autobacterium lamberti, are maintained by a community of enthusiasts obsessed with getting A. lamberti to evolve, something the Autoverse chemistry seems to make extremely difficult.

Related explorations go on in virtual realities (VR) which make extensive use of patchwork heuristics to crudely simulate immersive and convincing physical environments, albeit at a maximum speed of seventeen times slower than “real” time, limited by the optical crystal computing technology used at the time of the story. Larger VR environments, covering a greater internal volume in greater detail, are cost-prohibitive even though VR worlds are computed selectively for inhabitants, reducing redundancy and extraneous objects and places to the minimum details required to provide a convincing experience to those inhabitants; for example, a mirror not being looked at would be reduced to a reflection value, with details being “filled in” as necessary if its owner were to turn their model-of-a-head towards it.

]

But I keep my claim that there is enough here to understand for 100 years. Consciousness is far away. Recall that electricity first appeared as a kind of life fluid (from Volta to the Frankenstein monster), but actually it has been used with tremendous success for other things.
I believe the same about artificial chemistry and computation, on one side, and consciousness on the other. (But I am ready to change this opinion if faced with enough evidence.)

___________________________

Chemical concrete machine on figshare

Just published the Chemical concrete machine on figshare. It should be cited as


Chemical concrete machine. Marius Buliga. figshare.
http://dx.doi.org/10.6084/m9.figshare.830457

The description is a bit more telling than the abstract of the article (see why by reading the suggested posts below):
This article introduces an artificial, algorithmic chemistry based on local graph rewrites applied to a class of trivalent graphs named molecules. The graph rewrites are seen as chemical reactions with enzymes. The enzymes are tagged with the name of the respective move.  This artificial chemistry is Turing complete, because it allows the implementation of untyped lambda calculus with beta reduction (but without eta reduction). This opens the possibility  to use this artificial chemistry for constructing artificial chemical connectomes.
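For a flavour of how beta reduction becomes a local graph rewrite, here is a minimal js sketch. The port names (root, vr, body, fn, arg, out) and the single-occurrence simplification are my own conventions for illustration, not the article’s notation.

```js
// Molecules as port graphs: an edge joins two ports, written "nodeId:port".
const edges = new Map();                 // symmetric: both directions stored

function link(p, q) { edges.set(p, q); edges.set(q, p); }
function peer(p) { return edges.get(p); }
function unlink(p) { const q = edges.get(p); edges.delete(p); edges.delete(q); }

// The beta move as a local rewrite: it fires when a lambda node l
// (ports root, vr, body) is wired root-to-fn to an application node a
// (ports fn, arg, out). Both nodes disappear; the body is rewired to the
// application's output and the argument to the variable occurrence
// (a single occurrence of the variable is assumed, to keep the sketch short).
function betaMove(l, a) {
  const bodyPeer = peer(`${l}:body`);
  const outPeer  = peer(`${a}:out`);
  const argPeer  = peer(`${a}:arg`);
  const varPeer  = peer(`${l}:vr`);
  [`${l}:root`, `${l}:body`, `${l}:vr`, `${a}:fn`, `${a}:arg`, `${a}:out`]
    .forEach(unlink);
  link(bodyPeer, outPeer);               // the body replaces the application
  link(argPeer, varPeer);                // the argument flows to the variable
}

// Example: (λx. f x) N  ->  f N
link("L1:root", "A1:fn");                // the redex
link("A1:arg", "N:out");                 // the argument N
link("A1:out", "ROOT:in");               // where the result goes
link("L1:body", "A2:out");               // the body is the application (f x)
link("A2:fn", "F:out");
link("A2:arg", "L1:vr");                 // the sole occurrence of x
betaMove("L1", "A1");
console.log(peer("ROOT:in"));            // "A2:out" -- (f N) is now at the root
console.log(peer("A2:arg"));             // "N:out"  -- N took the place of x
```

Note that the move never inspects anything beyond the two nodes involved, which is what makes it a plausible “chemical reaction”.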
You may read more about this here, on this open notebook/blog. The article is also available as arXiv:1309.6914 [cs.FL].
_________________________

The chemical connectome of the internet

This is a continuation of the post WWW with Metabolism. That post, in a nutshell, proposes to endow the internet connectome with a chemical connectome, by using an artificial chemistry for this artificial network, namely the graphic form of the lambda calculus formalism of the chemical concrete machine (please go read the mentioned post for details).

I’m going to do a weird but maybe instructive thing and pretend that it already happened. Why? Because I intend to argue that the internet with a chemical connectome might be a good laboratory for testing ideas about real brains.

The name “chemical connectome” is taken from the article The BRAIN Initiative: Toward a Chemical Connectome.  It means the global chemical reaction network associated with the “connectome”, which is the physical neurons-and-synapses network. Quoting from the article:

Focusing solely on neuronal electrical activity is shortsighted if the goal is to understand information processing in brains. Brain function integrally entails complex synaptic chemistries. […] New tools fuel fundamentally new conceptualizations. By and large, today’s approaches for sensing neurotransmitters are not able to approach chemical neurotransmission at the scales that will ultimately be important for uncovering emergent properties. Likewise, most methods are not highly multiplexed such that the interplay of multiple chemical transmitters can be unraveled. In addition to measuring electrical activity at hundreds or thousands of neurons simultaneously, we need to advocate for a large-scale multidisciplinary effort aimed at in vivo nanoscale chemical sensing.

Imagine, then, that we have already succeeded in implementing a (logical, artificial, human made) chemical connectome over the internet, by using CS tools and exploiting the relative simplicity of the ingredients of the internet (connected Von Neumann machines which exchange data through known formats and programming languages). We could explore this chemical connectome and we could play with it (because we defined it), even if, like in the case of the brain connectome, we can’t possibly know the www network in all its details. So we would be in a situation which is extremely complex, like a brain is, but which allows for very powerful experimentation, because it is still human made, according to human rules.