Tag Archives: algorithmic chemistry

An apology of molecular computers and answers to critics

This is what a molecular computer would look like, if seen through a magically powerful microscope. It is a single molecule which interacts randomly with other molecules, called “enzymes”, invisible in this animation.


There is no control over the order of the chemical reactions. This is the idea: to compute without control.

The way it works is like this: whenever a reaction happens, this creates the conditions for the next reaction to happen.

There is no need to use a supercomputer to model such a molecule, nor is it reasonable to try, because of the large number of atoms.

It is enough instead to find real molecular assemblies for nodes, ports and bonds, figured here by colored circles and lines.

The only computations needed are those for simulating the family of rewrites – chemical reactions. Every such rewrite involves at most 4 nodes, so the computational task is manageable.
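To make the “at most 4 nodes” point concrete, here is an illustrative sketch of one local rewrite on a toy port graph. The node types ("L", "A", "FR"), the port names and the exact rewiring are my simplified assumptions for this example, not the conventions of the chemlambda repository:

```python
# Toy port-graph "molecule": nodes carry a type, bonds join (node, port) pairs.
# The rewrite below is in the spirit of a beta-like move: it touches only
# two nodes and rewires their neighbours, everything else is untouched.

def make_molecule():
    # node id -> node type; "FR" marks a free end where the rest
    # of a larger molecule would attach
    nodes = {"l": "L", "a": "A", "e1": "FR", "e2": "FR", "e3": "FR", "e4": "FR"}
    # symmetric bond table: (node, port) -> (node, port)
    pairs = [(("l", "out"), ("a", "func")),
             (("l", "body"), ("e1", "p")),
             (("l", "var"), ("e2", "p")),
             (("a", "arg"), ("e3", "p")),
             (("a", "out"), ("e4", "p"))]
    bonds = {}
    for x, y in pairs:
        bonds[x] = y
        bonds[y] = x
    return nodes, bonds

def find_beta_site(nodes, bonds):
    """A reaction site: a lambda node whose out port bonds an application's func port."""
    for (n, p), (m, q) in bonds.items():
        if nodes[n] == "L" and p == "out" and nodes[m] == "A" and q == "func":
            return n, m
    return None

def beta_rewrite(nodes, bonds, site):
    """Fire the rewrite: delete the two matched nodes and reconnect
    their former neighbours pairwise. Purely local."""
    lam, app = site
    body, var = bonds[(lam, "body")], bonds[(lam, "var")]
    arg, out = bonds[(app, "arg")], bonds[(app, "out")]
    for n in (lam, app):
        del nodes[n]
    for key in [k for k in bonds if k[0] in site]:
        del bonds[key]
    # the lambda's body is wired to the application's output,
    # the application's argument to the lambda's variable
    bonds[body], bonds[out] = out, body
    bonds[arg], bonds[var] = var, arg
    return nodes, bonds
```

Because the pattern and the rewiring mention a bounded number of nodes, checking such a rule once, in isolation, is all the computation the formalism asks for.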

Verify once that the rewrites are correctly implemented, independently of the situation where you want to apply them; that is all.

Once such molecular compounds are found, the next task is to figure out how to build (by chemical reactions) such molecules.

But once one succeeds in building such a molecule, the rest is left to Nature’s way of computing: random, local, asynchronous.

From this stage there is no need to simulate huge molecules in order to know they work. That is something given by the chemlambda formalism.

It is that simple: translate the rewrites, which are easy, into real chemistry, then let go of the unneeded control from that point on.

This animation is a screencast of a part of the article Molecular computers,
and everything can be validated (i.e. verified on your own) by using the chemlambda repository.

Now I’ll pass to a list of criticisms which, faced with the evidence, look uninformed:
1. Chemlambda is one of those rewriting systems everybody knows. An ignorant claim: while it is true that some rewrites appear all over the place, from string theory to knot theory to category theory to the geometry of interaction, the family of graphs considered is not the same, because these graphs are purely combinatorial objects and they don’t need a global embedding in a schematic space-time, as all the other formalisms do. Moreover, the choice of the rewrites is such that the system works only by local rewriting, with no global control over the cascade of rewrites. No other formalism from the family does that.

2.  It is well known that all this is already done in the category theory treatment of lambda calculus.

False. If one really reads what is done in category theory with lambda calculus, one quickly figures out that it can’t do much for untyped lambda-beta calculus, that is, without eta reduction. This is mentioned explicitly in Barendregt, for example, but the hype around categories and lambda calculus is so pervasive that people believe more than what is actually there.

3.  Chemical computing is old stuff: DNA computing, membrane computing, the chemical abstract machine, algorithmic chemistry.

Just because it is chemical computing, it does not mean that it is in the family mentioned.

The first name of chemlambda was “chemical concrete machine”, and there one finds comparisons with the chemical abstract machine
(btw, I see that some people now discover “catalysts” without giving credit in their written papers).
The cham is a formalism working with multisets of molecules, not with individual ones, and the computation is done by what corresponds to lab operations (splitting a solution in two, heating, cooling, etc.).
The membrane computing work is done around membranes which enclose containers of multisets of molecules, the membranes themselves being abstract concepts of a global nature, while in reality, as well as in chemlambda, everything is a molecule. Membranes exist in reality, but they are made of many molecular compounds.
DNA computing is an amazing research subject, which may be related to chemlambda if there is a realization of chemlambda nodes, ports and bonds, but not otherwise, because there is not, to my knowledge, any model in DNA computing with the properties: individual molecules, random reactions, no lab operations.
Algorithmic chemistry is indeed very much related to chemlambda, by the fact that it proposes a chemical view on lambda calculus. But from this great insight, the paths are very different. In algorithmic chemistry the application operation from lambda calculus represents a chemical reaction and the lambda abstraction signals a reactive site. In chemlambda the application and lambda abstraction correspond to atoms of molecules. Besides, chemlambda is not restricted to lambda calculus: only some of the chemlambda molecules can be put in relation with lambda terms, and even for those, the reactions they enter don’t guarantee that the result is a molecule for a lambda term.

Conclusion: if you are a chemist, consider chemlambda; there is nothing like it already proposed. The new idea is to let control go and instead chain the randomly appearing reactions by their spatial patterns, not by lab operations, nor by impossibly sophisticated simulations.
Even if in reality there would be more constraints (coming from the real spatial shapes of the molecules constructed from these fundamental bricks), this would only influence the weights of the random encounters with the enzymes, thus not modifying the basic formalism.
And if it works in reality, even only in situations where there are cascades of tens of reactions, not hundreds or thousands, even that would be a tremendous advance in chemical computing, when compared with the old idea of copying boolean gates and silicon computer circuits.


Appeared also in the chemlambda collection microblog


Suppose this. What then?

 I want to understand how a single molecule interacts with others, chemically. You have to agree that this is a worthy goal.
What I say is this. By using a collection of made-up molecules and made-up chemical reactions, I proved that with the stupid deterministic algorithm I can do anything, and by experiment it seems that if I design the initial molecule well, then I can do anything I set out to do with the stupid random algorithm (a molecule which randomly encounters enzymes which rewrite it by chemical reactions). For me, the molecule is not the process; it is just a bunch of atoms and bonds. But I proved I can do anything with it, without any lab supervision. Which is relevant, because any real cell does that. It has no supervision, no goals, no understanding; it is nothing else than a collection of chemicals which interact randomly.
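The “stupid random algorithm” can be sketched in a few lines. This is only an abstract skeleton under my own naming: `find_sites` and `apply_rewrite` are placeholders for whatever pattern-matching and rewriting the concrete formalism supplies; the point is that there is no scheduler and no goal, just one available reaction site picked at random and fired, over and over:

```python
import random

def random_reduction(molecule, find_sites, apply_rewrite, max_steps=1000, seed=0):
    """Repeatedly pick one available reaction site at random and fire
    its rewrite, with no global plan or supervision."""
    rng = random.Random(seed)
    for _ in range(max_steps):
        sites = find_sites(molecule)
        if not sites:
            break                 # no reaction site left: the molecule is inert
        molecule = apply_rewrite(molecule, rng.choice(sites))
    return molecule
```

When the initial molecule is well designed, the random choices do not matter for the final result, which is exactly the claim being made here.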

My hypothesis is the following. There is a transformation from the made-up chemlambda molecules to real chemistry, which TRANSFORMS:
– node into real molecule
– port into real molecule
– bond into real molecule

and some other real molecules called here “enzymes”, one for each type of graph rewrite,

such that

– a graph rewrite G which replaces a configuration LT of two nodes and 1 bond with a configuration RT TRANSFORMS into the chemical reaction between the enzyme G and the transformation of LT into real chemicals, which gives the transformation of RT into real chemicals plus the enzyme G (perhaps carrying away some other reaction products, to conserve the number of atoms).

The argument for that hypothesis is that the rewrites are so simple, compared with the real chemistry of biomolecules, that such reactions have to exist.

This is explained in the article Molecular computers.

Suppose that the hypothesis is confirmed. Either by identifying the TRANSFORM from scratch (i.e. by using chemistry knowledge to identify classes of reactions and chemicals which can model chemlambda), or by finding the enzymes G and the real molecules corresponding to node, port and bond in some fundamental biochemical processes (that would be even more wonderful).

Suppose this. What then?

Then, say I have the goal to design a molecule which does something inside a cell, when injected in the body. It does it by itself, in the cell medium. What it does can always (or in principle) be expressed as an algorithm, as a computation.

I use chemlambda and TRANSFORM to design the molecule and check that once I have it, it does the job. It is of course a problem to build it in reality, but for this I have the printer of Craig Venter, the digital biological converter https://youtu.be/DVL1nL3SU6I .

So I print it and that’s all. When injected in the body, once arrived in the cell, it does the job.

Other possibilities would open up in case some formalism like chemlambda (i.e. using individual molecules and rewrites, along with trivial random algorithms) is identified in real biochemistry. This would enormously help the understanding of biochemical processes: instead of working empirically, like now, at the level of functions of molecules (knowing well that the same molecule does different things in different contexts and that the molecule-function association is very fragile in biology), we might work the other way around, from using functions as black boxes to being able to build functions. Even go outside functions and understand chemistry as computation directly, not only as a random medium for encoding our theoretical notions of computation.

See more about this at the chemlambda index


A citizen science project on autonomous computing molecules

 Wanted: chemists, or people who work(ed) with chemical molecules databases!
[update:  github.io version]
The  chemlambda project proposes the following. Chemlambda is a model of computation based on individual molecules, which compute alone, by themselves (in a certain well defined sense). Everything is formulated from the point of view of ONE molecule which interacts randomly with a family of enzymes.
So what?
Bad detail: chemlambda is not a real chemistry, it’s artificial.
Good detail: it is Turing universal in a very powerful sense. It does not rely on boolean gates kind of computation, but on the other pillar of computation which led to functional programming: lambda calculus.
So instead of molecular assemblies which mimic a silicon computer hardware, chemlambda can do sophisticated programming stuff with chemical reactions. (The idea that lambda calculus is a sort of chemistry appeared in the ALCHEMY (i.e. algorithmic chemistry) proposal by Fontana and Buss. Chemlambda is far more concrete and simple than Alchemy, and different in principle, but it nevertheless owes to Alchemy the idea that lambda calculus can be done chemically.)
From here,  the following reasoning.
(a) Suppose we can make this chemistry real, as explained in the article Molecular computers.  This looks reasonable, based on the extreme simplicity of chemlambda reactions. The citizen science part is essential for this step.
(b) Then it is possible to take Craig Venter’s Digital Biological Converter idea further (such converters already exist) and enhance it to the point of being able to “print” autonomous computing molecules. These can do anything (amenable to a computation, so literally anything), in the sense that they can do it alone, once printed.
The first step of such an ambitious project is a very modest one: identify the ingredients in real chemistry.
The second step would be to recreate with real chemistry some of the examples which have been already shown as working, such as the factorial, or the Ackermann function.
Already this second step would be a huge advance over the current state of the art in molecular computing. Indeed, compare a handful of boolean gates with a functional programming like computation.
If it is, for example, a big deal to build with DNA some simple assemblies of boolean gates, then surely it is a bigger deal to be able to compute the Ackermann function (which is not primitive recursive, like the factorial) as the result of a random chemical process acting on individual molecules.
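For reference, here is the Ackermann-Péter function mentioned above. It is total and computable, but it grows too fast to be primitive recursive, unlike the factorial:

```python
import sys

# Deep recursion even for tiny inputs is exactly what makes Ackermann
# interesting: raise the limit so small arguments still evaluate.
sys.setrecursionlimit(100_000)

def ackermann(m, n):
    """Ackermann-Péter function: computable, total, not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# ackermann(2, 3) == 9, ackermann(3, 3) == 61; ackermann(4, 2) already
# has 19729 decimal digits.
```

Computing even small values of this function as the result of random chemical reactions on an individual molecule would be a qualitatively different achievement from wiring boolean gates.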
It looks perfect for a citizen science project, because what is missing is a human-distributed search in existing databases, combined with a call for the realization of possibly simple proof-of-principle chemical experiments based on an existing simple and rigorous formalism.
Once these two steps are realized, then the proof of principle part ends and more practical directions open.
Nobody wants to compute factorials with chemistry, silicon computers are much better for this task. Instead, chemical tiny computers as described here are good for something else.
If you examine what happens in this chemical computation, then you realize that it is in fact a means towards the self-building of chemical or geometrical structure at the molecular level. The chemlambda computations are not done with numbers, or bits, but by structure processing. And this structure processing is the real goal!
Universal structure processing!
In the chemlambda vision page this is taken even further, towards the interaction with the future Internet of Things.

Model of computation vs programming language in molecular computing

An interesting discussion (?) started in the comments of this John Baez  G+ post concerns differences between “model of computation” and “programming language” denominations (in that post, in particular, for Petri nets).

I reproduce here what I think that are relevant bits for this discussion and later, after possibly several updates, I shall try to write something useful by using these bits.

1.  Turing machine and lambda calculus are both models of computation, but lambda calculus is also  a programming language and Turing machine is not.

2. Zachariah Hargis makes the point of comparing this model of computation  vs  programming language distinction as related to the one  made by Leibniz between calculus ratiocinator  and  lingua characteristica. (Among other references, note to self to explore further.)

3. Chemical reaction networks (CRNs) are one fundamental ingredient of molecular computing, no matter what formalization of CRNs is used. Don’t believe that all the “computational part” of a CRN is a Petri net (because it is often very important which molecules and reactions are concretely involved in the CRN, not only the abstract number of species and the reaction rates between them).

4. Petri nets, as used in chemistry, are a tool for getting quantitative information from CRNs, once the CRNs are designed by other means. Petri nets might be useful for thinking about CRNs, but not necessary for designing CRNs.

5.  CRNs which are designed by using, or which in some sense embody, lambda calculus are a much more interesting path towards a “programming language”, and a classical one (Fontana and Buss’ Algorithmic Chemistry), than the more studied engineering-style implementation of imperative programming and the forced copy-paste of TM thinking into bio-computing.

6. Among the specifications for a “programming language for chemistry” are the following:

  • (a) geometrical (in particular no use of currying, along with more obvious features such as being parallel and asynchronous, the latter being achievable already by many well-known, non-exotic programming languages),
  • (b) purely syntactic (in particular no name management, nor input-output signalling),
  • (c) (maybe an aspect of (b), but not sure) no use of an evaluation strategy in computation.

(The chemical concrete machine satisfies these requirements and moreover it contains lambda calculus, thus showing that such a “language” is possible. However, the chemical concrete machine is based on a made-up, artificial chemistry, thus it provides only a proof of principle for the existence of such a “language”. Or is it? Biochemists’ help is needed to identify, or search for, real chemical reactions which could be used for implementing the chemical concrete machine in reality.)

7. The problem of tacit axioms in the history of the Church-Turing thesis might be especially relevant for biochemists, and it could also be mentioned as an argument in favour of making a distinction between “model of computation” and “programming language”: any model of computation uses some tacit axioms, while a programming language does not (such tacit axioms are used only at the semantic level, when making (human) sense of the results and workings of the programming language). For biochemists, not using such tacit axioms is a must when they try to find scientifically valid explanations. CS does well to ignore them, in most cases.


Metabolism of loops (a chemical reaction network in the chemical concrete machine)

In the following figure you see a chemical reaction network in the chemical concrete machine which involves only “+” enzymes.



  • the molecule from the left upper corner corresponds to combinator \Omega = (\lambda x . (xx)) (\lambda x . (xx))
  • for each molecule I marked the available reaction sites, with dashed red closed curves, along with the name in red of the enzyme which is interested in the respective reaction site,
  • for the chemical reactions, figured with blue arrows, I put the name of the move which corresponds to the respective chemical reaction, with an added “+” to indicate that the reaction is unidirectional, as if only “+” enzymes are available (that is, I wrote “DIST^{+}” instead of “\delta^{+}” and so on; see the notations used in the chemical concrete machine for details),
  • whenever there are non-overlapping reaction sites, we can perform in parallel the moves (reactions),
  • but in some places we have overlapping reaction sites, like in the case of the molecule from the 3rd row, left, where a \beta reaction site and a DIST reaction site overlap.
  • In case of overlapping reaction sites there are multiple possibilities, which produce branches in the chemical reaction network,
  • I have not used any elimination of loops,
  • several molecules from the figure don’t correspond to lambda calculus terms, thus what  is figured is not a representation of the usual fact that the combinator \Omega does not have a normal form (one which cannot be reduced further),
  • in the middle of the figure we see a loop-producing cycle, hence the name,
  • curiously, by going outside lambda calculus, we can reduce \Omega (at left upper corner) to a “normal” form (right lower corner), i.e. a loop and a molecule which does not have any “+” reaction sites. (The green fork node of that molecule is a fan-out node and the molecule is like a fan-out gate with one output curling back to the input of the fan-out, but remember that no signals circulate through the arrows 🙂  )
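The fact used throughout this figure, that \Omega = (\lambda x . (xx)) (\lambda x . (xx)) beta-reduces to itself and so has no normal form in lambda calculus, can be checked at the term level with a tiny sketch. The tuple encoding of terms and the restriction to a top-level beta step are my simplifications, sufficient because Omega's only redex sits at the root:

```python
# Terms as tuples: ("var", name), ("lam", name, body), ("app", fun, arg).

def subst(term, name, value):
    """Substitute value for the free variable `name` in term.
    No capture handling is needed for this closed example."""
    tag = term[0]
    if tag == "var":
        return value if term[1] == name else term
    if tag == "lam":
        return term if term[1] == name else ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def beta_step(term):
    """One top-level beta step, or None for a term with no root redex."""
    if term[0] == "app" and term[1][0] == "lam":
        _, name, body = term[1]
        return subst(body, name, term[2])
    return None

W = ("lam", "x", ("app", ("var", "x"), ("var", "x")))   # \x.(xx)
OMEGA = ("app", W, W)                                   # (\x.xx)(\x.xx)
```

One beta step sends OMEGA back to OMEGA, so term-level reduction cycles forever; the chemical concrete machine version above escapes this only by going outside lambda calculus.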


See also:


UPDATE:  For clarity, here is the usual cycle of reductions of the combinator \Omega, seen in graphic lambda calculus (but with the drawing conventions of the chemical concrete machine):


(A similar figure appears after the proof of Theorem 3.1 in arXiv:1305.5786v2 [cs.LO].)

In graphic lambda calculus we have the GLOBAL FAN-OUT move, which, as the name says, is a global move. In the chemical concrete machine we have only local moves.  Here is how the same cycle is achieved in the chemical concrete machine.


You see a move (reaction) labelled “(multiple) CO-ASSOC”. The reason is that several CO-ASSOC moves are needed to pass from the molecule at the right of the 2nd row to the one in the 3rd row. Alternatively, the same can be done with a SWITCH move, which is a succession of FAN-IN(+), CO-COMM(+) and FAN-IN(-), therefore unpleasant as well. Moreover, if we invent a SWITCH enzyme (i.e. if we accept SWITCH as a new move, which does not modify the whole chemical machine, because SWITCH is a consequence of the other moves), then we have an explosion of places where the SWITCH enzyme could act.

In conclusion, the usual endless reduction of \Omega in lambda calculus is possible, but highly unlikely, in the chemical concrete machine, and only in the presence of CO-ASSOC or SWITCH enzymes, and moreover under tight control of the places where these enzymes act.


WWW with Metabolism

While I was trying to convince biochemists (I’m still trying) to use the Chemical concrete machine for a variety of goals, from bio-computing to understanding brains, Stephen Paul King came up with an awesome suggestion, which evolved into the following idea:

The WWW is an artificial, human-made network and the Chemical concrete machine (chemlambda) is artificial, human-made, computing friendly chemistry. Let’s use the chemical concrete machine to awake the net by giving it a metabolism.

Together with Louis Kauffman, we are trying to make some fine mathematics with real world implications out of it. Care to join? Then send me or Stephen a message.

Here is a list of arguments in favor of this idea:

  • it is much simpler to use a made-up, simplified chemistry on a network much simpler than brains
  • both the WWW and the chemical concrete machine (which is Turing universal) belong to the same (computational) universe
  • in silico experiments  with WWW + chemlambda  correspond to in vivo experiments with wet neural networks
  • it is scalable
  • may have lots of real life  CS applications
  • it’s mathematically friendly, come on pure mathematicians, you are needed
  • it’s based on lambda calculus, so it’s already incredibly cool, as adepts of functional programming might confirm.


Oh, don’t forget the logo of the chemlambda and graphic lambda calculus:


where you can see two lambdas arranged into a double helix. It’s better than this [source]:

(the Cyberdyne logo) which features a Y.


UPDATE: see the more recent post   Fraglets, bionets, and the www with metabolism  for relevant research already done related to the www with metabolism, which could be very useful.

Example: if-then-else in the chemical concrete machine

… along with a short discussion about what the chemical concrete machine can compute. I start with this and then go to the example.

Until now I proved that the graph rewriting system called the “chemical concrete machine” can do combinatory logic. Therefore it can compute anything (that a Turing machine can). I shall be more specific, because to the eyes of somebody who is not used to functional programming, this might seem outrageous.

It can compute anything which can be computed by using any programming language based on lambda calculus, like Haskell or Scala. And then some more (as regards only the things those languages can do without using any syntactic sugar, of course). The procedure is straightforward: translate the (essentially) combinatory logic terms into graphs and then let the enzymes (moves) do their magic. I shall give the example of the if-then-else structure a bit later.
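As a reminder of why “can do combinatory logic” implies Turing universality: the S and K combinators alone generate every computable function on suitable encodings. Here they are as plain Python closures, with the identity derived rather than taken as primitive:

```python
# The basic combinators of combinatory logic as Python closures.
S = lambda f: lambda g: lambda x: f(x)(g(x))   # S f g x = f x (g x)
K = lambda x: lambda y: x                      # K x y = x

# The identity is derivable: S K K x = K x (K x) = x
I = S(K)(K)
```

The chemical concrete machine represents such combinators as graphs and the reduction steps as enzyme-driven reactions, so whatever S and K can express, the reactor can in principle compute.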

There are, of course, interesting questions to ask, among them:

  • do there exist real molecules and real enzymes which react with one another as in the chemical concrete machine formalism? Here I need your help, dear chemists.
  • is this an example of a sequential or parallel computation?
  • what evaluation procedure is used in the chemical concrete machine?

The second question is the easiest to answer: it is a parallel computation. The molecules (graphs) are mixed in a reactor with a choice of enzymes and then all possible reactions can occur in parallel, at all times. (Realistic models will have a probabilistic part attached, but there is nothing special here: the chemical concrete machine, initialized with molecules and enzymes, is just a chemical reaction network, so any procedure for attaching the probabilistic part to a chemical reaction network can also be used in conjunction with the chemical concrete machine.)

But we can also prepare the initial molecules such that the computation is sequential. It suffices to use zippers. More on this later. For the moment, it is worth mentioning that the chemical concrete machine (as well as graphic lambda calculus) is already proven to be more powerful than lambda calculus. Why? For at least two reasons:

  • lambda calculus, or combinatory logic, are just sectors of the formalisms, i.e. they correspond to only a part of what the formalisms can do. There are other sectors, as well, for example the tangle diagram sector.
  • they are naturally parallel, as long as we use only local moves, as is the case for the chemical concrete machine. Indeed, I wrote that a graph is a “molecule”, but this is only a way of speaking, because molecules are better identified with connected graphs. In these formalisms, however, the graphs are not supposed to be connected: any (finite) collection of graphs (i.e. molecules) is also a graph. The moves being local, no interference appears in the simultaneous application of several instances of the (local) moves in different places, for different molecules (connected subgraphs), or even for the same molecule, as long as the places where the moves are applied differ from one another. On the other hand, lambda calculus and combinatory logic are naturally sequential.

The third question, concerning the evaluation procedure, will be explored in further posts. Care has to be taken here because there are no variables in these formalisms (which translates into less demand for different species of real molecules, otherwise needed only for naming variables). So it is about the order of moves, right? The short answer is that it depends: sometimes the computation done by the chemical machine can be seen as greedy evaluation, sometimes as lazy evaluation.

Let me make again the point that the chemical concrete machine formalism should somehow be seen as part of the beautiful idea of algorithmic chemistry. So it’s not so unearthly.

Finally, it is well known that lambda calculus and Turing machines are the two pillars of computation. For historical reasons chemists seem to concentrate only on the emulation of Turing machines (please correct me if I’m wrong). The main idea of algorithmic chemistry, as far as I understand, is that a sufficiently complex chemical network has the capacity to do lambda calculus. But if you are determined to use only Turing machines for chemical computing then, supposing the algorithmic chemistry idea is true, you have to translate the natural language of lambda calculus into the Turing machine frame. This is a tarpit; very fast, it becomes very hard to follow. Instead, why not use lambda calculus as it is for the really powerful applications of chemical computing, and in parallel use one of the excellent languages for simulating chemical computing in silico.


The if-then-else construct has already been explored in graphic lambda calculus, see Teaser: B-type neural networks in graphic lambda calculus (II). Here I shall do a much simpler example, just by making the right translations, as explained at the beginning of this post.

In lambda calculus, there are terms called TRUE, FALSE and IFTHENELSE, which are the Church encodings of the booleans true, false and if-then-else. The associated graphs in the chemical concrete machine are:


Take two other molecules A, B, with one exit each; in particular you may think that they correspond to terms in lambda calculus (but this is not mandatory). Then IFTHENELSE TRUE A B should become A. In the chemical concrete machine, with only beta+ enzymes, look what happens:


Along this chain of reactions, there is no choice other than the one in the figure. Why? Because essentially at every step there is only one reaction site available to the enzyme beta+ (in the region of the reactor represented in the figure, of course). The result is, unsurprisingly, compatible with the lambda calculus version, with the exception that A and B are not supposed to be (graphs corresponding to) lambda terms. They can be anything, for example from the family of “other molecules”.

In lambda calculus, IFTHENELSE FALSE A B should become (by reductions) B. In the chemical concrete machine, look what happens:


The previous remarks apply here as well.
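The Church encodings named above can be mirrored as ordinary Python lambdas; the chemical concrete machine performs the corresponding graph reductions, while here the same selections happen by plain function application:

```python
# Church booleans: a boolean is a function that chooses between two arguments.
TRUE = lambda a: lambda b: a            # selects its first argument
FALSE = lambda a: lambda b: b           # selects its second argument

# if-then-else just hands the two branches to the boolean
IFTHENELSE = lambda p: lambda a: lambda b: p(a)(b)
```

Note that, just as in the graph version, the branches passed to IFTHENELSE need not be booleans or any particular kind of term; they can be anything.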

With a little bit of imagination, if we look closer at what TRUE and FALSE are doing, then we can adapt IFTHENELSE to what I’ve called a B-type NN synapse and obtain a molecule which releases, upon detection of a certain molecule, the medicine A, and upon detection of another, the medicine B.


Return to the chemical concrete machine tutorial.