After a very interesting hangout with Eugenio Battaglia and Gerd Moe-Behrens, an idea emerged: to use functional programming for biological computing. It is not straightforward to explain, so I am using this post to clarify a bit what my understanding of the problem is.
As background for this description, consider that I am a mathematician, which is a mixed blessing: even if some ideas seem easy in mathematical terms, they turn out to be hard to explain without explicit recourse to mathematics. On the other hand, I am ignorant of chemistry and biology, and therefore handicapped in discussions with specialists outside my narrow mathematical world.
In the following I shall describe things from my point of view, within the mentioned constraints which shape my understanding. I am not sure whether I have given the right description of the gist of the discussion.
1. Apparently, as Gerd Moe-Behrens explains in The biological microprocessor, or how to build a computer with biological parts, there is a trend to use the architecture of a silicon computer for building a biological computer. In particular, this means trying to build boolean logic gates, or memory. It is an important observation that whatever happens in a living cell that resembles computation (in a vaguer sense than Turing computation) is not necessarily implemented as in a silicon computer. Instead, synthetic biology tries to import concepts from silicon computers into the bio realm, mainly because these concepts are already familiar. Hence the emphasis on bits and boolean logic, although not an exclusive one (other studies concern, for example, membrane computing, neural networks, or continuously evolving dynamical systems).
2. On the other hand, functional programming is one of the main programming paradigms, a very elegant one, receiving more and more attention in the competition with the better known imperative programming. Now, the evaluation of boolean circuits, which is taken as a role model in many studies of biological computing, is more natural in the imperative programming paradigm. Functional programming, however, has its roots in the lambda calculus, one of the two pillars of computation, along with the Turing machine (of equivalent power, but with a different philosophy behind it).
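To see how far from gates and bits the lambda calculus view is, here is a small sketch in Python (on a silicon computer, of course, not in a cell): the standard Church encodings compute with "booleans" using nothing but functions and application — no logic gates anywhere. The decoder `as_bool` is just a helper for illustration.

```python
# Church encodings: booleans as functions. A "true" selects its first
# argument, a "false" its second; connectives are built by application alone.

TRUE  = lambda x: lambda y: x       # selects the first argument
FALSE = lambda x: lambda y: y       # selects the second argument

AND = lambda p: lambda q: p(q)(FALSE)   # if p then q else false
OR  = lambda p: lambda q: p(TRUE)(q)    # if p then true else q
NOT = lambda p: p(FALSE)(TRUE)          # swap the branches

def as_bool(b):
    """Decode a Church boolean into a Python bool (for display only)."""
    return b(True)(False)

print(as_bool(AND(TRUE)(FALSE)))   # False
print(as_bool(OR(FALSE)(TRUE)))    # True
```

The point is not that this is efficient, but that "boolean computation" falls out of application alone, with no circuit in sight.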
3. In 1994, Fontana and Buss, in a pair of articles, introduced the idea that lambda calculus is a kind of natural formalization of the bare bones of chemistry. Molecules are seen as lambda terms, the collision of molecules is seen as the application operation of lambda calculus, and the abstraction operation of lambda calculus “captures the role of functional groups embedded in the reactants” [source, p. 11].
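A toy illustration of this picture, under the (obviously crude) assumption that molecules can be played by plain Python functions standing in for lambda terms: a collision of two molecules is just application, and the product re-enters the soup. This is only a cartoon of the Fontana–Buss setup, not their actual system.

```python
import random

# Two classic combinators as "molecules":
I = lambda x: x                    # identity
K = lambda x: lambda y: x          # constant-maker

def collide(f, g):
    """A collision: apply molecule f to molecule g, producing a new molecule."""
    return f(g)

random.seed(0)                     # reproducible toy run
soup = [I, K, I, K]
for _ in range(10):
    f, g = random.sample(soup, 2)  # pick two molecules that collide
    product = collide(f, g)
    if callable(product):          # keep products that are still "molecules"
        soup.append(product)

print(len(soup))                   # 14: every collision here yields a molecule
```

Since `I` and `K` (and their products) always return callables, every collision in this toy soup produces a new molecule.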
4. By the previous points, it is only natural to try to use the functional programming paradigm for biological computing. This means, in particular, no longer chasing logic gates and boolean circuits.
5. It is interesting to notice that Fontana and Buss use lambda calculus with eta reduction, i.e. terms (or molecules) are identified with functions, in the sense that a molecule A is identified with the function taking B to the application AB. Functional programming, as far as I understand, also uses this eta reduction, being more interested in types.
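To make the eta rule concrete, here is a small Python sketch on a throwaway term representation (tuples standing in for lambda terms; the encoding is mine, purely illustrative). Eta says that lam x. (M x) is the same as M, provided x does not occur free in M — exactly the identification of a term with the function it computes.

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

def free_vars(t):
    """The set of free variable names in term t."""
    if t[0] == "var":
        return {t[1]}
    if t[0] == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def eta(t):
    """One eta step at the root:  lam x. (M x)  ->  M,  if x not free in M."""
    if (t[0] == "lam" and t[2][0] == "app"
            and t[2][2] == ("var", t[1])
            and t[1] not in free_vars(t[2][1])):
        return t[2][1]
    return t

m = ("lam", "x", ("app", ("var", "f"), ("var", "x")))
print(eta(m))    # ('var', 'f'): the wrapper collapses to the function itself
```

Note the side condition: lam x. (x x) does not eta-reduce, because x is free in the body's function part.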
6. Independently, I built graphic lambda calculus, which is even more powerful than lambda calculus. One of the interesting parts of graphic lambda calculus is that it makes clear that the beta reduction (the main reduction move in lambda calculus) is local in nature (read: “simple to implement in reality”), while the eta reduction, as innocuous as it might look to our powerful brains, is in fact a global reduction move (here you see the eta reduction in graphic lambda calculus). Therefore, I believe that “extreme” functional programming (i.e. without extensionality) is more likely to be implementable in reality (i.e. outside a silicon computer).
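By contrast with eta, a beta step touches only one application node and the abstraction under it — everything happens locally at that spot. A hedged Python sketch, on the same kind of throwaway tuple encoding (substitution here is naive and ignores variable capture, which a real implementation must handle):

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

def subst(t, name, arg):
    """Naive substitution t[name := arg]; ignores variable capture."""
    if t[0] == "var":
        return arg if t[1] == name else t
    if t[0] == "lam":
        return t if t[1] == name else ("lam", t[1], subst(t[2], name, arg))
    return ("app", subst(t[1], name, arg), subst(t[2], name, arg))

def beta(t):
    """One beta step at the root:  (lam x. M) N  ->  M[x := N]."""
    if t[0] == "app" and t[1][0] == "lam":
        return subst(t[1][2], t[1][1], t[2])
    return t

redex = ("app", ("lam", "x", ("app", ("var", "x"), ("var", "x"))), ("var", "y"))
print(beta(redex))   # ('app', ('var', 'y'), ('var', 'y'))
```

Checking whether the rule applies needs only the root node and its left child; nothing like the global "x not free in M" condition of eta is required to recognize the redex.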
7. In graphic lambda calculus we work with graphs, some of them corresponding to lambda terms. We could identify these with chemical molecules, as is done in algorithmic chemistry. But there are some differences between algorithmic chemistry and my chemical concrete machine proposal, namely that the application and abstraction operations of lambda calculus are, in the concrete machine, ingredients of molecules, while moves (reductions) are assimilated to enzymes — thus reductions are certain chemical reactions.
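A hypothetical sketch of the “moves as enzymes” idea in Python — the molecule and enzyme names below are invented for illustration and are not part of the actual chemical concrete machine. The only point being made is that a reduction fires as a reaction when its enzyme is present in the soup, and the enzyme survives the reaction, like a catalyst.

```python
from collections import Counter

# A reaction: (name, consumed multiset, produced multiset).
# "redex", "reduct" and "beta_enzyme" are made-up species names.
REACTIONS = [
    ("beta", Counter({"redex": 1, "beta_enzyme": 1}),
             Counter({"reduct": 1, "beta_enzyme": 1})),  # enzyme is recovered
]

def step(soup):
    """Fire the first applicable reaction, if any; return (soup, fired name)."""
    for name, consumed, produced in REACTIONS:
        if all(soup[m] >= n for m, n in consumed.items()):
            return soup - consumed + produced, name
    return soup, None

soup = Counter({"redex": 2, "beta_enzyme": 1})
soup, fired = step(soup)
print(fired, dict(soup))   # one redex reacted, one reduct appeared, enzyme kept
```

Without the enzyme present, the same soup of redexes is inert — the reduction move is tied to a chemical species, not built into the molecules themselves.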
8. This leads us to the last point: it is intriguing to ask whether “extreme” functional programming (i.e. functional programming without extensionality) could be implemented in biological computing. The advantage of this extreme functional programming paradigm is that such an implementation might be much simpler than mimicking a silicon computer, because lambda calculus already looks like a kind of chemistry. Another advantage, coming from the fact that the chemical concrete machine uses the graphic lambda calculus formalism, is that geometric operations (like bonding two molecules together, or performing reduction steps in parallel) are simpler to describe in graphic lambda calculus than in the usual lambda calculus.
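The “reduction steps in parallel” remark can be made concrete even in ordinary lambda calculus terms. The Python sketch below (again a throwaway tuple encoding, with naive capture-ignoring substitution) fires beta at every redex met in one bottom-up pass, so disjoint redexes reduce simultaneously — the sort of thing that becomes a purely geometric operation on graphs.

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

def subst(t, name, arg):
    """Naive substitution t[name := arg]; ignores variable capture."""
    if t[0] == "var":
        return arg if t[1] == name else t
    if t[0] == "lam":
        return t if t[1] == name else ("lam", t[1], subst(t[2], name, arg))
    return ("app", subst(t[1], name, arg), subst(t[2], name, arg))

def parallel_beta(t):
    """One parallel pass: fire beta at every redex found on the way up."""
    if t[0] == "var":
        return t
    if t[0] == "lam":
        return ("lam", t[1], parallel_beta(t[2]))
    f, a = parallel_beta(t[1]), parallel_beta(t[2])
    if f[0] == "lam":
        return subst(f[2], f[1], a)   # fire the redex at this node
    return ("app", f, a)

# Two independent redexes, both gone after a single parallel pass:
term = ("app", ("lam", "x", ("var", "x")),
               ("app", ("lam", "y", ("var", "y")), ("var", "z")))
print(parallel_beta(term))   # ('var', 'z')
```

In the tree encoding the pass has to thread through the whole term; on graphs, disjoint redexes really are disjoint pieces of the molecule, which is why the parallel description is simpler there.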
The disadvantage is that lambda calculus without extensionality is hard to understand. It is already hard to understand functional programming for someone with an imperative programming mindset. Without extensionality, it is even harder not to fall into habits of thinking which might be misleading. I know this, because two years ago all this lambda calculus was totally opaque to me. Why not use equality? Why not use functions? Is this mathematics or some crazy invention of logicians? Now I know better, of course. But to somebody used to boolean logic and (like me) to functional analysis, it looks crazy at first. If you go further, then you start to love it.