This post introduces **chemlambda v2**. It continues from the last post, which showed that chemlambda v1, even though it has only local rewrites, does not work well when used with the dumbest possible reduction algorithms.

Nature has to work with the dumbest algorithms, or else we live in a fairy tale.

**Chemlambda v2 is an artificial chemistry**, in the following sense:

- it is a graph rewrite system over oriented fatgraphs made of a finite number of nodes, from the following list: 5 types of 3-valent nodes: A (application), L (lambda abstraction), FO (fanout), FI (fanin), FOE (external fanout); 1 type of 2-valent node: Arrow; and 3 types of 1-valent nodes: FRIN (free in), FROUT (free out), T (termination). Compared to chemlambda v1, there is one new node, the FOE. The nodes, not the rewrites, are described in this early explanation called Welcome to the soup. (I updated the links so that they no longer point to this very early gallery of examples, which mixes chemlambda v1 and chemlambda v2 examples. However, if you like it, here it is.)
- the rewrites CO-COMM and CO-ASSOC of chemlambda v1 are no longer available; instead there are several new DIST rewrites: FO-FOE, L-FOE, A-FOE, FI-FO, and a new beta-like rewrite FI-FOE. As in chemlambda v1, the patterns of the rewrites fit the interaction combinator rewrites if we forget the orientation of the edges, but the 3-valent nodes do not have a principal port, so they do not form interaction nets. Moreover, there are conflicts among the rewrites, i.e. configurations of 3 nodes in which one node belongs to two pairs of nodes which may each admit a rewrite. The order of application of rewrites may matter for such conflicts.
- there is an algorithm for applying the rewrites, which is either the deterministic greedy algorithm with a priority list of rewrites (for example, beta rewrites have priority over DIST rewrites whenever there is a conflict), or the random application algorithm.
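To make the random application algorithm concrete, here is a minimal Python sketch. It is not the chemlambda implementation (the awk sources have final authority); it assumes a toy, mol-file-like encoding where a node is a pair (type, list of port edge names) and two ports sharing an edge name are connected, and it implements only the BETA rewrite. The function names (`find_beta`, `random_reduce`) and the port conventions are illustrative assumptions.

```python
import random

# Toy molecule encoding (assumed, mol-file-like):
# a node is (type, [edge names]); ports sharing an edge name are connected.

def find_beta(mol):
    """Find (i, j) where the third port of an L node i meets the
    first port of an A node j: the BETA pattern (assumed convention)."""
    for i, (t1, p1) in enumerate(mol):
        if t1 != "L":
            continue
        for j, (t2, p2) in enumerate(mol):
            if t2 == "A" and p1[2] == p2[0]:
                return (i, j)
    return None

def apply_beta(mol, i, j):
    """BETA as reconnection: L[a,b,c] + A[c,d,e] -> Arrow[a,e] + Arrow[d,b].
    The two matched nodes disappear; only the wiring changes."""
    a, b, _ = mol[i][1]
    _, d, e = mol[j][1]
    rest = [n for k, n in enumerate(mol) if k not in (i, j)]
    return rest + [("Arrow", [a, e]), ("Arrow", [d, b])]

def random_reduce(mol, steps=100):
    """Dumbest random algorithm: repeatedly pick, uniformly at random,
    one of the available rewrites and apply it (only BETA is shown)."""
    for _ in range(steps):
        matches = []
        m = find_beta(mol)
        if m is not None:
            matches.append(m)
        if not matches:
            break
        i, j = random.choice(matches)
        mol = apply_beta(mol, i, j)
    return mol

# (\x.x) applied to a free input: the L node's var port and in port
# share the edge "v", so this encodes the identity (assumed encoding).
mol = [("L", ["v", "v", "mid"]),
       ("A", ["mid", "arg", "out"]),
       ("FRIN", ["arg"]),
       ("FROUT", ["out"])]
print(random_reduce(mol))
```

After one BETA step the L and A nodes are gone and two Arrow nodes wire the free input toward the free output; a full implementation would then apply the COMB (arrow elimination) rewrites to remove the Arrow nodes.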

**Sources for chemlambda v2**:

- Github repository (awk, html, js), see also a Haskell implementation written by synergistics.
- Demos pages with lots of examples and explanations
- M. Buliga, Molecular computers (2015, html) (2018, arXiv) (2018, figshare), main article source for the exact syntax and list of rewrites. [but the awk implementation has the final authority, as I shall explain]
- M. Buliga, Build a molecular computer, The Journal of Brief Ideas (2015)
- M. Buliga, The Chemlambda collection of simulations (2017, figshare) contains hundreds of simulation results, many of which were used to make the animations from the now deleted chemlambda G+ collection.
- M. Buliga, The library of chemlambda molecules (2016, github) contains 427 molecules (graphs) with interesting behaviour [and some more which are useful for variants of chemlambda v2, to be explained later]
- For context, consult this page for a more exhaustive list.

**The goal of chemlambda v2**: to explore the possibility of molecular computers in this artificial chemistry.

This needs explanation. Indeed, **does the system work with the simplest random algorithm?** We are not interested in semantics, because it is, or relies on, a global notion. We are not (very) interested in reduction strategies for lambda terms, because they are not as simple as the dumbest algorithms we use here. Likewise for readback, etc.

So, does chemlambda v2 work well enough to make molecular computers? Pure untyped lambda calculus reduction problems are an inspiration. If the system works for the particular case of graphs related to lambda terms, then this is a bonus for this project.

As you see, instead of searching for an algorithm which could implement, say in a decentralized way, a lambda calculus reduction strategy, we ask whether a particular system reduces (graphs related to) lambda terms with one algorithm from the fixed class of dumbest ones.

That is why universality in the sense of Lafont is fascinating. In this post I argued that the Lafont universality property of interaction combinators means, in this pseudo-chemical sense, that **the equivalent molecular computer based on interaction combinator reactions (though not the translations) works** for implementing a class of reactions big enough to be Turing universal in particular (Lafont shows concretely that he can implement Turing machines).

(continues with part IV)