Chemlambda v2 is an entirely different project than GLC and chemlambda v1. This post continues from the first part. It explains the passage towards chemlambda v2.
A problem of GLC and chemlambda v1 is that the research articles are opinion pieces, not validated by programs and experiments. The attempt to use GLC with the Actor Model in order to build a decentralized computing proposal, aka distributed GLC, failed because of this: there was no way to answer the question "does all of this work?"
The CO-COMM and CO-ASSOC rewrites lead to the situation that, in order to be useful, either:
- they have to be applied by a human or by an (unknown) very clever algorithm,
- or they are applied randomly in both directions, which implies that no GLC or chemlambda v1 reduction ever terminates.
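The non-termination claim in the second bullet can be illustrated with a minimal sketch. The tree encoding and function names below are my own invention for illustration, not chemlambda's actual data structures: the point is only that CO-COMM is its own inverse and CO-ASSOC is reversible, so a blind random walk over such rules never runs out of applicable rewrites and never reaches a normal form.

```python
import random

# Toy model: a fan-out (FO) tree as nested pairs, leaves as strings.
# Hypothetical encoding, for illustration only.

def co_comm(t):
    # CO-COMM: swap the two branches of an FO node (self-inverse).
    a, b = t
    return (b, a)

def co_assoc(t):
    # CO-ASSOC: reassociate ((a,b),c) <-> (a,(b,c)) when the shape allows.
    if isinstance(t[0], tuple):
        (a, b), c = t
        return (a, (b, c))
    if isinstance(t[1], tuple):
        a, (b, c) = t
        return ((a, b), c)
    return t

random.seed(0)
t = (("x", "y"), "z")
for step in range(10):
    # At every step at least one rule applies and each rule can be
    # undone later, so this random reduction never terminates on its own.
    t = random.choice([co_comm, co_assoc])(t)
print(t)
```

Because each rule has an inverse reachable by the same random process, the state space is explored forever; only an external strategy (a human, or the "very clever algorithm" of the first bullet) could decide when to stop.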
Here is an early visual tutorial which introduces the nodes of chemlambda v2.
At the end of it you are pointed to a gallery of examples which mixes chemlambda v1 with chemlambda v2, like these:
- the duplication of the combinator S in chemlambda v1, without using CO-COMM or CO-ASSOC, fails.
- the duplication of the same combinator in chemlambda v2 works well.
Or, another example, the Y combinator. In chemlambda v1, without using CO-COMM and CO-ASSOC, the Y combinator applied to an unspecified term behaves like this. In chemlambda v2, where there is a supplementary node and other rewrites, the Y combinator behaves almost identically, but some nodes (the yellow FOE here instead of the green FO before) are different: