Artificial chemistry and lambda calculus in phind.com

I asked phind.com “artificial chemistry and lambda calculus”.

It answered this:

Answer | gpt-4 Model

Artificial chemistry and lambda calculus are intertwined through a concept known as chemlambda, which is a graph rewriting system that can model lambda calculus computations.

Chemlambda is a variant of the graphic lambda calculus, a universal model of computation based on graph rewrites, that was proposed by Marius Buliga. It is designed to work with a specific class of graphs, called chemlambda graphs, which can represent lambda terms Source 0, Source 5, Source 7.

An interesting concept in the context of chemlambda and lambda calculus is the idea of a quine graph. A quine graph is a graph that, given a graph rewrite system, has a non-void maximal collection of non-conflicting matches of left patterns of graph rewrites. After the parallel application of the rewrites, we obtain a graph isomorphic with the original graph Source 0. Such graphs exhibit a metabolism, meaning they can multiply or die when reduced by a random rewriting algorithm Source 0, Source 9, Source 12.
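As a toy illustration of the quine property (a hypothetical miniature, not chemlambda itself), one can reduce "graphs" to multisets of node labels and use a rewrite system that just swaps two labels; "isomorphic" then means "equal as multisets". The point is the shape of the definition: a non-void maximal collection of non-conflicting matches, applied in parallel, returning a graph isomorphic to the original.

```python
from collections import Counter

# Toy rewrite system on multisets of node labels: the only rules are
# A -> B and B -> A. Each node can take part in at most one rewrite,
# so a maximal non-conflicting collection of matches covers every
# node that matches some left pattern.
RULES = {"A": "B", "B": "A"}

def parallel_step(graph):
    """Apply, in parallel, the maximal set of non-conflicting matches."""
    return [RULES.get(node, node) for node in graph]

def is_quine(graph):
    """Quine property: some matches exist (non-void collection), and the
    parallel step returns a graph isomorphic (here: equal as a multiset)
    to the original."""
    has_match = any(node in RULES for node in graph)
    return has_match and Counter(parallel_step(graph)) == Counter(graph)

print(is_quine(["A", "B"]))   # True: the A<->B swap preserves the multiset
print(is_quine(["A", "A"]))   # False: becomes {B, B}
print(is_quine(["C"]))        # False: no matches at all
```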

Marius Buliga developed a library of programs and a database that contain hundreds of graphs. These are used in artificial chemistry experiments with chemlambda, lambda calculus, and interaction combinators. The experiments are bundled into pages and can be accessed from the entry page https://chemlambda.github.io/index.html. There are also versions of these programs in Python, Haskell, Awk, and JavaScript Source 0, Source 7, Source 9.
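For readers unfamiliar with the mol notation those programs use, here is a minimal, hypothetical parser sketch. The node arities below follow my reading of the chemlambda mol format (L, A, FI, FO, FOE have 3 ports; Arrow has 2; T, FRIN, FROUT have 1); check them against the actual programs before relying on this.

```python
# Assumed port counts per node type, per the chemlambda mol format
# (verify against the reference implementation).
ARITY = {"L": 3, "A": 3, "FI": 3, "FO": 3, "FOE": 3,
         "Arrow": 2, "T": 1, "FRIN": 1, "FROUT": 1}

def parse_mol(text):
    """Parse mol text into a list of (node_type, ports) records."""
    nodes = []
    for lineno, line in enumerate(text.splitlines(), 1):
        fields = line.split("#", 1)[0].split()   # strip comments, tokenize
        if not fields:
            continue
        kind, ports = fields[0], fields[1:]
        if kind not in ARITY:
            raise ValueError(f"line {lineno}: unknown node type {kind!r}")
        if len(ports) != ARITY[kind]:
            raise ValueError(f"line {lineno}: {kind} expects "
                             f"{ARITY[kind]} ports, got {len(ports)}")
        nodes.append((kind, ports))
    return nodes

# A two-node example: one lambda node and one free-root node.
# The port wiring here is illustrative only.
mol = """L 1 1 2
FROUT 2"""
print(parse_mol(mol))
```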

The chemlambda project took an open turn, based on programs and demonstrations which could be validated by reproduction or by porting of the programs into other languages Source 7.

In the context of lambda calculus, the paper “A Neural Lambda Calculus: Neurosymbolic AI meets the foundations of computing and functional programming” discusses the use of integrated neural learning and lambda calculus formalization. The paper indicates that it is enough to learn how to perform reductions in lambda calculus so that we can execute any program Source 14.

To summarize, chemlambda is an interesting approach for combining artificial chemistry and lambda calculus to create a graph rewriting system that can model lambda calculus computations. The concept of quine graphs and the corresponding library of programs and database developed by Marius Buliga provide a rich resource for further exploration and experimentation.

UPDATE: archived. Notice that the very interesting Source 14 is from 2023. I wonder if it can be trained to perform graph rewrites in mol notation, by using the chemlambda collection of molecules or by generating random ones, as is done in some quinegraph pages.

SKI, crossings, zipper logic, Pure See

In zipper logic I define zipper combinators as in the image

(which appeared most recently in Thank you hundredrabbits :))

I can define those because there is a correspondence between crossings (in knot diagrams) and application and abstraction from lambda calculus (but mind that I do not assume the knot diagrams I use are planar graphs!).

This correspondence is (line 1 for lambda, line 4 for application)

In zipper logic revisited (aka zss) I have another correspondence!

which is different.

This might be confusing.

The relation is that the oriented crossings are switched between the older and the newer correspondence.

The newer one is compatible with pure see.

Indeed, according to the newer correspondence, in terms of emergent algebra operations indexed by a formal parameter z, we have

A B = \delta_{1/z}^{B} A

and

\lambda x. A = \delta_{z}^{x} A

(via the correspondence between quandles and knot diagrams)

but in the old correspondence, used initially for zipper logic, we had

A B = \delta_{z}^{B} A

and

\lambda x. A = \delta_{1/z}^{x} A

So it is a matter of choice, not important: z versus 1/z…

The fact is that now all of the computing with space and emergent algebras computations can be generated automatically. See Why the S combinator is curvature for a previous mention.

And of course there is a double Curry-Howard correspondence…

A whole _new_ linear (and curvature corrected) logic mixed with differential calculus….

(Boomers know better, they already discovered everything, but it does not work, of course hahaha)

This is old stuff.

Structure-to-Structure vs Meaning-to-Structure

This is an excerpt from

Molecular computers which are based on graph rewriting systems like chemlambda, chemSKI or Interaction Combinators (v1.0.0). Zenodo. https://doi.org/10.5281/zenodo.7306917

with some comments [added between square brackets].

This is related to chemSKI not Kind, chemSKI not Nock, as well as ZSS: Zipper logic revisited and Asemantic computing.

Your comments are welcome! Please use citations to specific arguments of mine or others. Please don’t use partial arguments without citations, nor impressionistic formulations like “I don’t see why/where/how …” 🙂

The gist of it: in Nature or Decentralized Computing we have to learn to understand or deal with only local computations.

Here is the commented excerpt.

Typically there is a 3-stage process which uses graph rewriting. We want to solve a problem, therefore we write a program.

  • meaning to structure: The program is then compiled to a graph. This can also be seen as a language to structure problem. The language can be a term rewrite system, the program can be a term, the structure can be an abstract syntax tree. Many other examples exist, which can be put into the form of transforming something which has a meaning (for a human), to something which can be processed by a machine (in this case a graph).
  • structure to structure: transform the initial graph into a final graph, by using the graph rewriting system and an algorithm for the choice and order of application of the rewrites. Usually this problem is complicated because we want to have a good translation not only of the meaning to structure, but also of the term reduction into graph reduction. For example, when we write a program we expect it to “do” something, and we would like the execution of the program by the machine (i.e. the structure to structure part) to be a translation of what we imagine the program is doing. As an example, if our program is a lambda term, which is compiled into a graph, we would like the graph rewriting to be compatible with the term rewriting, i.e. the beta reduction term rewriting rule. These semantic constraints which are put on the use of graph rewriting are incompatible with decentralized computing, see Asemantic computing
  • structure to meaning: transform the final graph into the language, in order to obtain the answer of the problem. For example, if the initial program was a lambda term, we compile it into a graph, then we reduce the graph, then we translate back the final graph into a lambda term. In other examples we just want to translate the final graph into something which has a meaning for us, into the solution of the problem.
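The three stages above can be sketched on a toy lambda calculus (a hypothetical miniature with terms as nested tuples and no capture-avoidance, so only capture-free examples should be fed to it):

```python
# meaning to structure: terms as nested tuples stand in for graphs.
# ('var', x) | ('lam', x, body) | ('app', f, a)

def subst(term, x, value):
    """Replace free occurrences of variable x in term by value.
    No capture-avoidance: a toy, as stated in the lead-in."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == x else term
    if kind == 'lam':
        return term if term[1] == x else ('lam', term[1], subst(term[2], x, value))
    return ('app', subst(term[1], x, value), subst(term[2], x, value))

def step(term):
    """structure to structure: one leftmost beta rewrite, or None."""
    kind = term[0]
    if kind == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':                       # beta redex found
            return subst(f[2], f[1], a)
        reduced = step(f)
        if reduced is not None:
            return ('app', reduced, a)
        reduced = step(a)
        if reduced is not None:
            return ('app', f, reduced)
    if kind == 'lam':
        reduced = step(term[2])
        if reduced is not None:
            return ('lam', term[1], reduced)
    return None

def show(term):
    """structure to meaning: translate the final term back to text."""
    kind = term[0]
    if kind == 'var':
        return term[1]
    if kind == 'lam':
        return f"\\{term[1]}.{show(term[2])}"
    return f"({show(term[1])} {show(term[2])})"

# (\x.x) y  reduces to  y
t = ('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))
while (nxt := step(t)) is not None:
    t = nxt
print(show(t))   # y
```

Here the semantic constraint of the bullet list is visible: the rewriting strategy (leftmost, one step at a time) is chosen so that the reduction tracks beta reduction of the original term.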

In all such applications of graph rewriting, we position ourselves at the level of meaning, descend to the level of structure, where we use graph rewriting as a tool, and then we translate back the result into a meaningful output.

Here is a different 3-stage process, where we put the emphasis on the structure to structure level. As an example, we want to have a chemical molecule which can be absorbed by a living cell. We want the molecule, once inside the living cell, to be processed and to produce a nano-machine (by making parts which self-assemble, or by other means), all by itself and outside our control. For this kind of problem the 3 steps are:

  • meaning to structure: create the initial molecule, from a high level, language specification. For example say that the input molecule will be a piece of rna, created in a lab. [In decentralized computing the input molecule would be a mol file which is created by a user and then split between a small number of users.]
  • structure to structure: in the cell, the initial molecule enters into random reactions with other molecules and leads to another structure. For example the molecule is chemically copied and then translated into proteins which self-assemble into the goal nano-machine. [In decentralized computing all users apply the same reduction algorithm, thus exposing the pieces of the mol file of the small number of users to the rest of the users, in principle, and also without any synchronization among users.] We have no control over this step and there is no meaning (semantics) to be satisfied by this random process. [In the sense that there is no meaning, global in time and in space, which controls or survives the local reductions.]
  • structure to meaning: the presence of these nano-machines in the cell produce an effect which we can observe. [In decentralized computing, the structure to meaning function or algorithm can only be local to the user and graph (piece of mol file) the user has. Since all the global annotations of the graph are washed away in principle, the structure to meaning function has to be local as well.]

The differences are striking:

  • we want to produce a structure which, left alone, induces the creation of another structure, [in decentralized computing, the small mol file or files introduced in the computation by a small number of users will produce effects which can be observed by them via a local structure to meaning function.]
  • the level of meaning (where we design the initial structure in order to be able to obtain the final structure) is a tool which serves the structure to structure goal, [in decentralized computing, in order to produce a graph which has the desired effect, we perform experiments with graphs coming from a humanly understandable programming language, like for example lambda calculus. Once such a graph is built from the programming language, the experiments allow us to understand what happens at the graphical level and then to exploit the graphical level mechanism for goals which are not directly possible at the initial programming language level. Much of the chemlambda collection of simulations uses this type of exploration.]
  • there is no control of the algorithm of application of graph rewrites, it is given by nature. [In decentralized computing, in principle, the exterior of a user’s graph and the interactions are random; this is the user’s best theory about the rest of the universe.]