Category Archives: lambda calculus

A graphical history of chemlambda

I made a page

 

which contains all the graph rewrites which were considered in various versions of chemlambda and GLC. I should have done this a long time ago and I have no doubt it will be useful.

It complements

Alife properties of directed interaction combinators vs. chemlambda.  Marius Buliga (2020), https://mbuliga.github.io/quinegraphs/ic-vs-chem.html#icvschem

which is an interactive version of the article arXiv:2005.06060, one which also provides means of validation.

 

16 animations which supplement arXiv:2005.06060

Here is a list of 16 animations (which you can play with!) which nicely supplement the article

Artificial life properties of directed interaction combinators vs. chemlambda

and its associated page of experiments.

 

From the page of experiments you can travel in other directions, too!

Nature vs nurture in lambda calculus terms

This is one of the experiments accessible from arXiv:2003.14332, or from figshare.

Btw, if you want to know how you can contribute, read especially section 2.

Imagine that the lambda term

(\a.a a)(\x.((\b.b b)(\y.y x)))

is the gene of a future organism. How will it grow? Does only nature (the term, the gene) matter, or does nurture (random evolution) matter too?

Maybe you are familiar with this term; I saw it for the first time in arXiv:1701.04691. It is one which messes up many purely local graph rewrite algorithms for lambda calculus. (By purely local I also mean that the parsing of the lambda term into a graph does not introduce non-local boxes, croissants, or other devices.)

As a term, it should reduce to the omega combinator. This is a quine in lambda calculus and it is also a chemlambda quine. You can play with it here.
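For the record, here is one reduction path to the omega combinator (a worked reduction: reduce the innermost redex first, then under the lambda):

```
(\a.a a)(\x.((\b.b b)(\y.y x)))
-> (\a.a a)(\x.((\y.y x)(\y.y x)))
-> (\a.a a)(\x.((\y.y x) x))
-> (\a.a a)(\x.x x)
-> (\x.x x)(\x.x x)        i.e. omega
```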

I was curious if chemlambda can reduce this term correctly. At first sight it does not. On this page use the "change" button until you have the message "random choices" and put the rewrites weights slider in the middle between "grow" and "slim". These are the default settings, so at first you don't have to do anything.

Click the "start" button. You'll see that after a while the graph (generated by the lambda to chemlambda parser) reduces to only one yellow node (a FOE) and the FROUT (free out) node of the root (of the lambda term). So chemlambda seems not capable of reducing this term correctly.

You can play with the (graph of the) term, reducing it by hand, i.e. by hovering with the mouse over the nodes to trigger rewrites. If you do this carefully then you'll manage to reduce the graph to one which is identical to the graph of the omega combinator, except that it has FOE nodes instead of FO nodes. By using the mol>lambda button you can see that such a graph does represent the omega combinator.

Now, either you succeeded at this task, or not. If not, just reload (by using the lambda>mol button), click "change" to get "older first" and move the rewrites weights slider to "GROW". This is the standard setting to check if the graph is a quine. Push start. What happens?

The reduction continues forever. This is a quine!

You’ll see that the mol>lambda gives nothing. This means that the decoration with lambda terms does not reach the root node. We are outside lambda calculus with this quine, which is different from the quine given by the omega combinator.

The two quines have the same gene. One of the quines (the graph of omega) never dies. The other quine is mortal: it dies by reducing eventually (in the “random choices” case) to a FOE node.

 

The decorator: a glimpse into the way chemlambda does lambda calculus

The lambda2chemlambda parser now has a decorator. Previously there was a button (function) which translates (parses) a lambda term into a mol. The reverse button (function) is the decorator, which takes a mol and turns it into a lambda term and some relations.

Let’s explain.

The lambda calculus to mol function and the mol to lambda calculus function are not inverse to each other. This means that if we start from a mol file, translate it into a lambda term, then translate the lambda term back into mol, then we don't always get the same mol. (See for example this.)

Example 1: On this page we first have the lambda term PRED ((POW 3) 4), which is turned into a mol. If we use the mol>lambda button then we get the same term, up to renaming of variables. You can check this by copy-pasting the lambda term from the second textarea into the first and then using the lambda>mol button. You can see down the page (below the animation window) the mol file. It is the same.

Now push “start” and let chemlambda reduce the mol file. It will stop eventually. Move the gravity slider to MIN and rescale with mouse wheel to see the structure of the graph. It is like a sort of a surface.

Push the mol>lambda button. You get a very nice term which is a Church number, as expected.

Copy-paste this term into the first textarea and convert it into a mol by using lambda>mol. Look at the graph, this time it is a long, closed string, like a Church number should look in chemlambda. But it is not the same graph, right?

The mol>lambda translation produces a decoration of the half-edges of the graph. The propagation of this decoration into the graph has two parts:

  • propagation of the decoration along the edges. Whenever an edge already has both the source and the target decorated, we get a relation between these decorations (they are "=")
  • propagation of the decoration through the nodes. Here the nodes A, L, and FO are decorated according to the operations application, lambda and fanout. The FOE nodes are decorated like fanouts (so a FO and a FOE node are decorated the same! something is lost in translation). There is no clear way to decorate a FI (fanin) node. I choose to decorate the output of a FI with a new variable and to introduce two relations: first port decoration = output decoration and second port decoration = output decoration.

The initial decoration is with variables, for the ports of FRIN nodes, for the 2nd ports of the L nodes and for the output ports of the FI nodes.

We read the decoration of the graph as the decoration of the FROUT node(s). Mols from lambda terms have only one FROUT.

The graph is well decorated if the FROUT node is decorated and there are no relations.
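To make this concrete, here is a minimal js sketch of such a decorator pass (an illustration only, not the actual script of the parser page; the port conventions assumed below, with the 2nd port of an L node its bound variable and the 3rd port of a node its output, follow the description above):

```javascript
// In a mol file an edge is simply a shared port name between two node entries,
// so propagation along edges is implicit: conflicting decorations of the same
// port name are recorded as relations.
function decorate(nodes) {             // nodes: [{type: "A", ports: ["p1","p2","p3"]}, ...]
  const deco = {}, relations = [];
  let k = 0;
  const freshVar = () => "v" + (k++);
  const set = (port, term) => {
    if (deco[port] === undefined) { deco[port] = term; return true; }
    const rel = deco[port] + " = " + term;
    if (deco[port] !== term && !relations.includes(rel)) relations.push(rel);
    return false;
  };

  // initial decoration: fresh variables at FRIN ports, at the 2nd port of each
  // L node (the bound variable) and at the output (3rd) port of each FI node
  for (const n of nodes) {
    if (n.type === "FRIN") set(n.ports[0], freshVar());
    if (n.type === "L")    set(n.ports[1], freshVar());
    if (n.type === "FI")   set(n.ports[2], freshVar());
  }

  // propagation through the nodes, repeated until nothing new gets decorated
  let changed = true;
  while (changed) {
    changed = false;
    for (const n of nodes) {
      const [a, b, c] = n.ports;
      if (n.type === "A" && deco[a] && deco[b])
        changed = set(c, "(" + deco[a] + " " + deco[b] + ")") || changed;  // application
      if (n.type === "L" && deco[a] && deco[b])
        changed = set(c, "\\" + deco[b] + "." + deco[a]) || changed;       // lambda abstraction
      if ((n.type === "FO" || n.type === "FOE") && deco[a]) {              // FO and FOE decorated alike
        changed = set(b, deco[a]) || changed;
        changed = set(c, deco[a]) || changed;
      }
      if (n.type === "FI") {            // fanin: two relations with the fresh output variable
        if (deco[a]) changed = set(c, deco[a]) || changed;
        if (deco[b]) changed = set(c, deco[b]) || changed;
      }
    }
  }
  return { deco, relations };           // read the term at the FROUT port, then check the relations
}
```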

Example 2: Let's see how the omega combinator reduces. First click the mol>lambda button to see we have the same term back, the omega.

Now use the “step” button, which does one reduction step, randomly. In the second textarea you see the decoration of the FROUT node and the relations.

What happens? Some steps make sense in lambda calculus, but some others don’t. In these relations you can see how chemlambda “thinks” about lambda calculus.

If you use the “step” button to reduce the mol, then you’ll see that you obtain back the omega combinator, after several steps.

Enjoy!

 

How does a zipper logic zipper zip? Bonus: what is Ackermann goo?

Zipper logic is an alternative to chemlambda, where two patterns of nodes, called half-zippers, appear.

It may be easier to mimic, from a molecular point of view.

Anyway, if you want to have a clear image about how it works then there are two ways to play with zippers.

1. Go to this page and execute a zip move. Is that a zipper or not?

2. Go to the lambda2chemlambda page and type this lambda term

(\h.\g.\f.\e.\d.\c.\b.\a.z) A B C D E F G H

Then reduce it. [There is a difference, because a, b, …, h do not occur in the body, so the parser adds a termination node to each of them, so when you reduce it the zipper will zip and then disappear.]

You can see here the half-zippers

\h.\g.\f.\e.\d.\c.\b.\a.

A B C D E F G H

which are the inspiration of the zippers from zipper logic.
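Each beta rewrite zips one pair of the two half-zippers; since none of a, …, h occurs in z, every argument is simply discarded along the way (a worked reduction):

```
(\h.\g.\f.\e.\d.\c.\b.\a.z) A B C D E F G H
-> (\g.\f.\e.\d.\c.\b.\a.z) B C D E F G H
-> (\f.\e.\d.\c.\b.\a.z) C D E F G H
   ...
-> (\a.z) H
-> z
```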

In chemlambda you can make FI-zippers and FOE-zippers as well; I used this for permutations.

BONUS: I made a comment at HN which received the funny reply "Thanks for the nightmares! 🙂", so let me recall, by way of this comment, what an Ackermann goo is:

A scenario more interesting than boundless self-replication is Ackermann goo [0], [1]. Grey goo starts with a molecular machine able to replicate itself. You get exponentially more copies, hence goo. Imagine that we could build molecules like programs which execute themselves via chemical interactions with the environment. Then, for example, a Y combinator machine would appear as a linearly growing string [2]. No danger here. Take Ackermann(4,4) now. This is vastly more complex than a goo made of lots of small dumb copies.

[0] https://chemlambda.github.io/collection.html#58

[1] https://chemlambda.github.io/collection.html#59

[2] https://chemlambda.github.io/collection.html#259

 

Parser gives fun arrow names

The modifications are not yet released, but the lambda2chemlambda parser can be made to give fun arrow names. For example the term

((\g.((\x.(g (x x))) (\x.(g (x x))))) (\x.x))

which is the Y combinator applied to id, becomes the mol

FROUT [^L [((\g. [((\g [((\^L [((\g.((\x. [((\g.((\x [((\g.((\^FO [((\g [((\g.((\x.(g [((\g.((\x.(g*^A [((\g.((\x.(g [((\g.((\x.(g@( [((\g.((\x.(g@^FO [((\g.((\x [((\g.((\x.(g@(x [((\g.((\x.(g@(x*^A [((\g.((\x.(g@(x [((\g.((\x.(g@(x@x [((\g.((\x.(g@(x@^Arrow [((\g.((\x.(g@(x* [((\g.((\x.(g@(x@x^Arrow [((\g.((\x.(g@(x@ [((\g.((\x.(g@(^Arrow [((\g.((\x.(g@ [((\g.((\x.(^Arrow [((\g.((\x.( [((\g.((\x.^Arrow [((\g.((\ [((\g.((^A [((\g.(( [((\g.((\x.(g@(x@x)))@( [((\g.((\x.(g@(x@x)))@^L [((\g.((\x.(g@(x@x)))@(\x. [((\g.((\x.(g@(x@x)))@(\x [((\g.((\x.(g@(x@x)))@(\^Arrow [((\g.((\x.(g* [((\g.((\x.(g@(x@x)))@(\x.(g^A [((\g.((\x.(g@(x@x)))@(\x.(g [((\g.((\x.(g@(x@x)))@(\x.(g@( [((\g.((\x.(g@(x@x)))@(\x.(g@^FO [((\g.((\x.(g@(x@x)))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x [((\g.((\x.(g@(x@x)))@(\x.(g@(x*^A [((\g.((\x.(g@(x@x)))@(\x.(g@(x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x* [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@ [((\g.((\x.(g@(x@x)))@(\x.(g@(^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@ [((\g.((\x.(g@(x@x)))@(\x.(^Arrow [((\g.((\x.(g@(x@x)))@(\x.( [((\g.((\x.(g@(x@x)))@(\x.^Arrow [((\g.((\x.(g@(x@x)))@(\ [((\g.((\x.(g@(x@x)))@(^Arrow [((\g.((\x.(g@(x@x)))@ [((\g.(^Arrow [((\g.( [((\g.^Arrow [((\ [((^A [(( [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@( [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@^L [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x. [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.x^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.x [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\x.^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(\ [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@(^Arrow [((\g.((\x.(g@(x@x)))@(\x.(g@(x@x)))))@ [(^Arrow [( [
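Judging from this output, the mol text is a list of node entries separated by "^", each entry being a node type followed by whitespace-separated arrow (port) names. A small js sketch of a reader for this format (an illustration, not the code of the page itself):

```javascript
function parseMol(molString) {
  return molString
    .split("^")                                             // one entry per node
    .map(entry => entry.trim().split(/\s+/).filter(t => t.length > 0))
    .filter(tokens => tokens.length > 0)
    .map(tokens => ({ type: tokens[0], ports: tokens.slice(1) }));
}
// parseMol("FROUT a^L b c a^A b d e") gives three records:
// {type:"FROUT", ports:["a"]}, {type:"L", ports:["b","c","a"]}, {type:"A", ports:["b","d","e"]}
```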

Recall that the parser is part of the landing page for all chemlambda projects.

So, if you write in the parser:

(\x.a) b

which is the pattern for a beta rewrite, then the relevant part of the mol (with funny arrow names) and the rewrite will have the form:

[image: betastring]

 

Use the lambda to chemlambda parser to see when the translation doesn’t work

I use the parser page mainly and other pages will be mentioned in the text.

So chemlambda does not solve the problem of finding a purely local conversion of lambda terms to graphs which can always be further reduced, correctly, by a purely local random algorithm. This is one of the reasons I insist both on going outside lambda calculus and on looking at possible applications in real chemistry, where some molecules (programs) do reduce predictably and the span of the cascade of reactions (reductions) is much larger than one could achieve via a massive, brute-force, try-everything strategy on a supercomputer.

Let’s see: choose

(\a.a a)(\x.((\b.b b)(\y.y x)))

it should reduce to the omega combinator, but read the comment too. I saw this lambda term, with a similar behaviour, in [arXiv:1701.04691], section 4.

Another example took me by surprise. Now you can choose “omega from S,I combinators”, i.e. the term

(\S.\I.S I I (S I I)) (\x.\y.\z.(x z) (y z)) \x.x

It works well, but I previously used a related term, actually a mol file, which corresponds to the term where I replace I by S K K in S I I (S I I), i.e. the term

S (S K K) (S K K) (S (S K K) (S K K))

To see the reduction of this term (mol file) go to this page and choose “omega from S,K combinators”. You can also see how indeed S K K reduces to I.

But initially in the parser page menu I had  the term

(\S.\K.S (S K K) (S K K) (S (S K K) (S K K))) (\x.\y.\z.(x z) (y z)) (\x.(\y.x))

It should reduce well but it does not. The reason is close to the reason the first lambda term does not reduce well.

Now for the bright side of it. Look at this page to see that the ouroboros quine is mortal. I believed it was obviously immortal until recently. Now I have started to believe that immortal quines in chemlambda are rare. Yes, there are candidates like (the graph obtained from) omega, or why not (try with the parser) 4 omega

(\f.(\x.(f(f (f (f x)))))) ((\x.x x) (\x.x x))

and there are quines like the "spark_243501" (shown in the menu of this page) with a small range of behaviours. On the contrary, all quines in IC are immortal.

Lambda calculus to chemlambda parser (2) and more slides

This post has two goals: (1) to explain more about the lambda to chemlambda parser and (2) to talk about slides of presentations which are connected one with the other across different fields of research.

(1) There are several incremental improvements to the pages from the quine graphs repository. All pages, including the parser one, have two sliders, each giving you control over some parameters.

The "gravity" slider is kind of obvious. Recall that you can use your mouse (or pinching gestures) to zoom in or out of the graph you see. With the gravity slider you control gravity. This allows you to see better the edges of the graph, for example, by moving the gravity slider to the minimum and then by zooming out. Or, on the contrary, if you have a graph which is too spread out, you can increase gravity, which will have as effect a more compact looking graph.

The “rewrites weights slider” has as extrema the mysterious words “grow” and “slim”. It works like this. The rewrites (excepting COMB, which are done preferentially anyway) are grouped into those which increase the number of nodes (“grow”) and the other ones, which decrease the number of nodes (“slim”).

At each step, the algorithm tries to pick a rewrite at random. If there is a COMB rewrite to pick, then it is done. Else, the algorithm will try to pick at random one "grow" and one "slim" rewrite. If only one of these is available, i.e. if there is a "grow" but no "slim" rewrite, then this rewrite is done. Else, if there is a choice between two randomly chosen "grow" and "slim" rewrites, we flip a coin to choose among them. The coin is biased towards "grow" or "slim" with the rewrites weights slider.
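In js, this decision step could be sketched like this (an illustration, not the actual code of the pages; the names `pickRewrite`, `available` and `bias` are made up here, with `bias` the slider value between 0, meaning always "grow", and 1, meaning always "slim"):

```javascript
function pickRewrite(available, bias) {
  const pick = arr => arr[Math.floor(Math.random() * arr.length)];
  const combs = available.filter(r => r.kind === "COMB");
  if (combs.length > 0) return pick(combs);         // COMB rewrites are done preferentially
  const grow = available.filter(r => r.kind === "grow");
  const slim = available.filter(r => r.kind === "slim");
  if (grow.length === 0 && slim.length === 0) return null;   // nothing left to do
  if (grow.length === 0) return pick(slim);         // only one family is available
  if (slim.length === 0) return pick(grow);
  // both families available: flip a biased coin between a random "grow" and a random "slim"
  return Math.random() < bias ? pick(slim) : pick(grow);
}
```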

This is interesting to use, for example with the graphs which come from lambda terms. Many times, but not always, we are interested in reducing the number of nodes as fast as possible. A strategy would be to move the slider to “slim”.

In the case of quines, or quine fights, it is interesting to see how they behave under “grow” or “slim” regime.

Let's pass now to the parser. It works well, you can write lambda terms in a human way, but mind that "xy" will be seen as a variable, not as the application of "x" to "y". Application is "x y". Otherwise, the parser understands correctly terms like

(\x.\y.\z.z y x) (\x.x x)(\x. x x)\x.x

Then I followed the suggestion of my son Matei to immediately do the COMB rewrites, thus eliminating the Arrow nodes given by the parser.

About the parser itself. It is not especially short, for several reasons. One reason is that it is made as a machine with 3 legs, moving along the string given by the lexer. Just like the typical 3-valent node. So that is why it will be interesting to see it in action, visually. Another reason is that the parser first builds the graph without fanout FO and termination T nodes, then adds the FO and T nodes. Finally, the lambda term is not prepared in advance by any global means (excepting the check for balanced parentheses). For example no de Bruijn indices.

Another reason is that it allows one to understand what the edges of the (mol) graph are, or more precisely what the port variables (edge variables) correspond to. The observation is that the edges are in correspondence with the position of the item (lparen, rparen, operation, variable) in the string. We need at most N edge names at this stage, where N is the length of the string. Finally, the second stage, which adds the FO and T nodes, needs at most N new edge names, in practice much fewer: the number of duplicates of variables.

This answers the question: how can we efficiently choose edge names? We could use as edge name the piece of the string up to the item, and we can double this number by using an extra special character. Or if we want to be secretive, now that we know how to constructively choose names, we can try to use and hide this procedure.
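A small js sketch of this naming idea (an illustration, not the parser's own code; in the real output, as can be seen in the mol from the "Parser gives fun arrow names" post above, spaces inside the prefix are replaced and duplicates get an extra character):

```javascript
// The edge created at position i of the input string is named by the prefix of
// the string up to i, with spaces replaced so that names stay whitespace-free;
// a second family of names, for the FO and T stage, is obtained by appending "*".
function edgeName(term, i, secondStage) {
  const prefix = "[" + term.slice(0, i).replace(/ /g, "@");
  return secondStage ? prefix + "*" : prefix;
}
// With at most one new edge per position, the first stage needs at most
// term.length names and the second stage at most as many again.
```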

Up to now there is no "decorator", i.e. the inverse procedure to obtain a lambda term from a graph, when it is possible. This is almost trivial and will be done.

I close this subject here, by mentioning that my motivation was not to write a parser from lambda to chemlambda, but to learn how to make a parser for a programming language in the making. You'll see, and hopefully you'll enjoy 🙂

(2) Slides, slides, slides. I have not considered slides very interesting as a means of communication before. But hey, slides are somewhere on the route to an interactive book, article, etc.

So I added to my page links to 3 related presentations, which, together with a 4th available and popular (?!) on this blog, give a more rounded image of what I try to achieve.

These are:

  • popular slides of a presentation about hamiltonian systems with dissipation, in the form baptized "symplectic Brezis-Ekeland-Nayroles". Read them in conjunction with arXiv:1902.04598, see further why
  • (Artificial physics for artificial chemistry) is a presentation which, first, explains what chemlambda is in the context of artificial chemistries, then proceeds with using a stochastic formulation of hamiltonian systems with dissipation as an artificial physics for this artificial chemistry. An example about billiard ball computers is given. Sure, there is an article to be written about the details, but it is nevertheless interesting to infer how this is done.
  • (A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science) are the most technically evolved slides, presenting the geometrical roots of chemlambda and related efforts. There are many things to pick from there, like: what is the geometrical problem, how is it related to emergent algebras, what is computation, knots, why standard frames in categorical logic can't help (but perhaps they can if they start thinking about it), who was the first programmer in chemlambda, live pages where you can play with the parser, closing with an announcement that indeed anharmonic lambda (in the imperfect form of kali, or kaleidoscope) solves the initial problem after 10 years of work. Another article will be most satisfactory, but you see, people rarely really read articles on subjects they are not familiar with. These slides may help.
  • and for a general audience my old (Chemlambda for the people) slides, which you may appreciate more and you may think about applications of chemlambda in the real world. But again, what is the real world, other than a hamiltonian system with dissipation? And who does the computation?

 

 

Lambda calculus to chemlambda parser

To play with at this page. There are many things to say, but I will come back later with details about my first parser and why it is like this.

UPDATE: After I put the parser page online, it messed with the other pages, but now everything is all right.

UPDATE: I’ll demo this at a conference on Dec 4th, at IMAR, Bucharest.

Here are the slides.

The title is "A kaleidoscope of graph rewrite systems in topology, metric geometry and computer science".

So if you are in Bucharest on Dec 4th, at 13h, come to the talk. How to arrive there.

I already dream about a version which is purely “chemical”, with 3-legged parser spiders reading from the DNA text string and creating the molecules.

Will do, but long todo list.

Quine graphs (5), anharmonic logic and a flute

Several things:

  • added a new control to the quine graphs pages (all can be accessed from here). It is called the "rewrites weights slider": you can either favor the DIST rewrites, which add nodes to the graph, or the β rewrites, which take away nodes from the graph (for chemlambda these are the L-A rewrite, but also the termination rewrites and FI-FOE). This changes, sometimes radically, the result of the algorithm. It depends what you want. In the case of reductions of lambda terms, you may want to privilege β rewrites, but as usual this is not generally true, see the last example. In the case of the fight arena for quines, you can favor one species of quines over another by your choice.
  • of course in the background I continue with kali, or anharmonic lambda calculus. The new thing is: why not conceive a programming language which compiles to kali? It works fine, you can say things like

“from a see b as c” or

“let a mark b as c” or

“is a as b” or

“c := let x  = b in c”

and many others! Soon, I hope at the beginning of December, it will be available.

  • It helps to do new things, so I finished today my first flute. Claudia, my wife, is a music professor and her instrument of choice is the flute. From a physics point of view a flute is a tube with holes placed in the right positions, how hard would it be to make one? After I made it, it is usable, but I started to learn a lot of stuff about how unnatural the modern flute is and how much you can play with all the variables. As a mathematician I oscillate between "is it a known thing to simulate a flute numerically?" and "I shall concentrate on the craft and will play with various techniques". My first flute was a craft-only effort, but now, when I know more, I am hooked!

OK, so that’s the news, I like to make things!

 

A small ouroboros has a short life (Quine graphs 4)

As I said many times, there is a lot of background to the hundreds of chemlambda mol files and associated simulations. Now that I took the Quine graphs repository seriously, I have to select rather clear and short strings of facts associated to the mol files, so that I can tell clear and short enough stories.

So it came out of the depths of this pile of stuff I have that the ouroboros is not an immortal quine. It can die and its life is of the order of exp(length), where length is the length of the double track it contains. The shortest ouroboros appears here (look in the MENU).

Otherwise, I reorganize (and continue to do so) the js scripts, so that they become easy to understand. Now there is a chemistry, nodes, etc. so there will be a convergence with the modules of hapax.

Quine graphs (3), ouroboros, hapax and going public

Several news:

I decided that progressively I'm going to go public, with a combination of arXiv, Github and Zenodo (or Figshare), and publication. But there is a lot of stuff I have to publish and that is why this will happen progressively. Which means it will be nice to watch, because it is interesting, for me at least, to answer the question:

What the … does a researcher do when publishing? What is this for? Why?

Seriously, the questions are not at all directed against classical publication, nor are they biased versus OA. When you publish serially, like a researcher, you often tell again and again a story which evolves in time. To make a comparison, it is like a sequence of frames in a movie.

Only that it is not as simple. It is not quite like a sequence of frames, it is like a sequence of pictures, each one with its repeating tags, again and again.

Not at all compressed. And not at all like an evolving repository of programs which get better with time.

Experimental generation of anharmonic lambda calculus (II)

UPDATE: There is still another version, less exotic, at github. Both are working versions of kali.  For those with mathematical inclinations there are explanations in the tool rhs.js. This will come to a conclusion with the explanations about FO and with the tools to choose optimal collections of rewrites.

____

There is a working version of kali which I posted today. Enjoy and wonder how it can (sometimes!) reduce lambda terms even if it has nothing to do with lambda calculus in the background. The js script is interesting to read, or better the script from the page which generates DIST rewrites (which you can add to your copy of the main script and play with; it is enough to add it to the list of rewrites).

It is comical how it fails sometimes. Space is symmetrical and logic is constrained.

Also I think I’ll pursue the idea to train the (choice of rewrites, probabilities) for certain tasks, as evoked here.

Experimental generation of anharmonic lambda calculus (I)

UPDATE: This is from yesterday, 15.10.2019. It gives the possible RHS of DIST rewrites in a more precise way, compatible with the 6 nodes: A, FI, D (port 2 "in") and L, FO, FOE (port 2 "out") (each node has port 1 "in" and port 3 "out"). Moreover, these can be decorated with generic dilations which satisfy em-convex. It turns out that there are exactly two possibilities for each rewrite. However, there are fewer than 3^36 rewrite systems which are potentially interesting, because many of these rewrites inhibit others.

For example all the rewrites used in the working version of kali are there (with the exception of those involving the node FO, there is a separate treatment for this node; also I ignore for the moment the rewrites involving T and I take for granted the 3 beta-like rewrites, which are not generated by the rhs.js script, because they are not DIST rewrites). The version I have today shortens the list a lot and soon enough will be merged into a more explorable version of kali.

______

How many rewrites are possible in anharmonic lambda and how do they relate one with the other? Is there a way to discover experimentally the best collection of rewrites? I think there is, and that's what I'm doing right now.

Let me explain.

Suppose we start with the anharmonic group, isomorphic to S(3), or why not with S(4), with an eye to the relation it has with projective geometry. But basically we want to associate to every element of the group (call it G) a trivalent node with named ports "1", "2" and "3", such that we can decorate the edges of a graph made of such nodes with "terms" which are somehow relevant with respect to the geometric nature of the group G (either S(3) as the anharmonic group, or S(4) as it appears in projective geometry). For the moment how we do that, or why we are doing it, or even what that has to do with lambda calculus are irrelevant questions.

There are two big possibilities further.

Either we work with graphs with undirected edges, but we restrict the patterns which trigger rewrites to pairs of nodes connected via the ports "3", i.e. to pairs of nodes linked by an edge which connects the port "3" of one node with the port "3" of the other. For example this is the case in Interaction Combinators. The advantage is that there will never be conflicting rewrites.

Or we work with graphs with directed edges, we attach orientations to ports (say port "1" is always "in", i.e. the orientation points to the node, port "3" is always "out" and port "2" may be "in" or "out", depending on the node type) and we ask that any edge connects a port which is "in" with a port which is "out". Then we look for patterns which trigger rewrites which are made by a pair of nodes linked by an edge which connects a port "3" with a port "1". This is like in chemlambda. The advantage here is the opposite: we allow conflicting rewrites.

So in either case we think about rewrites which have as LHS (left hand side) a pair of nodes, as explained, and as RHS (right hand side) either no node (actually two edges), like in the case of the beta rewrite, or 4 nodes. The pattern of 4 nodes should be like in Interaction Combinators (say like GAMMA-DELTA rewrites) or like the DIST rewrites in chemlambda.

If I take the second case, namely graphs with oriented edges, as explained, how many different rewrites are possible? More precisely, denote a node with a "+" if the port "2" is "in" and with a "-" if the port "2" is "out". Then, how many rewrites from 2 nodes to 4 nodes, in the pattern DIST, are possible?

Doing it by hand is dangerous, so we better do it with a program. We are looking for patterns of 4 nodes arranged like in DIST rewrites, each node being a "+" or a "-" node, with oriented edges, so that the 4 half edges (or external ports) have the same orientation as the corresponding ones of the LHS pattern made of 2 nodes ("+" or "-"). How many are they?

There are 4 cases, depending on the LHS nodes, which we denote by “+,+”, “-,-“, “+,-” and “-,+”.

I was surprised to learn from programs that there are 368=16X23 possible RHS patterns, divided into 80=16X5, 80=16X5, 112=16X7 and 96=16X6 patterns.

While I can understand the 16X part, the 5,5,7,6 are strange to me. The 16X appears from the fact that for a node "+" we can permute the "1","2" ports without changing the edge orientations, and similarly for a node "-" we can permute the "2","3" ports with the same (lack of) effect.
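For what it's worth, here is a js sketch of the basic admissibility check such a program relies on (an illustration with an assumed encoding, not the script which produced the counts above):

```javascript
// Port 1 is "in", port 3 is "out", port 2 is "in" for a "+" node and "out" for a "-" node.
function portDir(sign, port) {
  if (port === 1) return "in";
  if (port === 3) return "out";
  return sign === "+" ? "in" : "out";
}

// nodes: array of signs, e.g. ["+", "-", "+", "-"]
// edges: internal edges, e.g. [[0, 3, 1, 1]] meaning node 0 port 3 -- node 1 port 1
function validPattern(nodes, edges) {
  return edges.every(([n1, p1, n2, p2]) => {
    const d1 = portDir(nodes[n1], p1), d2 = portDir(nodes[n2], p2);
    return (d1 === "out" && d2 === "in") || (d1 === "in" && d2 === "out");
  });
}
// To reproduce the counts, loop over all sign assignments and all wirings of a
// DIST-shaped RHS, keeping those whose 4 external half-edges have the same
// orientations as the corresponding ones of the LHS.
```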

Now let’s get further. We look now at the nodes which are decorated with elements of the group G, or say the group G acts somehow on the set of node types which has as many elements as G. We look for rewrites from LHS to RHS such that the decoration of the external ports does not change during the rewrites. This imposes strong restrictions on the choice of nodes (and it means something geometrically as well).

Not any possible rewrite can be accepted though. For example we reject rewrites with the property that the RHS contains a copy of the LHS. So each rewrite has associated a collection of rewrites which are inhibited by it. This will place restrictions on the collections of rewrites which we may choose for our calculus.

We reject also rewrites whose RHS contain a copy of the LHS of a rewrite we like, like the beta rewrite.

We reject rewrites which have the RHS made such that the external ports will all be “2”, because such a pattern will last forever during further rewrites.

Still, when we look at what is left we see that we have about |G|X|G| LHS patterns and many more RHS choices, so there will still be many possible calculi.

We may look for symmetries, related to the action of G on the node types and we may ask: which is the most (or a most) symmetric collection and which will be a least symmetric (but maximal in number)?

Or we may say that the human mind is feeble and let the collections compete, by reducing a family of graphs. Take lambda calculus and some tests and let the possible collections compete. Take emergent algebras and some tests and do the same. Or even look for collections more fit for certain chosen tests from lambda calculus or others.

Who will win?

Kali: anharmonic lambda calculus

You can play with some examples of lambda terms (SKK, Y combinator, Omega combinator, Ackermann(2,2), some duplications of terms, lists, Church number multiplications). It is important to try several times, because the reduction algorithm uses randomness in an essential way! This justifies the "reload" button, the "start" button which does the reduction for you (randomly), the "step" button which chooses a random reduction step and shows it to you. Or you may even use the mouse to reduce the graphs.

It may look kind of like the other chemlambda reductions, but a bit different too, because the nodes are only apparently the usual ones (lambdas, applications, fanins and fanouts); in reality they are dilations, or homotheties, if you like, in a linear space.

I mean literally, that's what they are.

That is why the name: anharmonic lambda calculus. I show you lambda terms because you are interested into those, but as well I could show you emergent (actually em-convex) reductions which have apparently nothing to do with lambda calculus.

But they are the same.

Here is my usual example Ackermann(2,2); you'll notice that there are more colors than previously:

[image: kali24-ackermann-2-2]

The reason is that what you look at is called “kali24”, which for the moment uses 7 trivalent nodes, out of 24 possible from projective space considerations.

I will fiddle with it, probably I'll make a full 24 node version (of which lambda calculus alone would use only a part), there is still work to do, but I write all the time about the connections with geometry and what you look at does something very weird, with geometry.

Details will come. Relevant links:

  • kali24, the last version
  • kali, the initial version with 6 nodes, which almost works
  • em-convex, the dilations enhanced lambda calculus which can be also done with kali
  • and perhaps you would enjoy the pages to play and learn.

One more thing: when all the fiddling ends, the next goal would be to go to the first interesting noncommutative example, the Heisenberg group. Its geometry, as a noncommutative linear space (in the sense of emergent algebras, because in the usual sense it is not a linear space), is different but worthy of investigation. The same treatment can be applied to it and it would be interesting to see what kind of lambda calculus is implied, in particular. As this approach is a machine for producing calculi, I have no preference towards the outcome, what can it be? Probably not quite a known variant of lambda, quantum or noncommutative, because the generalization does not come from a traditional treatment [update: which generalizes from a too particular example].

What is the purpose of the project Hapax?

“hapax” means “only once” in ancient Greek. You may have seen it in the form hapax legomenon, quote: ” a word that occurs only once within a context, either in the written record of an entire language, in the works of an author, or in a single text”.

After a bit of research I found the correct, I hope, form that I  use for this project:

apaxcheon

It reads “hapax cheon” and it means, again in ancient Greek, “poured only once”.

Why this? Because, according to this wiki link, “the Greek word χυμεία khumeia originally meant “pouring together””.

The motivation of the project hapax comes from the realization that we only explored a tiny drop in the sea of possibilities. As an example, just look at lambda calculus, one of the two pillars of computation. Historically there are many reasons to consider lambda calculus something made in heaven, or a platonic ideal.

But there are 14400 = 5! X 5! alternatives to the iconic beta rewrite only. Is the original beta special or not?

By analogy with the world of CA, about a third of cellular automata are Turing universal. My guess is that a significant fraction of the alternatives to the beta rewrite are as useful as the original beta.

When we look at lambda calculus from this point of view, we discover that all the possible alternatives, not only of beta, but of the whole graph rewriting formalism, say in the form of chemlambda, amount to a huge number, like 10^30 in the most conservative estimates.

Same for interaction combinators. Same for knot theory. Same for differential calculus (here I use em).

I started to collect small graph rewrite systems which can be described with the same formalism.

The formalism is based on a formulation which uses exclusively permutations (for the “chemistry”  and Hamiltonian mechanics side) and a principle of dissipation which accounts for the probabilistic side.

The goal of the project hapax is to build custom worlds (physics and chemistry)

“poured only once”

which can be used to do universal computation in a truly private way. Because once the rules of computation are private, the whole process of computation becomes incomprehensible.

Or is it so? Maybe yes, maybe not. How can we know, without trying?

That is why I started to make the hapax stuff.

For the moment it is not much, only demos like this one, but the rest will pass from paper to programs, then we'll play.

 

Hapax chemlambda

Chemistry is a game with a pair of dice.

You roll two dice and act. The dice are permutohedra.

Which leads one to ask what certain chemistries (artificial or real) have that is so special. The conjecture is that (probabilistically speaking) a sizeable proportion of them are special.

For example, we can evade lambda calculus by choosing one of the 14400 rewrites (β with random right hand side patterns).

Hapax chemlambda!

Lambda calculus inspires experiments with chemlambda

In the salvaged collection of google+ animations (see how  the collection was deleted ) I told several stories about how lambda calculus can bring inspiration for experiments with chemlambda. I select for this post a sequence of such experiments. For previous related posts here see this tag and this post.

Let’s go directly to the visuals. (UPDATE: or see the story of the ouroboros here)

Already in chemlambda v1 I remarked the interesting behaviour of the graph (or molecule) which is obtained from the lambda term of the predecessor applied to a Church number.  With the deterministic greedy algorithm of reductions, after the first stages of reduction there is a repeating pattern of  reduction, (almost) up to the end. The predecessor applied to the Church number molecule looks almost like a closed loop made of pairs A-FO (because that’s how a Church number appears in chemlambda), except a small region which contains the graph of the predecessor, or what it becomes after few rewrites.

In chemlambda v2 we have two kinds of fanouts: FO and FOE. The end result of the reduction of the same molecule, under the same algorithm, is different: where in chemlambda v1 we had FO nodes (at the end of the reduction), now we have FOE nodes. Otherwise there's the same phenomenon.

Here it is, with black and white visuals

[image: pred]

Made by recording this live (js) demo.

1. What happens if we start not from the initial graph, but from the graph after a small number of rewrites? We just have to cut the "out" root of the initial graph, and some nodes from its neighbourhood, and glue back, so that we obtain a repeating pattern walking on a circular train track.

Here it is, this time with the random reduction algorithm:

[image: bigpred_train-opt]

I previously called this graph an “ouroboros”. Or a walker.

2. That is interesting, it looks like a creature (it can keep its "presence") which walks in a single direction in a 1-dimensional world. What could be the mechanism?

Penrose comes to mind, so in the next animation I also use a short demonstration from a movie by Penrose.

[image: bigpred_penrose-opt]

 

3. Let's go back to the lambda calculus side and recall that the algorithm for the translation of a lambda term to a chemlambda molecule is the same as the one from GLC, i.e. the one from Section 3 here. There is a freedom in this algorithm, namely that trees of FO nodes can be rewired as we wish. On one side this is normal for GLC and chemlambda v1, which have the CO-COMM and CO-ASSOC rewrites

[image: convention_3]

In chemlambda v2 we don't have these rewrites at all, which means that in principle two different molecules, obtained from the same lambda term, which differ only by the rewiring of the FO nodes, may reduce differently.

In our case it would be interesting to see if the same is true for the FOE nodes as well. For example, remark that the closed loop, excepting the walker, is made of a tree of FOE nodes, a very simple one. What happens if we perturb this tree, say by permuting some of the leaves of the tree, i.e. by rewiring the connections between FOE and A nodes?

[image: bigpred_train_perm-opt]

The “creature” survives and now it walks in a world which is no longer 1 dimensional.

Let’s play more: two permutations, this time let’s not glue the ends of the loop:

[image: bigpred_train_egg]

It looks like a signal transduction from the first glob to the second. Can we make it more visible, say by making invisible the old nodes and visible the new ones? Also let’s fade the links by making them very large and almost transparent.

[image: bigpred_train_egg_mist_blue]

Signal transduction! (recall that we don’t have a proof that indeed two molecules from the same lambda term, but with rewired FO trees reduce to the same molecule, actually this is false! and true only for a class of lambda terms. The math of this is both fascinating and somehow useless, unless we either use chemlambda in practice or we build chemlambda-like molecular computers.)

4.  Another way to rewire the tree of FOE nodes is to transform it into another tree with the same leaves.

[image: bigpred_tree-opt]

 

5. Wait, if we understand how exactly this works, then we realize that we don’t really need this topology, it should also work for topologies like generalized Petersen graphs, for example for a dodecahedron.

[image: dodecahedron_walker]

 

This is a walker creature which walks in a dodecahedral "world".

6. Can the creature eat? If we put something on its track, see if it eats it and if it modifies the track, while keeping its shape.

[image: walker_bit-opt]

So the creature seems to have a metabolism.

We can use this for remodeling the world of the creature. Look what happens after many passes of the creature:

[image: walker_bit_new]

 

7. What if we combine the "worlds" of two creatures, identical otherwise? Will they survive the encounter, will they interact or will they pass one through the other like solitons?

[image: bigpred_bif]

 

Well, they survive. Why?

8. What happens if we shorten the track of the walker, as much as possible? We obtain a graph with the following property: after one (or a finite given number of) step(s) of the greedy deterministic algorithm we obtain an isomorphic graph. A quine! A chemlambda quine.

At first, it looks like we obtained a 28 node quine. After some analysis we see that we can reduce this quine to a 20 node quine. A 20-quine.

Here is the first observation of the 20-quine under the random algorithm

[image: 20_quine_50steps]

According to this train of thought, a chemlambda quine is a graph which has a periodic evolution under the greedy deterministic algorithm, with the priority list of rewrites set so that DIST rewrites (which add nodes) have greater priority than beta and FI-FOE rewrites (which subtract nodes), and which does not have termination nodes (because these lead to some trivial quines).
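A js sketch of how such a periodic evolution can be detected (an illustration; `greedyStep` stands for the deterministic greedy reduction step and is not defined here, and the naive signature below only makes sense if the step function produces edge names canonically, otherwise a graph isomorphism test up to renaming of edges is needed):

```javascript
function findPeriod(mol, greedyStep, maxSteps) {
  // naive signature: sort the "^"-separated node entries of the mol text
  const signature = m => m.split("^").map(s => s.trim()).sort().join("^");
  const seen = new Map();                          // signature -> step index
  for (let i = 0; i < maxSteps; i++) {
    const sig = signature(mol);
    if (seen.has(sig)) return i - seen.get(sig);   // length of the period found
    seen.set(sig, i);
    mol = greedyStep(mol);
  }
  return null;                                     // no period detected within maxSteps
}
```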

These quines are interesting under the random reduction algorithm, which transforms them into mortal living creatures with a metabolism.

____________

So this is an example of how lambda calculus can inspire chemlambda experiments, as well as interesting mathematical questions.

Torsor rewrites

With the notation conventions from em-convex, there are 3 pairs of torsor rewrites. A torsor, figured by a fat circle here, is a term T of type T : E → E → E → E

with the rewrites:

[image: t1]

and

[image: t2]

Finally, there is a third pair of rewrites which involve terms of the form ∘A for A : N

[image: t3]

The rewrite T3-1 says that the torsor is a propagator for ∘A; the rewrite T3-2 is an apparently weird form of a DIST rewrite.

Now, the following happen:

  • if you add the torsor rewrites to em-convex then you get a theory of topological groups which have a usual, commutative smooth structure, such that the numbers from em-convex give the structure of 1-parameter groups
  • if you add the torsor rewrites to em, but without the convex rewrite, then you get a more general theory, which is not based on 1-parameter groups, because the numbers from em-convex give a more general structure
  • if you look at the emergent structure from em without convex, then you can define torsor terms which satisfy the axioms, but of course there is no em-convex axiom.

Lots of fun, this will be explained in em-torsor soon.

 

 

Summer report 2018, part 2

Continues from Summer report 2018, part 1.

On the evolution of the chemlambda project and social context. 

Stories about the molecular computer. The chemlambda project evolved in a highly unexpected way, from a scientific quest done completely in the open to a frantic exploration of a new territory. It became a story-generating machine. I was "in the zone" for almost two years. Instead of the initial goal of understanding the computational content of emergent algebras, the minimalistic chemlambda artificial chemistry concentrated on the molecular computer ideas.

This idea can be stated as: identify molecules and chemical reactions which work in the interaction nets rewrite style of chemlambda. See the article Chemlambda strings for a simple explanation, as well as a recent presentation of the newest (available) version of chemlambda: v3. (It is conservative in the numbers of nodes and links; the presentation is aimed at a larger audience.)

This idea is new. Indeed, there are many other efforts towards molecular computing. There is the old ALCHEMY (algorithmic chemistry), where lambda calculus serves as inspiration, by taking the application operation as a chemical reaction and lambda abstraction as reactive sites in a molecule. There is the field of DNA and RNA computing, where computations are embodied as molecular machines made of DNA or RNA building blocks. There is the pi calculus formalism, as pure in a sense as lambda calculus, based exclusively on communication channel names, which can be applied to chemistry. There is the idea of metabolic networks based on graph grammars.

But nowhere is there the idea to embed interaction net rewrites into real chemical reactions. So not arbitrary graph grammars, but a highly selected class. Not metabolic networks in general, but molecules designed so that they individually compute. Not solutions well stirred in a lab. Not static or barely dynamic lego-like molecules. Not boolean gate computing, but functional programming style computing.

From the side of CS, this is also new, because instead of concentrating on these rewrites as a tool for understanding lambda calculus reductions, we go far outside of the realm of lambda calculus terms, into a pure random calculus with graphs.

But it has to be tried, right? Somebody has to try to identify this chemistry. Somebody has to try to use the functional programming basic concepts from the point of view of the machine, not the programmer.

For the mathematical and computing aspects see this mathoverflow question and answers.

For the general idea of the molecular computer see these html/js slides. They’ve been prepared for a TED talk with a very weird, in my opinion, story.

For the story side and ethical concerns see for example these two short stories posted at telegra.ph : Internet of smells, Home remodeling (a reuse of Proust).

In order to advance there is the need to find either, or rather both, funding and brain time from a team dedicated to this. Otherwise this project is stalled.

I tried very hard to find the funding and I have not succeeded (other weird stories, maybe some day will tell them).

I was stalled and I had to go back to my initial purpose: emergent algebras. However, being so close to reverse engineering nature's OS gives new ideas.

After a year of efforts I understood that it all comes down to stochastic luck, which can be groomed and used (somehow). This brings me to the stories of the present, for another post.

 

Do triangulations of oriented surfaces compute?

In a precise sense, which I shall explain, they do. But the way they do it is hidden behind the fact that the rewrites seem non local.

  1. They compute, because ribbon graphs with colored, trivalent nodes and directed edges do compute, via the encoding of untyped lambda terms into this family of graphs, provided by chemlambda. Indeed, a chemlambda molecule is a ribbon graph with these properties. If you want to encode a lambda term into chemlambda then there is a simple procedure: start from the lambda term in a form which eliminates the need of any alpha conversion. Then build the syntactic tree and replace the nodes by A nodes for application and L nodes for lambda abstraction (don't forget that L nodes have one in and 2 out ports, differently from the syntactic tree node for lambda abstraction). Then eliminate the variables which are at the leaves by grafting trees of FO (green fanout) nodes from the lambda abstraction node to the places where the variables occur, or by grafting T (terminal) nodes to the lambda node which issues a variable which does not occur later, or simply by just erasing the variable label for those variables which are not issued from an abstraction. That's it, you get a ribbon graph which is open (it has at least the root half-edge and maybe the half-edges for the variables which don't come from an abstraction), but then you may add FRIN (free in) and FROUT (free out) nodes and think about them as tadpoles, and you get a trivalent ribbon graph. The dual of this graph is (equivalent to) a triangulated, oriented surface, which has faces colored (corresponding to the nodes of the graph), directed edges, such that there are no faces with the 3 edges directed in a cyclic way.
  2. How they compute? Chemlambda uses a set of graph rewrites which has some classic ones, like the Wadsworth-Lamping graphical version of the beta move, but it has two types of fanouts (FO and FOE), one FANIN, and different than usual rules for distributivity. Look at the moves page to see them. All these rewrites are local, in the sense that there is a small number, fixed a priori, which is an upper bound for the number of nodes and edges which enter (in any way) into the graph rewrite (as a condition or as the left pattern, or as the right pattern). The algorithm of application of the rewrites is a very important piece which is needed to make a model of computation. The algorithm is very simple, it can be deterministic or random, and consists, in the deterministic case, into the application of as many rewrites as possible, with a priority for the distributivity moves in case of conflict, and in the random case, it’s just random application of rewrites.

Here is an example, where I play with the reduction of false omega id in chemlambda

  1. Now let's pass to the duals, the triangulated surfaces. The nodes of the triangulated surface correspond to the faces of the ribbon graph. Now, the faces of the ribbon graph are global notions, because they are the orbits of a permutation. After one of the rewrites, the faces (of the ribbon graph) change in a way which has to be non local, because one has to compute again the orbits of the permutation for the new graph, and there is no upper bound on the number of half-edges which have to be visited for doing that (see the sketch after this list).
  2. So triangulated, oriented surfaces do compute, but the rewrites and the algorithm of application are hidden behind this duality. They are non-local for triangulated surfaces, but local for ribbon graphs.
  3. Finally, a word of attention: these surfaces do compute not by being arrows in a category. They don't compute in this usual, say Turaev kind of way. They compute by (the duals of) the rewrites; there is nothing other than triangulated surfaces, colored by 3 colors (red, green, yellow), there is no decoration which actually does the computation by substitution and evaluation. I don't know why, but this seems very hard for many to understand. Really, these surfaces compute by rewrites on the triangulations, not by anything else.
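To make the non-locality point concrete, here is a js sketch of the face computation (an illustration with an assumed encoding: a ribbon graph on half-edges 0..n-1 is given by an involution `alpha`, saying which half-edge is glued to which, and a permutation `sigma`, giving the next half-edge around the same node; the faces are then the orbits of sigma∘alpha):

```javascript
function faces(alpha, sigma) {
  const n = alpha.length, visited = new Array(n).fill(false), result = [];
  for (let start = 0; start < n; start++) {
    if (visited[start]) continue;
    const face = [];
    let h = start;
    do {                          // follow sigma(alpha(h)) until the orbit closes
      visited[h] = true;
      face.push(h);
      h = sigma[alpha[h]];
    } while (h !== start);
    result.push(face);
  }
  return result;
}
// Example: one edge between two 1-valent nodes, alpha = [1,0], sigma = [0,1],
// gives a single face [0,1]. Finding the orbit of one half-edge may require
// visiting arbitrarily many others, which is why this step is not local.
```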

ADDED: If you look at the tadpoles as pinches, then make the easy effort to see what the SKI formalism looks like, you'll see funny things. The I combinator is the sphere with one pinch (the plane), the K combinator is the sphere with two pinches (cylinder) and the S combinator is the torus with one pinch. But what is SKK=I? What is KAB=A? What you see in the dual (i.e. in the triangulation) depends globally on the whole term, so these reductions do not appear to be the same topological manipulations in different contexts.