Tag Archives: emergent algebra

Anharmonic lambda calculus (II)

Some news after the announcement of kali, or anharmonic lambda calculus. I use again two pages, which are updated in tandem:

 

At the moment I write this post the github.io version is more recent. I think the termination rewrites are better than before.

There is still some work on the exact choice of rewrites, among the many possible ones which are compatible with the underlying geometric structure. But you can see, by looking at kali24.js, that there is some connection between the nodes and the anharmonic group.
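As a reminder of the background (this is standard projective geometry, not the contents of kali24.js): the anharmonic group consists of the six Möbius transformations of the cross-ratio, it is closed under composition and isomorphic to S3. A minimal numerical sketch:

// the six anharmonic transformations of lambda
const anharmonic = {
  id:       x => x,
  inv:      x => 1 / x,         // lambda -> 1/lambda
  oneMinus: x => 1 - x,         // lambda -> 1 - lambda
  a:        x => 1 / (1 - x),   // lambda -> 1/(1 - lambda)
  b:        x => (x - 1) / x,   // lambda -> (lambda - 1)/lambda
  c:        x => x / (x - 1)    // lambda -> lambda/(lambda - 1)
};

// closure check at a sample value: composing any two of the six maps
// gives a value reached by a single map from the same list
const names = Object.keys(anharmonic);
const lambda = 0.3;
const close = (u, v) => Math.abs(u - v) < 1e-9;
for (const f of names) {
  for (const g of names) {
    const composed = anharmonic[f](anharmonic[g](lambda));
    const h = names.find(n => close(anharmonic[n](lambda), composed));
    console.log(f + ' o ' + g + ' = ' + h);
  }
}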

All this will be explained when I arrive at the most satisfying version.

I added some more examples, this time not from lambda calculus, to the lambda calculus ones. Among them the much discussed 10-node quine and also the most amazing molecule I have discovered so far. It appears in the menu as “10 nodes sometimes becomes quine from [graph A-L-FI-FOE 540213]”; please do reload it often and let it unfold. For an archived video see this one. It is a graph which basically shatters the notion that a quine is enough, conceptually, to describe life. More simply put, it can evolve in so many ways, among them in a chemlambda quine way, but not uniquely. Amazing.

You can see it also on the menu of the find a quine page. There the graphs look more compact and you can see more of the overall structure, but less of the detailed linking of nodes.

Coming back to kali24, the chunk which I need to explain first is what it has to do with em-convex. That is an enriched lambda calculus which describes my work on emergent algebras. It ends with the proof that, in the presence of an axiom called “convex”, we end up with usual vector spaces over the terms of type N (in the paper), and that the terms of type N have an em calculus themselves, which is a way of saying that we recover on them a structure which is like the Riemann sphere.

What is not proved are two things: (1) is there a graphical rewrite system which can reproduce the proofs of em-convex, under the algorithm of rewrites used with chemlambda? (2) can we dispense with the lambda calculus part (the abstraction and application operations) and redo everything with the pure em calculus only?

Back to earth now, just for fun: don’t you think that a geometrical model of lambda calculus on the Riemann sphere would be nice?

Divination table for anharmonic lambda

UPDATE: Second version, better: kali24.

UPDATE: first version to play with here. It needs fiddling though: the termination rewrites are not yet clear and it may need to pass to the full projective 24-node version, but I hope not. Anyway, it is surprisingly fit, given that the background has absolutely nothing to do with lambda calculus.

The ambition is to use it for anharmonic chemlambda, but the IC style seems simpler.

I stare at this to generate the right choice of rewrites

[figure: zhexa]

and there is much to say, but maybe you can figure it out from some links:

anharmonic group

em

shuffle trick

Torsor rewrites

With the notation conventions from em-convex, there are 3 pairs of torsor rewrites. A torsor, figured by a fat circle here, is a term T of type E \rightarrow E \rightarrow E \rightarrow E

with the rewrites:

[figure: t1]

and

[figure: t2]

Finally, there is a third pair of rewrites which involves terms of the form \circ A, for A: N

[figure: t3]

The rewrite T3-1 says that the torsor is a propagator for \circ A; the rewrite T3-2 is an apparently weird form of a DIST rewrite.
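For orientation only, and as my guess at what the drawn rewrites encode rather than a statement taken from the figures: the standard algebraic axioms of a torsor (heap) operation T are

T(a,a,b) = b, \qquad T(a,b,b) = a, \qquad T(T(a,b,c),d,e) = T(a,b,T(c,d,e)),

and in a group one model is T(a,b,c) = a b^{-1} c.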

Now, the following things happen:

  • if you add the torsor rewrites to em-convex then you get a theory of topological groups which have a usual, commutative smooth structure, such that the numbers from em-convex give the structure of 1-parameter groups
  • if you add the torsor rewrites to em, but without the convex rewrite, then you get a more general theory, which is not based on 1-parameter groups, because the numbers from em-convex carry a more general structure
  • if you look at the emergent structure from em without convex, then you can define torsor terms which satisfy the axioms, but of course there is no em-convex axiom.

Lots of fun, this will be explained in em-torsor soon.

 

 

Chemlambda task got bigger, explanation

I learned how to reproduce the type inference system of the Calculus of Constructions in a (slight modification of) chemlambda. This comes with the previous caveats concerning untyped lambda beta calculus and chemlambda:

  • (a) for any term (or type) there is a chemlambda molecule, but not every chemlambda molecule has a correspondent in CoC,
  • (b) the inference rules are implemented as (compositions of) graph rewrites, but not all graph rewrites have a correspondent in the inference rules,
  • (c) the reduction strategy (I think corresponding to the “tactics” from COQ) is an ingredient which has to be added in order to obtain a full model of computation,
  • (d) all graph rewrites are local (i.e. there is an a priori bound on the number of nodes and arrows of the left and right patterns of graph rewrites, as well as on the number of nodes and arrows needed to be taken into account in a condition, in case the move has the form: if condition then replace left pattern by right pattern),
  • (e) there is no correspondence between the reduction in chemlambda and reduction in CoC (in the sense that the reduction in chemlambda may pass by molecules which do not correspond to terms in CoC).

Some time is needed to eliminate possible bugs, but the task has become even bigger: a purely local version of COQ, based exclusively on graph rewrites, with “tactics” that never use passing of values or names.
I have kept a distance from types, previously, because they seemed to me an added ingredient, not really needed, to the purity of a geometrical version of untyped lambda beta calculus. But slowly I understood that the initial try to mix emergent algebras (i.e. space as program) with lambda calculus, from http://arxiv.org/abs/1205.0139 , called “lambda-scale”, is a timid and a bit skewed attempt at a type system.
Added over this is the fact that it’s hard, at least for me, to discern in the big literature on the subject what holds for lambda beta but not for any other “superior” version, because almost everywhere, if you don’t read carefully, extensionality is mixed into the pot by default, like in all the famously well known category theory treatments of the matter.
Now, I have been forced from the beginning to exclude extensionality, because it appears as a non-local graph rewrite. That’s it; I did not make it so on purpose, it turns out to be so. Without extensionality, no functions, so no functional style. Without terms and variable passing there is no evaluation, no call, lazy, by need, or otherwise.

There is something else happening here. I looked and found a (surprising) kind of historical precedent for this situation. At the beginning of chemistry, or even before its rigorous establishment, there was a long and well regarded functional period, when chemicals were identified with their function. (In popular thinking even now we speak or hear about the soothing quality of this herbal tea, say.) This was later called “vitalism”, and was rejected on the grounds that it lacked explanatory power.

So I asked: regardless of the fashion of the moment, if you think that untyped lambda beta is Turing universal, then why use extensionality (or types, btw)? What does a geometrized (and enlarged, in the sense of (a, b, c, e) from the beginning of this post) version of untyped lambda beta give?

At the surface, if one takes a lazy look at my drawings from a year ago, it looks like known facts mixed with crazy statements. But if you care to look better and read it, then you see that this geometrical view is quite different, because it basically has no names, for the previously mentioned reasons.

Now the task is even bigger, so I have to settle for picking some proof-of-principle bits, which can be attained in finite time by myself.

I tried to go a bit into programming; of course it works, the limit being only my procrastination in front of so many possible choices.

Not quite: even with the small attempts contained in the chemlambda gui, a patchwork of awk, sh and js scripts, I am drowning in data.

The field is huge, and it will probably get bigger.

______________________________________________________________

More detailed argument for the nature of space built from artificial chemistry

There are some things which were left unclear. For example, I have never suggested to use networks of computers as a substitute for space, with computers as nodes, etc. This is one of those ideas which are too trivial. In the GLC actors article a different thing is proposed.

First, to associate to an initial partition of the graph (molecule) another graph, with nodes being the partition pieces (thus each node, called an actor, holds a piece of the graph) and edges being those edges of the original whole molecule which link nodes from different partition pieces. This is the actors diagram.
Then, to interpret the existence of an edge between two actor nodes as a replacement for a spatial statement like (these two actors are close). Then, to remark that the partition can be made such that the edges from the actor diagram correspond to active edges of the original graph (an active edge is one which connects two nodes of the molecule which form a left pattern), so that a graph rewrite applied to a left pattern consisting of a pair of nodes, each in a different actor part, produces not only a change of the state of each actor (i.e. a change of the piece of the graph which is held by each actor), but also a change of the actor diagram itself. Thus, this very simple mechanism produces by graph rewrites two effects (a minimal sketch follows below):

  • “chemical”, where two molecules (i.e. the states of two actors) enter in reaction “when they are close” and produce two other molecules (the result of the graph rewrite as seen on the two pieces held by the actors), and
  • “spatial”, where the two molecules, after the chemical interaction, change their spatial relation with the neighboring molecules, because the actors diagram itself has changed.

This was the proposal from the GLC actors article.
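A minimal sketch of this bookkeeping, with hypothetical data shapes (it is an illustration, not the GLC actors implementation): a molecule is a list of edges between node ids, a partition maps each node id to an actor id, the actor diagram has an edge whenever a molecule edge crosses the partition, and a rewrite on a crossing (active) edge changes both the pieces held by the actors and the diagram.

// a molecule as a list of edges [nodeId, nodeId]; partition: nodeId -> actorId
function actorsDiagram(moleculeEdges, partition) {
  const diagram = new Set();
  for (const [n1, n2] of moleculeEdges) {
    const a1 = partition[n1], a2 = partition[n2];
    if (a1 !== a2) diagram.add([a1, a2].sort().join('-')); // "these two actors are close"
  }
  return diagram;
}

// a rewrite on an active edge (a crossing edge whose two nodes form a left pattern)
// rewires edges of the molecule, so it changes the pieces held by the two actors
// (the "chemical" effect) AND the actor diagram itself (the "spatial" effect)
function rewriteAcross(moleculeEdges, partition, activeEdge, newEdges) {
  const molecule = moleculeEdges.filter(e => e !== activeEdge).concat(newEdges);
  return { molecule, diagram: actorsDiagram(molecule, partition) };
}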

Now, the first remark is that this explanation has a global side, namely that we look at a global big molecule which is partitioned, but obviously there is no global state of the system, if we think that each actor resides in a computer and each edge of the actor diagram describes the fact that each actor knows the mail address of the other, which is used as a port name. But for explanatory purposes this is OK, with the condition to know well what to expect from this kind of computation: nothing more than the state of a finite number of actors, say up to 10, known in advance, an a priori bound, as is usual in the local-global philosophy which is used here.

The second remark is that this mechanism is of course only a very simplistic version of what should be the right mechanism. And here enter the emergent algebras, i.e. the abstract nonsense formalism with trees and nodes and graph rewrites which I found while trying to understand sub-riemannian geometry (and noticing that it does not apply only to the sub-riemannian case, but seems to be something more general, of a computational nature, but which computation, etc.). The closeness, i.e. the neighbourhood relations themselves, are a global, a posteriori view, a static view of the space.

In the Quick and dirty argument for space from chemlambda I propose the following. Because chemlambda is universal, for any program there is a molecule such that the reductions of this molecule simulate the execution of the program. Now, think about the chemlambda gui, and suppose even that I have as much computational power as needed. The gui has two sides: one which processes mol files and outputs mol files of reduced molecules, and the other (based on d3.js) which visualizes each step. “Visualizes” means that there is a physics simulation of the molecule graphs as particles with bonds which move in the space or plane of the screen. Imagine that with enough computing power and time we can visualize things in as much detail as we need, of course according to some physics principles which are implemented in the visualization program. Take now a molecule (i.e. a mol file) and run the program with its two sides, reduction/visualization. Then, because of chemlambda universality, we know that there exists another molecule which admits chemlambda reductions which simulate the reductions of the first molecule AND the running of the visualization program.

So there is no need to have a spatial side different from the chemical side!

But of course, this is an argument which shows something which can be done in principle but maybe is not feasible in practice.

That is why I propose to concentrate a bit on the pure spatial part. Let’s do a simple thought experiment: take a system with a finite number of degrees of freedom, see its state as a point in a space (typically a symplectic manifold) and its evolution described by a 1st order equation. Then discretize this correctly (w.r.t. the symplectic structure) and you get a recipe which describes the evolution of the system, and which has roughly the following form (a toy sketch follows the list):

  • starting from an initial position (i.e. state), interpret each step as a computation of the new position based on a given algorithm (the equation of evolution), which is always an algebraic expression giving the new position as a function of the older one,
  • throw out the initial position and keep only the algorithm for passing from a position to the next,
  • use the same treatment as in chemlambda or GLC, where all the variables are eliminated, renouncing in this way any reference to coordinates, points of the manifold, etc.,
  • remark that the algebraic expressions which are used always consist of affine (or projective) combinations of points (and notice that the combinations themselves can be expressed as trees or other graphs made of dilation nodes, as in the emergent algebras formalism),
  • indeed, that is because the differential operators of the evolution equation are always limits of conjugations of dilations, because the algebraic structure of the space is also described as a limit of combinations of dilations (notice that I speak about the vector addition operation and its properties, like associativity, etc., not about the points in the space), and finally because of an a priori assumption that functions like the hamiltonian are computable themselves.
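To make the “step as a combination of dilation nodes” point concrete, here is a toy sketch of my own (an illustration, not the recipe from the quoted sources): in a vector space the dilation of coefficient eps based at x sends y to x + eps*(y - x), every affine combination of two points is one such dilation, longer affine combinations are trees of dilations, and one step of a simple discretized flow is exactly one dilation applied to the previous position.

// dilation of coefficient eps based at the point x, applied to y (linear model)
const dilation = (x, y, eps) => x.map((xi, i) => xi + eps * (y[i] - xi));

// an affine combination w1*x1 + w2*x2 + w3*x3 (with w1+w2+w3 = 1) as a tree of two dilations
const combine3 = (x1, x2, x3, w2, w3) =>
  dilation(x1, dilation(x2, x3, w3 / (w2 + w3)), w2 + w3);

// one step of a toy discretized flow: move a fraction eps from q toward the point f(q);
// the new position is a dilation of the old one, no coordinates are ever named,
// only the points q, f(q) and the coefficient eps
const step = (q, f, eps) => dilation(q, f(q), eps);

// usage: flow toward a fixed attractor c
const c = [1, 2];
let q = [0, 0];
for (let n = 0; n < 5; n++) q = step(q, _ => c, 0.1);
console.log(q); // approaches c geometrically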

This recipe itself is like a chemlambda molecule, but consisting not only of A, L, FI, FO, FOE nodes but also of some (two, perhaps) dilation nodes, with moves, i.e. graph rewrites, which allow passing from one step to another. The symplectic structure itself is only a shadow of a Heisenberg group structure, i.e. of a contact structure of a circle bundle over the symplectic manifold, as geometric prequantization proposes (but this is a mathematical fact which is, in itself, independent of any interpretation or speculation). I know what is to be added (i.e. which graph rewrites particularize this structure among all possible ones), because it connects precisely to sub-riemannian geometry. You may want to browse the old series on Gromov-Hausdorff distances and the Heisenberg group part 0, part I, part II, part III, or to start from the other end with The graphical moves of projective conical spaces (II).

Hence my proposal, which consists in thinking about space properties as embodied into graph rewriting systems, inspired from the abstract nonsense of emergent algebras, combining the pure computational side of A, L, etc. with the spatial computational side of dilation nodes into one whole.

In this sense space as an absolute or relative vessel does not exist any more than the Marius creature does (what does exist is a twirl of atoms, some go in, some out, but it is too complex to understand by my human brain); instead, the fact that all beings and inanimate objects seem to agree collectively when it comes to moving spatially is in reality a manifestation of the universality of this graph rewrite system.

Finally, I’ll go to the main point, which is that I don’t believe it is that simple. It may be, but it may as well be something which only contains these ideas as a small part, the tip of the nose of a monumental statue. What I believe is that it is possible to make the argument, by example, that it is possible that nature works like this. I mean that chemlambda shows that there exists a formalism which can do this, albeit perhaps in a very primitive way.

The second belief I have is that, regardless of whether nature functions like this or not, at least chemlambda is a proof of principle that it is possible that brains process spatial information in this chemical way.

__________________________________________________________________

How space is born (0)

This opens a new series of posts, which will turn us back to the “computing with space” theme, the main interest here at chorasimilarity.

Look again at the move R2 of graphic lambda calculus.

 

[figure: r2move]

The epsilon and mu are, originally, elements of a commutative group. Suggestions have been made repeatedly that the commutative group can be anything.

The epsilon and mu are port names, just like the red 1, 2, 3 from the figure.

In the more recent drawing conventions (not that that matters for the formalism) the port names are in blue.

Here is again the same move, but without the epsilon and mu.

[figure: r2_newm]

Of course, the green node is the fanout, in the chemlambda version.

Yes, eventually, everything is related to everything in this open notebook.

In the next posts I shall take it step by step.

________________________________________________________________

How not to do the beta move: use emergent algebra instead

In the frame of chemlambda and g-patterns, here is how not to do the beta move. We pass from chemlambda to a slightly enlarged version, see the graphical formalism of projective conical spaces, which would correspond to an only-local-moves version of the whole GLC, with the emergent algebra nodes and moves.

Then we do emergent algebra moves instead.

Look, instead of the beta move (see here all moves with g-patterns)

L[a,d,k] A[k,b,c]

<–BETA–>

Arrow[a,c] Arrow[b,d]

let’s do, for an arbitrary epsilon, the epsilon beta move

[figure: not_beta_1]

Remark that I don’t do the beta move, really. In g-patterns the epsilon beta move does not replace the LEFT pattern by another, it only ADDS TO IT.

L[a,d,k] A[k,b,c]

— epsilon BETA –>

FO[a,e,f]  FO[b,g,h]

L[f,i,k] A[k,h,j]

epsilon[g,i,d] epsilon[e,j,c]

Here, of course,  epsilon[g,i,d] is the new graphical element corresponding to a dilation node of coefficient epsilon.
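A minimal sketch of these two moves as rewrites on g-patterns (the encoding below is my own illustration, not the author’s scripts): a g-pattern is a list of graphical elements with port names, the regular BETA replaces the left pattern, while the epsilon BETA keeps the L–A (beta redex) pattern inside the result and only adds nodes around it.

// g-pattern elements as {type, ports}; e.g. L[a,d,k] is el('L', 'a', 'd', 'k')
const el = (type, ...ports) => ({ type, ports });
const samePorts = (x, ports) => x.ports.join() === ports.join();

// regular BETA:  L[a,d,k] A[k,b,c]  -->  Arrow[a,c] Arrow[b,d]
function beta(pattern, [a, d, k], [, b, c]) {
  return pattern
    .filter(x => !(x.type === 'L' && samePorts(x, [a, d, k])) &&
                 !(x.type === 'A' && samePorts(x, [k, b, c])))
    .concat([el('Arrow', a, c), el('Arrow', b, d)]);
}

// epsilon BETA:  L[a,d,k] A[k,b,c]  -->
//   FO[a,e,f] FO[b,g,h] L[f,i,k] A[k,h,j] epsilon[g,i,d] epsilon[e,j,c]
// the beta redex (an L node linked to an A node through the port k) is still present
// in the result; e,f,g,h,i,j are fresh port names, epsilon[...] are dilation nodes
function epsilonBeta(pattern, [a, d, k], [, b, c], [e, f, g, h, i, j]) {
  return pattern
    .filter(x => !(x.type === 'L' && samePorts(x, [a, d, k])) &&
                 !(x.type === 'A' && samePorts(x, [k, b, c])))
    .concat([
      el('FO', a, e, f), el('FO', b, g, h),
      el('L', f, i, k), el('A', k, h, j),
      el('epsilon', g, i, d), el('epsilon', e, j, c)
    ]);
}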

Now, when epsilon=1, we may apply only the ext2 move and LOC pruning (i.e. emergent algebra moves)

[figure: not_beta_3]

and we get back the original g-pattern.

But if epsilon goes to 0 then, only by emergent algebra moves:

[figure: not_beta_2]

that’s it, the BETA MOVE is performed!

What is the status of the first reduction from the figure? Hm, in the figure there appears a node which has a “0” as decoration. I should have written instead a limit when epsilon goes to 0… For the meaning of the node with epsilon=0 see the post Towards qubits: graphic lambda calculus over conical groups and the barycentric move. However, I don’t take the barycentric move BAR, here, as being among the allowed moves. Also, I wrote “epsilon goes to 0”, not “epsilon=0”.

__________________________________________________________

epsilon can be a complex number…

__________________________________________________________

Questions/exercises:

  • the beta move pattern is still present after the epsilon BETA move; what happens if we continue with another, say a mu BETA move, for an arbitrary mu?
  • what happens if we do a reverse regular BETA move after an epsilon beta move?
  • why consider “epsilon goes to 0” instead of “epsilon = 0”?
  • can we do the same for other moves, like DIST, for example?

 

__________________________________________________________