All posts by chorasimilarity

Mathematics is everywhere! Open to collaborations.

Y again: conclusion!

In the post Y again: conflict! I took the most obvious reduction strategy (sequential) and applied it to the reduction of the “Y combinator applied to something”.  The reduction ended in conflict between two moves which compete for the same g-pattern.

Then, in the post Y again: compete!  I took in parallel the two possible outcomes of the conflict. The contenders were cast as fast-shooting cowboys, offering a show.

Surprisingly, both possible paths of reduction ended in a very simple version of the Y combinator.

Except that the very simple version is not one coming from lambda calculus!

Indeed, let’s recall what the Y combinator is, seen as a g-pattern in chemlambda.

In lambda calculus the Y combinator is

Lx.( (Ly.(x(yy))) (Ly.(x(yy))) )

As a molecule, it looks like this.

 

yagain_2

As g-pattern, it looks like this (see this post  and this post for the conversion of lambda terms into g-patterns):

L[a,x,o] A[b,c,a] FO[x,y,z]

L[e,d,b] FO[d,f,g] A[f,g,h] A[y,h,e]

L[j,i,c] FO[i,l,m] A[l,m,k] A[z,k,j]
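Since a g-pattern is just a list of graphical elements, it can be held in code directly. Here is a minimal Python sketch (the list-of-tuples encoding and the function name are my own conveniences, not part of chemlambda) which checks that the Y combinator g-pattern above has a single free port, the out port “o”:

```python
from collections import Counter

# The Y combinator g-pattern, encoded as a list of (node, ports) pairs.
Y = [("L", ["a", "x", "o"]), ("A", ["b", "c", "a"]), ("FO", ["x", "y", "z"]),
     ("L", ["e", "d", "b"]), ("FO", ["d", "f", "g"]), ("A", ["f", "g", "h"]),
     ("A", ["y", "h", "e"]),
     ("L", ["j", "i", "c"]), ("FO", ["i", "l", "m"]), ("A", ["l", "m", "k"]),
     ("A", ["z", "k", "j"])]

def free_ports(pattern):
    # A port variable occurring twice is an internal edge;
    # one occurring exactly once is a free port of the whole molecule.
    counts = Counter(p for _, ports in pattern for p in ports)
    return sorted(p for p, n in counts.items() if n == 1)

print(free_ports(Y))  # ['o'] : the single free out port
```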

 

Applied to something means we add to this g-pattern  the following:

A[o,p,u]

with the meaning that Y applies to whatever links to the port “p”. (But mind that in chemlambda there is no variable or term passing, nor evaluation! So this is only a manner of speaking borrowed from the lambda calculus realm.)

The two mentioned posts about Y again led to the conclusion that the g-pattern “Y applied to something” behaves (eventually, after several reductions) as the far more simple g-pattern:

A[o,p,u]                (i.e. “applied to something” at port “p”)

L[b,a,o]

FO[a,c,d] A[c,d,b]

Now, this means that the Y combinator g-pattern may be safely replaced in computations by

L[b,a,o]

FO[a,c,d] A[c,d,b]

or, in graphical version, by

 

yagain_5

But this is outside lambda calculus.

So what?

It is far simpler than the Y combinator from lambda calculus.

The same happens with other lambda terms and reductions (see for example the post Actors for the Ackermann machine. Incidentally, the analysis of the Ackermann machine, i.e. the graph which behaves like the Ackermann function, gave me the idea of using the actor model with GLC. This evolved into arXiv:1312.4333.)

This shows that chemlambda, even with the dumbest sequential reduction strategy (OK, enhanced in obvious ways so that it solves conflicts), can do more with less fuss than lambda calculus.

Looking around the net (recall that I’m a mathematician, so excuse my ignorance of well-known CS people and results; I’m working on it), I can’t help but wonder what chemlambda would give in relation with, for example:

Of course, the dream is to go much, much further. Why? Because of  the List of Ayes/Noes of the artificial chemistry chemlambda.

__________________________________________________________

 

Y again: compete!

In the post Y again: conflict!  the chemlambda computation based on a very crude model ended in conflict.

Conflict means that the same graphical element appears in two LEFT g-patterns (see, in the series of expository posts, part II for the g-patterns and part III for the moves).

In the next figure we see this conflict, in the upper part (that’s where we were left in the previous post), followed by a fork: in the lower left part of the figure we see what we get if we apply the beta move and in the lower right part we see what happens if we apply the DIST move.

 

yagain_3

 

Recall that (or look again at the upper side of the picture) the conflict was between LEFT patterns of a beta move and of a DIST move.

I rearranged the drawing of the g-patterns a bit (mind that this does not affect the graphs in any way, because the drawings on paper or screen are one thing and the graphs another; excuse me for being trivially obvious). In this pretty picture we see clearly that the Y gun has already shot a second pair of A and FO nodes.

The differences now:

  • for the graph from the lower left part of the picture, obtained after the beta move, we see that A[i,i] produces a loop: A[i,i] –COMB–> loop. We can disregard the loop from now on (because we allow only moves from LEFT to RIGHT, so the loop will not interact with anything in the future of the computation). We are left with a very simple graph, made of the two pairs of A and FO nodes which the gun already shot and with a very interesting pair:  A[w,a1,o] FO[o,q,a1] .
  • for the graph from the lower right part of the picture, obtained after the DIST move, we see the two pairs of A and FO nodes which the Y gun already shot and a graph made of 6 nodes.

Which way is the best? What to do?

Let’s make them compete!  Who shoots faster?

 

Imagine the scene: in the Lambda Saloon, somewhere in the Wide Wild West, enter two new cowboys.

“Who called these guys?” asks the sheriff of the Functional City, where the reputed saloon resides.

“They are strangers! They both look like Lucky Luke, though…”

The cowboys and cowgirls from the saloon nod in approval: they all remember what happened when Lucky Luke — the one, the single cowboy who shoots faster than his shadow — was put between the two parallel mirrors from the saloon stage. What a show! That day, the Functional City got a reputation, and everybody knows that reputation is as good as gold. Let the merchants from Imperative County sell window panes for the whole US; nobody messes with a cowboy, or cowgirl, from the Functional City. Small, but elegant. Yes sir, style is the right word…

Let’s go back to the two strangers.

“I’m faster than Master Y” says the stranger from the right.

“I’m MUCH faster than Master Y” says the one from the left, releasing from his cigar a loop of smoke.

“Who the … is Master Y?” asks the sheriff.

“Why, it’s Lucky Luke. He trained us both, then sent us to Functional City. He says hello to you and tells you that he got new tricks to show” says one of the strangers.

“… things not learned from church…” says the other.

“I need to see how fast you are, or else I call you both liars” shouts one of the bearded, long haired cowboys.

The stranger from the right started first. What a show!

 

yagain_right

He first makes a clever DIST move (not that there was anything else to do). Then he is presented with 4 simultaneous moves to do (FAN-IN, 2 betas and a DIST). He shoots and freezes. Nothing moves, excepting two loops of smoke, rising from his gun.

“I could continue like this forever, but I stopped, to let my colleague show you what he’s good at”, said the stranger from the right.

“Anyway, I am a bit slow with the first shot, but after that I am faster”, he continued.

“Wait, said the sheriff, you sure you really shot?”

“Yep, sure, look better at how I stand”, said the stranger from the right, only slightly modifying his posture, so that everybody could clearly see the shot:

 

yagain_right_2

“Wow, true!” said the cowboys.

“I’m fast from the first shot!” said the stranger from the left. “Look!”

 

yagain_left

“I only did a DIST move”, said the stranger from the left, freezing in his posture.

“Nice show, guys! Hey, look, I can’t tell now which is which, they look the same. I got it: the guy from the right is a bit slower at the first shot (however he is dazzlingly fast) but then he is as good as his fellow from the left.”

“Hm, said the sheriff, true. Only one thing: I have never seen in the Lambda Saloon  anything like this. It’s fast, but it does not seem to belong here.”

___________________________________________________________

Y again: conflict!

We proved in arXiv:1403.8046 [cs.AI] (link to the published article) that in chemlambda the molecule which represents the Y combinator is a gun which shoots pairs of FO and A nodes. See there the sequence of reductions which was considered.

This time  let’s not care about staying in lambda calculus and let’s take the simplest reduction strategy, to see what happens.

We work in the frame of g-patterns from the expository posts: part I and part II (definition of g-patterns), part III (definition of moves), part IV (g-patterns for lambda terms, 1st part), part V (g-patterns for lambda terms, 2nd part), part VI (about self-multiplication of g-patterns) and part VII (an example of reduction).

We take the following reduction strategy:

  • we start with a g-pattern and we reduce it sequentially
  • at each step we identify first all the LEFT patterns from the following moves: beta, DIST, FAN-IN,
  • we do the moves from LEFT to RIGHT
  • we identify and do all possible COMB moves from LEFT to RIGHT
  • repeat until no move available OR until conflict.

What’s conflict? We shall see one.

Mind that this is a very stupid and simplistic strategy, which does not guarantee that if we start with a g-pattern which represents a lambda term then we end with a g-pattern of a lambda term.

It does have its advantages, though.

OK, so let us start with the g-pattern of Y applied to something. 

In general, with g-patterns we can say many things. Like any combinator molecule, when represented by a g-pattern, the Y combinator has only one free port; let’s call it “b”. Thus Y appears as a g-pattern which we denote by Y[b].

Suppose we want to start the reduction from Y applied to something. This means that we shall start with the g-pattern

A[b,a,out] Y[b]

OK!

Look what happens when we apply the mentioned strategy.

(Advice: it’s a big picture; click on it to see it clearly and to travel along it.)

 

yagain_1

Here is a conflict: at one step we have two LEFT patterns, in this case

L[o,p,i] A[i,p,v] , which is good for a beta move

and

A[i,p,v] FO[v,q,a1] , which is good for a DIST move.

The patterns contain a common graphical element, in this case A[i,p,v], which will be deleted during the respective moves.
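This kind of conflict is easy to detect mechanically. Here is a minimal Python sketch (the list-of-tuples encoding is mine; it only looks for the three LEFT patterns named in the strategy) which finds both candidate moves on the g-pattern above and reports that they overlap:

```python
from itertools import combinations

def left_patterns(elems):
    # Collect (move name, indices of the two elements) for every LEFT pattern
    # of beta, DIST and FAN-IN: in each case the out port (3rd port) of the
    # first node is wired to the in port (1st port) of the second node.
    found = []
    for i, (ki, pi) in enumerate(elems):
        for j, (kj, pj) in enumerate(elems):
            if i != j and len(pi) > 2 and pi[2] == pj[0]:
                if ki == "L" and kj == "A":
                    found.append(("beta", (i, j)))
                if ki in ("A", "L") and kj == "FO":
                    found.append(("DIST", (i, j)))
                if ki == "FI" and kj == "FO":
                    found.append(("FAN-IN", (i, j)))
    return found

def conflicts(elems):
    # Two LEFT patterns are in conflict when they share a graphical element.
    pats = left_patterns(elems)
    return [(p, q) for p, q in combinations(pats, 2) if set(p[1]) & set(q[1])]

g = [("L", ["o", "p", "i"]), ("A", ["i", "p", "v"]), ("FO", ["v", "q", "a1"])]
print(conflicts(g))  # the beta and DIST patterns share the element A[i,p,v]
```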

CONCLUSION: with this strategy we have a gun which shoots one pair of FO and A nodes, but which then gets wrecked.

What to do then?

The human way is to apply

When in trouble or in doubt

Run in circles, scream and shout

for a moment, then acknowledge that this is a stupid reduction strategy, then find some qualities of this strategy, then propose another which has those qualities but works better, then reformulate the whole problem and give it an unexpected turn.

The AI way is to wait for somebody to change the reduction strategy.

__________________________________________________________

Github laudatio: negative Coase cost

This is a record of a mind changing experience I had with Github, one which will manifest in the months to come. (Well, it’s time to move on, to move further, new experiences await, I like to do this…)

There is not more here than what is in this ephemeral google+ post, but it is enough to get the idea.

And it’s controversial, although obvious.

“I just got hooked by github.io. It has everything, it is a dream come true. Publishing? arXiv? pfff…. I know, everybody knows this already, let me enjoy the thought, for the moment. Then it will be some action.

 
Continuing with github and publishing, this is a worthy subject (although I believe that practically github already dwarfed legacy publishing, academia and arXiv). Here is an excerpt from a post from 2011
http://marciovm.com/i-want-a-github-of-science/
 
“- Publishing is central to Academia, but its publishing system is outclassed by what Open Source software developers have in GitHub

- GitHub’s success is not just about openness, but also a prestige economy that rewards valuable content producers with credit and attention

-Open Science efforts like arXiv and PLoS ONE should follow GitHub’s lead and embrace the social web”

I am aware of the many efforts at publishing via github; I only wonder if that’s not like putting a horse in front of a rocket.

On the other side, there is so much to do, now that I feel I’ve seen rock solid proof that academia, publishing and all that jazz is walking dead, with the last drops of arterial blood splattering around from the headless body. “

 

Negative Coase cost?

__________________________________________________

 

Lambda calculus and the fixed point combinator in chemlambda (VIII)

This is the 8th  (continuing from part I  and part II  and part III and part IV and part V and part VI  and part VII) in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  presented  by Louis Kauffman in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York.  Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

Tags. I shall use the name “tag” instead of “actor” or “type”, because it is more generic (and because in future developments we shall talk more about actors and types, continuing from the post Actors as types in the beta move, tentative).

Every port of a graphical element (see part II)  and the graphical element itself can have tags, denoted by :tagname.

There is a null tag “null” which can be omitted in the g-patterns.

As an example, we may see, in the most ornate way, graphical elements like this one:

L[x:a,y:b,z:c]:d

where of course

L[x:null,y:null,z:null]:null    means L[x,y,z]

The port names are tags; in particular “in”, “out”, “middle”, “left” and “right” are tags.

Any concatenation of tags is a tag.  Concatenation of tags is denoted by a dot, for example “left.right.null.left.in”.  By the use of “null” we have

a.null –concat–> a

null.a –concat–> a
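As a sketch, the concatenation of tags with its “null” rules can be written in a few lines of Python (the function name is my own):

```python
def concat(a, b):
    # Concatenation of tags, with "null" as neutral element:
    # a.null -> a and null.a -> a.
    if a == "null":
        return b
    if b == "null":
        return a
    return a + "." + b

print(concat("left", "null"))      # left
print(concat("null", "right"))     # right
print(concat("left", "right.in"))  # left.right.in
```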

I shall not regard concat as a move in itself (maybe I should, but that is for later).

Further in this post I shall not use tags for nodes.

Moves with tags. We can use tags in the moves, according to a predefined convention. I shall take several  examples.

1. The FAN-IN move with tags. If the tags a and b are different then

FI[x:a, y:b, z:c] FO[z:c,u:b, v:a]

–FAN-IN–>

Arrow[x:a,v:a] Arrow[y:b,u:b]

Remark that the move is not reversible.

It means that you can do FAN-IN only if the right tags are there.
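A Python sketch of this tag-guarded FAN-IN (ports encoded as (name, tag) pairs, an encoding of my own) makes the irreversibility visible: when the tags do not match crosswise, or when a equals b, the move simply refuses to fire:

```python
def fan_in_tagged(fi, fo):
    # FI[x:a, y:b, z:c] FO[z:c, u:b, v:a] --FAN-IN--> Arrow[x:a, v:a] Arrow[y:b, u:b]
    # The move fires only if the tags match crosswise and a != b.
    (x, a), (y, b), (z, c) = fi
    (z2, c2), (u, b2), (v, a2) = fo
    if (z, c) != (z2, c2) or a2 != a or b2 != b or a == b:
        return None  # the right tags are not there: no move
    return [("Arrow", [(x, a), (v, a)]), ("Arrow", [(y, b), (u, b)])]

ok = fan_in_tagged([("x", "a"), ("y", "b"), ("z", "c")],
                   [("z", "c"), ("u", "b"), ("v", "a")])
print(ok)  # the two Arrow elements of the RIGHT pattern
blocked = fan_in_tagged([("x", "a"), ("y", "a"), ("z", "c")],
                        [("z", "c"), ("u", "a"), ("v", "a")])
print(blocked)  # None : equal tags block the move
```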

2. COMB with tags.

L[x:a, y:b, z:c] Arrow[y:b, u:d]

–COMB–>

L[x:a, u:d,z:c]

and so on for all the comb moves which involve two graphical elements.

3. DIST with tags.  There are two DIST moves, here with tags.

A[x:a,y:b,z:c] FO[z:c,u:d,v:e]

–DIST–>

FO[x:a, w:left.d, p:right.e]   FO[y:b, s:left.d, t:right.e]

A[w:left.d, s:left.d, u:d]   A[p:right.e, t:right.e, v:e]

In graphical version

 

dist_with_tags
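In code, the tagged DIST for the A node can be sketched as follows (the list-of-tuples encoding and the fresh-name iterator are my own devices):

```python
from itertools import count

def dist_A_tagged(a_node, fo_node, names):
    # A[x:a,y:b,z:c] FO[z:c,u:d,v:e] --DIST-->
    #   FO[x:a, w:left.d, p:right.e] FO[y:b, s:left.d, t:right.e]
    #   A[w:left.d, s:left.d, u:d]   A[p:right.e, t:right.e, v:e]
    # names is an iterator supplying fresh port names.
    (x, a), (y, b), (z, c) = a_node
    (z2, c2), (u, d), (v, e) = fo_node
    assert (z, c) == (z2, c2), "the out port of A must be wired to the in port of FO"
    w, p, s, t = (next(names) for _ in range(4))
    ld, re = "left." + d, "right." + e
    return [("FO", [(x, a), (w, ld), (p, re)]),
            ("FO", [(y, b), (s, ld), (t, re)]),
            ("A",  [(w, ld), (s, ld), (u, d)]),
            ("A",  [(p, re), (t, re), (v, e)])]

fresh = ("n%d" % i for i in count())
out = dist_A_tagged([("x", "a"), ("y", "b"), ("z", "c")],
                    [("z", "c"), ("u", "d"), ("v", "e")], fresh)
for elem in out:
    print(elem)  # two FO nodes followed by two A nodes, tags as in the rule
```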

 

and the DIST move for the L node:

L[y:b, x:a, z:c] FO[z:c, u:d, v:e]

–DIST–>

FI[p:right, w:left, x:a] FO[y:b, s:left, t:right]

L[s:left, w:left,u:d]  L[t:right, p:right, v:e]

In graphical version:

 

dist_tags_lambda

4. SHUFFLE. This move replaces CO-ASSOC and CO-COMM. (It can be done as a sequence of CO-COMM and CO-ASSOC; conversely, CO-COMM and CO-ASSOC can be done by SHUFFLE and LOC PRUNING; explanations another time.)

FO[x:a, y:b, z:c]  FO[y:b, w:left, p:right] FO[z:c, s:left, t:right]

–SHUFFLE–>

FO[x:a, y:left, z:right]  FO[y:left, w, s] FO[z:right, p, t]

In graphical version:

 

shuffle_with.tags

 

____________________________________________________________

 

 

 

A symplectic Brezis-Ekeland-Nayroles principle

We submitted to the arXiv the article

Marius Buliga, Gery de Saxce, A symplectic Brezis-Ekeland-Nayroles principle

You can find here the slides of two talks given in Lille and Paris a while ago,  where the article has been announced.

UPDATE: The article appeared, as  arXiv:1408.3102

This is, we hope, an important article! Here is why.

The Brezis-Ekeland-Nayroles principle appeared in two articles from 1976, the first by Brezis-Ekeland, the second by Nayroles. These articles appeared too early, compared to the computation power of the time!

We call the principle by the initials of the names of the inventors: the BEN principle.

The BEN principle asserts that the evolution curve of an elasto-plastic body minimizes a certain functional, among all possible evolution curves compatible with the initial and boundary conditions.

This opens the possibility of finding the whole evolution curve at once, instead of constructing it incrementally with respect to time.

In 1976 this was science fiction for the computers of the day. Now the time is right!

Pay attention to the fact that a continuum mechanics system has states belonging to an infinite dimensional space (i.e. it has an infinite number of degrees of freedom), therefore we almost never hope to find, nor need, the exact solution of the evolution problem. We are happy, for all practical purposes, with approximate solutions.

We are not after the exact evolution curve, instead we are looking for an approximate evolution curve which has the right quantitative approximate properties, and all the right qualitative exact properties.

In elasto-plasticity (a hugely important class of materials for engineering applications) the evolution equations are moreover not smooth. Differential calculus is conveniently and beautifully replaced by convex analysis.

Another aspect is that elasto-plastic materials are dissipative, therefore there is no obvious hope to treat them with the tools of hamiltonian mechanics.

Our symplectic BEN principle does this: one principle covers the dynamical, dissipative evolution of a body, in a way which is reasonably amenable to numerical applications.

_______________________________________

 

How not to do the beta move: use emergent algebra instead

In the frame of chemlambda and g-patterns, here is how not to do the beta move. We pass from chemlambda to a slightly enlarged version (see the graphical formalism of projective conical spaces), which would correspond to a local-moves-only version of the whole GLC, with the emergent algebra nodes and moves.

Then we do emergent algebra moves instead.

Look, instead of the beta move (see here all moves with g-patterns)

L[a,d,k] A[k,b,c]

<–BETA–>

Arrow[a,c] Arrow[b,d]
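Parenthetically, the ordinary beta move just recalled is mechanical; a minimal Python sketch (the list-of-tuples encoding is my own) implements it as a straightforward pattern search and replace:

```python
def beta(elems):
    # Find the LEFT pattern L[a,d,k] A[k,b,c] (out port of L wired to the
    # left in port of A) and replace it by Arrow[a,c] Arrow[b,d].
    for i, (ki, pi) in enumerate(elems):
        if ki != "L":
            continue
        a, d, k = pi
        for j, (kj, pj) in enumerate(elems):
            if kj == "A" and pj[0] == k:
                b, c = pj[1], pj[2]
                rest = [e for t, e in enumerate(elems) if t not in (i, j)]
                return rest + [("Arrow", [a, c]), ("Arrow", [b, d])]
    return elems  # no LEFT pattern: nothing to do

print(beta([("L", ["a", "d", "k"]), ("A", ["k", "b", "c"])]))
# [('Arrow', ['a', 'c']), ('Arrow', ['b', 'd'])]
```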

let’s do, for an arbitrary epsilon, the epsilon beta move

not_beta_1

Remark that I don’t really do the beta move. In g-patterns the epsilon beta move does not replace the LEFT pattern by another; it only ADDS TO IT.

L[a,d,k] A[k,b,c]

– epsilon BETA –>

FO[a,e,f]  FO[b,g,h]

L[f,i,k] A[k,h,j]

epsilon[g,i,d] epsilon[e,j,c]

Here, of course,  epsilon[g,i,d] is the new graphical element corresponding to a dilation node of coefficient epsilon.
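The epsilon beta move is equally mechanical; here is a Python sketch (the encoding and the fresh-name supply are mine). Note that the L and A nodes survive, rewired, which is the sense in which the move adds to the LEFT pattern instead of replacing it:

```python
from itertools import count

def epsilon_beta(l_node, a_node, eps, names):
    # L[a,d,k] A[k,b,c] --epsilon BETA-->
    #   FO[a,e,f] FO[b,g,h] L[f,i,k] A[k,h,j] eps[g,i,d] eps[e,j,c]
    # names is an iterator supplying fresh port names.
    a, d, k = l_node[1]
    k2, b, c = a_node[1]
    assert k == k2, "out port of L must be wired to the left in port of A"
    e, f, g, h, i, j = (next(names) for _ in range(6))
    return [("FO", [a, e, f]), ("FO", [b, g, h]),
            ("L", [f, i, k]), ("A", [k, h, j]),
            (eps, [g, i, d]), (eps, [e, j, c])]

fresh = ("n%d" % i for i in count())
result = epsilon_beta(("L", ["a", "d", "k"]), ("A", ["k", "b", "c"]), "epsilon", fresh)
for elem in result:
    print(elem)  # two FO nodes, the rewired L and A, two dilation nodes
```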

Now, when epsilon=1 we may apply only the ext2 move and LOC pruning (i.e. emergent algebra moves)

not_beta_3

and we get back the original g-pattern.

But if epsilon goes to 0 then, only by emergent algebra moves:

not_beta_2

that’s it: the BETA MOVE is performed!

What is the status of the first reduction from the figure? Hm, in the figure there appears a node decorated with “0”. I should have written instead a limit as epsilon goes to 0… For the meaning of the node with epsilon=0 see the post Towards qubits: graphic lambda calculus over conical groups and the barycentric move. However, I do not take the barycentric move BAR as being among the allowed moves here. Also, I wrote “epsilon goes to 0”, not “epsilon=0”.

__________________________________________________________

epsilon can be a complex number…

__________________________________________________________

Questions/exercises:

  • the beta move pattern is still present after the epsilon BETA move; what happens if we continue with another, say a mu BETA move, for an arbitrary mu?
  • what happens if we do a reverse regular BETA move after an epsilon beta move?
  • why consider “epsilon goes to 0” instead of “epsilon = 0”?
  • can we do the same for other moves, like DIST, for example?

 

__________________________________________________________

 

List of Ayes/Noes of artificial chemistry chemlambda

List of noes

  • distributed (no unique place, no external passive space)
  • asynchronous (no unique time, no external global time)
  • decentralized (no unique boss, no external acyclic hierarchy)
  • no semantics (no unique meaning, no signal propagation, no values)
  • no functions (not vitalism)
  • no probability

 

List of ayes

__________________________________________________________

Actors as types in the beta move, tentative

Recall the posts about types as decorations of a GLC graph?  Like this one

Example: decorations of S,K,I combinators in simply typed GLC

In the chemlambda version, the decoration with types for the lambda and application graphical elements is this:

type_lambda_appl

or with g-patterns:

L[x:b, y:a, z:a->b]

A[x:a->b, y:a, z:b]

Recall also that there is a magma associated to any graph (or g-pattern), which is easy to define. If the magma is free then we say that the g-pattern is well typed (not that we shall need the “well” further on).

Let’s mix this with actors. We assign the port variables of a g-pattern to actors (IDs), and we write that the port variable x belongs to the actor a like this

x:a

I don’t want to define an operation -> for actor IDs, as if they were types. Instead I shall use the Arrow graphical element and the COMB move (see the moves of chemlambda in terms of g-patterns here).

Here is a COMB move, a bit modified:

L[x:b, y:a, z:d]     –COMB–>  L[x:b, y:a, u:a] Arrow[u:b, z:d]

which says something like

:d should be  :a->:b

type_lambda

The same for the application

A[z:d, v:a, w:b]  –COMB–>  Arrow[z:d, s:a] A[s:b , v:a, w:b]

which says something like

:d should be :a->:b

type_appl

which, you agree, is totally compatible with the decorations from the first figure of the post.

Notice the appearance of port variables u:a, u:b  and s:a, s:b, which play the role a->b.

We allow the usual COMB moves only if the repeating variables have the same actors.

What about the beta move?

The LEFT g-pattern of the beta move is, say with actors:

L[x:a, y:d, z:c]  A[z:c, v:b, w:e]

Apply the two new COMB moves:

L[x:a, y:d, z:c]  A[z:c, v:b, w:e]

–2 COMB–>

L[x:a, y:d, u:d]

Arrow[u:a, z:c]  Arrow[z:c, s:b]

A[s:e , v:b, w:e]

type_beta_1

A usual COMB move applies here:

L[x:a, y:d, u:d]

Arrow[u:a, z:c]  Arrow[z:c, s:b]

A[s:e , v:b, w:e]

<–COMB–>

L[x:a, y:d, u:d]

Arrow[u:a, s:b]

A[s:e , v:b, w:e]

type_beta_2

and now the new beta move would be:

L[x:a, y:d, u:d]

Arrow[u:a, s:b]

A[s:e , v:b, w:e]

–BETA–>

Arrow[x:a, w:e]

Arrow[v:b, y:d]

type_beta_3

This form of the beta move resembles the combination of CLICK and ZIP from zipper logic.

Moreover the Arrow elements could be interpreted as message passing.

________________________________________________________

 

Actors, space and Movable Feast Machine at ALIFE14

See:

  1. David H. Ackley, Trent R. Small, Indefinitely Scalable Computing = Artificial Life Engineering
  2. Lance R. Williams, Self-Replicating Distributed Virtual Machines

presented at ALIFE14.

Both articles look great and the ideas are very close to my actual interests. Here is why:

  • it is about distributed computing
  • using actors
  • some things which fall under functional programming
  • and space.

The chemlambda and distributed GLC project also has a paper there: M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication; see the better arXiv version because it has links inside: arXiv:1403.8046.

The resemblance between the mentioned papers and the chemlambda and distributed GLC project is that our (fully theoretical, alas) model is also about distributed computing, using actors, lambda calculus and space.

The differences are many, though, and I hope that these might lead to interesting interactions.

Further on I describe the main difference, with the understanding that all this is very new to me, a mathematician, so I might be wrong in my grasp of the MFM (please correct me if so).

In the MFM the actors are atoms in a passive (i.e. predefined) space. In the distributed GLC the actors have as states graphs called molecules (more precisely g-patterns).

[Here is the moment to thank, first, Stephen P. King, who pointed the two articles out to me. Second, Stephen works on something which may be very similar to the MFM, as far as I understand, but I have to stress strongly that the distributed GLC does NOT use a botnet, nor are the actors nodes in a chemlambda graph!]

In distributed GLC the actors interact by message passing to other actors with a known ID. Such message passing provokes a change in the states of the actors which corresponds to one of the graph rewrites (moves of chemlambda). As an effect, the connectivities between the actors change (where connectivity between the actors :alice and :bob means that :alice has as state a g-pattern with one of the free ports decorated with :bob’s ID). Here the space is represented by these connectivities, and it is not passive, but an effect of the computation.

In the future I shall use and cite, of course, this great research subject which was unknown to me. For example, the article Lance R. Williams, Robust Evaluation of Expressions by Distributed Virtual Machines, already uses actors! What else am I not aware of? Please tell, thanks!

_______________________________________________________________

 

 

Questions/Answers Session: on chemlambda and computing models

I open a session on no-nonsense chemlambda and distributed GLC.

This will NOT be made public; it happens only by private mail messages.

If you want to hear more:

  • precise
  • structured
  • advanced (i.e. not presented at chorasimilarity)

then mail me at chorasimilarity@gmail.com and let’s talk about  parts you don’t get clearly.

Looking forward to hearing from you,

Marius Buliga

__________________________________________________________

 

Just a little bit of eta goes a long way

L[a,x,b] A[u,x,a] <–eta–> Arrow[u,b] loop

Example: from this post

L[a,x,b] A[b,x,a] <–eta–>

Arrow[b,b] loop <–comb–>

loop loop

or

L[a,x,b] A[b,x,a] <–beta–>

Arrow[a,a] Arrow[x,x] <–2comb–>

loop loop

Then why not

L[a,x,b] A[u,y,a] <–eta–> Arrow[u,b] Arrow[x,y]

which is exactly like the FAN-IN

FO[a,x,b] FI[u,y,a] <–FAN-IN–> Arrow[u,b] Arrow[x,y]

Taking this seriously, the beta move should have a hidden companion

FO[a,x,b] FI[b,y,c] <–betahide–> Arrow[y,x] Arrow[a,c]

… which brings us to a symmetrized version of chemlambda which is very close to the interaction nets of Yves Lafont.

__________________________________________________________

Chemlambda artificial chemistry at ALIFE14: article and supplementary material

Louis Kauffman presents today our article in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York

Chemlambda, universality and self-multiplication

Marius Buliga and Louis H. Kauffman

Abstract (Excerpt)

We present chemlambda (or the chemical concrete machine), an artificial chemistry with the following properties: (a) is Turing complete, (b) has a model of decentralized, distributed computing associated to it, (c) works at the level of individual (artificial) molecules, subject of reversible, but otherwise deterministic interactions with a small number of enzymes, (d) encodes information in the geometrical structure of the molecules and not in their numbers, (e) all interactions are purely local in space and time. This is part of a larger project to create computing, artificial chemistry and artificial life in a distributed context, using topological and graphical languages.

DOI: http://dx.doi.org/10.7551/978-0-262-32621-6-ch079
Pages 490-497

Supplementary  material:

____________________________________________________________

 

Lambda calculus and the fixed point combinator in chemlambda (VII)

This is the 7th  (continuing from part I  and part II  and part III and part IV and part V and part VI)  in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

In this post I take a simple example which contains beta reduction and self-multiplication.

Maybe self-multiplication is too long a word. A short one would be “dup”; any tacit programming language has it. However, chemlambda only superficially resembles tacit programming (and it’s not a language, arguably, but a GRS, never mind).

Or “self-dup” because chemlambda has no “dup”, but a mechanism of self-multiplication, as explained in part VI.

Enough with the problem of the right denomination, because

“A rose by any other name would smell as sweet”

as somebody wrote, clearly not  believing that the limit of his world is the limit of his language.

Let’s consider the lambda term (Lx.xx)(Ly.yz). In lambda calculus there is the following string of reductions:

(Lx.xx)(Ly.yz) -beta-> (Ly.yz) (Lu.uz) -beta-> (Lu.uz) z -beta-> zz

What do we see? Let’s take it slower. Denote C=xx and B=Ly.yz. Then:

(Lx.C)B -beta-> C[x:=B]  = (xx)[x:=B]  =  (x)[x:=B]  (x)[x:=B] = BB = (Ly.yz) B -beta-> (yz)[y:=B] = (y)[y:=B] (z)[y:=B] =  Bz = (Lu.uz)z -beta-> (uz)[u:=z] = (u)[u:=z] (z)[u:=z]  = zz

Now, with chemlambda and its moves performed  only from LEFT to RIGHT.

The g-pattern which represents (Lx.xx)(Ly.yz) is

L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b]  L[w,y,c] A[y,z,w]

We can only do a beta move:

L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b]  L[w,y,c] A[y,z,w]

<–beta–>

Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]

We can do two COMB moves

Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]

2 <–COMB–>

FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]

Now look: that is not a representation of a lambda term, because FO[c,u,v] is “in the middle”, i.e. the middle.in port of FO[c,u,v] is the out port of B, i.e. the right.out port of the lambda node L[w,y,c]. At the same time, the out ports of FO[c,u,v] are the in ports of A[u,v,b].
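The COMB moves used above are mechanical enough to replay in code. A minimal Python sketch (the list-of-tuples encoding is mine) eliminates each Arrow by merging its two port names, preferring to keep the names of free ports:

```python
def comb(elems):
    # Perform all COMB moves: each Arrow[p, q] is removed and its two port
    # names are merged in the remaining elements.
    elems = list(elems)
    progress = True
    while progress:
        progress = False
        for i, (kind, ports) in enumerate(elems):
            if kind != "Arrow":
                continue
            p, q = ports
            rest = elems[:i] + elems[i + 1:]
            q_used = any(q in pr for _, pr in rest)
            p_used = any(p in pr for _, pr in rest)
            if not (p_used or q_used):
                continue  # an isolated Arrow stays as it is
            old, new = (q, p) if q_used else (p, q)
            elems = [(k, [new if x == old else x for x in pr]) for k, pr in rest]
            progress = True
            break
    return elems

g = [("Arrow", ["a1", "b"]), ("Arrow", ["c", "x"]),
     ("FO", ["x", "u", "v"]), ("A", ["u", "v", "a1"]),
     ("L", ["w", "y", "c"]), ("A", ["y", "z", "w"])]
for elem in comb(g):
    print(elem)
# FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w], as in the reduction above
```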

The only move which can be performed is DIST, which starts the self-dup or self-multiplication of B = L[w,y,c] A[y,z,w] :

FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]

<–DIST–>

FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]

This is still not a representation of a lambda term. Notice also that the g-pattern which represents B has not yet self-multiplied. However, we can already perform a beta move on L[g,f,u] A[u,v,b] and we get (after 2 COMB moves as well)

FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]

<–beta–>

FI[e,f,y] FO[w,g,h] L[h,e,v] Arrow[g,b] Arrow[v,f] A[y,z,w]

2 <–COMB–>

FI[e,f,y] FO[w,b,h] L[h,e,f] A[y,z,w]

This looks like a weird g-pattern. Clearly it is not a g-pattern coming from a lambda term, because it contains the fanin node FI[e,f,y].  Let's write the g-pattern again as

L[h,e,f]  FI[e,f,y]  A[y,z,w] FO[w,b,h]

(recall that the order of the elements in a g-pattern does not matter) and remark that A[y,z,w] is “conjugated” by FI[e,f,y] and FO[w,b,h].

We can apply another DIST move

L[h,e,f]  FI[e,f,y]  A[y,z,w] FO[w,b,h]

<–DIST–>

A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]

and now there is only one move which can be done, namely a FAN-IN:

A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]

<–FAN-IN–>

Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]

which gives after 2 COMB moves:

Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]

2 <–COMB–>

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

The g-pattern

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

is a representation of a lambda term, finally: the representation of (Le.ez)z. Great!

From here, though, we can only apply a beta move, to L[h,e,f] A[f,k,b]

A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]

<–beta–>

Arrow[h,b] Arrow[k,e] A[e,l,h] FO[z,k,l]

2 <–COMB–>

FO[z,k,l] A[k,l,b]

which represents zz.
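The whole chain of moves above can be checked mechanically. Here is a sketch in Python (the encoding of g-patterns as lists of (node, ports) pairs and the greedy sequential strategy are my own choices for illustration, not part of the chemlambda definition) of the moves beta, COMB, DIST and FAN-IN; running it on the g-pattern of (Lx.xx)(Ly.yz) ends with the g-pattern of zz, just as above.

```python
import itertools

# A g-pattern is a list of graphical elements: (kind, [port variables]).
# Port conventions: L[in,var,out], A[in1,in2,out], FO[in,out1,out2],
# FI[in1,in2,out], Arrow[in,out].
_counter = itertools.count()

def fresh():
    return f"q{next(_counter)}"

def comb(g):
    """COMB: absorb an Arrow[a,b] by renaming a to b; Arrow[b,b] becomes a loop."""
    for t, (kind, ports) in enumerate(g):
        if kind == "Arrow":
            a, b = ports
            rest = [n for s, n in enumerate(g) if s != t]
            if a == b:
                return rest + [("loop", [])]
            return [(k, [b if p == a else p for p in ps]) for k, ps in rest]
    return None

def beta(g):
    """beta: L[a1,a2,x] A[x,a4,a3] -> Arrow[a1,a3] Arrow[a4,a2]."""
    for t, (k1, p1) in enumerate(g):
        if k1 != "L":
            continue
        for s, (k2, p2) in enumerate(g):
            if k2 == "A" and p2[0] == p1[2]:
                rest = [n for r, n in enumerate(g) if r not in (t, s)]
                return rest + [("Arrow", [p1[0], p2[2]]),
                               ("Arrow", [p2[1], p1[1]])]
    return None

def dist(g):
    """DIST: an A or L node whose out port is the in port of a FO node."""
    for t, (k1, p1) in enumerate(g):
        if k1 != "FO":
            continue
        for s, (k2, p2) in enumerate(g):
            rest = [n for r, n in enumerate(g) if r not in (t, s)]
            if k2 == "A" and p2[2] == p1[0]:
                i, j, k, l = (fresh() for _ in range(4))
                return rest + [("FO", [p2[0], i, j]), ("FO", [p2[1], k, l]),
                               ("A", [i, k, p1[1]]), ("A", [j, l, p1[2]])]
            if k2 == "L" and p2[2] == p1[0]:
                e, f, gg, h = (fresh() for _ in range(4))
                return rest + [("FI", [e, f, p2[1]]), ("FO", [p2[0], gg, h]),
                               ("L", [gg, f, p1[1]]), ("L", [h, e, p1[2]])]
    return None

def fanin(g):
    """FAN-IN: FI[e,f,y] FO[y,i,j] -> Arrow[e,j] Arrow[f,i] (crossing)."""
    for t, (k1, p1) in enumerate(g):
        if k1 != "FI":
            continue
        for s, (k2, p2) in enumerate(g):
            if k2 == "FO" and p2[0] == p1[2]:
                rest = [n for r, n in enumerate(g) if r not in (t, s)]
                return rest + [("Arrow", [p1[0], p2[2]]),
                               ("Arrow", [p1[1], p2[1]])]
    return None

def reduce(g):
    """Greedy sequential strategy: COMB first, then beta, DIST, FAN-IN."""
    while True:
        for move in (comb, beta, dist, fanin):
            h = move(g)
            if h is not None:
                g = h
                break
        else:
            return g

# The g-pattern of (Lx.xx)(Ly.yz) from the post.
start = [("L", ["a1", "x", "a"]), ("FO", ["x", "u", "v"]),
         ("A", ["u", "v", "a1"]), ("A", ["a", "c", "b"]),
         ("L", ["w", "y", "c"]), ("A", ["y", "z", "w"])]
final = reduce(start)
```

The result is a FO node and an A node, with the two out ports of the FO feeding the two in ports of the A, the free in port z and the free out port b: the g-pattern of zz, up to renaming of the internal port variables.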

_____________________________________________________

 

Episciences not dead after all, Epi-IAM has the right stance

Alerted by the post Episciences.org progress by Thomas Arildsen, I took a look at the EPI-IAM and I discovered something to be appreciated.

Indeed, compare the non-combat stance of Episciences.org

The project proposes an alternative to existing economic models, without competing with traditional publishers.

with the one of EPI-IAM:

The driving force for this project is the take-over of the best journals in the field by the scientific communities, organised in thematic executive committees (so-called epicommittees) gathering international experts.

This project is intended for:

  • existing journals wishing to be liberated from a commercial editorial environment or already open-access journals in search of shared support services

  • newly created journals looking for a simple and highly visible editing environment

“IAM” stands for “Informatics and Applied Mathematics”, great! Perhaps this is the first initiative towards new styles of communication of research, among those from mathematics and the hard sciences (well, arXiv excluded, of course), which has a chance to compare with the much more advanced, already functioning ones from biology and medicine.

In a previous post I wrote that in particular the episciences project looks dead to me. I am happy to be proven wrong!

This is what we need (a dire need in math), not any of the flawed projects which involve gold OA, friend recommendation networks, opaque peer review, dislike of comments on articles, or authority medals dispensed by journals.

It is a revolution, very much like the one 100 years ago in art, which led to an explosion of creativity.

The ball is on our side (and recall that we are not going to get any help from the academic management and colleagues adapted to the old ways).

Congrats EPI-IAM, a development to follow!

_________________________________________________________

 

Computing with chemlambda, the first year

Approximately a year ago I wrote the post A chemical concrete machine for lambda calculus. Quote:

How can this be done? Here is a sketch; mind you that I propose things which I believe are possible from a chemical perspective, but I don’t have any chemistry knowledge.  If you do, and if you are interested to make a chemical concrete machine for graphic lambda calculus, then please contact me.

(1) What has been achieved in one year?    (2) What will happen next?

(1) More than 100 posts in the chorasimilarity open notebook  cover, with lots of details, everything which will be mentioned further.

I am most grateful for the collaboration with Louis Kauffman. This had been a dream for me since I wrote Computing with space: a tangle formalism for chora and difference. Via the continuously enthusiastic social web connector Stephen P. King, we started to work together, and we are now, after a year, in a position to take a big leap. We wrote two articles: GLC actors, artificial chemical connectomes, topological issues and knots, which is for the moment a not very well understood hidden treasure of a distributed computing model, and Chemlambda, universality and self-multiplication, which will be presented at ALIFE 14 and concentrates on the self-multiplication phenomenon (see the last post of the thread of expository posts on this here). These works are embedded into hundreds of hours of discussions with many people. These discussions helped, at least as motivation for explaining things well.

In parallel the chemlambda paper was published on figshare: Chemical concrete machine, followed by Zipper logic, another piece of the puzzle.

We had an NSF proposal which was centered around cybersecurity, perhaps too early in the stage of development of the project. However, the theoretical part of the project has been appreciated beyond my expectations; what is needed now is the practical implementation.

 

(2) More and more I become convinced that the distributed, decentralized computing project based on chemlambda would be possible today, provided it is done in the right place and frame. The most recent thoughts are about the use of semantic web tools like RDF and N3logic for this (although I strongly believe in the no semantics slogan).

I shall write much more in a part II post, right now I have a very bad connection…

UPDATE: … so, imagine that chemlambda molecules are RDF datasets, accessible via their respective URIs. If you want to run a computation then you need to impersonate the actors (because the initial actor diagram is already in the structure of the RDF dataset) and to specify a model of computation (i.e. to specify the reduction rules decorated with actors, along with the actors’ behaviours, all in N3).

Well designed computations could then have their URIs.

Then, imagine that you want to endow your computer with a microbiome OS, just follow the links.

Another, related direction of future research concerns the IoT, things and space ….

__________________________________________________________

Separation of form from content, no semantics

One of the principles which make the net possible, as stated by Tim Berners-Lee,

separation of form from content: The principle that one should represent separately the essence of a document and the style with which it is presented.

Applied to decentralized computing, this means no semantics.

[One more confirmation of my impression that logic is something from the 21st century disguised in 19th century clothes.]

___________________________________________________________

 

 

Lambda calculus and the fixed point combinator in chemlambda (VI)

This is the 6th  (continuing from part I  and part II  and part III and part IV and part V)   in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which has been accepted at the ALIFE 14 conference, 7/30 to 8/2, 2014, Javits Center / SUNY Global Center, New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

In this post I want to concentrate on the mechanism of self-multiplication for g-patterns coming from lambda terms (see  part IV   where the algorithm of translation from lambda terms to g-patterns is explained).

Before that, please notice that there is a lot to say about an important problem which shall be described later in detail. But here it is, to keep an eye on it.

Chemlambda in itself is only a graph rewriting system. In part V it is explained that the beta reduction from lambda calculus needs an evaluation strategy in order to be used. We noticed that in chemlambda self-multiplication is needed in order to prove that one can do beta reduction as the beta move.

We go towards the obvious conclusion that in chemlambda reduction (i.e. beta move) and self-multiplication are just names used for parts of the computation. Indeed, the clear conclusion is that there is a computation which can be done with chemlambda, which has some parts where we use the beta move (and possibly some COMB, CO-ASSOC, CO-COMM, LOC PRUNING moves) and some other parts where we use DIST and FAN-IN (and possibly some of the moves COMB, CO-ASSOC, CO-COMM, LOC PRUNING). These two parts have as names reduction and self-multiplication respectively, but in the big computation they mix into a whole. There are only moves, graph rewrites applied to a molecule.

Which brings the problem: chemlambda in itself is not sufficient for having a model of computation. We need to specify how, where, when the reductions apply to molecules.

There may be many variants, roughly described as: sequential, parallel, concurrent, decentralized, random, based on chemical reaction network models, etc

Each model of computation (which can be made compatible with chemlambda) gives a different whole when used with chemlambda. Until now, in this series there has been no mention of a model of computation.

There is another aspect of this. It is obvious that chemlambda graphs  form a larger class than lambda terms, and also that the graph rewrites apply to more general situations  than beta reduction (and eventually an evaluation strategy).  It means that the important problem of defining a model of computation over chemlambda will have influences over the way chemlambda molecules “compute” in general.

The model of computation which I prefer  is not based on chemical reaction networks, nor on process calculi, but on a new model, inspired from the Actor Model, called  the distributed GLC. I shall explain why I believe that the Actor Model of Hewitt is superior to those mentioned previously (with respect to decentralized, asynchronous computation in the real Internet, and also in the real world), I shall explain what is my understanding of that model and eventually the distributed GLC proposal by me and Louis Kauffman will be exposed in all details.

4.  Self-multiplication of a g-pattern coming from a lambda term.

For the moment we concentrate on the self-multiplication phenomenon for g-patterns which represent lambda terms. In the following there is a departure from the ALIFE 14 article: I shall not take the path which consists in going to combinator patterns, nor shall I discuss in this post why the self-multiplication phenomenon is not confined to the world of g-patterns coming from lambda terms. That is for a future post.

In this post I want to give an image of how these g-patterns self-multiply, in the sense that most of the self-multiplication process can be explained independently of the computing model. Later on we shall come back to this; we shall look outside lambda calculus as well, and we shall also explore the combinator molecules.

OK, let’s start. In part V has been noticed that after an application of the beta rule to the g-pattern

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,...,aN] B[a1,...,aN, a]

we obtain (via COMB moves)

C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]

and the problem is that we have a g-pattern which is not coming from a lambda term, because it has a FOTREE in the middle of it. It looks like this (recall that FOTREEs are figured in yellow and the syntactic trees are figured in light blue)

structure_12

The question is: what can happen next?  Let's simplify the setting by taking the FOTREE in the middle as a single fanout node, then we ask what moves can be applied further to the g-pattern

C[x] FO[x,a,b]

Clearly we can apply DIST moves. There are two DIST moves, one for the application node, the other for the lambda node.

There is a chain of propagation of DIST moves through the syntactic tree of C, which is independent of the model of computation chosen (i.e. of the rules about which moves are used, when and where), because the syntactic tree is a tree.
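Before following the propagation on the pictures, the two DIST rewrites themselves can be stated as local rules. Here is a sketch in Python (the list-of-(node, ports) encoding and the fresh-name helper are my own illustration choices, not part of the chemlambda definition) of the DIST move for an application node and for a lambda node, with the port conventions used in this series:

```python
import itertools

# Port conventions: A[in1,in2,out], L[in,var_out,out], FO[in,out1,out2],
# FI[in1,in2,out]. A node is encoded as (kind, [port variables]).
_counter = itertools.count()

def fresh():
    # fresh port variable, assumed not to clash with the existing ones
    return f"q{next(_counter)}"

def dist_A(a_node, fo_node):
    """DIST for application: A[y,z,w] FO[w,b,h] -> FO[y,i,j] FO[z,k,l] A[i,k,b] A[j,l,h]."""
    y, z, w = a_node[1]
    w2, b, h = fo_node[1]
    assert w == w2, "the A out port must be the FO in port"
    i, j, k, l = fresh(), fresh(), fresh(), fresh()
    return [("FO", [y, i, j]), ("FO", [z, k, l]),
            ("A", [i, k, b]), ("A", [j, l, h])]

def dist_L(l_node, fo_node):
    """DIST for lambda: L[w,y,c] FO[c,u,v] -> FI[e,f,y] FO[w,g,h] L[g,f,u] L[h,e,v]."""
    w, y, c = l_node[1]
    c2, u, v = fo_node[1]
    assert c == c2, "the L out port must be the FO in port"
    e, f, g, h = fresh(), fresh(), fresh(), fresh()
    return [("FI", [e, f, y]), ("FO", [w, g, h]),
            ("L", [g, f, u]), ("L", [h, e, v])]

# One propagation step: the application node self-multiplies into two copies,
# with fanouts pushed onto its in ports.
out = dist_A(("A", ["y", "z", "w"]), ("FO", ["w", "b", "h"]))
```

Note that dist_A duplicates the node and pushes fanouts towards its in ports, while dist_L additionally introduces a fanin on the var port: this is where the FI nodes of the pictures come from.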

Look what happens. We have the propagation of DIST moves (for the application nodes say) first, which produce two copies of a part of the syntactic tree which contains the root.

structure_7

At some point we arrive to a pattern which allows the application of a DIST move for a lambda node. We do the rule:

structure_8

We see that fanins appear! … and then the propagation of DIST moves through the syntactic tree continues until eventually we get this:

structure_9

So the syntactic tree self-multiplied, but the two copies are still connected by FOTREEs which connect to the left.out ports of the lambda nodes which are part of the syntactic tree (only one is figured in the previous image).

Notice that now (or even earlier; it does not matter, actually, and why will be explained rigorously when we talk about the computing model; for the moment we only want to see what is possible) we are in a position to apply the FAN-IN move. Also, it is clear that by using CO-COMM and CO-ASSOC moves we can shuffle the arrows of the FOTREE, which is “conjugated” with a fanin at the root and with fanouts at the leaves, so that eventually we get this.

structure_10

The self-multiplication is achieved! It looks strikingly like the anaphase [source]

800px-Anaphase.svg

followed by telophase [source]

Telophase.svg

____________________________________________________

 

 

Lambda calculus and the fixed point combinator in chemlambda (V)

This is the 5th  (continuing from part I  and part II  and part III and part IV)   in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which has been accepted at the ALIFE 14 conference, 7/30 to 8/2, 2014, Javits Center / SUNY Global Center, New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

2.  Lambda calculus terms as seen in  chemlambda  continued.

Let’s look at the structure of a molecule coming from the process of translation of a lambda term described in part IV.

Then I shall make some comments which should be obvious after the fact, but useful later when we discuss the relation between the graphic beta move (i.e. the beta rule for g-patterns) and the beta reduction and evaluation strategies.

That will be a central point in the exposition, it is very important to understand it!

So, a molecule (i.e. a pattern with the free ports names erased, see part II for the denominations) which represents a lambda term looks like this:

 

structure_1

In light blue is the part of the molecule which is essentially the syntactic tree of the lambda term.  The only peculiarity is in the orientation of the arrows of lambda nodes.

Practically this part of the molecule is a tree, which has as nodes the lambda and application ones, but no fanouts and no fanins.

The arrows are directed towards the upper side of the figure.  There is no need to draw it like this, i.e. there is no global rule for the edge orientations, contrary to the ZX calculus, where the edge orientations are deduced from the global down-to-up orientation.

We see a lambda node figured, which is part of the syntactic tree. It has the right.out  port connecting to the rest of the syntactic tree and the left.out port connecting to the yellow part of the figure.

The yellow part of the figure is a FOTREE (fanout tree). There might be many FOTREEs,  in the figure appears only one. By looking at the algorithm of conversion of a lambda term into a g-pattern, we notice that in the g-patterns which represent lambda terms the FOTREEs may appear in two places:

  • with the root connected to the left.out port of a lambda node, as in the g-pattern which correspond to Lx.(xx)
  • with the root connected to the middle.out port of an Arrow which has the middle.in port free (i.e. the port variable of the middle.in of that Arrow appears only once in that g-pattern), for example for the term  (xx)(Ly.(yx))

As a consequence of this observation, here are two configurations of nodes which NEVER appear in a molecule which represents a lambda term:

structure_2

Notice that these two patterns are EXACTLY those which appear as the LEFT side of the moves DIST! More about this later.

Remark also the position of the insertion points of the FOTREE which comes out of the left.out port of the figured lambda node: the out ports of the FOTREE connect with the syntactic tree somewhere lower than where the lambda node is. This is typical for molecules which represent lambda terms. For example the following molecule, which can be described as the g-pattern L[a,b,c] A[c,b,d]

structure_3

(but with the port variables deleted) cannot appear in a molecule which corresponds to a lambda term.

 

Let’s go back to the first image and continue with “TERMINATION NODE (1)”. Recall that termination nodes are used to cap the left.out port of a lambda node which corresponds to a term Lx.A with x not occurring in A.

Finally, “FREE IN PORTS (2)” represents free in ports which correspond to the free variables of the lambda term. As observed earlier, but not figured in the picture, some of the free in ports MAY be the roots of FOTREEs.

I collect here some obvious, in retrospect, facts:

  • there are no other variables in a g-pattern of a lambda term than the port variables. However, every port variable appears at most twice. In the graphic version (i.e. as graphs) the port variables which appear twice are replaced by edges of the graph, therefore the bound lambda calculus variables disappear in chemlambda.
  • moreover, the free port in variables, which correspond to the free variables of the lambda term, appear only once. Their multiple occurrences in the lambda term are replaced by FOTREEs.  All in all, this means that there are no values at all in chemlambda.
  • … As a consequence, the nodes application and lambda abstraction are not gates. That means that even if the arrows of the pattern graphs appear in the grammar version as (twice repeating) port variables, the chemlambda formalism has no equational side: there are no equations needed between the port variables. Indeed, no such equation appears in the definition of g-patterns, nor in the definition of the graph rewrites.
  • In the particular case of g-patterns coming from lambda terms it is possible to attach equations to the nodes application, lambda abstraction and fanout, exactly because of the particular form that such g-patterns have. But this is not possible, in a coherent way (i.e. such that the equations attached to the nodes have a global solution) for all molecules!
  • after we pass from lambda terms to chemlambda molecules, we are going to use the graph rewrites, which don’t use any equations attached to the nodes, nor give any meaning to the port variables other than that they serve to connect nodes. Thus the chemlambda molecules are not flowcharts. There is nothing going through arrows and nodes. Hence the “molecule” image proposed for chemlambda graphs, with nodes as atoms and arrows as bonds.
  • the FOTREEs appear in some positions in a molecule which represents a lambda term, but not in any position possible. That’s more proof that not any molecule represents a lambda term.
  • in particular the patterns which appear at LEFT in the DIST graph rewrites don’t occur in molecules which represent lambda terms. Therefore one can’t apply DIST moves in the + direction to a molecule which represents a lambda term.
  • Or, otherwise said, any molecule which contains the LEFT patterns of a DIST move is not one which represents a lambda term.

_______________________________________________________

3. The beta move. Reduction and evaluation. 

I explain now in what sense the graphic beta move, or beta rule from chemlambda, corresponds to the beta reduction in the case of molecules which correspond to lambda terms.

Recall from part III the definition of the beta move

L[a1,a2,x] A[x,a4,a3]   <–beta–> Arrow[a1,a3] Arrow[a4,a2]

or graphically

beta_move_exp

If we use the visual trick from the pedantic rant, we may depict the beta move as:

beta_move_exp_2

i.e. we use as free port variables the relative positions of the ports in the doodle.  Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graphical level. The chemlambda graphs are not planar graphs.

The beta reduction in lambda calculus looks like this:

(Lx.B) C –beta reduction–> B[x:=C]

Here B and C are lambda terms and B[x:=C] denotes the term which is obtained from B after we replace all the occurrences of x in B by the term C.

I want to make clear what is the relation between the beta move and the beta reduction.  Several things deserve to be mentioned.

It is of course expected that if we translate (Lx.B)C and B[x:=C] into g-patterns, then  the beta move transforms the g-pattern of (Lx.B)C into the g-pattern of B[x:=C]. This is not exactly true, in fact it is  true in a more detailed and interesting sense.

Before that it is worth mentioning that the beta move applies even for patterns which don’t correspond to lambda terms.  Hence the beta move has a range of application greater than the beta reduction!

Indeed, look at the third figure from this post, which can’t be a pattern coming from a lambda term. Written as a g-pattern this is L[a,b,c] A[c,b,d]. We can apply the beta move and it gives:

L[a,b,c] A[c,b,d]  <-beta-> Arrow[a,d] Arrow[b,b]

which can be followed by a COMB move

Arrow[a,d] Arrow[b,b] <-comb-> Arrow[a,d] loop

Graphically it looks like that.

structure_4

In particular this explains the need to have the loop and Arrow graphical elements.
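As a check, the appearance of the loop can be reproduced with a few lines. Below is a minimal sketch in Python (my own encoding of g-patterns as (node, ports) pairs; the comb function here handles only the loop-forming case of COMB) of the beta move applied to L[a,b,c] A[c,b,d]:

```python
def beta(g):
    """beta: L[a1,a2,x] A[x,a4,a3] -> Arrow[a1,a3] Arrow[a4,a2]."""
    for t, (k1, p1) in enumerate(g):
        if k1 != "L":
            continue
        for s, (k2, p2) in enumerate(g):
            if k2 == "A" and p2[0] == p1[2]:
                rest = [n for r, n in enumerate(g) if r not in (t, s)]
                return rest + [("Arrow", [p1[0], p2[2]]),
                               ("Arrow", [p2[1], p1[1]])]
    return None

def comb(g):
    """COMB, loop case only: an Arrow[b,b] closes into a loop."""
    for t, (kind, ports) in enumerate(g):
        if kind == "Arrow" and ports[0] == ports[1]:
            return [n for r, n in enumerate(g) if r != t] + [("loop", [])]
    return None

g = [("L", ["a", "b", "c"]), ("A", ["c", "b", "d"])]
g = beta(g)   # -> Arrow[a,d] Arrow[b,b]
g = comb(g)   # -> Arrow[a,d] loop
```

The Arrow[b,b] produced by the beta move has nothing left to connect to, so the only sensible graphical element for it is the loop: exactly the need explained above.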

In chemlambda we make no effort to stay inside the collection of graphs which represent lambda terms. This is very important!

Another reason for this is related to the fact that we can’t check if a pattern comes from a lambda term in a local way, in the sense that there is no local (i.e. involving an a priori bound on the number of graphical elements used) criterion which describes the patterns coming from lambda terms. This is obvious from the previous observation that FOTREEs connect  to the syntactic tree lower than their roots.

Or, chemlambda is a purely local graph rewrite system, in the sense that there is a bound on the number of graphical elements involved in any move.

This has as consequence: there is no correct graph in chemlambda.  Hence there is no correctness enforcement in the formalism.   In this respect chemlambda differs from any other graph rewriting system which is used in relation to lambda calculus or more general to functional programming.

Let’s go back to the beta reduction

(Lx.B) C –beta reduction–> B[x:=C]

Translated into g-patterns the term from the LEFT looks like this:

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,...,aN] B[a1,...,aN, a]

where

  • C[c] is the translation as a g-pattern of the term C, with the out port “c”
  • FOTREE[x,a1,...,aN] is the FOTREE which connects to the left.out port of the node L[a,x,b] and to the ports a1, …, aN which represent the places where the lambda term variable “x” occurs in B
  • B[a1,...,aN,a] is a notation for the g-pattern of B, with the ports a1,…,aN (where the FOTREE connects) mentioned, and with the out port “a”

The beta move does not need all this context, but we need it in order to explain in what sense the beta move does what the beta reduction does.

The beta move needs only the piece L[a,x,b] A[b,c,d]. It is a local move!

Look how the beta move acts:

L[a,x,b] A[b,c,d] C[c]  FOTREE[x,a1,...,aN] B[a1,...,aN, a]

<-beta->

Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]

and then 2 comb moves:

Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]

<-2 comb->

C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]

Graphically this is:

structure

The graphic beta move, as it looks on syntactic trees of lambda terms, has been discovered  in

Wadsworth, Christopher P. (1971). Semantics and Pragmatics of the Lambda Calculus. PhD thesis, Oxford University

This work is the origin of the lazy, or call-by-need evaluation in lambda calculus!

Indeed, the result of the beta move is not B[x:=C], because in the reduction step no substitution x:=C is performed.

In the lambda calculus world, as it is well known, one has to supplement the lambda calculus with an evaluation strategy. The call-by-need evaluation explains how to do in an optimized way the substitution x:=C in B.

From the chemlambda point of view on lambda calculus, a very interesting thing happens. The g-pattern obtained after the beta move (and obvious comb moves) is

C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]

or graphically

structure_5

As you can see this is not a g-pattern which corresponds to a lambda term.  That is because it has a FOTREE in the middle of it!

Thus the beta move applied to a g-pattern which represents a lambda term gives a g-patterns which can’t represent a lambda term.

The g-pattern which represents the lambda term B[x:=C] is

C[a1] … C[aN] B[a1,...,aN,d]

or graphically

structure_6

In graphic lambda calculus, or GLC, which is the parent of chemlambda we pass from the graph which correspond to the g-pattern

C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]

to the g-pattern of B[x:=C]

C[a1] … C[aN] B[a1,...,aN,d]

by a GLOBAL FAN-OUT move, i.e. a graph rewrite which looks like that

if C[x] is a g-pattern with no other free ports than “x” then

C[x] FOTREE[x, a1, ..., aN]

<-GLOBAL FAN-OUT->

C[a1] … C[aN]

As you can see this is not a local move, because there is no a priori bound on the number of graphical elements involved in the move.

That is why I invented chemlambda, which has only local moves!

The evaluation strategy needed in lambda calculus to know when and how to do the substitution x:=C in B is replaced in chemlambda by SELF-MULTIPLICATION.

Indeed, this is because the g-pattern

C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]

surely has places where we can apply DIST moves (and perhaps later FAN-IN moves).

That is for the next post.

___________________________________________________

 

Lambda calculus and the fixed point combinator in chemlambda (IV)

This is the 4th  (continuing from part I  and part II  and part III)   in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which has been accepted at the ALIFE 14 conference, 7/30 to 8/2, 2014, Javits Center / SUNY Global Center, New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.

_________________________________________________________

2.  Lambda calculus terms as seen in  chemlambda .

In this post it is explained how to associate a g-pattern to any untyped lambda calculus term.

Important. Not any g-pattern, i.e. not any pattern, and not any molecule from chemlambda is associated to a lambda term!

Recall first what is a(n untyped) lambda term.

<lambda term> ::= <variable> |  ( <lambda term> <lambda term> ) | ( L <variable> . <lambda term>)

The operation which associates to a pair of lambda terms A and B the term AB is called application.

The operation which associates to a variable x and a term A the term Lx.A  is called (lambda) abstraction.

Every variable which appears in a term A is either bound or free. The variable x is bound if it appears under the scope of an abstraction, i.e. there is a part of A with the form Lx. B .

It is allowed to rename the bound variables of a term. This is called alpha renaming or alpha conversion. Two terms which differ only by alpha renaming are considered to be the same one.

It is then possible to rename the bound variables of a term such that if x is a bound variable then it appears under the scope of only one abstraction and moreover it does not appear as a free variable.

Further there is an algorithm which transforms a lambda term, given in this form which eliminates the ambiguities of the names of bound variables, into a g-pattern. See the post Conversion of lambda calculus terms into graphs for an algorithm which transforms a general lambda term into a GLC graph.

In this algorithm, a variable is said to be “fresh” if it does not appear before the step of the algorithm in question.

We start from declaring that we shall use (lambda terms) variables as port variables.

 

Let  Trans[a,A]  be the translation operator, which has as input a variable  and a lambda term and as output a mess (see part II for the definition of a mess:  “A mess is any finite multiset of graphical elements in grammar version.”)

The algorithm defines Trans.

We start from an initial pair a0,  A0 , such that a0 does not occur in A0.

Then we define Trans recursively by

  • Trans[a, AB] = A[a', a", a]  Trans[a',A] Trans[a",B]  with  a’ and a” fresh variables,
  • Trans[a, Lx.A] = L[a',x,a] Trans[a',A]  with a’ a fresh variable
  • Trans[a,x] =  Arrow[x,a]  .

Practically, Trans gives a version of the syntactic tree of the term, with some peculiarities related to the use of the grammar version of the graphical elements instead of the usual gates notation for the two operations, and also the strange orientation of the arrow of the lambda node which is decorated by the respective bound variable.

Trans[a0,A0] is a mess and not a g-pattern because there may be (port) variables which occur more than twice. There are two  possible cases for this:

  • (a) either a port variable occurs once as an  out port variable and more than once as an  in port variable,
  • (b) or it does not occur as an out port variable but it occurs more than once as an in port variable.

Let’s see examples:

  • (a)  start with a0 = a and A0 = Lx.(xx) .  Then Trans[a,Lx.xx] = L[a1,x,a] Trans[a1, xx] = L[a1, x, a] A[a2,a3,a1] Trans[a2,x] Trans[a3,x] = L[a1,x,a] A[a2,a3,a1] Arrow[x,a2] Arrow[x,a3]

 

first_ex

As you see the port variable x appears 3 times, once as an out port variable, in L[a1,x,a] , and twice as an in port variable, in Arrow[x,a2] Arrow[x,a3] .

  • (b)  start with a0 = a and A0 = (xz)(yz) . Then Trans[a,(xz)(yz)] = A[a1,a2,a] Trans[a1,xz] Trans[a2,yz] = A[a1,a2,a] A[a3,a4,a1] Trans[a3,x] Trans[a4,z] A[a5,a6,a2] Trans[a5,y] Trans[a6,z] = A[a1,a2,a] A[a3,a4,a1]  Arrow[x,a3]  Arrow[z,a4]  A[a5,a6,a2]  Arrow[y,a5] Arrow[z,a6]

 

second_ex

In this case the port variable z does not occur as an out port variable, but it appears twice as an in port variable, in Arrow[z,a4] Arrow[z,a6].

 

To pass from a mess to a g-pattern is easy now: we shall introduce fanout nodes.

Indeed, an FO  tree with free in port a and free out ports a1, a2, …, aN  is, by definition, ANY g-pattern formed by the rules:

  •  FO[a,a1,a2]  is an FO tree
  • if FOTREE[a,a1,...,aN] is an FO tree which has a as the only free in port variable and a1, …, aN as the only free out port variables, then for every j=1,…,N and for any fresh port variables u,v, the mess FOTREE[a,a1,...,aN] FO[aj,u,v] is an FO tree with a as the only free in port variable and with free out port variables a1, …, aN, u, v, with aj removed from the list.
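One concrete construction following these rules is the "right comb". Here is a sketch in Python, assuming graphical elements are encoded as tuples (name, ports...) in the bracket order of the grammar, and that fresh is a supplied generator of unused port variables:

```python
def fo_tree(a, outs, fresh):
    """Build an FO tree (right-comb shape) with free in port a and
    free out ports outs; `fresh` supplies unused port variables."""
    assert len(outs) >= 2
    if len(outs) == 2:
        return [("FO", a, outs[0], outs[1])]
    u = fresh()                 # FO[a, outs[0], u], then recurse from u
    return [("FO", a, outs[0], u)] + fo_tree(u, outs[1:], fresh)
```

The right-comb shape is only one choice: any other FO tree with the same free ports is related to it by CO-COMM and CO-ASSOC moves, as remarked next.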

Remark that by a sequence of CO-COMM and CO-ASSOC moves we can pass from any FO tree with free in port variable a and free out port variables a1, …, aN to any other FO tree with the same free in and out port variables.

For this reason we shall not choose a canonical FO tree associated to a pair formed by one free in port variable and a finite set of free out port variables. (However, in any algorithm where FO trees have to be constructed, such a choice will be embedded in the respective algorithm.)

In order to transform the mess which is outputted by the Trans operator, we have to solve the cases (a), (b) explained previously.

(a) Suppose that there is a port variable x which satisfies the description for (a), namely that x occurs once as an  out port variable and more than once as an  in port variable. Remark that, because of the definition of the Trans operator, the port variable x will appear at least twice in a list Arrow[x,a1] … Arrow[x,aN] and only once somewhere in a node L[b,x,c].

Pick then an FO tree FOTREE[x,a1,...,aN]  with the only free in port variable x and the only free out port variables a1, …, aN. Erase then from the mess outputted by Trans the collection Arrow[x,a1] … Arrow[x,aN]  and replace it by FOTREE[x,a1,...,aN].

In this way the port variable x will occur only once at an out port, namely in L[b,x,c], and only once at an in port, namely in the first FO[x,...] element of the FO tree FOTREE[x,a1,...,aN].

Let’s see: for our example we have

Trans[a, Lx.xx] = L[a1,x,a] A[a2,a3,a1] Arrow[x,a2] Arrow[x,a3]

so the variable x appears at an out port in the node L[a1,x,a] and at in ports in the list Arrow[x,a2] Arrow[x,a3] .

There is only one FO tree with the free in port x and the free out ports a2, a3, namely FO[x,a2,a3].  Delete the list Arrow[x,a2] Arrow[x,a3] and replace it by FO[x,a2,a3]. This gives

L[a1,x,a] A[a2,a3,a1] FO[x,a2,a3]

which is a g-pattern! Here is what we do, graphically:

fo_first_ex

(b) Suppose that there is a port variable x which satisfies the description for (b), namely that x  does not occur as an out port variable but it occurs more than once as an in port variable. Because of the definition of the Trans operator, it must be that x will appear at least twice in a list Arrow[x,a1] … Arrow[x,aN] and nowhere else.

Pick then an FO tree FOTREE[x,a1,...,aN] with the only free in port variable x and the only free out port variables a1, …, aN.

Delete Arrow[x,a1] … Arrow[x,aN]  and replace it by FOTREE[x,a1,...,aN] .

In this way the variable x will appear only once, as a free in port variable.

For our example, we have

Trans[a,(xz)(yz)] =  A[a1,a2,a] A[a3,a4,a1]  Arrow[x,a3]  Arrow[z,a4]  A[a5,a6,a2]  Arrow[y,a5] Arrow[z,a6]

and the problem is with the port variable z which does not occur in any out port, but it does appear twice as an in port variable, namely in Arrow[z,a4]  Arrow[z,a6] .

We delete Arrow[z,a4]   Arrow[z,a6] and replace it by FO[z,a4,a6] and we get the g-pattern

A[a1,a2,a] A[a3,a4,a1]  Arrow[x,a3]  FO[z,a4,a6]  A[a5,a6,a2]  Arrow[y,a5]

In graphical version, here is what has been done:

fo_second_ex
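Both cases (a) and (b) amount to the same mechanical operation: group the Arrow elements which share an in port variable and replace each group by an FO tree. A sketch, again assuming the tuple encoding (name, ports...) and a supplied generator of fresh port variables (the right-comb shape is my choice among the equivalent FO trees):

```python
from collections import defaultdict

def arrows_to_fo_trees(mess, fresh):
    """Replace each group Arrow[x,b1] ... Arrow[x,bN] (N >= 2) by an
    FO tree with in port x and out ports b1, ..., bN."""
    groups, rest = defaultdict(list), []
    for el in mess:
        if el[0] == "Arrow":
            groups[el[1]].append(el[2])
        else:
            rest.append(el)
    for x, outs in groups.items():
        while len(outs) > 2:                    # chain FO nodes for N > 2
            u = fresh()
            rest.append(("FO", x, outs[0], u))
            x, outs = u, outs[1:]
        if len(outs) == 2:
            rest.append(("FO", x, outs[0], outs[1]))
        else:
            rest.append(("Arrow", x, outs[0]))  # lone Arrow: keep it
    return rest
```

On the mess of example (a) this deletes Arrow[x,a2] Arrow[x,a3] and produces FO[x,a2,a3], as in the picture above.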

 

OK, we are almost done.

It may happen that there are out port variables which come from an abstraction Lx.A with x not occurring in A (i.e. x is never used).  For example let’s start with a0=a and  A0 = Lx.(Ly. x) . Then Trans[a,Lx.(Ly.x)] = L[a1,x,a] Trans[a1, Ly.x] = L[a1,x,a] L[a2,y,a1] Trans[a2,x] = L[a1,x,a] Arrow[x,a2] L[a2,y,a1].

There is the port variable y which appears only as an out port variable in an L node, here L[a2,y,a1], and nowhere else.

For those port variables x which appear only in an L[a,x,b] we add a termination node T[x].

In our example L[a1,x,a] Arrow[x,a2] L[a2,y,a1] becomes L[a1,x,a] Arrow[x,a2] L[a2,y,a1] T[y]. Graphically this is

third_ex
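This last step is easy to automate. A sketch, under the same assumed tuple encoding, where the bound variable is the second port of an L node:

```python
from collections import Counter

def add_terminations(mess):
    """Append T[x] for each bound variable x (the second port of an
    L node) which occurs nowhere else in the mess."""
    counts = Counter(p for el in mess for p in el[1:])
    extra = [("T", el[2]) for el in mess
             if el[0] == "L" and counts[el[2]] == 1]
    return list(mess) + extra
```

Running it on the example above appends exactly T[y], since y occurs only in L[a2,y,a1].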

We may still have Arrow elements which can be absorbed into the node ports, therefore we close the conversion algorithm by:

Apply the COMB moves (see part III)  in the + direction and repeat until there is no place to apply them any more.

 

Exercise: Consider the Y combinator

Y = Lf.(  (Lx. f(xx))    (Ly. f(yy))   )

Find its conversion as a g-pattern.

________________________________________________________________

Do you know these guys?

We need CS collaborators for our project.  The project is about a computation model (Distributed GLC) which is:

  • decentralized
  • asynchronous
  • based on message passing
  • Actor Model style
  • using a Graph Rewriting System called chemlambda.

Here is the portrait of the ideal collaborator:

  • is a big fan of functional programming, but not a fanatic
  • understands the difference between the Actor Model and process calculi
  • has some understanding of real world asynchronous computation, on the Internet
  • is at ease with graph rewriting systems, say he has read Functional programming and parallel graph rewriting

 

Oh, and can discuss over the border with  quick learning mathematicians.

ALTERNATIVELY:

  • has money to fund us and
  • a supply of docile programmers who do what we need first, then optimize the programs after they see by example that, from some obscure reasons, they really work.

 

If interested please call and let’s make stuff that counts!

______________________________________________________________

Lambda calculus and the fixed point combinator in chemlambda (III)

This is the 3rd  (continuing from part I  and part II)  in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which is accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York, (go see the presentation of Louis Kauffman if you are near the event.) Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

1.  The chemlambda formalism continued: graph rewrites.

Now we have all we need to talk about graph rewrites.

For clarity, see part II for the notion of “pattern”. Its meaning depends on which version we use: the graphical version or the grammar version. In the graphical version a pattern is a chemlambda graph with the free ports and invisible nodes decorated with port variables. In the grammar version we have the equivalent notion of a g-pattern, which is a way to write a pattern as a multiset of graphical elements.

It is allowed to rename the port variables in a g-pattern, such that after the renaming we still get a g-pattern. That means that if M is a g-pattern  and f is  any one-to-one function from V(M) to another set of port variables A, then we may replace any port variable x from V(M) by f(x).  We shall not think about this (sort of alpha renaming for g-patterns) as being a graph rewrite.
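In code this renaming is just a substitution on all ports. A sketch, assuming graphical elements are encoded as tuples (name, ports...) and f is given as a dictionary, extended by the identity outside its keys:

```python
def rename(mess, f):
    """Apply a one-to-one port variable renaming f (a dict) to every
    port of every graphical element of the g-pattern."""
    return [(el[0],) + tuple(f.get(p, p) for p in el[1:]) for el in mess]
```

Since both occurrences of a repeated port variable are renamed in lockstep, the result is again a g-pattern.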

 

I shall use the following equivalent names further:

  • “graph rewrite”
  • “move”
  • “reduction”

In simple words, a graph rewrite is a rule which says: “replace the pattern   graph_{1} by the pattern graph_{2}“.

 

Let’s see more precisely then what is a graph rewrite. (Technically this is a simple form of graph rewrite, which is not dependent on context; later we may speak about more involved forms. First let’s understand exactly this simple form!)

In order to define a graph rewrite, or move, we need two g-patterns, call them graph_{1} and graph_{2}, such that (perhaps after a renaming of port variables):

  • they have the same set of FV_in port variables
  • they have the same set of FV_out port variables

A move is a pair of such g-patterns. The graph_{1} is called the LEFT pattern of the move, the graph_{2} is called the RIGHT pattern of the move.

The move can be performed from LEFT to RIGHT, called sometimes the “+” direction: replace the LEFT pattern by the RIGHT pattern.

Likewise, the move can be performed from RIGHT to LEFT, called sometimes the “-” direction: replace the RIGHT pattern with the LEFT pattern.

Technically, what I describe here can be made fully explicit as a DPO graph rewriting.

Even if the moves are reversible (they can be performed in the + or – direction), there is a preference to use only the “+” direction (and to embed, if needed, a move performed in the “-” direction into a sequence of moves, called “macro”, more about this later).

The “+” direction is not arbitrarily defined.

_________________________________________________________

OK, enough with these preparations, let’s see the moves.

We shall write the moves in two ways, which are equivalent.

When expressed with g-patterns, they are written as

LEFT pattern  <–name of move–>  RIGHT pattern

When expressed with patterns (i.e graphical), they appear as

move_generic

The port names appear in blue. The name of the move appears in blue, the LEFT is on the left, the RIGHT is on the right, the move is figured by a blue arrow.

Pedantic, but perhaps useful rant.     For some reason, there are people who confuse graphs (which are clearly defined mathematical objects) with their particular representations (i.e. doodles), taking them “literally”.  Graphs are graphs and doodles are doodles. When people use doodles for reasoning with graphs, this is for economy of words reasons, the famous “a picture is worth a thousand words”.  There is nothing wrong with using doodles for reasoning with graphs, as long as you know the convention used. Perhaps the convention is so intuitive that it would need 1000000 words to make it clear (for a machine); however there is a simple criterion which helps those who don’t trust their sight: you got it right if you understand what the doodle means at the graph level.

Look again at the previous picture, which shows you what a generic move looks like. The move (from LEFT to RIGHT) consists of:

  • cutting the arrows and naming them with free port variables (here 1, …, 5 are used)
  • replacing the LEFT pattern by the RIGHT pattern such that it respects the free port variables (1 to 1, 2 to 2, …)
  • gluing back the arrows with the rest of the graph, which is not depicted in the move.

How simple is that?

To make it even more simple, we use the following visual trick: use the relative placements of the free ports in the doodle  as the port variables.

If you look carefully at the previous picture, then you notice that you may redraw it (without affecting what the doodle means at the graph level) by representing the free ports of the RIGHT in the same relative positions as the free ports from the LEFT.

The drawing would then look like this:

move_generic_2

Then you may notice that you don’t need to write the port variables on the doodles, because they have the same relative positions, so you may as well describe the move as:

move_generic_3

This is the convention used everywhere in the doodles from this blog (and it’s nothing special, it’s used everywhere).

I shall close the pedantic rant by saying that there is a deep hypocrisy in the claim that there is ANY need to spend so many words to make clear things clear, like the distinction between graphs and doodles, and relative positions and so on. I ask those who think that text on a page is clear and (a well done) doodle is vague: do you attach to your text a perhaps sound introduction which explains that you are going to use latin letters, that no, the letter and its image in the mirror are not the same, that words are to be read from left to right, that space is that character which separates two words, that if you hit the end of a text line then you should pass to the line below, that a line is a sequence of characters separated by an invisible character eol, …..? All this is good info for making a text editor, but you don’t need to program a text editor first in order to read a book (or to program a text editor).  It would be just crazy, right?  Our brains use exactly the same mechanisms to parse a page of text and to parse a doodle depicting a graph. Our brains understand very well that if you change the text fonts then you don’t change the text, and so on. A big hypocrisy, which I believe has big effects in the divide between various nerd subcultures, like IT and geometers, with a handicapping effect which manifests in real life, in the form of the products IT is offering. Well, end of rant.

 

Combing moves. These moves are not present in the original chemlambda formalism, because they are needed at the level of the g-patterns. Recall from part I that Louis Kauffman proposed to use commutative polynomials as graphical elements, which brings the need to introduce the Arrow element Arrow[x,y]. This is the same as introducing invisible nodes in the chemlambda molecules (hence the passage from molecules to patterns). The combing moves are moves which eliminate (or add) invisible nodes in patterns. This corresponds in the graphical version to decorations (of those invisible nodes) on arrows of the molecules.

A combing move eliminates an invisible node (in the + direction) or adds an invisible node (in the – direction).

A first combing move is this:

Arrow[x,y] Arrow[y,z]  <–comb–> Arrow[x,z]

or graphically, remove (or add) a (decoration of an) invisible node:

comb_1

Another combing move is:

Arrow[x,x] <–comb–> loop

or graphically an arrow with the in and out ports connected  is a loop.

Another family of combing moves says that if you connect an arrow to a port of a node then you can absorb the arrow into the port:

L[x,y,z] Arrow[u,x]  <–comb–> L[u,y,z]

L[x,y,z] Arrow[y,u] <–comb–> L[x,u,z]

L[x,y,z] Arrow[z,u] <–comb–> L[x,y,u]

______________________________________

FO[x,y,z] Arrow[u,x]  <–comb–> FO[u,y,z]

FO[x,y,z] Arrow[y,u] <–comb–> FO[x,u,z]

FO[x,y,z] Arrow[z,u] <–comb–> FO[x,y,u]

______________________________________

A[x,y,z] Arrow[u,x]  <–comb–> A[u,y,z]

A[x,y,z] Arrow[u,y] <–comb–> A[x,u,z]

A[x,y,z] Arrow[z,u] <–comb–> A[x,y,u]

______________________________________

FI[x,y,z] Arrow[u,x]  <–comb–> FI[u,y,z]

FI[x,y,z] Arrow[u,y] <–comb–> FI[x,u,z]

FI[x,y,z] Arrow[z,u] <–comb–> FI[x,y,u]

______________________________________
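The port-absorbing comb moves above, applied in the + direction, can be sketched in code. I assume elements encoded as tuples (name, ports...) in the bracket order of the grammar; the first two comb moves (on pairs of Arrows, and on loops) are left out of this sketch:

```python
# Port roles per element, by slot index after the name, following the
# BNF of part I: L[in,out,out], FO[in,out,out], A[in,in,out],
# FI[in,in,out], T[in].
IN_IDX  = {"L": (0,), "FO": (0,), "A": (0, 1), "FI": (0, 1), "T": (0,)}
OUT_IDX = {"L": (1, 2), "FO": (1, 2), "A": (2,), "FI": (2,), "T": ()}

def comb(mess):
    """Absorb Arrow elements into node ports (comb moves, + direction),
    repeating until no such move applies."""
    mess = [list(el) for el in mess]
    while True:
        step = None
        for i, el in enumerate(mess):
            if el[0] != "Arrow":
                continue
            u, v = el[1], el[2]                   # Arrow[u, v]
            for j, nd in enumerate(mess):
                if i == j or nd[0] == "Arrow":
                    continue
                for k in IN_IDX.get(nd[0], ()):   # v enters a node in port
                    if nd[k + 1] == v:
                        step = (i, j, k + 1, u)
                for k in OUT_IDX.get(nd[0], ()):  # u leaves a node out port
                    if nd[k + 1] == u:
                        step = (i, j, k + 1, v)
        if step is None:
            return [tuple(el) for el in mess]
        i, j, k, w = step
        mess[j][k] = w                            # absorb the Arrow
        del mess[i]
```

For example, comb on L[x,y,z] Arrow[z,u] returns L[x,y,u], matching the third move in the L family above.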

Now, more interesting moves.

The beta move.     The name is inspired by the beta reduction of lambda calculus (explanations later).

L[a1,a2,x] A[x,a4,a3]   <–beta–> Arrow[a1,a3] Arrow[a4,a2]

or graphically

beta_move_exp

If we use the visual trick from the pedantic rant, we may depict the beta move as:

beta_move_exp_2

i.e. we use as free port variables the relative positions  of the ports in the doodle.  Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graphical level. The chemlambda graphs are not planar graphs.
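As a sketch, here is the beta move in the + direction on a g-pattern encoded as tuples (name, ports...); the function name beta and the encoding are my assumptions:

```python
def beta(mess):
    """Perform one beta move in the + direction, if possible:
    L[a1,a2,x] A[x,a4,a3]  ->  Arrow[a1,a3] Arrow[a4,a2]."""
    for i, l in enumerate(mess):
        if l[0] != "L":
            continue
        for j, ap in enumerate(mess):
            if ap[0] != "A" or ap[1] != l[3]:
                continue              # need A's left.in == L's right.out
            a1, a2 = l[1], l[2]
            a4, a3 = ap[2], ap[3]
            rest = [e for k, e in enumerate(mess) if k not in (i, j)]
            return rest + [("Arrow", a1, a3), ("Arrow", a4, a2)]
    return list(mess)
```

The produced Arrow elements are then removed by comb moves, exactly as in the LEFT/RIGHT picture.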

The FAN-IN move. This is a move which resembles the beta move.

FI[a1,a4,x] FO[x,a2,a3]

<–FAN-IN–>

Arrow[a1,a3]  Arrow[a4,a2]

(I wrote it like this because it does not fit in one line)

Graphically, with the obvious convention from the pedantic rant, the move is this:

fanin_move_exp

The FAN-OUT moves.  There are two moves: CO-COMM (because it resembles a diagram which expresses co-commutativity) and CO-ASSOC (same reason, but for co-associativity).

FO[x,a1,a2]  <–CO-COMM–> FO[x,a2,a1]

and

FO[a1,u,a2] FO[u,a3,a4]

<-CO-ASSOC->

FO[a1,a3,v] FO[v,a4,a2]

or graphically:

convention_3

The DIST moves. These are called distributivity moves. Remark that the LEFT pattern is simpler than the RIGHT pattern in both moves.

A[a1,a4,u] FO[u,a2,a3]

<–DIST–>

FO[a1,a,b] FO[a4,c,d] A[a,c,a2] A[b,d,a3]

and

L[a1,a4,u] FO[u,a2,a3]

<–DIST–>

FI[a1,a,b] FO[a4,c,d] L[c,b, a2] L[d,a,a3]

 

or graphically:

convention_6

The LOCAL PRUNING moves. These are used with the termination node. There are four moves:

FO[a1,a2,x] T[x] <–LOC-PR–> Arrow[a1,a2]

L[a1,x,y] T[x] T[y] <–LOC-PR–> T[a1]

FI[a1,a2,x] T[x] <–LOC-PR–> T[a1] T[a2]

A[a1,a2,x] T[x] <–LOC-PR–> T[a1] T[a2]

 

or graphically

convention_4

 

____________________________________________________________

Artificial Life 14 from MIT Press, freely available under Creative Commons licenses

Thanks to the organizers of the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York! That’s the way to do it [source]:

“The proceedings of ALIFE 14 are now available from MIT Press. The full proceedings, as well as individual papers, are freely available under Creative Commons licenses.

http://mitpress.mit.edu/books/artificial-life-14 ”

Great!

Here is a link to our published  article.

______________________________________________________

Lambda calculus and the fixed point combinator in chemlambda (II)

This is the second  (continuing from part I)  in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation  self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which is accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York, (go see the presentation of Louis Kauffman if you are near the event.) Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

1.  The chemlambda formalism continued: molecules, patterns and g-patterns.

 

Chemlambda works with graphs which are called “molecules”. In the last post a grammar version of molecules was also proposed, based on the idea that a molecule is made of a finite number of graphical elements, each graphical element having ports, the ports being marked with “port variables”; a connection between two ports (in the graphical version) is the same as the repetition of a port variable in the grammar version of a molecule.

Here are the graphical elements, along with their grammar versions:

  • lambda node

lambda_graph_elem

 

  • fanout node

fanout_graph_elem

  • application node

appl_graph_elem

  • fanin node

fanin_graph_elem

  • arrow

arrow_graph_elem

  • termination node

termin_graph_elem

  • loop

loop_graph_elem

There is only one loop element. The orientation of the loop, as represented in a drawing, is not relevant!

_________________________________________________________

A chemlambda  graph   is any graph made by a finite number of graphical elements, such that there is no conflict of orientation of arrows (i.e. the ports named “in” may connect only with the ports named “out”).

By convention, an arrow graphical element which has no free port (i.e. which has the middle.in port and the middle.out port connected) is represented as an arrow between “invisible nodes”. The ports of an arrow element which are connected are called “invisible ports”.

A molecule is a chemlambda graph without invisible nodes.

By convention, we identify a chemlambda graph with the molecule obtained  by erasing the invisible nodes.

A pattern is a chemlambda graph with the free ports and invisible nodes decorated with different port variables.

Let’s give a name for the grammar version of a pattern.

A mess is any finite multiset of graphical elements in grammar version. The port variables of a mess are those which appear as arguments in the graphical elements of the mess. The set of port variables of a mess M is denoted by V(M).

A g-pattern is a mess with the properties:

  • any port variable appears at most twice,
  • if a port variable appears twice then it appears in a pair of “in” and “out” ports.

Simple examples:

  • A[x,x,y] is a mess which is not a g-pattern, because the port variable x appears in TWO “in” ports
  • A[x,y,x]  and A[y,x,x] are g-patterns
  • FO[x,y,y] is a mess which is not a g-pattern, because the port variable y appears in TWO “out” ports.
  • L[a,b,c] A[b,d,a] A[x,y,d] is a g-pattern (each repeated port variable pairs an out port with an in port)
  • L[a,b,c] A[a,d,b] A[a,y, d]  is a mess which is not a g-pattern, because the port variable “a” appears in 3 places
  • L[a,b,c] A[u,d,a] A[z,y,d] FO[b,u,z] is a g-pattern
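The two conditions are easy to check mechanically. A sketch, assuming elements are encoded as tuples (name, ports...) in the bracket order of the grammar, with the in/out role of each port slot read off the BNF of part I:

```python
# In-port slot indices per element; all other slots are out ports.
IN_IDX = {"L": (0,), "FO": (0,), "A": (0, 1), "FI": (0, 1),
          "Arrow": (0,), "T": (0,)}

def is_g_pattern(mess):
    """A mess is a g-pattern iff every port variable appears at most
    twice, and a twice-appearing one pairs an in port with an out port."""
    roles = {}
    for el in mess:
        for k, p in enumerate(el[1:]):
            role = "in" if k in IN_IDX.get(el[0], ()) else "out"
            roles.setdefault(p, []).append(role)
    return all(len(r) <= 2 and (len(r) < 2 or set(r) == {"in", "out"})
               for r in roles.values())
```

For instance, A[x,x,y] fails because x occupies two in ports, while A[x,y,x] passes because x occupies the left.in and the middle.out ports.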

 

 

The set of free port variables of a g-pattern M is the set of port variables which appear only once. This set is denoted by FV(M) and it decomposes into a disjoint union of FV_in(M) and FV_out(M): the free port variables which appear in an “in” port and the free port variables which appear in an “out” port.

There are g-patterns which have an empty set of port variables: for example   V(loop)  is the empty set.

The set of invisible variables of a g-pattern M, denoted by Inv(M),  is made by those variables of M which are not free and they appear in one of the ports of an arrow element.

As an illustration, consider this:

ex_pattern

  • up is a pattern, with free ports decorated by a1, a4, a5 and with invisible nodes decorated by a2, a3
  • in the middle is the corresponding g-pattern, with one free port variable of type “in” a1,  invisible variables a2, a3 and two free port variables of type “out” a4, a5
  • down, the molecule obtained from the pattern above, with invisible nodes erased and with no decoration of its free ports.

 

_________________________________________________________

Synapse servers: plenty of room at the virtual bottom

The Distributed GLC is a decentralized asynchronous model of computation which uses chemlambda molecules. In the model, each molecule is managed by an actor, which has the molecule as its state, and the free ports of the molecule are tagged with other actors’ names. Reductions between molecules (like chemical reactions) happen only for those molecules whose actors know one another, i.e. only between molecules managed by actors, say :Alice and :Bob, such that:

  • the state of :Alice (i.e. her molecule) contains a half of the graph_{1} pattern of a move, along with a free port tagged with :Bob name.
  • the state of :Bob contains the other half of the graph_{1} pattern, with a free port tagged with :Alice name
  • there is a procedure based exclusively on communication by packets, TCP style, [UPDATE: watch this recent video of an interview of Carl Hewitt!] which allows performing the reduction on both sides and which later informs the other neighboring actors (i.e. those which appear as tags in :Alice’s or :Bob’s state) about possible new tags at their states, due to the reduction which happened (this can be done for example by performing the move either via introduction of new invisible nodes in the chemlambda molecules, or via the introduction of Arrow elements, then followed by combing moves).

Now, here is possibly a better idea, to explore. It connects to a thread which is not developed for the moment (anybody interested? contact me): neural type computation with chemlambda and GLC.

The idea is that once the initial configuration of actors and their initial states are set, then why not move the actors around and make the possible reductions only if the actors :Alice and :Bob are in the same synapse server.

Because the actor IS the state of the actor, the rest of the stuff a GLC actor knows to do is so trivially easy that it is not worth dedicating one program per actor running in some fixed place. This way, a synapse server can do thousands of reductions on different actors’ datagrams (see further) at the same time.

Instead:

  • be bold and use connectionless communication, UDP like, to pass the actors states (as datagrams) between servers called “synapse servers”
  • and let a synapse server check datagrams to see if by chance there is a pair which allows a reduction, then perform the reduction in one place, then let them walk, modified.

There is so much room for the artificial chemistry chemlambda at the bottom of the Internet layers that one can then add some learning mechanisms to the synapse servers. One is for example this: suppose that a synapse server matches two actors’ datagrams and finds there is more than one possible reduction between them. Then the synapse server asks its neighbour synapse servers (which perhaps correspond to a virtual neuroglia) if they encounter this configuration. It chooses then (according to a simple algorithm) which reduction to make, based on the info coming from its neighbours in the same glial domain, and tags the packets which result after the reduction (i.e. adds to them, in some field, a code for the move which was made). Successful choices are those which have descendants which are active, say after more than n reductions.

Plenty of possibilities, plenty of room at the bottom.

Lambda calculus and the fixed point combinator in chemlambda (I)

This is the first in a series of expository posts where we put together in one place the pieces from various places about:

  • how is treated lambda calculus in chemlambda
  • how it works, with special emphasis on the fixed point combinator.

I hope to make this presentation as easy to follow as possible, particularly by trying to make it self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)

_________________________________________________________

This series of posts may be used as a longer, more detailed version of sections

  • The chemlambda formalism
  • Chemlambda and lambda calculus
  • Propagators, distributors, multipliers and guns
  • The Y combinator and self-multiplication

from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,  arXiv:1403.8046 [cs.AI],  which is accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York, (go see the presentation of Louis Kauffman if you are near the event.) Here is a link to the published  article, free, at MIT Press.

_________________________________________________________

1.  The chemlambda formalism.

Chemlambda has been introduced with the name “chemical concrete machine” (as an allusion to Berry and Boudol CHAM)  in the article M. Buliga, Chemical concrete machine, arXiv:1309.6914 [cs.FL] , also available on figshare here: (doi).

 

Chemlambda is a graph-rewrite system.  From the linked wiki page, only slightly edited:

“Graph transformation, or graph rewriting, concerns the technique of creating a new graph out of an original graph algorithmically.

Graph transformations can be used as a computation abstraction. The basic idea is that the state of a computation can be represented as a graph, further steps in that computation can then be represented as transformation rules on that graph. Such rules consist of an original graph, which is to be matched to a subgraph in the complete state, and a replacing graph, which will replace the matched subgraph.

Formally, a graph rewriting system usually consists of a set of graph rewrite rules of the form L \rightarrow R, with L being called pattern graph (or left-hand side) and R being called replacement graph (or right-hand side of the rule). A graph rewrite rule is applied to the host graph by searching for an occurrence of the pattern graph (pattern matching) and by replacing the found occurrence by an instance of the replacement graph.”

In order to define chemlambda we need two ingredients:

  • the graphs used in the formalism
  • the graph rewrites.

 

A chemlambda graph (aka a molecule) is any graph made by a finite number of the following graphical elements:

4_chem

A BNF form would be:

<graphical-element> ::= <lambda> | <fanout> |  <appl> | <fanin> | <arrow>  | <loop> | <termin>

<middle.in>::= port  variable

<middle.out>::= port  variable

<left.in> ::= port variable

<left.out> ::= port variable

<right.in> ::= port variable

<right.out>::= port variable

<lambda>::= L[<middle.in>,<left.out>,<right.out>]

<fanout>::= FO[<middle.in>,<left.out>,<right.out>]

<appl>::= A[<left.in>,<right.in>,<middle.out>]

<fanin>::=FI[<left.in>,<right.in>,<middle.out>]

<arrow>::= Arrow[<middle.in>,<middle.out>]

<loop>::= loop

<termin>::= T[<middle.in>]

This notation is inspired by Louis Kauffman’s proposal to use commutative polynomials for the graphical elements (then, the variables of the polynomials play the roles of the ports). On July 3rd 2014, Louis hacked the Mathematica symbolic reduction for commutative polynomial algebra in order to reduce the fixed point combinator in GLC automatically. I take his approach and use it here for the grammar notation of chemlambda molecules.

Let’s see first the names of the elements, then let’s discuss more details about them.

  • (a) is the lambda node. It has one in arrow, called the port middle.in, and two out arrows, which are NOT interchangeable, called left.out (the one from the left of the middle.in arrow port) and right.out (the one from the right of the middle.in port).
  • (b) is the fanout node. Like the lambda node, it has one in arrow, called the port middle.in, and two out arrows, called the ports left.out and right.out, following the same graphical conventions as previously.
  • (c) is the application node. It has only one out arrow, called the port middle.out, and two in arrows, which are NOT interchangeable. The two arrow ports are called left.in (the one from the left of the middle.out port) and right.in (the one from the right of the middle.out port).
  • (d) is the fanin node. It has the same configuration of ports as the application node.
  • (e) there are three graphical elements here, let’s take them in order. (up) is an arrow. In chemlambda, arrows with one or both ends free (i.e. not connected to a chemlambda node) are allowed. (middle) loops with no nodes are also accepted. (down) is the termination node, with only one in port.

A  shorter, geometrical way to say all this is that the 3valent nodes are all locally planar.

A chemlambda graph is called a “molecule”; it is made of a finite number of graphical elements, i.e.

<molecule> ::= <graphical-element> | <molecule> <molecule>

with the convention that:

  • the order of the writing of the graphical elements of the molecule does NOT matter (one can permute the graphical elements in the grammar notation of the molecule)
  • any in port is connected with at most one out port
  • any out port is connected with at most one in port
  • any concatenation of two arrows is an arrow or a loop
  • any concatenation of an arrow and a port is a port.

If we use a grammar for this (some might like the 1000 words instead of the picture), that means that a <molecule> is well written if:

  • any port variable appears at most twice; when it appears twice, one of the appearances is an in port (i.e. <middle.in> | <left.in> | <right.in>) and the other is an out port (i.e. <middle.out> | <left.out> | <right.out>)
  • when a port variable appears only once we call it a free port variable. The collection of free port variables (in grammar notation) is denoted by FV(<molecule>). In the graphical notation we shall add, when needed, the free port variables in blue. Notice however that the graphical notation makes a difference between molecules, which are graphs with free ports NOT tagged with a port variable, and patterns, which are graphs with free ports tagged with distinct port variables. This difference does not exist at the level of the grammar notation (alas!).
  • the concatenation of two arrows, or of an arrow and a port, means that at the grammar level we have to introduce some combing rewrites; more about this later.
  • at the level of the grammar there is all the bookkeeping of port variables, which have to be managed and renamed consistently. At the graphical level this problem does not exist, with the exception of free port variables, which may be used in the definition of graph rewrites.

Question: why are only two colours needed in the graphical notation for the 3valent nodes?

Answer: because the lambda node and the fanout node have the same configuration of ports, so we colour the first red and the second green in order to tell them apart. Likewise, the fanin node and the application node have the same configuration of ports, so we colour the first red and the second green.

Let’s see some simple examples. (You can click on the figure to make it bigger.)

molecules_ex

First row: at the left is a molecule in graphical notation. At the right is the same molecule in grammar notation. Look at the arrow which appears vertical in the graphical notation at the left; where is it in the grammar notation? Well, look at the port variable “e”. It appears in the right.out port of the lambda node L[a,b,e] and also in the left.in port of the application node A[e,d,c].

In the grammar notation the same graph may be written as

L[a,b,e] Arrow[e,f] A[f,d,c]

but this is for the next time, when we shall talk about the combing rewrites. (What will happen is that the Arrow[e,f] will disappear because it connects the out port named with the variable “e” with the middle.in port of the Arrow[e,f], which makes the Arrow element redundant.)
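As a preview, the elimination of the Arrow element can be sketched as follows (my own implementation, using the same hypothetical encoding of a molecule as a list of (node name, port list) pairs; the combing rewrites themselves are for the next post):

```python
# comb() glues the two ends of each Arrow element by renaming the out-port
# variable to the in-port variable everywhere else, and turns an Arrow that
# closes on itself into a loop element.
def comb(molecule):
    """Eliminate every Arrow element of the molecule."""
    molecule = [(name, list(ports)) for name, ports in molecule]
    while True:
        arrow = next((el for el in molecule if el[0] == "Arrow"), None)
        if arrow is None:
            return molecule
        e, f = arrow[1]
        molecule.remove(arrow)
        if e == f:
            molecule.append(("loop", []))   # Arrow[e,e] combs to a loop
            continue
        for _, ports in molecule:           # rename f to e everywhere else
            for i, var in enumerate(ports):
                if var == f:
                    ports[i] = e

print(comb([("L", ["a", "b", "e"]), ("Arrow", ["e", "f"]), ("A", ["f", "d", "c"])]))
# → [('L', ['a', 'b', 'e']), ('A', ['e', 'd', 'c'])]
```

Chains of arrows comb one at a time, because each removal renames the ports of the remaining Arrow elements too.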

Second row:   At the left a graph made by two arrows and two loops. Notice two things:

  • it does not matter if the arrows cross; there is no node there, only an artifact of drawing the graph in the plane of the screen,
  • also, even if on the screen we see two oriented loops with opposed orientations in the plane of the screen, as graphical elements the two loops (which have no node, in particular no 3valent node) are the same. The “locally planar” constraint acts only as a cyclic order on the ports of the 3valent nodes (which, combined with the fact that each of the four 3valent nodes has one special port, the one called “middle”, gives a non-ambiguous notion of “left” and “right” ports).

At the right we see the grammar notation of the same graph.

Third row: Here is a more substantial graph (at the left) and its grammar notation at the right. There are 3 nodes: a fanout, a lambda and a termination node. Notice that the port variable “a” appears twice in L[a,b,a], which corresponds to the fact that the right.out port and the middle.in port of that lambda node are connected (both being tagged with the port variable “a”).

_________________________________________________________

Here are the 1000 words which are needed to properly explain the notion of a molecule in chemlambda (i.e. if you don’t trust your sight). Such a description is of course useful for writing programs.

After this straightforward description, perhaps an extremely boring one because it can be immediately deduced from the short definition, next time we shall see the graph rewrites of chemlambda.

_________________________________________________________

 

Open notebook science for everyone, done by everyone

I am deeply impressed by the post:

Jean Claude Bradley Memorial Symposium; July 14th; let’s take Open Notebook Science to everyone

Here are some quotes:

Jean-Claude Bradley was one of the most influential open scientists of our time. He was an innovator in all that he did, from Open Education to bleeding edge Open Science; in 2006, he coined the phrase Open Notebook Science. His loss is felt deeply by friends and colleagues around the world.

“Science, and science communication is in crisis. We need bold, simple visions to take us out of this, and Open Notebook Science (ONS) does exactly that. It:

  • is inclusive. Anyone can be involved at any level. You don’t have to be an academic.
  • is honest. Everything that is done is Open, so there is no fraud, no misrepresentation.
  • is immediate. The science is available as it happens. Publication is not an operation, but an attitude of mind
  • is preserved. ONS ensures that the record, and the full record, persists.
  • is repeatable or falsifiable. The full details of what was done are there so the experiment can be challenged or repeated at any time
  • is inexpensive. We waste 100 Billion USD / year of science through bad practice so we save that immediately. But also we get rid of paywalls, lawyers, opportunity costs, nineteenth century publishing practices, etc.”

Every word is true!

This is the future of research communication. Or at least the beginning of it. ONS has open, perpetual peer review as a subset.

Personal notes.  Look at the left upper corner of this page, it reads:

chorasimilarity | computing with space | open notebook.

Yay! the time  is coming!  the weirdos who write on arXiv, now figshare,  who use open notebooks, all  as a replacement for legacy publication,   will soon be mainstream :)

Now, seriously, let’s put some gamification into it, so those who ask “what IS a notebook?”  can play too. They ARE the future. Hope that soon the Game of Research and Review, aka playing  MMORPG  games at the knowledge frontier, will emerge.

There are obvious reasons for that:

  • the smartphone frees us from staying in one physical place while we surf the virtual world,
  • which has as an effect that we rediscover that physical space is important for our interactions, see Ingress,
  • gamification of human activities is replacing the industrial era habits (pyramidal, static organizations, uniformization, identification of humans with their functions (worker, consumer, customer, student) and with their physical locations (this or that country or city, students in the benches, professors at the chair), payment for working hours and for staying at the counter, legacy publishing).

See also Notes for “Internet of Things not Internet of Objects”.

 

_________________________________________________

 

 

Some categorical thinking about chemlambda and why this may suck

I continue to file here, for dissemination and further processing, useful bits written about chemlambda and the Distributed  GLC model of decentralized computation.

Please contribute, contradict and dispute me here or by private communications, of course under the constraints of long attention span and reasonable understanding of the subject.

 

_________________________________

If you want a category for chemlambda, then it is the category with arrows being the graph rewrites and with objects being the chemlambda graphs.

That is the sloppy formulation. Here is a more precise one. But before that, let’s be sure you know what a chemlambda graph is and what a move (or graph rewrite) is.

A chemlambda graph (aka a molecule) is any graph made by a finite number of the following graphical elements:

4_chem

The 3valent nodes are all locally planar. What is the meaning of this? Read the following comic strip (click on the image to make it bigger).

node_application_1

The list of moves (graph rewrites) of chemlambda is given in several places, for a short clear description read for example

M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication,    arXiv:1403.8046  , accepted in the
ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems, July 30th – Aug 2nd 2014, New York

Let’s take one move, the graphic beta move: how does it function? See the following image.

convention_2_mod

________________________________

OK then, what is a category theory approach to chemlambda?

… (and, more importantly, to  asynchronous decentralized computations with chemlambda?)

One possible formulation is the following.

1. You know that you can define a groupoid only in terms of arrows, as a family of items (arrows) with a partially defined composition operation which takes a pair of arrows from the domain of definition and gives an arrow, with a totally defined inverse operation which takes an arrow and gives its inverse.

There are some axioms satisfied by these two operations.
You identify the objects of the groupoid as arrows a^{-1} a.

2. Take then a graph in chemlambda (call it G) and associate to it the groupoid R(G) which is generated by all the graph rewrites which can be done on G. It is a groupoid, because all graph rewrites are reversible  (this gives the inverse) and because the composition of graph rewrites is the composition of arrows.

3. Remark that the objects of this groupoid are not simply the graphs which can be related to G by a finite sequence of reductions, but more. An object appears to be a graph with a selected subgraph which is the subject of the move.

4. There is more structure. Because each arrow is a graph rewrite, and each graph rewrite has a name, it follows that you have a decorated groupoid, with arrows decorated by words in the free group generated by the graph rewrite names.

5. You may want to privilege some directions of moves (move is a shorter name for graph rewrite) over the others. For example, the beta move in the usual direction over the beta move in the opposite direction. But you may want to keep other directions as likely to be used, for example as in the CO-COMM (co-commutativity) and CO-ASSOC (co-associativity) moves.

 

convention_3

All in all this may give several variants of supplementary structures, all coming actually from supplementary structure over the free group generated by the moves names. Among them:
5.1. Partial order relations, like: an object A is smaller than an object  B if there is an arrow from A to B decorated only by positive or neutral moves.
5.2. Give to each move name a probability such that the probability for the move in the positive direction is greater than the probability for the  move in the opposite direction, and 50-50 for the neutral moves. Then define random walks on the groupoid.
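Point 5.2 can be sketched as follows (a toy illustration; the move names are from chemlambda, but the numeric probability values are invented for the example):

```python
import random

# Toy illustration: each move name gets a probability for its "positive"
# direction; the neutral moves CO-COMM and CO-ASSOC get 50-50.
MOVE_BIAS = {"beta": 0.9, "FAN-IN": 0.8, "CO-COMM": 0.5, "CO-ASSOC": 0.5}

def random_step(applicable_moves, rng=random):
    """One step of a random walk on the groupoid: pick an applicable move,
    then pick its direction with the bias above.  The returned label is a
    generator (or its inverse) of the free group of move names."""
    move = rng.choice(applicable_moves)
    direction = "+" if rng.random() < MOVE_BIAS[move] else "-"
    return move + direction

random.seed(0)
# A word in the free group generated by the move names, decorating one
# possible path of reductions:
print([random_step(list(MOVE_BIAS)) for _ in range(5)])
```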

6. Now, there is more structure, but it may be misleading. Add a lattice structure to the objects, coming from the fact that a disjoint union of two chemlambda graphs is a chemlambda graph. This will produce more objects than before, because until now the objects are a graph with one place selected for a rewrite. You get a bigger groupoid, which admits as objects chemlambda graphs with more than one place selected for rewrites and  new arrows which can be described as parallel composition of  other arrows.

From this point it matters very much what you choose to do.  Because it starts to suck, here is why.

At point 6 a global point of view is already introduced, namely that parallel composition has a meaning (the problem is that in the real world this composition has no meaning in the absence of a global point of view).

Another criticism I have against such an approach is that the groupoid R(G) and most of the other structure introduced is global, in the sense that it is needed only for explaining the model by following the category fashion. As an outcome we obtain a God’s view of the model.

You don’t need this  structure to define what a local move is.

Even worse, let’s  make an analogy between chemlambda graphs and you, me, any other user of the net or any living being in the world,  and between moves and any interactions between  you, me and anybody else.
Then you don’t need to know that, in China, some donkey stumbled upon some rock while we have a meaningful conversation. Heck, you can’t even define what it means that, as we have this conversation, that donkey in China stumbled upon that rock, unless you use some external, unneeded reference.

It makes even less sense to model this by saying that it does not matter, because our conversation is the same whether the donkey stumbled before you read this post or after you read this post.

 

Because such an explanation of a local, asynchronous interaction introduces through the back door the need for a global reference, with “independent” meaning only that the succession relations with respect to this global reference do not matter.
_______________________________________________

 

 

Distributed GLC discussion

This is my contribution to a discussion about the distributed GLC model of computation and about the associated gui.

Read carefully.  It has a fractal structure.

Basics about chemlambda graphs and the GUI

The chemlambda graphs (molecules) are not flowcharts. One just has to specify certain (known) graphs with at most 4 nodes and how to replace them with other known simple graphs. That is all.

That means that one needs:

- a file which specifies what kind of graphs are used (by giving the type of nodes and arrows)

- a file which specifies which are the patterns (i.e. graphs) and which are the rewrite moves

- and a program which takes these files and a graph as input and does things like: checking if the graph is of the kind described in file 1, finding any patterns from file 2 and doing the rewrite in a place the user wants, doing a sequence of rewrites until it forks, if the user wants, or taking as input a lambda expression given by the user and transforming it into a graph.

- then there are the graph visualization program(s); that is the hard part, but it is already done in multiple places. It means that one only has to write the conversions of file formats from and to the viz tool.

That is the minimal configuration.
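To make this concrete, here is a sketch of the pattern-matching part for one move (assuming, as an illustration only, a molecule encoded as a list of (node name, port variable list) pairs in the g-pattern style; the chunk searched for is the graph_1 of the beta move, i.e. a lambda node whose right.out port is wired to the left.in port of an application node):

```python
# The graph_1 of the beta move: a lambda node whose right.out variable
# reappears as the left.in variable of an application node.
def find_beta_chunks(molecule):
    """Return the (lambda element, application element) pairs forming a
    beta-reducible chunk; these are the sites to annotate for the user."""
    chunks = []
    for lam in molecule:
        if lam[0] != "L":
            continue
        right_out = lam[1][2]               # third port of L is right.out
        for app in molecule:
            if app[0] == "A" and app[1][0] == right_out:
                chunks.append((lam, app))   # left.in of A meets right.out of L
    return chunks

mol = [("L", ["a", "b", "e"]), ("A", ["e", "d", "c"]), ("FO", ["c", "f", "g"])]
print(find_beta_chunks(mol))
# → [(('L', ['a', 'b', 'e']), ('A', ['e', 'd', 'c']))]
```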

Decorations

There are various reasons why one wants to be able to decorate the graphs, locally, as a supplementary thing, but in no way is this needed for the basic process.

Concerning decorations, one needs a file which specifies how to decorate arrows and which are the relations coming from nodes. But this is not part of the computation.

If we want to make it more powerful then it gets more complex because if we want to do symbolic computations of decorations (like elimination of a relation coming from a node) then probably it is better to output a file of decorations of arrows and relations from nodes and input it in a symbolic soft, like mathematica or something free, there is no need to reinvent the wheel.

After the graph rewrite you lose the decoration; that’s part of the fun which makes decorations less interesting and supposedly makes the computation more secure.

But that depends on the choice of decoration rules.

For example, if you decorate with types then you don’t lose the decoration after the move. Yes, there are nodes and arrows which disappear, but outside of the site where the move was applied, the decorations don’t change.

In the particular case of using types as decorations, there is another phenomenon though. If you use the decoration with types for graphs which don’t represent lambda terms then you will get supplementary relations between the types. A way of saying this is that some graphs are not well typed, meaning that the types form an algebra which is not free (you can’t eliminate all relations). But the algebra you get, albeit not free, is still an interesting object.

So the decoration procedure and the computation (reduction) procedure are orthogonal. You may decorate a fixed graph and do symbolic algebraic computations with the decorations, in an algebra generated by the graph, in the same way as a knot generates an algebra called a quandle. Or you may reduce the graph, irrespective of the decorations, and get another graph. Decorations of the first graph don’t persist, a priori, after the reduction.

An exception is decoration with types, which persists outside the place where the reduction is performed. But there is another problem, that even if the decoration with types satisfies locally (i.e. at the arrows of each node) what is expected from types, many (most) of the graphs don’t generate a free algebra, as it would be expected from algebras of types.

The first chemical interpretation

There is the objection that the reductions can’t be like chemical reactions because the nodes (atoms) can’t appear or disappear, there should be a conservation law for them.

Correct! What then?

The reduction, let’s pick one – say the beta move – is a chemical reaction of the graph (molecule) with an enzyme which in the formalism appears only under the name “beta enzyme”, but is not specified as a graph in chemlambda. Then, during the reaction, some nodes may disappear, in the sense that they bond to the beta enzyme and make it inactive from then on.

So, the reduction A –>beta B appears as the reaction

A + beta = B + garbage
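A sketch of this reaction, under the same hypothetical list-of-nodes encoding used earlier (the port wiring L[a,b,c] A[c,d,e] → Arrow[a,e] Arrow[d,b] is my reading of the graphic beta move from the g-pattern notation, i.e. body to result and argument to variable; treat it as an assumption):

```python
# Sketch of A + beta = B + garbage: the L and A nodes of the chunk "bond to
# the enzyme" and leave the molecule; two Arrow elements remain, recording
# the rewiring.  The wiring convention is an assumption (see lead-in).
def beta_reduce(molecule):
    """Perform one beta move, if a beta chunk exists; otherwise return the
    molecule unchanged (no reaction site for this enzyme)."""
    for lam in molecule:
        if lam[0] != "L":
            continue
        a, b, c = lam[1]
        for app in molecule:
            if app[0] == "A" and app[1][0] == c:
                _, d, e = app[1]
                rest = [el for el in molecule if el is not lam and el is not app]
                return rest + [("Arrow", [a, e]), ("Arrow", [d, b])]
    return molecule

print(beta_reduce([("L", ["a", "b", "e"]), ("A", ["e", "d", "c"])]))
# → [('Arrow', ['a', 'c']), ('Arrow', ['d', 'b'])]
```

The two Arrow elements would then disappear by the combing rewrites, leaving only the rewiring of the remaining nodes.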

How moves are performed

Let’s get a bit detailed about what moves (graph rewrites) mean and how they are done. Every move says replace graph_1 with graph_2 , where graph_1, graph_2 are graphs with a small number of nodes and arrows (and also “graph” may well be made only by two arrows, like is the case for graph_2 for the beta move).

So, now you have a graph G. Then the program looks for graph_1 chunks in G and adds some annotation (perhaps in an annotation file it produces). Then there may be a script which inputs the graph G and the annotation file into the graph viz tool, with the effect, for example, that the graph_1 chunk appears phosphorescent on the screen. Or, say, when you hover with the mouse over the graph_1 chunk it changes colour, or an ellipse encircles it with a tag saying “beta”.

Suppose that the user clicks, giving his OK for performing the move. Then on the screen the graph changes, but the previous version is kept in the memory, in case the user wants to go back (the moves are all reversible, but sometimes, like in the case of the beta move, the graph_2 is too common, is everywhere, so the use of both senses of some moves is forbidden in the formalism, unless it is used in a predefined sequence of moves, called “macro move”).

Another example would be that the user clicks on a button which says “go along with the reductions as long as you can do it before you find a fork in the reduction process”. Then, of course it would be good to keep the intermediate graphs in memory.

Yet another example would be that of a node or arrow of the graph G which turns out to belong to two different interesting chunks. Then the user should be able to choose which reduction to do.

It would be good to have the possibility to perform each move upon request,

plus

the possibility to perform a sequence of moves which starts from a first one chosen by the user (or from the only one available in the graph, as is the case for many graphs coming from lambda terms which are obtained by repeated currying and nesting)

plus

the possibility to define new, composed moves at once; for example, you notice that there is a chunk graph_3 which contains graph_1 and, after reduction of graph_1 to graph_2 inside graph_3, graph_3 becomes graph_4; graph_4 now contains a chunk graph_1 of another move, which can be reduced, and graph_4 becomes graph_5. Now you may want to say: I save this sequence of two moves from graph_3 to graph_5 as a new move. The addition of this new move does not change the formalism, because you may always replace this new move with a sequence of two old moves.

Practically the last possibility means the ability to add new chunks graph_3 and graph_5 in the file which describes the moves and to define the new move with a name chosen by the user.

plus

Finally, you may want to be able to either select a chunk of the input graph by clicking on nodes and arrows, or to construct a graph and then say (i.e. click a button) that from now on that graph will be represented as a new type of node, with a certain arity. That means writing in the file which describes the type of nodes.

You may combine the last two procedures by saying that you select or construct a graph G. Then you notice that you may reduce it in an interesting way (for whatever further purposes) which looks like this:

- before the chain of reduction you may see the graph G as being made by two chunks A and B, with some arrows between some nodes from chunk A and some nodes from chunk B. After the reduction you look at the graph as being made by chunks C, D, E, say.

- Then you “save” your chunks A, B, C, D, E as new types of nodes (some of them may be of course just made by an old node, so no need to save them) and you define a new move which transforms the chunk AB into the chunk CDE (written like this only because of the 1D constraints of writing, but you see what I mean, right?).

The addition of these new nodes and moves don’t change the formalism, because there is a dictionary which transforms each new type of node into a graph made of old nodes and each new move into a succession of old moves.

How can this be done:

- use the definition of new nodes for the separation of G into A, B and for the definition of G’ (the graph after the sequence of reductions) into C,D,E

- save the sequence of moves from G to G’ as new composite move between G and G’

- produce a new move which replaces AB with CDE

It is interesting how this should work properly; probably one should keep both the move AB to CDE and the move G to G’, as well as the translations of G into AB and G’ into CDE.

We’re getting close to actors, but the first purpose of the gui is not to be a sandbox for the distributed computation, that would be another level on top of that.

The value of the sequence of moves saved as a composite move is multiple:

- the graph_3 which is the start of the sequence contains graph_1 which is the start of another move, so it always leads to forks: one may apply the sequence or only the first move

- there may be a possible fork after you do the first reduction in graph_3, in the sense that there may be another chunk of another move which could be applied

GLC actors

The actors are a special kind of decoration which transform (some) moves (at the choice of the user) into interaction devices.

You decorate the nodes of a graph G with actors names (they are just names, for the moment, at your choice). As a convention let’s say that we denote actor names by :a , :b , etc

You also decorate arrows with pairs of names of actors, those coming from the decorations of nodes, with the convention that (:a, :a) is identified (in the user’s mind) with :a (nothing surprising here; think about the groupoid over a set X, which is the set of “arrows” X \times X, where X appears as the set of objects of the groupoid and identifies with the set of pairs (x,x) with x \in X).

Now, say you have a move from graph_1 to graph_2. Then, as in the boldfaced previous description, but somehow in the opposite sense, you define graphs A, B such that graph_1 is AB and graphs C,D such that graph_2 is CD.

Then you say that you can perform the reduction from graph_1 to graph_2 only if the nodes of A are all decorated with :a and the nodes of B are decorated with :b, a different name than :a.

After reduction you decorate the nodes of C with :a and the nodes of D with :b .

In this way the actors with identities :a and :b change their state during the reduction (i.e. the graph made by the nodes decorated with :a and the arrows decorated with (:a,:a) change, same for :b).

The reduction can be done for the graph G only at chunks graph_1 which are decorated as explained.
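The enabling condition just described can be sketched like this (my own encoding again: the molecule is a list of (node name, port list) pairs and the decoration maps each node’s index to an actor name):

```python
# A beta chunk is enabled only when its lambda node and its application node
# are decorated with two different actor names.
def beta_enabled(molecule, actors):
    """Return the (lambda index, application index) pairs where a beta chunk
    exists AND its two nodes belong to two different actors."""
    sites = []
    for i, lam in enumerate(molecule):
        if lam[0] != "L":
            continue
        for j, app in enumerate(molecule):
            if app[0] == "A" and app[1][0] == lam[1][2] and actors[i] != actors[j]:
                sites.append((i, j))
    return sites

mol = [("L", ["a", "b", "e"]), ("A", ["e", "d", "c"])]
print(beta_enabled(mol, {0: ":alice", 1: ":bob"}))    # → [(0, 1)]
print(beta_enabled(mol, {0: ":alice", 1: ":alice"}))  # → []
```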

To explain what actor :Bob is doing it matters from which point of view. Also, what is the relation between actors and the chemical interpretation, how they fit there?

So let’s take it methodically.

The point of view of the GUI

If we discuss from the point of view of playing with the gui, then the user of the gui has a global, God’s view over what happens. That means that the user of the gui can see the whole graph at one moment; the user has a clock which is like a global notion of time. So from this point of view the user of the gui is the master of space and time. He sees the fates of :Bob, :Alice, :Claire, :Dan simultaneously. The user has the right, in the gui world, to talk about parallel stuff happening (i.e. “at the same time”) and sequential stuff happening (to the same actor or actors). The user may notice that some reductions are independent, in the sense that with respect to the user’s clock the result is the same if first :Bob interacts with :Alice and then :Claire interacts with :Dan, or conversely, which makes the user think that there is some notion more general than parallelism, i.e. concurrency.

If we discuss from the point of view of :Bob, it looks different. More later.

Let’s stay at the user of the gui point of view and think about what actors do. We shall use the user’s clock for reference to time and the user’s point of view about space (what is represented on the screen via the viz tool) to speak about states of actors.

What the user does:

- he defines the graph types an the rules of reduction

- he inputs a graph

- he decorates it with actors names

- he click some buttons from time to time ( deus ex machina quote : “is a plot device whereby a seemingly unsolvable problem is suddenly and abruptly resolved by the contrived and unexpected intervention of some new event, character, ability or object.” )

At any moment the actor :Bob has a state.

Definition: the state of :Bob at the moment t is the graph formed by the nodes decorated with the name :Bob, the arrows decorated by (:Bob, :Bob) and the arrows decorated by (:Bob, :Alice), etc .

Because each node is decorated by an actor name, it follows that there are never shared nodes between different actors, but there may be shared arrows, like an arrow decorated (:Bob, :Alice), which belongs to both :Bob and :Alice.

The user thinks about an arrow (:Bob, :Alice) as being made of two half arrows:

- one which starts at a node decorated with :Bob and has a free end, decorated with :Alice ; this half arrow belongs to the state of :Bob

- one which ends at a node decorated with :Alice and has a free start, decorated with :Bob ; this half arrow belongs to the state of :Alice

The user also thinks that the arrow decorated by (:Bob, :Alice) shows that :Bob and :Alice are neighbours there. What does “there” mean? It is like you, Bob, want to park your car (the state of :Bob) and your front left tyre is close to the concrete margin (i.e. :Alice), but you may also consider that your back is close to the trash bin (i.e. :Elvis).

We may represent the neighboring relations between arrows as a new graph, which is obtained by thinking about :Bob, :Alice, … as being nodes and by thinking that an arrow decorated (:Bob, :Alice) appears as an arrow from the node which represents :Bob to the node which represents :Alice (of course there may be several such arrows decorated (:Bob, :Alice) ).

This new graph is called “actors diagram” and is something used by the gui user to put order in his head and to explain to others the stuff happening there.

The user calls the actors diagram “space”, because he thinks that space is nothing but the neighboring relation between actors at a moment in time (user’s time). He is aware that there is a problem with this view, which supposes that there is a global time notion and a global simultaneous view on the actors (states), but says “what the heck, I have to use a way to discuss with others about what’s happening in the gui world, but I will show great caution and restraint by trying to keep track of the effects of this global view on my explanation”.
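The construction of the actors diagram can be sketched as follows (same hypothetical encoding as above; an arrow of the molecule is recovered as a port variable shared by exactly two nodes):

```python
# If the two endpoints of an arrow carry different actor names, the arrow
# contributes an edge of the actors diagram.
def actors_diagram(molecule, actors):
    """Return the set of neighbouring (actor, actor) pairs."""
    owners = {}
    for i, (_, ports) in enumerate(molecule):
        for var in ports:
            owners.setdefault(var, []).append(actors[i])
    edges = set()
    for var, pair in owners.items():
        if len(pair) == 2 and pair[0] != pair[1]:
            edges.add(tuple(sorted(pair)))   # undirected, for simplicity
    return edges

mol = [("L", ["a", "b", "e"]), ("A", ["e", "d", "c"]), ("T", ["c"])]
deco = {0: ":bob", 1: ":alice", 2: ":claire"}
print(sorted(actors_diagram(mol, deco)))
# → [(':alice', ':bob'), (':alice', ':claire')]
```

The post describes the diagram with directed arrows (there may be several arrows decorated (:Bob, :Alice)); the undirected set here is a simplification for the example.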

Suppose now that there is an arrow decorated (:Bob, :Alice) and this arrow, along with the node from the start (decorated with :Bob) and the node from the end (decorated with :Alice) is part of the graph_1 of one of the graph rewrites which are allowed.

Even more generally, suppose that there is a graph_1 chunk which has the form AB, with the sub-chunk A belonging to :Alice and the sub-chunk B belonging to :Bob.

Then the reduction may happen there. (Who initiates it? Alice, Bob, the user’s click? Let’s not care about this for a moment, although if we use the user’s point of view then Alice and Bob are passive and the user has the decision to click or not to click.)

This is like a chemical reaction which takes into consideration also the space. How?

Denote by Alice(t) and Bob(t) the respective states of :Alice and :Bob at the moment t. Think about the states as being two chemical molecules, instead of one as previously.

Each molecule has a reaction site: for Alice(t) the reaction site is A and for Bob(t) the reaction site is B.

They enter in the reaction if two conditions are satisfied:

- there is an enzyme (say the beta enzyme, if the reduction is the beta) which can facilitate the reaction (by the user’s click)

- the molecules are close in space, i.e. there is an arrow from A to B, or from B to A

So you see that it may happen that Alice(t) has inside a chunk graph which looks like A and Bob(t) has a chunk graph which looks like B, but if the chunks A, B are not connected such that AB forms a chunk like the graph_1 of the beta move, then they can’t react because (in the physical interpretation, say) they are not close in space.

The reaction sites of Alice(t) and Bob(t) may be close in space, but if the user does not click then they can’t react because there is no beta enzyme roaming around to facilitate the reaction.

If they are close and if there is a beta enzyme around then the reaction appears as

Alice(t) + Bob(t) + beta = Alice(t+1) + Bob(t+1) + garbage

Let’s see now who is Alice(t+1) and Bob(t+1). The beta rewrite replaces graph_1 (which is AB) by graph_2 (which is CD). C will belong to Alice(t+1) and D will belong to Bob(t+1). The rest of Alice(t) and Bob(t) is inherited unchanged by Alice(t+1) and Bob(t+1).
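To fix the bookkeeping, here is a minimal toy sketch in Python (my own illustration, not any actual distributed GLC code; the name react and the chunk labels A, B, C, D as set elements are assumptions) of the reaction Alice(t) + Bob(t) + beta = Alice(t+1) + Bob(t+1) + garbage, where a state is just a set of graph chunks:

```python
def react(alice, bob, site_a="A", site_b="B"):
    """One reaction step: if the reaction sites are present (and, implicitly,
    the beta enzyme is available), swap them for the graph_2 chunks C and D;
    the rest of each state is inherited unchanged."""
    if site_a not in alice or site_b not in bob:
        return alice, bob, set()      # not close in space: no reaction
    alice_next = (alice - {site_a}) | {"C"}
    bob_next = (bob - {site_b}) | {"D"}
    garbage = {site_a, site_b}        # what the rewrite throws away
    return alice_next, bob_next, garbage

alice_t = {"A", "rest_of_alice"}
bob_t = {"B", "rest_of_bob"}
print(react(alice_t, bob_t))          # Alice(t+1), Bob(t+1), garbage
```

The only point of the sketch is the inheritance rule: everything outside the reaction sites passes unchanged from t to t+1.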

Is this true? What about the actors diagram, will it change after the reaction?

Actually graph_1, which is AB, may have (and usually does) other arrows besides the ones decorated with (:Bob, :Alice). For example A may have arrows from A to the rest of Alice(t), i.e. decorated with (:Alice, :Alice); same for B, which may have other arrows from B to the rest of Bob(t), which are decorated by (:Bob, :Bob).

After the rewrite (chemical reaction) these arrows will be rewired by the replacement of AB by CD, but the new arrows which replace them will still be decorated by (:Alice, :Alice) (because they become arrows from C to the rest of Alice(t+1)) and (:Bob, :Bob) (same argument). All in all, we see that after the chemical reaction the molecule :Alice and the molecule :Bob may lose or gain some nodes (atoms) and may suffer some internal rewiring (bonds), so it looks as if :Alice and :Bob changed their chemical composition.

But they also moved as an effect of the reaction.

Indeed, graph_1, which is AB, may have other arrows besides the ones decorated with (:Bob, :Alice), (:Bob, :Bob) or (:Alice, :Alice). The chunk A (which belongs to Alice(t)) may have arrows which connect it with :Claire, i.e. there may be arrows from A to another actor, Claire, decorated with (:Alice, :Claire), for example.

After the reaction, which consists in the replacement of AB by CD, some rewiring happens, which may produce arrows decorated (:Bob, :Claire), for example. In such a case we say that Bob moved close to Claire. This is how the molecules move (i.e. in the sense that the neighbouring relations change in this concrete way).
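As a toy illustration (an assumed representation, not part of the formalism): decorate each arrow with its (owner of start, owner of end) pair and derive the actors diagram from the decorations. A rewrite which replaces the decorated arrows may then produce a new neighbouring pair, which is what “Bob moved close to Claire” means here:

```python
def neighbours(arrows):
    """Actors diagram: the set of unordered actor pairs connected by at least
    one arrow; arrows decorated (:X, :X) are internal, not neighbour links."""
    return {frozenset(pair) for pair in arrows if pair[0] != pair[1]}

# before the rewrite: the arrow inside the AB pattern, an arrow from A to
# :Claire, and an internal :Bob arrow
before = [(":Bob", ":Alice"), (":Alice", ":Claire"), (":Bob", ":Bob")]
# after AB is replaced by CD, rewiring may yield an arrow (:Bob, :Claire):
after = [(":Bob", ":Claire"), (":Bob", ":Bob")]

assert frozenset((":Bob", ":Claire")) not in neighbours(before)
assert frozenset((":Bob", ":Claire")) in neighbours(after)  # Bob moved close to Claire
```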

Pit stop

Let’s stop here for the moment, because there is already a lot. In the next message I hope to talk about why the idea of using a chemical reaction network (CRN) image is good, but still global: it replaces the user’s deus ex machina clicks by the random availability of enzymes, but it still uses a global time and a global space (i.e. the actors diagrams). The model will also be better than the usual CRN based models, where the molecules are supposed to be part of a “well stirred” solution (i.e. space effects on the reaction are neglected), or are supposed to diffuse in a fixed space (i.e. the space is made passive). The model will allow the introduction of global notions of entropy.

Such a CRN based model deserves a study for itself, because it is unusual in the way it describes the space and the chemical reactions of the molecules-actors as aspects of the same thing.

But we want to go even further, towards renouncing the global pov.

Notes for gamification of chemlambda

Here are some obvious notes, filed here, about chemlambda graphs in GraphML.  Hope they may be used with … ? … why not Liviz.js, to get quickly something OS to play with.

In the previous post, The Quantomatic GUI may be useful for chemlambda (NTC vs TC, III), the Quantomatic GUI is mentioned, which can easily do all this, but is not free. Moreover, the goals for the GUI proposed here are more modest and easily attainable. Don’t care about compatibility with this or that notion from category theory, because our approach is different and because the GUI is just step 0. So any quick and dirty, but effective code will do, I believe.

______________________________

Recall the idea: gamification of chemlambda.

  • get a chemlambda GUI quickly, as described in A user interface for GLC, and use it, with a collection of predefined goals, to make a gamification of chemlambda.
  • obviously that means choosing a format for representing the graphs and moves, many variants here, in this post is described such a choice — GraphML. The work consists into transforming the definitions of graphs and moves into a chosen format.
  • the various buttons and options from the post describing the GUI are simple scripts, perhaps in javascript? They are of the kind: find and replace predefined patterns by others, all involving up to 4 nodes and 8 arrows, so almost a no brainer. It consists in writing, in the chosen format, the patterns (i.e. the configurations of at most 4 nodes and 8 arrows) which are involved in the chemlambda moves, and the replacement rules (i.e. transforming the definition of chemlambda moves into code)
  • open the possibility to define arbitrary patterns (named here “macros”) and arbitrary moves (named here “macro moves”)
  • transform into code the algorithm described in Conversion of lambda calculus terms into graphs, taking care to use chemlambda instead. This is good for having a button for importing lambda terms into chemlambda easily.
  • with all this done, pick a visualization tool which is open, looks good and  is easy to use, like the one previously mentioned.
  • write some scripts for converting between GraphML (or whatever other choice one likes) and the input of the viz tool.
  • pick some examples, transform them into challenges and make public the game.

___________________________

For those with a non-functional right hemisphere, here is the GraphML description of chemlambda graphs and what it would mean to do a move.

I use this source for GraphML.

A chemlambda graph is any directed graph with two types of 3-valent nodes and one type of 1-valent node, with the nodes having some attributes. [For mathematicians, it is an oriented graph made of trivalent, locally planar nodes and of some 1-valent nodes, with arrows which may have free starts or free ends, or even loops.]

Moreover, we accept arrows (i.e. directed edges) with free starts, free ends, or both, or with start=end and no node (i.e. loops with no node). For all those we need, in GraphML, to use some “invisible” nodes [multiple variants here, only one is described.]

Here are the trivalent, locally planar nodes:

 

  • application node

<node id="n0" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>

  • fanin node

<node id="n3" parse.indegree="2" parse.outdegree="1">
<data key="d0">red</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>

  • lambda node

<node id="n1" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>

  • fanout node

<node id="n2" parse.indegree="1" parse.outdegree="2">
<data key="d0">green</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>

  • termination node

<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">term</data> </node>

(where “term” denotes a color we agree to use for the termination node; in xfig-made drawings this node appears like this: –>–| )

  • two 1-valent invisible nodes

<node id="n5" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data> </node>

<node id="n6" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data> </node>

(where “invisible” should be something we agree to use)

  • 2-valent invisible node

<node id="n7" parse.indegree="1" parse.outdegree="1">
<data key="d0">invisible</data> </node>

Uses of  invisible nodes:

  • in chemlambda graphs we admit arrows which start from one of the 3-valent nodes but whose end is free. Instead of the free end we may use the invisible node with id=”n6″ from before.

<edge source="n101" target="n6"/>

  • We also accept arrows which end in a 3-valent node or in a 1-valent termination node, but whose start is free. For those we may use the invisible node with id=”n5″ from before.

<edge source="n5" target="n102"/>

  • We also accept arrows with free starts and ends; we represent them with the help of the invisible nodes with id=”n5″ and id=”n6″.

<edge source="n5" target="n6"/>

  • Finally, we also accept loops with no nodes; these are represented with the help of 2-valent invisible nodes like the one with id=”n7″

<edge source="n7" target="n7"/>
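As a sanity check, note that real GraphML needs straight ASCII quotes (the curly quotes above are a blog rendering artifact). Here is a short Python sketch, using the standard xml.etree module, which parses the application-node description and checks that the declared in/out degrees match the listed ports:

```python
import xml.etree.ElementTree as ET

# The application node from above, with straight quotes as XML requires.
APP_NODE = '''<node id="n0" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>'''

node = ET.fromstring(APP_NODE)
valence = int(node.get("parse.indegree")) + int(node.get("parse.outdegree"))
ports = [p.get("name") for p in node.findall("port")]
color = node.find("data").text

assert valence == len(ports) == 3  # every trivalent node declares 3 ports
print(node.get("id"), color, ports)
```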

____________________________________________

 

Examples:
(a) – recognize pattern for beta move
(b) – perform (when called) the beta move

  • (a) recognize pattern for beta move.

- input is a chemlambda graph (in GraphML)

- output is the same graph and some supplementary file of annotations of the graph.

  • This program looks in the input graph for all patterns where the beta move can be applied.  The pattern looks like this in the GraphML convention:

<node id="n101" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n102" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<edge source="n102" target="n101" sourceport="right_out" targetport="left_in"/>
<edge source="n106" target="n102" sourceport="???_1" targetport="middle_in"/>
<edge source="n102" target="n103" sourceport="left_out" targetport="???_2"/>
<edge source="n105" target="n102" sourceport="???_3" targetport="right_in"/>
<edge source="n101" target="n104" sourceport="middle_out" targetport="???_4"/>

Here “???_i” means any port name.

If such a pattern is found, then the collection of id’s (of edges and nodes) from it is stored in some format in the annotations file which is the output.

  • (b) perform (when called) the beta move

When called, this program takes as input the graph and the annotation file for beta move patterns, and then waits for the user to pick one of these patterns (i.e. perhaps the patterns are numbered in the annotation file, the user interface shows them on screen and the user clicks on one of them).

When the instance of the pattern is chosen by the user, the program erases from the input graph the pattern (or just comments it out, or makes a copy first of the input graph and then works on the copy, …) and replaces it by the following:

<edge source="n106" target="n104" sourceport="???_1" targetport="???_4"/>
<edge source="n105" target="n103" sourceport="???_3" targetport="???_2"/>

It works only when the nodes n101 and n102 are different from the nodes n103, n104, n105, n106, because otherwise the erasure leads to trouble. See the post Graphic beta move with details.

As an alternative one may proceed by introducing invisible nodes which serve as connection points for the arrows from n106 to n104 and from n105 to n103, then erase the nodes among n101, n102 which are not among n103, n104, n105, n106.
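Here is a hypothetical rendering of programs (a) and (b) in Python, on a plain edge-list representation instead of GraphML (the function names and the node-type dict are my own); it implements exactly the pattern and the two replacement edges above, and inherits the same caveat about distinct nodes:

```python
def find_beta_patterns(nodes, edges):
    """(a): list (lambda_id, app_id) pairs where a lambda node's right_out
    feeds an application node's left_in -- the graph_1 of the beta move.
    nodes: id -> type; edges: (source, sourceport, target, targetport)."""
    return [(s, t) for (s, sp, t, tp) in edges
            if nodes.get(s) == "lambda" and nodes.get(t) == "app"
            and sp == "right_out" and tp == "left_in"]

def beta_move(nodes, edges, lam, app):
    """(b): erase the pair and rewire as in the two replacement edges above.
    Caveat (as in the post): lam and app must differ from the nodes at the
    other ends of the pattern's external arrows."""
    into_lam = next(e for e in edges if e[2] == lam and e[3] == "middle_in")
    out_app = next(e for e in edges if e[0] == app and e[1] == "middle_out")
    into_app = next(e for e in edges if e[2] == app and e[3] == "right_in")
    out_lam = next(e for e in edges if e[0] == lam and e[1] == "left_out")
    kept = [e for e in edges if lam not in (e[0], e[2]) and app not in (e[0], e[2])]
    kept.append((into_lam[0], into_lam[1], out_app[2], out_app[3]))  # n106 -> n104
    kept.append((into_app[0], into_app[1], out_lam[2], out_lam[3]))  # n105 -> n103
    return kept

# The beta pattern with the same node ids as in the GraphML above:
nodes = {"n101": "app", "n102": "lambda"}
edges = [("n102", "right_out", "n101", "left_in"),
         ("n106", "out", "n102", "middle_in"),
         ("n102", "left_out", "n103", "in"),
         ("n105", "out", "n101", "right_in"),
         ("n101", "middle_out", "n104", "in")]
print(find_beta_patterns(nodes, edges))          # [('n102', 'n101')]
print(beta_move(nodes, edges, "n102", "n101"))
# [('n106', 'out', 'n104', 'in'), ('n105', 'out', 'n103', 'in')]
```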

___________________________________________

A concrete example:

reg_1

  • The input graph is the one from the first row of the figure:

<node id="n1" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n2" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n3" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>
<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>

[comment: these are the four free ends of arrows which are numbered in the figure by "1", ... , "4".   So you have a graph with two inputs and two outputs,  definitely not a graph of a  lambda term! ]

<node id="n105" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n106" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n108" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>

[comment: these are 3 application nodes]

<node id="n107" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<node id="n109" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<node id="n110" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>

[comment: these are 3 lambda abstraction nodes]

<edge source="n1" target="n105" sourceport="out" targetport="right_in"/>
<edge source="n107" target="n105" sourceport="left_out" targetport="left_in"/>
<edge source="n105" target="n106" sourceport="middle_out" targetport="left_in"/>

<edge source="n2" target="n106" sourceport="out" targetport="right_in"/>
<edge source="n106" target="n107" sourceport="middle_out" targetport="middle_in"/>
<edge source="n107" target="n108" sourceport="right_out" targetport="left_in"/>

<edge source="n108" target="n109" sourceport="middle_out" targetport="middle_in"/>
<edge source="n110" target="n108" sourceport="right_out" targetport="right_in"/>
<edge source="n109" target="n110" sourceport="right_out" targetport="middle_in"/>

<edge source="n109" target="n4" sourceport="left_out" targetport="in"/>
<edge source="n110" target="n3" sourceport="left_out" targetport="in"/>

  • This graph reduces via 3 beta reductions to

<node id="n1" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n2" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n3" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>
<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>

<edge source="n1" target="n3" sourceport="out" targetport="in"/>
<edge source="n2" target="n4" sourceport="out" targetport="in"/>

____________________________________________

For this choice, which consists in using invisible nodes, a new program is needed (which may be useful for other purposes later):

(c) arrow combing

The idea is that a sequence of arrows  connected via 2-valent invisible nodes should count as an arrow in a chemlambda graph.

So this program does exactly this: if n1001 is different from n1002, then it replaces the pattern

<node id="n7" parse.indegree="1" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="in"/>
<port name="out"/>
</node>
<edge source="n1001" target="n7" sourceport="???_1" targetport="in"/>
<edge source="n7" target="n1002" sourceport="out" targetport="???_2"/>

by

<edge source="n1001" target="n1002" sourceport="???_1" targetport="???_2"/>
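A sketch of (c) in Python (names assumed, same edge-list representation as one could extract from the GraphML), which repeatedly splices out 2-valent invisible nodes while leaving node-free loops alone:

```python
def comb(invisible, edges):
    """Splice out 2-valent invisible nodes: an arrow into such a node plus
    the arrow out of it become one arrow. Self-loops are kept as-is, since
    they stand for the node-free loops of chemlambda."""
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        for n in list(invisible):
            ins = [e for e in edges if e[2] == n]
            outs = [e for e in edges if e[0] == n]
            # splice only with exactly one arrow in, one out, and no self-loop
            if len(ins) == 1 and len(outs) == 1 and ins[0][0] != n:
                (s, sp, _, _) = ins[0]
                (_, _, t, tp) = outs[0]
                edges = [e for e in edges if e not in (ins[0], outs[0])]
                edges.append((s, sp, t, tp))
                invisible.discard(n)
                changed = True
    return edges

# a chain n1001 -> n7 -> n8 -> n1002 combs down to a single arrow
chain = [("n1001", "???_1", "n7", "in"),
         ("n7", "out", "n8", "in"),
         ("n8", "out", "n1002", "???_2")]
print(comb({"n7", "n8"}, chain))  # [('n1001', '???_1', 'n1002', '???_2')]
```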

 

_________________________________________

 

The Quantomatic GUI may be useful for chemlambda (NTC vs TC, III)

Since I discovered Quantomatic (mentioned in NTC vs TC I), I continued to look, and it seems that it already has, or almost has, the GUI which is step 0 in the Distributed GLC project.

This is a very good news for me, because I tend to consider that possibly complex tasks are simple, therefore as any lazy mathematician will tell you, it is always good if some piece of work has been  done before.

In the post A user interface for GLC I describe what we would need and this corresponds, apparently, to a part of what the Quantomatic GUI can do.

See Aleks Kissinger Pictures of Processes: Automated Graph Rewriting for Monoidal Categories and Applications to Quantum Computing, DPhil thesis, [arXiv:1203.0202] , Chapter 9.

Without much ado, I shall comment on the differences, with the hope that the similarities are clear.

Differences.  I shall use as an anchor for explaining the differences the homepage of Aleks Kissinger, because it is well written and straightforward to read. Of course, this will not mean much if you don’t know what I am talking about concerning NTC vs TC, chemlambda or distributed GLC.

1. Aleks explains

“But “box-and-wire” diagrams aren’t just for physics and multilinear algebra. In fact, they make sense whenever there is a notion a “map”, a procedure for composing maps (i.e. vertical composition), and a procedure for putting two maps “side-by-side” (horizontal composition). That is, this notation makes sense in any monoidal category.”

Here is a first difference from chemlambda, because the chemlambda graphs are open graphs (in the sense that they also have arrows with one or both ends free, as well as loops), but otherwise a chemlambda graph does not have any preferred external order of examination. The arrows of the graph are not “wires” and the nodes are not “boxes”. There is no meaning of the vertical or horizontal composition, because there is no vertical and no horizontal.

2. Another quote from Aleks page:

“In Open Graphs and Monoidal Theories, Lucas Dixon and I defined the category of open-graphs and described how double-pushout graph rewriting, as defined in e.g. Ehrig et al5, can be applied to open-graphs.”

This marks the second basic difference, which consists in the effort we make to NOT go global. It is a very delicate boundary between staying local and taking a God’s pov, and algebra very quickly passes that boundary. Not that it breaks a law, but it adds extra baggage to the formalism, only for the need to explain things in the algebraic way.

Mind that it is very interesting to algebraize from God’s pov, but it might also be interesting to see how far one can go without taking the global pov.

 

3.  Maybe as an effect of 1., they think in terms of processes, we think in terms of Actors. This is not the same, but OMG how hard it is to effect, only via word exchanges, the brain rewiring which is needed to understand this!

But this is not directly related to the GUI, which is only step 0 for the distributed GLC.

_______________________________________

Putting these differences aside, it is still clear that:

  • chemlambda graphs are what they call “open graphs”
  • there are some basic tools in the Quantomatic gui which we need
  • mixing our minimal, purely local constraint on reasoning with their much more developed global point of view can only be fruitful, even if the goals and applications are different.

_______________________________________

What do you think about this?

_______________________________________

 

 

The example with the marbles

In a discussion about the possible advantages for secure computing with the  GLC actors model, I came up with this analogy, which I want to file here, not to get lost in the flow of exchanges:

Mind that this is only a thought experiment, which might not be accurate in all aspects in its representation of the kind of computation with GLC, or more accurately with chemlambda.

Imagine a large pipe, with a diameter of 1 m say, and 3 m long, to have an image. It is full of marbles, all identical in shape. It is so full that if one forces a marble at one end then a marble (or sometimes more) have to get out by the other end.

Say Alice is at one end of the pipe and Bob is at the other end. They agreed previously to communicate in the most primitive manner, namely by spilling a small number (say around ten) or a big number (for example around 50) of marbles at their respective ends. The pipe contains maybe 10^5 or 10^6 marbles, so both these numbers are small.

There is also Claire who, for some reason, can’t see the ends of Alice and Bob, but the pipe has a window at the middle and Claire can see about 10% of the marbles from the pipe, those which are behind the window.

Let’s see how the marbles interact. Having the same shape, and because the pipe is full of them, they are in a local configuration which minimizes the volume (maybe not all of them, but here the analogy is mum about this). When a marble (or maybe several) is forced in at Alice’s end of the pipe, there are lots of movements which accommodate the new marbles with the old ones. The physics of marbles is known: it is the elastic contact between them, and there is a fact in the platonic sky which says that for any local portion of the pipe the momentum and energy are conserved, as well as the volume of the marbles. The global conservation of these quantities is an effect of those (as anybody versed in the mechanics of media can confirm to you).

Now, Claire can’t get anything from looking through the window. At best Claire notices complex small movements, but there is no clear way to tell how this happens (other than, if she looks at a small number of them, she might figure out the local mechanical ballet imposed by the conservation laws), nor are Alice’s marbles marching towards Bob’s end.

Claire can easily destroy the communication, for example by opening her window and getting out some buckets of marbles, or even by breaking the pipe. But this is not getting Claire closer to understanding what Alice and Bob are talking about.

Claire could of course claim that if the whole pipe were transparent, she could film the pipe and then reconstruct the communication. But in this case Claire would be the goddess of the pipe and nothing would be hidden from her. Alice and Bob would be her slaves, because Claire would be in a position which is equivalent to having a window at each end of the pipe.

__________________________

Comments:

  • each marble is a GLC actor
  • they interact locally, by known and simple rules
  • this is an example of signal transduction
  • which encrypts itself; more communication makes the decoding harder. It is the same problem which is encountered when observing a living system, for example a cell. You may freeze it (and therefore kill it) and look at it, but you won’t see how it functions. You can observe it alive, but it is complex by itself; you never see, or see only rare glimpses of, meaning.
  • the space (of the pipe) represents  an effect of the local, decentralized, asynchronous interactions.

Underneath there is just local interaction, via the moves which act on patterns of graphs which are split between actors. But this locality gives space, which is an emergent, global effect of these distinctions which communicate.

Two chemical molecules which react are one composite molecule which reduces itself, split between two actors (one per molecule). Saying that the molecules react when they are close is the same as saying that their associated actors interact when they are in the neighbouring relation.  And the reaction modifies not only the respective molecules, but also the neighbouring relation between actors, i.e. the reaction makes the molecules move through space. The space is transformed, as well as the shape of the reactants, which looks from an emergent perspective as if the reactants move through some passive space.

Concretely, each actor has a piece of the big graph; two actors are neighbours if there is an arrow of the big graph which connects their respective pieces; the reduction moves can be applied only on patterns which are split between two actors; and, as an effect, the reduction moves modify both the pieces and the arrows which connect the pieces, thus the neighbouring of actors.

What we do in the distributed GLC project is to use actors to transform the Net into a space. It works exactly because space is an effect of locality, on one side, and of universal simple interactions (moves on graphs) on the other side.

__________________________________________

The tone goes up on the OPEN front

This post has a collection of savory quotes and further comments about the ongoing psychological changes around new ways of disseminating and communicating scientific research.

Aka OPEN …

  • access
  • peer review
  • data
  • notebooks

We are getting close to a change, a psychological change, from indifference and disdain from the majority of (more or less established) researchers to a public acknowledgement of the stupidity and immorality of the procedure which is, still, in force.

[Rant, jump over if not interested into personal stuff.

Please take into consideration that even if I embrace with full heart these changes, I don't have any merit or real contribution to these, excepting modest posts here at chorasimilarity, under the tags cost of knowledge and open peer review. More than this, I suffered, like probably some of my colleagues, by choosing to publish mostly through arXiv and not playing the stupid game, which led to a very damaged career; but unfortunately I did not have the opportunity to create change through participation in teams which now are shaping the future of OPEN whatever. Bravo for them, my best wishes for them, why not sometimes an honest criticism from my small point of view, and thanks for the feeling of revenge which I have, the "I was right" feeling which I hope will grow and grow, because really the research world is damaged to the bones by this incredible stupidity, maybe cupidity, and surely lack of competence and care for the future manifested by a majority of leaders.

The second thing I want to mention is that even if I refer to "them", to a "majority", all these generalizations have to be nuanced by saying that, as always, as everywhere, the special ones, the creative ones, the salt and pepper of the research world are either excused or completely innocent. They are also everywhere, maybe many of them not in any strong influence position (as in music, for example, the most well known musicians are almost never the best, but surely they are among the hardest working), but creating their stuff and possibly not really caring about these social aspects, because they are too deep into the platonic realm. All of them are not the subject or part of any "majority", they are not "them" in any way.

The third point is that there may be a sloppy use of "young" and "old". This has nothing to do with physical age. It is true that every old moron was a young moron before. Every old opportunist was a young one some years earlier. Their numbers are continually replenished and we find them everywhere, much more present than the salt and pepper of the research community, and more among the good hard workers than in the really, seriously creative part. No, young or old refers to the quality of the brain, not to physical age.

End of rant]

Back to the subject. From timid or rather lonely comments, we have now passed to stronger ones.

And the words are harder.

From Causes of the persistence of impact factor mania, by Arturo Casadevall and Ferric C. Fang,

“Science and scientists are currently afflicted by an epidemic of mania manifested by associating the value of research with the journal where the work is published rather than the content of the work itself. The mania is causing profound distortions in the way science is done that are deleterious to the overall scientific enterprise. In this essay, we consider the forces responsible for the persistence of the mania and conclude that it is maintained because it disproportionately benefits elements of the scientific enterprise, including certain well-established scientists, journals, and administrative interests.”

Fully agree with them; besides this, I consider very interesting their explanation that we face a manifestation of the tragedy of the commons.

From Academic self-publishing: a not-so-distant-future, here is a big quote; it is too beautiful to crop:

“A glimpse into the future
Erin is driving back home from the laboratory with a big smile on her face. After an exciting three-hour brainstorming session discussing the intracranial EEG data from her last experiment, she can’t wait to get her hands back on the manuscript. A new and unexpected interpretation of the findings seems to challenge a popular assumption about the role of sleep in declarative memory consolidation. She had been looking over the figures for more than a month without seeing a clear pattern. But now, thanks to a moment of insight by one of her colleagues, the pieces finally fit together and a new logic is emerging. She realizes it will be hard for the community to accept these new findings, but the methodology is solid and she is now convinced that this is the only reasonable explanation. She is so anxious to see what Axell’s group thinks about new evidence that refutes its theoretical model.

After a week’s hard work, the first draft is ready. All the figures and their long descriptive legends are in place, the literature review is exhaustive, the methodology is clear as a bell, and the conclusions situate the finding in the general context of the role of sleep in memory consolidation. Today, the group had a brief morning meeting to decide which colleagues they will ask to review their draft. Of course, they will ask Axell for his opinion and constructive criticism, but they also agree to invite Barber to confirm that the application of independent component analysis on the data was performed correctly, and Stogiannidis to comment on the modification of the memory consolidation scale. For a review of the general intracranial EEG methodology, the group decides to first approach Favril herself and, if she declines, they will ask Zhang, who recently reviewed the subject for Nature.

After the lunch break, Erin submits the manuscript to the university’s preprint repository that provides a DOI (digital object identifier) and an open attribution licence. When she hits the submit button, she feels a chill running down her spine. More than a year’s hard work is finally freely available to her peers and the public. The next important step is to invite the reviewers. She logs in to her LIBRE profile and inserts the metadata of the manuscript with a hyperlink to the repository version (see LIBRE, 2013). She then clicks the invite reviewer button and writes a quick personal message to Axell, briefly summarizing the main result of the study and why she thinks his opinion is vital for the debate this manuscript will spark. She then invites Stogiannidis to comment on the modification of the memory consolidation scale, and Barber, specifically asking him to check the application of independent component analysis, and also letting him know that all data are freely and openly available at Figshare. After finishing with the formal invitations, Erin tweets the LIBRE link to her followers and sends it as a personal message to specific colleagues from whom she would like to receive general comments. She can now relax. The word is out!

A couple of weeks later, Erin is back at work on the project. Both Favril and Zhang refused to review because of heavy work schedules, but Stogiannidis wrote an excellent report totally approving the modification of her scale. She even suggested a future collaboration to test the new version on a wider sample. Barber also submitted a brief review saying that he doesn’t find any caveats in the analysis and approves the methodology. As Erin expected, Axell didn’t take the new result lightly. He submitted a harsh critique, questioning both the methodology and the interpretation of the main findings. He even mentioned that there is a new paper by his group currently under journal review, reporting on a similar experiment with opposite results. Being pipped to the post and being second to report on this innovative experimental design, he must be really peeved, thinks Erin. She grins. Maybe he will learn the lesson and consider self-publishing next time. Anyway, Erin doesn’t worry too much as there are already two independent colleagues who have marked Axell’s review as biased on LIBRE. Last night, Xiu, Erin’s colleague, finished retouching one of the figures based on a very insightful comment by one of LIBRE’s readers, and today she will upload a new version of the manuscript, inviting some more reviewers.

Two months later, Erin’s paper is now in version number 4.0 and everyone in the group believes it is ready for submission to a journal and further dissemination. The issues raised by seven reviewers have now been adequately addressed, and Axell’s review has received six biased marks and two negative comments. In addition, the paper has attracted a lot of attention in the social media and has been downloaded dozens of times from the institutional repository and has been viewed just over 300 times in LIBRE. The International Journal for the Study of the Role of Sleep in Memory Consolidation has already been in touch with Erin and invited her to submit the paper to them, but everybody in the group thinks the work is of interest to an even wider audience and that it should be submitted to the International Journal for the Study of Memory Consolidation. It charges a little more – 200 euros – but it is slightly more esteemed in the field and well worth the extra outlay. The group is even considering sending the manuscript in parallel to other journals that embrace a broader neuroscience community, now that the group’s copyright and intellectual property rights have been protected. Anyway, what is important (and will count more in the grant proposal Erin plans to submit next year) is that the work has now been openly approved by seven experts in the field. She is also positive that this paper will attract ongoing reviews and that she may even be invited as an expert reviewer herself now that she is more visible in the field. A debate has started in her department about how much the reviewer’s track record should weigh in how future tenure decisions are evaluated, and she has been invited to give a talk on her experience with LIBRE and the versioning of the group’s manuscript, which has now become a dynamic paper (Perakakis et al., 2011).”

I love this, in all its details! I consider it among the best written apologias for, in particular, open peer review. [See also, if you care, my post Open peer review as a service.]

From Your university is paying too much for journals, by Bjorn Brembs:

“Why are we paying to block public access to research, when we could save billions by allowing access?”

Oh, I’m sure that those in charge of these decisions have their reasons.

From the excellent We have met the enemy: part I, pusillanimous editors, by Mark C. Wilson:

“My conclusions, in the absence of further information: senior researchers by and large are too comfortable, too timid, too set in their ways, or too deluded to do what is needed for the good of the research enterprise as a whole. I realize that this may be considered offensive, but what else are the rest of us supposed to think, given everything written above? I have not even touched on the issue of hiring and promotions committees perpetuating myths about impact factors of journals, etc, which is another way in which senior researchers are letting the rest of us down”…

Read also the older, but great We have met the enemy and it is us by Mark Johnston.  I commented about it here.

What is your opinion about all this? It’s getting hotter.

_________________________________________

My first NSF experience and the future of GLC

Just learned that the project “Secure Distributed Computing with Graphic Lambda Calculus” will not be funded by NSF.

I read the reviews and my conclusion is that they are well done. The six reviewers all make good points and do a good job of detecting the strong points and weaknesses of the project.

Thank you, NSF, for this fair process. As the readers of this blog know, I am not in the habit of hiding my opinions about bad reviews, opinions which may sometimes be harsh. Seen from this point of view, my thanks look, I hope, even more sincere.

So, what was the project about? Distributed computing, as in “GLC actors, artificial chemical connectomes, topological issues and knots”, arXiv:1312.4333 [cs.DC], which was branded as useful for secure computing. The project was submitted in January to the Secure and Trustworthy Cyberspace (SaTC) NSF program.

The point was to get funding which would allow the study of distributed GLC, which is for the moment fundamental research. There are reasons to believe that distributed GLC may be good for secure computing, principal among them being that GLC (and chemlambda, actually the main focus of the research) is not based on the IT paradigm of gates and wires, but instead on something which can be described as signal transduction; see How is different signal transduction from information theory? There is another reason, now described by the words “no semantics“.

But basically, this is not naturally a project in secure computing. It may become one, later, but for the moment the project consists in understanding the asynchronous, decentralized computations performed by GLC actors and their biological-like behaviour. See What is new in distributed GLC?

Louis Kauffman and I are about to study this; he will present our paper Chemlambda, universality and self-multiplication, arXiv:1403.8046, at the ALIFE 14 conference.

There is much more to tell about this, parts were told already here at chorasimilarity.

From this moment on, I believe that instead of thinking about security and secrecy, the project should be open to anybody who wishes to contribute, to use it or to criticize it. That’s the future anyway.

______________________________________________________

How non-commutative geometry does not work well when applied to non-commutative analysis

I expressed several times the belief that sub-riemannian geometry is an example of a mathematically new phenomenon, which I call “non-commutative analysis”. It has repeatedly happened that apparently general results simply don’t work well when applied to sub-riemannian geometry. This “strange” (not for me) phenomenon leads to negative statements, like rigidity results (Mostow, Margulis) and non-rectifiability results (for example the failure of the theory of metric currents for Carnot groups). And now to this is added the following, from arXiv:1404.5494 [math.OA]:

“the unexpected result that the theory of spectral triples does not apply to the Carnot manifolds in the way one would expect. [p. 11] “

i.e.

“We will prove in this thesis that any horizontal Dirac operator on an arbitrary Carnot manifold cannot be hypoelliptic. This is a big difference to the classical case, where any Dirac operator is elliptic. [p. 12]“

It appears that the author reduces the problems to the Heisenberg groups. There is a solution, then: to use

R. Beals, P.C. Greiner, Calculus on Heisenberg manifolds, Princeton University Press, 1988

which gives something resembling spectral triples, but still not everything works:

“and show how hypoelliptic Heisenberg pseudodifferential operators furnishing a spectral triple and detecting in addition the Hausdorff dimension of the Heisenberg manifold can be constructed. We will suggest a few concrete operators, but it remains unclear whether one can detect or at least estimate the Carnot-Caratheodory metric from them. [p. 12]“

______________________________

This seems to be an excellent article; more than that, because it is a PhD dissertation, many things are written out clearly.

I am not surprised at all by this; it just means that, as in the case of the metric currents, there is an ingredient in the spectral triples theory which introduces some commutativity through the back door, which then messes with the non-commutative analysis (or calculus).

Instead I am even more convinced than ever that the minimal (!) description of sub-riemannian manifolds, as models of a non-commutative analysis, is given by dilation structures, explained most recently in arXiv:1206.3093 [math.MG].

A corollary of this is: sub-riemannian geometry (i.e. the non-commutative analysis of dilation structures) is more non-commutative than non-commutative geometry.

I’m waiting for a negative result concerning the application of quantum groups to sub-riemannian geometry.

__________________________________________

Quantum experimenters Alice and Bob are actors (NTC vs TC, II)

… i.e. if you turn the diagrams which explain quantum protocols by 90 deg, then the graph rewrites which are used in the NTC (topology does not compute) categorical quantum mechanics transform into TC (topology computes) diagrams. (See NTC vs TC part 1 for the terminology.)

And Alice and Bob become GLC actors. Indeed, they behave like actors from Hewitt’s Actor model and, moreover, communicate by means of graph rewrites.

I shall come back to this with many details; for the moment I can’t do decent drawings, but they will appear soon.

It is intriguing, right? Yes, because when you turn the diagrams by 90 deg, the time and space “directions” change places. Moreover, the Alice from the experiment 1 (described by what appears in categorical quantum mechanics as the process 1, a decorated graph) is united with the Alice from the experiment 2 (described by what appears as the process 2, which is equivalent by graph rewrites with the process 1). That is, if you turn the diagrams by 90 deg, the two instances of Alice (or all the instances of Alice from all the intermediary steps of the graph rewrites) form the GLC actor Alice. Same for Bob, of course.

What this equivalence means, is intriguing. Very much intriguing.

________________________________________

Hamiltonian systems with dissipation, slides

If you want to see my other, applied math side, then here are the slides for a talk at Lille, scheduled for Thu 12.06.2014.

Hamiltonian systems with dissipation

This is not what I have worked on these weeks, but a preparation. Soon something called “the symplectic Brezis-Ekeland-Nayroles principle”, written in collaboration with Gery de Saxce, will be finished. Gery and I have worked on the rigorous formulation of a generalization of some parts of convex analysis for a number of years already; see the articles which contain the word “bipotential” here.

UPDATE: If you are in Paris next Tue June 17 then you may come to the seminaire D’Alembert where, due to the kind invitation of Djimedo Kondo,  I shall give a variant of the talk on Hamiltonian systems with dissipation, with some updates on the work done with Gery.

_____________________________

Gamification of peer review with Ingress

It seems possible to adapt Ingress in order to play the Game of Research and Review.

In the post MMORPGames at the knowledge frontier I proposed a gamification of peer review which, I now see, is very close to the Ingress game:

“… we could populate this world and play a game of conquest and exploration. A massively multiplayer online game.  Peer-reviews of articles decide which units of places are wild and which ones are tamed. Claim your land (by peer-reviewing articles), it’s up for grabs.  Organize yourselves by interacting with others, delegating peer-reviews for better management of your kingdoms, collaborating for the exploration of new lands.

Instead of getting bonus points, as mathoverflow works, grab some piece of virtual land that you can see! Cultivate it, by linking your articles to it or by peer-reviewing other articles. See the boundaries of your small or big kingdom. Maybe you would like to trespass them, to go into a near place? You are welcome as a trader. You can establish trade with other near kingdoms by throwing bridges between the land, i.e. by writing interdisciplinary articles, with keywords of both lands. Others will follow (or not) and they will populate the new boundary land you created.”

In Ingress (from the wiki source):

“The gameplay consists of establishing “portals” at places of public art, landmarks, cenotaphs, etc., and linking them to create virtual triangular fields over geographic areas. Progress in the game is measured by the number of Mind Units, i.e. people, nominally controlled by each faction (as illustrated on the Intel Map).[7][8] The necessary links between portals may range from meters to kilometers, or to hundreds of kilometers in operations of considerable logistical complexity.[9] International links and fields are not uncommon, as Ingress has attracted an enthusiastic following in cities worldwide[10] amongst both young and old,[11] to the extent that the gameplay is itself a lifestyle for some, including tattoos. “

Instead of public art, Portals could be open-access articles (from the arXiv, for example, not from the publishers).

“A portal with no resonators is unclaimed; to claim a portal for a faction, a player deploys at least one resonator on it.”

Resonators are reviews.

Links between portals are keywords.

Something to think about!

____________________________________________________

Topology does not compute vs topology computes (I)

The article Zipper logic   arXiv:1405.6095  contains in its last section a discussion about the role of topology in computing formalisms which use topological objects, like tangle diagrams, or other graphs, and graph rewrites.

The purpose of this post is to start a discussion about the differences between two uses of graph rewrites in such formalisms.

I shall use further the NTC vs TC denomination suggested by Louis Kauffman instead of the longer one:

  • NTC = “topology does not compute” , meaning that the graph rewrites are NOT part of the computation notion
  • TC = “topology computes”, meaning that the graph rewrites are the fundamental blocks of the computation notion.

There are many articles which fall in the NTC camp, among them the ZX calculus of Coecke and Duncan from Interacting Quantum Observables: Categorical Algebra and Diagrammatics, arXiv:0906.4725 [quant-ph], which is a part of the beautiful Categorical Quantum Mechanics. I shall use this article as an anchor which allows a comparison with the TC camp. In future posts I shall add more articles to the list, because this is a rather big research field now.

Chemlambda is in the TC camp, like zipper logic.

So let us compare them, concretely.

At first sight they look related; they seem to have the same fundamental genes, in the way an elephant (NTC) has almost all genes in common with a mouse (TC). They are both graph rewrite systems, they both use almost the same family of graphs (details later), and they both speak about computation:

  • in the NTC article’s case it is about quantum computing, and there is also a practical computing arm, the Quantomatic project
  • in the TC article’s case it is about distributed, decentralized computing, and there is the distributed GLC project.

The goals are different, the means are different, and the NTC camp is much more developed than the TC camp. The question is:

Is there anything  fundamentally different in the TC camp?

The answer is YES. There are several fundamentally different things in the TC camp, maybe the biggest among them being the different role played by graph rewrites, summarized as NTC vs TC.

Let’s see in more detail, by comparing ZX with chemlambda.

In order to see what is really different, it is useful to make the two formalisms as alike as possible. I shall start by discussing whether there is any difference between the graphs which appear in ZX and the ones which appear in chemlambda.

The graphs from ZX don’t have oriented arrows, but instead there is a global orientation convention, say from up to down when represented on page.

In ZX there is also a global horizontal “simultaneity” of graphical elements, which is part of the recipe for associating to a ZX graph an object related to a Hilbert space.

In chemlambda the graphs have oriented arrows. There is no global arrow orientation, nor any horizontal simultaneity.

Otherwise, both ZX and chemlambda use loops, free arrows and “half-arrows”.

So, there is an obvious procedure to associate to a graph in chemlambda a graph from ZX, by adding to the graph in chemlambda a global arrow orientation and a horizontal simultaneity.

This means, for example that the graph of the K combinator from chemlambda is represented in ZX like this:

two_k

The price is that in ZX there are several new graphical elements which need to be introduced, like the CAP, the CUP and the CROSS. Here is the list of all graphical elements needed to represent chemlambda graphs in ZX.

alphabet

What about the graph rewrites? It is not clear to me whether the chemlambda graph rewrites can be done as finite sequences of ZX graph rewrites, but this is not very important for this discussion. There seems, anyway, to be a kind of fundamental alphabet of graph rewrites which keeps appearing everywhere. I shall not be worried about the detailed graph rewrites; however, here is how the FAN-IN and the beta moves from chemlambda appear in ZX translation.

beta_fan_in

It is worth mentioning that the global arrow orientation and the horizontal simultaneity used in ZX introduce, into (the translation of) chemlambda graphs, new moves which are related to these constraints and which are not needed in chemlambda.

Now we are close to see the differences.

There is a recipe for associating to each graph G from ZX an object O(G) on a Hilbert space. This is done by using the global arrow orientation convention and the horizontal simultaneity, along with a dictionary which associates to any graphical element from ZX something from a Hilbert space.

Here is the important thing about the NTC ideology:  if two graphs G and G' have the property that we can go from G to G' by a finite sequence of ZX graph rewrites, then O(G) = O(G').

Each graph G from ZX represents a computation. The graph rewrites from ZX are used to prove that TWO computations are equivalent.

More than this: a graph from ZX represents a computation in the sense that the global arrow orientation is a global time orientation, the horizontal simultaneity express space simultaneity, the CAP is a preparation, the CUP is a measurement. The nodes of the graph are GATES and the arrows of the graph are WIRES which transmit some signals.

That is why, in the NTC camp the graph rewrites DO NOT COMPUTE. The graph rewrites are used to prove an equivalence of two computations, like for example in the case of graphs which represent quantum teleportation protocol.
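
To make the NTC point concrete in code, here is a tiny, hand-rolled illustration — not the actual ZX rewrite rules, and not Quantomatic — of the idea that a diagram denotes an operator and rewrites only prove equality of operators. Two different diagrams, a wire carrying two Hadamard boxes and a bare wire, denote the same linear map, so in the NTC reading they are the same computation:

```python
# Toy NTC semantics: each diagram G is mapped to an operator O(G).
# Rewriting one diagram into the other does not compute anything;
# it proves O(G) = O(G').

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s = 2 ** -0.5
H = [[s, s], [s, -s]]          # a Hadamard box on the wire
I = [[1.0, 0.0], [0.0, 1.0]]   # a bare wire

O_hh = matmul(H, H)  # diagram 1: wire with two H boxes in sequence
O_i  = I             # diagram 2: bare wire

# the two diagrams denote the same operator, up to float rounding
same = all(abs(O_hh[i][j] - O_i[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print(same)  # True
```

The names `H`, `matmul` and the choice of gate are of course my own invention for the sketch; the point is only the shape of the NTC recipe: diagram in, operator out, rewrites as proofs of equality.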

Now, in chemlambda, a graph is NOT a computation. Each graph is like a chemical molecule, with nodes as atoms and arrows as bonds. There is only one computation in chemlambda, which is done by graph rewrites.
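
The TC point can also be sketched in a few lines. Below is a toy rewriting engine for the beta move, in the g-pattern notation used elsewhere on this blog; the port conventions (L[body, variable, out], A[left, right, out]) and the Arrow cleanup are my reading of that notation, so treat this as an illustrative sketch, not a reference implementation. Here the sequence of rewrites is itself the computation:

```python
# Toy chemlambda-style rewriting: the molecule is data, the rewrite is
# the computation. Nodes are tuples; ports are shared arrow names.

def beta_step(molecule):
    """Find one redex L[a,b,c] A[c,d,e] and rewrite it into
    Arrow[a,e] and Arrow[d,b]. Returns None if no redex exists."""
    for L in molecule:
        if L[0] != "L":
            continue
        for A in molecule:
            if A[0] == "A" and A[1] == L[3]:  # A's left port = L's out port
                a, b = L[1], L[2]
                d, e = A[2], A[3]
                rest = [n for n in molecule if n is not L and n is not A]
                return rest + [("Arrow", a, e), ("Arrow", d, b)]
    return None

def compose_arrows(molecule):
    """COMB-like cleanup: fuse chains Arrow[x,y] Arrow[y,z] -> Arrow[x,z]."""
    changed = True
    while changed:
        changed = False
        for m in molecule:
            if m[0] != "Arrow":
                continue
            for n in molecule:
                if n is not m and n[0] == "Arrow" and m[2] == n[1]:
                    molecule = [k for k in molecule if k not in (m, n)]
                    molecule.append(("Arrow", m[1], n[2]))
                    changed = True
                    break
            if changed:
                break
    return molecule

# Identity applied to "p":  (Lx.x) p  as the g-pattern  L[a,a,o] A[o,p,u]
mol = [("L", "a", "a", "o"), ("A", "o", "p", "u")]
mol = compose_arrows(beta_step(mol))
print(mol)  # the free port "p" ends up wired straight to "u"
```

Running it leaves a single Arrow connecting "p" to "u": the reduction happened in the molecule itself, with no operator being denoted anywhere.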

So, even if we can translate chemlambda graphs into ZX graphs, there is this difference.

Suppose G is a ZX graph which is obtained by translating a chemlambda graph, as previously explained. Let G' be another such graph. Suppose that we can arrive from G to G' by a sequence of translations of chemlambda moves (graph rewrites), maybe also by using some of the supplementary “topological” moves which we need in ZX because of the global orientation and simultaneity constraints.

What do we have in ZX?

The graphs G and G' represent two computations which are equivalent.

What do we have in chemlambda?

The graphs G and G' represent the beginning and end of a computation done by the graph rewrites which in ZX serve to prove equivalence of computations.

This very big difference between NTC and TC has a big effect in all the rest of the formalisms.

______________________________________________

A solution for the “Amazon’s war on publishers”

I spent some time thinking about the “Amazon’s war on publishers” (learned via +Tim O’Reilly post https://plus.google.com/u/0/+TimOReilly/posts/WhyyNx7PYnt ) and now I believe that I have a solution.

The publishers and Amazon could behave maturely and use the method that scientific publishers use with researchers!

Step 1.  Amazon should hire some prominent authors or publishing managers

Step 2.  Amazon should ask the publishers to renounce their rights, should they want their creations to appear on Amazon’s site

Step 3. The hired prominent authors and managers should make clear to any publisher and author that if their work is not on Amazon then it is probably crap.

You see, scientists are the most clever people, so if they are happy with this system, then  everybody should be happy too!

[There is a false assumption in this text, which is, unfortunately, a pamphlet.]

________________________________________________

Zipper logic appeared in math.CO (combinatorics)

Today, a week after the submission, the article Zipper Logic, arXiv:1405.6095, appeared in math.CO and math.GT, and not in math.LO, math.GT.

Actually, combinatorics, especially lately, is everywhere, being probably one of the most dynamic research fields in mathematics.

There is a small-world connection between zipper logic and combinatorics, because zipper logic is a variant of chemlambda, which is a variant of GLC, which has an emergent algebra sector (arXiv:1305.5786, section 5), which is a sort of approximate algebraic structure, similar to, but not the same as, the approximate groups of arXiv:1110.5008.

____________________________________

Morlocks and eloi in the Internet of Things

For any fan of Neal Stephenson and Cory Doctorow,  the contents of the following opinion piece on goals and applications of the Internet of Things (IoT) should be no great surprise.

I am using the post Technical Machine – Designing for Humans as a study case.

[ Technical Machine is the company which builds the Tessel. This is a product with great potential! I wish I could use Tessels for the purpose explained in the post Experimental alife IoT with Tessel. ]

This nice post is interesting in itself, but it is also an example of the shifting of the ideology concerning the Internet of Things.

I extract two contradictory quotes from the post and then discuss them (and explain why they seem contradictory to me).

(1) ” A completely interactive tool, one that seamlessly incorporates humans as a piece of the system, is a tool that people don’t even think about. That’s the end goal: Ubiquitous Computing as Mark Weiser imagined it. Every object is an embedded device, and as the user, you don’t even notice the calm flow of optimization.
The Nest thermostat is a good example of this sort of calm technology. The device sits on your wall, and you don’t spend much time interacting with it after the initial setup. It learns your habits: when you’re home, when you’re not, what temperatures you want your house to be at various points in the day. So you, as the user, don’t think about it. You just live in a world that’s better attuned to you.”

_______

(2) “I think that one of the most interesting things we’ll see in the near future is the creation of non-screen interfaces. Interacting with technology, we rely almost solely on screens and buttons. But in the physical world, we use so many other interfaces. [...] there’s a lot of fascinating work going on to receive outputs from humans. [...] The implications there are amazing: you can wire up your own body as an electrical input into any electrical system– like a computer, or a robot, or whatever else you might build. You can control physical and digital things just by thinking really hard or by twitching your fingers.”

_______________

Now the discussion. Why are (1) and (2) contradictory?

I shall explain this by using the morlocks/eloi evocative oversimplification.

For historical reasons, maybe, the morlocks (technical nerds) are trained/encouraged/selected to hate discussions, human exchanges and interactions in general. Their dream technology is one like in (1), i.e. one which does not talk with the humans, but quietly optimizes (from the morlock pov) the eloi environment.

On the contrary, the eloi love to talk, love to interact with one another. In fact the social Net is a major misuse of morlock technology by the eloi. Instead of a tool for fast and massive sharing of data, as the morlocks designed it, the Net became a very important (most important?) fabric of human interactions, exchanging lolcat images and the sweet little nonsenses which make up the basis of everyday empathic interaction with our fellow humans. And much more: the eloi prefer to use this (dangerous) tool for communicating, even if they know that the morlocks are sucking big data from them. They (the eloi) would prefer by far not to be in Bayesian bubbles, but that’s life; they are opportunistically using things whose workings they don’t understand, despite being told to be more careful.

Quote (2) shows that people are starting to think about the IoT as an even more powerful tool of communication. OK, we have this nice technology which baby-sits us, and we live calm lives because the machine quietly optimizes the little details without asking us. But think: we can use the big IoT machine for more than conversations. We can use it as the bridge which unites the virtual and the meat spaces; we can make real things from discussions, and we can discuss about real objects.

This is a much more impressive application of the IoT than the one which optimizes our daily life. It is something which would allow us to make our dreams come true, literally! And collaboratively.

I have argued about this before, noticing that “thing” means both an assembly and a discussion (an idea taken via Kenneth Olwig) and that an object is nothing but the result, or outcome, of a discussion, or evidence for a discussion. See more at the post Notes for Internet of Things not Internet of objects.

It’s called the “Internet of Things”, not the “Internet of Objects”, and it seems that the morlocks are starting to realize this.

_________________________________________

Experimental alife IoT with Tessel

Here is an idea for testing the mix between the IoT and chemlambda. This post almost qualifies as a What if? one, but not quite, because in principle it might be done right now, not in the future.

Experiments have to start from somewhere in order to arrive eventually at something like a Microbiome OS.

Take Tessel.

Tessel is a microcontroller that runs JavaScript.
It’s Node-compatible and ships with Wifi built in.

Imagine that there is one GLC actor per Tessel device. The interactions between GLC actors may be done partially via the Wifi.

The advantage is that one may overlap the locality of chemlambda graph rewrites with spatial locality.

Each GLC actor has as data a chemlambda molecule, with its in and out free arrows (or free chemical bonds) tagged with the names of other actors.

A bond between two actors forms, in the chemlambda+Tessel world, if the actors are close enough to communicate via Wifi.

Look for example at the beta move, done via behaviour 1 of GLC actors. This may involve up to 6 actors, namely the two actors which communicate via the graphic beta move and at most 4 other actors which have to update the tags naming their neighbouring actors as a result of the move. If all are locally connected by Wifi then this becomes straightforward.
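
As a toy sketch of per-device actor state — my own naming throughout, not the actual distributed GLC code — each actor could hold a fragment of the chemlambda molecule and tag its free arrows with the names of the actors holding the other ends, so that a rewrite crossing an actor boundary becomes a message exchange between neighbours:

```python
# Hypothetical GLC-actor state on one device. Node tuples and the
# port-tag dictionary are illustrative, not a specified wire format.

class Actor:
    def __init__(self, name, molecule, port_tags):
        self.name = name
        self.molecule = molecule    # local nodes, e.g. ("L", "a", "x", "o")
        self.port_tags = port_tags  # free arrow -> name of neighbouring actor

    def neighbours(self):
        """Actors reachable through a shared free arrow (here: via Wifi)."""
        return set(self.port_tags.values())

    def retag(self, port, new_actor):
        """Bookkeeping after a move: this is what the up-to-4 extra actors
        involved in a beta move must do to their neighbour tags."""
        self.port_tags[port] = new_actor

# a beta redex L[a,x,o] -- A[o,p,u] split across two devices
alice = Actor("alice", [("L", "a", "x", "o")], {"o": "bob"})
bob   = Actor("bob",   [("A", "o", "p", "u")],
              {"o": "alice", "p": "carol", "u": "dan"})

print(sorted(bob.neighbours()))  # ['alice', 'carol', 'dan']
```

A beta move on the redex above would then consist of the alice–bob exchange plus `retag` messages to carol and dan, which is exactly the up-to-6-actors accounting described in the paragraph.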

What would the experiment be, then? To perform distributed, decentralized computations with chemlambda (in particular, to do functional programming in a decentralized way) which are also sprawled over the physical world. The Tessel devices involved in the computation don’t all have to be within Wifi reach of the others; on the contrary, local connections alone would be enough.

Moreover, the results of the computations could have physical effects (in the sense that the states of the actors could produce effects in the real world), and the physical world could likewise be used as input for the computation (i.e. the sensors connected to Tessel devices could modify the state of an actor via a core-mask mechanism).

That would play the role of a very primitive, but functional, experimental ancestor of a Microbiome OS.

_______________________________________________

FAQ: chemlambda in real and virtual worlds

Q1. is chemlambda different than previous models, like the algorithmic chemistry of Fontana and Buss, or the CHAM of Berry and Boudol?

A1. Yes. In chemlambda we work with certain graphs made of nodes (atoms) and bonds (arrows); call such a graph a molecule.  Then:

  • (A) We work with individual molecules, not with populations of molecules. The molecules encode information in their shape, not in their number.
  • (B) unlike in algorithmic chemistry, application and abstraction are atoms of the molecules.
  • (C) There are no variables in chemlambda and there is no need to introduce one species of molecules per variable, like in the previous models.
  • (D) chemlambda and its associated computing model (distributed GLC) work well in a decentralized world; there is no need for a global space or a global time for the computation.

There are a number of more technical differences, like (non-exhaustively):

  • (E) molecules are not identified with their functions. Technically, chemlambda rejects eta reduction, so even those molecules which represent lambda terms are not identified with their functions (as happens when we use eta reduction). This calls for an “extreme” functional programming style.
  • (F) only a small part of the chemlambda molecules correspond to lambda terms (there is a lambda calculus “sector”).
  • (G) there is no  global semantics.
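
Point (B) can be put in data directly: application and abstraction are atoms of the molecule itself. Below is the K combinator, Lx.Ly.x, as a g-pattern, in my reading of the notation (L[body, variable, out], with a T node terminating the unused variable) — an illustration of the encoding, not a snippet from a chemlambda reference file:

```python
# The molecule IS the data; information lives in which arrows are
# shared between which atoms, not in counts of molecules.

k_molecule = [
    ("L", "a", "x", "o"),  # outer abstraction Lx, output arrow "o"
    ("L", "x", "y", "a"),  # inner abstraction Ly, its body is the arrow "x"
    ("T", "y"),            # the variable y is never used, so it terminates
]

# map each arrow to the kinds of atoms touching it: this is the "shape"
arrows = {}
for node in k_molecule:
    for port in node[1:]:
        arrows.setdefault(port, []).append(node[0])

print(arrows["a"])  # ['L', 'L']: the two abstraction atoms share arrow "a"
```

Note that both abstractions appear as ordinary atoms next to the T atom, on exactly the same footing, which is the content of (B).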

__________________________________________________
Q2.  is chemlambda a kind of computing with chemical reaction networks (CRNs)?

A2. No. Superficially, there are resemblances, and really one may imagine CRNs based on chemlambda, but this is not what we are doing.

__________________________________________________

Q3. Why do you think chemlambda has something to tell about the real or even about the living world?

A3. Because the real world, in its fundamental workings, does not seem to care about the 1D, language-based constructs which we cherish in our explanations. The real, and especially the living, world seems to be based on local, asynchronous interactions which are much more like signal transductions and much less like information passing. (See How is different signal transduction from information theory?)
Everything happens locally, in nontrivial but physical ways, leading to emergent complex behaviours. Nature does not care about coordinates and names of things or abstractions, unless they are somehow physically embodied. This is the way chemlambda functions.

__________________________________________________
Q4. Why do you think chemlambda has something to say about the virtual  world of the Net?

A4. Because it puts the accent on alife instead of AI, on decentralization instead of pyramidal constructs. A microbial-ecology-like internet is much more realistic to hope for than one based on benevolent pyramidal AI constructs (be they clouds, or other corporate constructs). Because real and virtual worlds are consistent only locally.

__________________________________________________
Q5. What about the Internet of Things?

A5. We hope to give the IoT the role of the bridge which unites the two kinds of computation, real and virtual, under the same chemistry.

__________________________________________________
Q6. What would happen in your dream world?

A6. There are already some (fake) news about it here: what if

__________________________________________________

Zipper logic and RNA pseudoknots

Zipper logic is a variant of chemlambda, enhanced a bit with a pattern identification move called CLICK.

Now, chemlambda is an artificial chemistry. Does it have a natural, real counterpart?

It should. Finding one is part of the other arm of the research thread presented here, which aims to unite (the computational part of) the real world with the virtual world, moved by the same artificial life/real life basic bricks. The first arm is distributed GLC.

A lot of help is needed for this, from specialists! Thank you for pointing to the relevant references, which I shall add as I receive them.

What follows is a speculation, showing that zipper graphs can be turned into something which very much resembles RNA pseudoknots.

(The fact that one can do universal computation with RNA pseudoknots is not at all surprising, maybe with the exception that one can do functional style programming with these.)

It is a matter of notation changes. Instead of using oriented crossings for denoting zippers, like in the thread called “curious crossings” — the last  post is Distributivity move as a transposition (curious crossings II) — let’s use another, RNA inspired one.

rna_2

So:

  • arrows (1st row) are single stranded RNA pieces. They have a natural 5′-3′ orientation
  • termination node (2nd row) is a single strand RNA with a “termination word on it”, figured by a magenta rectangle
  • half-zippers are double strands of RNA (3rd and 4th rows). The red rectangle is (a pair of antiparallel conjugate) word(s) and the yellow rectangle is a tag word.

Remark that any (n) half-zipper can be transformed into a sequence of (1) half-zippers, by repeated applications of the TOWER moves, therefore (n) half-zippers appear in this notation as strings of repeated words.
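As a toy illustration of this remark (not actual chemistry: the word and tag strings below are placeholders for the red and yellow rectangles in the figure), the TOWER-move observation can be sketched in Python, where an (n) half-zipper reads the same as n stacked (1) half-zippers:

```python
def half_zipper(n, word="RED", tag="YEL"):
    """Hypothetical string encoding of an (n) half-zipper:
    the double-strand word repeated n times, closed by a tag word."""
    return word * n + tag

def tower_split(n, word="RED", tag="YEL"):
    """The same (n) half-zipper written as n (1) half-zippers stacked,
    as produced by repeated TOWER moves."""
    return "".join(word for _ in range(n)) + tag
```

So `half_zipper(3)` and `tower_split(3)` yield the same repeated-word string, which is the point of the remark above.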

Another interesting thing is that on the 3rd row of the figure we have a notation for the lambda abstraction node, and on the 4th row one for the application node from chemlambda. This invites us to transform the other two chemlambda nodes, fanin and fanout, into double stranded RNA as well. It can be done like this.

rna_3

On the first two rows of this picture we see the encoding of the (1) half-zippers, i.e. the encoding of the lambda abstraction and application, as previously.

On the 3rd row is the encoding of the fanout node and on the 4th row the one of the fanin.

In this way, we arrived into the world of RNA. The moves transform as well into manipulations of RNA strings, under the action of (invisible and to be discovered) enzymes.

But why RNA pseudoknots? It becomes obvious if, as an exercise, you look at how the zipper I and K combinators appear in this notation.

rna_4

We see hairpins.

___________________________________

Conclusion:  help needed from chemistry experts. Low hanging fruits here.

___________________________________

 

Chemlambda at ALIFE 14

Good news for chemlambda, our article

M. Buliga, L. H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI]

has been accepted in the ALIFE 14 conference:

ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems,

July 30th – Aug 2nd, New York

Louis Kauffman will give the talk. I shall not be there, but for anybody interested in chemlambda, who will also be in New York then, this is a good occasion to meet and discuss more there, live.

______________________________

How is different signal transduction from information theory?

They look different.

“Signal transduction occurs when an extracellular signaling[1] molecule activates a specific receptor located on the cell surface or inside the cell. In turn, this receptor triggers a biochemical chain of events inside the cell, creating a response.[2] Depending on the cell, the response alters the cell’s metabolism, shape, gene expression, or ability to divide.[3] The signal can be amplified at any step. Thus, one signaling molecule can cause many responses.[4]

Signal transduction involves the binding of extracellular signalling molecules and ligands to cell-surface receptors that trigger events inside the cell. The combination of messenger with receptor causes a change in the conformation of the receptor, known as receptor activation. This activation is always the initial step (the cause) leading to the cell’s ultimate responses (effect) to the messenger. Despite the myriad of these ultimate responses, they are all directly due to changes in particular cell proteins. Intracellular signaling cascades can be started through cell-substratum interactions; examples are the integrin that binds ligands in the extracellular matrix and steroids.[13]”

[wiki source]

 

It seems that signal transduction involves:

  • messenger molecule
  • receptor molecule
  • the messenger reacts with the receptor, which changes conformation (receptor activation)
  • receptor activation triggers other chemical reactions

Let’s condense further:

  • molecule M (messenger) binds to molecule R (receptor)
  • the complex MR changes shape (a spatial notion, in a generalized sense)
  • which triggers other reactions between (the new) R and other neighbouring molecules.

It is interesting for me because that is how distributed GLC works:

  • the actor M reacts with the actor R
  • after the interaction both molecules may change: the one which belongs to the actor M and the one which belongs to the actor R,
  • but also the actors’ adjacencies may change, due to the reductions involving the reaction between M and R (this corresponds to receptor activation),
  • which triggers other interactions between GLC actors.
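The analogy can be made concrete with a minimal sketch, assuming a toy actor model (the class name `Actor`, the function `interact`, and the string "molecules" are all hypothetical illustrations, not taken from the actual distributed GLC implementation):

```python
class Actor:
    """Toy GLC-style actor: a name, a local molecule (graph fragment),
    and a set of names of neighbouring actors."""
    def __init__(self, name, molecule, neighbours=()):
        self.name = name
        self.molecule = molecule
        self.neighbours = set(neighbours)

def interact(m, r):
    """Messenger actor m reacts with receptor actor r.
    Both local molecules may change, and r's adjacencies may change too
    (the analogue of receptor activation), which can trigger further
    interactions with r's new neighbours."""
    m.molecule = m.molecule + "'"              # both molecules may change...
    r.molecule = r.molecule + "'"
    r.neighbours |= m.neighbours - {r.name}    # ...and so may the adjacencies
    return sorted(n for n in r.neighbours if n != m.name)  # next candidates
```

Running `interact(M, R)` changes both local molecules and widens R's adjacency, then returns the neighbours with which R may now react: exactly the loop described in the bullet points above, with no global sender/channel/receiver in sight.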

 

Information theory, on the other hand, concerns a sender, a channel and a receiver. The sender sends messages through the channel to the receiver.

Completely different frames.

One may, of course, partially ignore the mechanism (signal transduction) and look instead at the environment as a sender, at the cell as a receiver and at the cell’s membrane as a channel (just an example).

But sender, channel and receiver look (to me) like mind constructs which are useful for the human trying to make sense of what is happening, from outside. What is happening, though, is signal transduction.

_________________________________

Autodesk releases SeaWater (another WHAT IF post)

[This is another WHAT IF post which responds to the challenge formulated in Alife vs AGI. You are welcome to suggest another one or to make your own.]

The following is a picture of a random splash of sea water, magnified 25 times [source]

scoop-of-water-magnified-990x500

It could just as well be a representation of the state of the IoT in a small neighbourhood around you, according to the press release describing SeaWater, the new product of Autodesk.

“SeaWater is a design tool for the artificial life based decentralized Internet of Things. Each of the tiny plankton beings which appear in the picture is actually a program, technically called a GLC actor. Each plankton being has its own umwelt, its own representation of the medium which surrounds it. Spatially close beings in the picture share the same surroundings and thus they can interact. Likewise, the tiny GLC actors interact locally one with another, not in real space, but on the Net. There is no real space in the Net; instead, SeaWater represents them closer when they do interact.

SeaWater is a tool for Net designers. We humans are visual beings. A lot of our primate brains’ power can be harnessed for designing the alife decentralized computing which forms the basis of the Internet of Things.

It improves very much on the primitive tools which produce things like this picture [source]

ack_6

Context. Recall that the IoT is only a bridge between two worlds: the real one, where life is ruled by real chemistry, and the artificial one, based on some variant of an artificial chemistry, aka chemlambda.

As Andrew Hessel points out, life is a programming language (chemically based),  as well as the virtual world. They are the same, sharing the same principles of computation. The IoT is a translation tool which unites these worlds and lets them be one.

This is the far reaching goal. But in the meantime we have to learn how to design this. Imagine that we may import real beings, say microbes, to our own unique Microbiome OS.  There is no fundamental difference between synthetic life, artificial life and real life, at least at this bottom level.

Instead of aiming for human or superhuman artificial intelligence, the alife decentralized computing community wants to build a world where humans are not treated like Bayesian units by pyramidal centralized constructs. There is an immense computing power already at the bottom of alife, where synthetic biology offers many valuable lessons.

______________________________

UPDATE.  This is real: Autodesk Builds Its Own Virus, as the Software Giant Develops Design Tools for Life Itself.

Distributivity move as a transposition (curious crossings II)

It looks like there is a connection between the following and research on DNA topology, but I shall comment on this in a future post, mainly because I am learning about it right now. (Another main reference is Louis Kauffman.)

Let’s see.

If I am using this encoding of chemlambda nodes with crossings

 

ccdeco_1

which is a variant of the encoding from Curious crossings, then, as in the mentioned post, the beta move becomes a CLICK between “sticky ends” (marked with dashed lines) followed by an R2b move, and the FAN-IN move becomes a CLICK between sticky ends followed by an R2a move.
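For readers who prefer the g-pattern notation (see the Y combinator posts above), here is a minimal Python sketch of the graph rewrite that this CLICK + R2b pair implements, the beta move L[a,b,c] A[c,d,e] → Arrow[a,e] Arrow[d,b]; the list-of-tuples representation is my own illustration, not a standard chemlambda implementation:

```python
def beta(pattern):
    """Apply one beta move to a g-pattern given as a list of
    (node, [ports]) pairs:  L[a,b,c] A[c,d,e] -> Arrow[a,e] Arrow[d,b]."""
    for i, (n1, p1) in enumerate(pattern):
        if n1 != "L":
            continue
        for j, (n2, p2) in enumerate(pattern):
            if n2 == "A" and p2[0] == p1[2]:
                rest = [g for k, g in enumerate(pattern) if k not in (i, j)]
                # the lambda body connects to the application output,
                # the argument connects to the lambda variable
                return rest + [("Arrow", [p1[0], p2[2]]),
                               ("Arrow", [p2[1], p1[1]])]
    return pattern  # no L-A pair in beta position: pattern unchanged
```

On the g-pattern L[b,a,o] A[o,p,u] from the “Y again” posts this yields Arrow[b,u] and Arrow[p,a]: the body goes to the output and the argument goes to the variable, with no variable passing or evaluation involved.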

What about the DIST moves? Let’s take the DIST move which involves an application node and a fanout node. The move looks like this:

ccdeco_surgery_1

(Click on the picture to make it bigger.)

In the right column we see the DIST move with chemlambda drawing conventions. In the left column there is the translation with crossings and sticky ends.

What do we see? The strings 1-5 and 6-3 are transposed by the DIST move and a new string appears, which crosses them.

I can draw the same move like this:

ccdeco_surgery_2

In this figure, the left column is as before, but the right column has changed. I just kept the order, from left to right, of the strings 6-3 and 1-5, and I wiggled the string 2-4 for this.

This is starting to look interestingly similar to some other pictures from DNA topology.

 

___________________________________________

 

Curious crossings

I’m coming back to the post Halfcross way to pattern recognition (in zipperlogic) and I am going to modify the drawings a bit.

Instead of this convention of transforming chemlambda nodes into half-crossings

halfcross_1

I draw this mapping from chemlambda nodes to crossings:

cdeco_1

Remark that at the left we now have crossings. At the right we have the chemlambda nodes, each with a dashed half-arrow attached, so the nodes now become 4-valent, locally planar ones.

As usual (here at chorasimilarity) the crossing diagrams are only locally planar!

Look closer: there are two kinds of oriented crossings, right? To each kind corresponds a colour (green or red) of a chemlambda node.

This is no longer a dictionary; there is no longer a bijective correspondence, because for each oriented crossing there are two possible chemlambda nodes, depending on where the dashed half-arrow is! That is the meaning of the wiggling blue arrow from right to left: it’s uni-directional.
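A minimal sketch of this two-to-one correspondence in Python (the particular pairing of nodes to the two crossing colours below is my assumption, for illustration only):

```python
# hypothetical colour assignment: two chemlambda nodes share each
# oriented crossing kind, so the map is not invertible
CROSSING_OF = {"L": "red", "FI": "red", "A": "green", "FO": "green"}

def crossing_of(node):
    """Map a chemlambda node to its oriented crossing (by colour).
    Two nodes per crossing, so the crossing alone cannot be decoded
    back to a node without knowing where the dashed half-arrow sits."""
    return CROSSING_OF[node]
```

Since `crossing_of("L") == crossing_of("FI")`, the crossing alone does not determine the node; the dashed half-arrow carries the missing bit of information.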

OK then, instead of drawing this interpretation of the beta move

halfcross_2

we get the following one

cdeco_2

where the arrows are drawn like that in order to see what transforms into what (otherwise there is no need for those convoluted arrows in the formalism).

Likewise, instead of this

halfcross_3

I draw this:

cdeco_3

Funny, what could that mean?

________________________________________