Distributed GLC discussion

This is my contribution to a discussion about the distributed GLC model of computation and about the associated gui.

Read carefully.  It has a fractal structure.

Basics about chemlambda graphs and the GUI

The chemlambda graphs (molecules) are not flowcharts. One just has to specify certain (known) graphs with at most 4 nodes and how to replace them with other known simple graphs. That is all.

That means that one needs:

– a file which specifies what kind of graphs are used (by giving the type of nodes and arrows)

– a file which specifies which are the patterns (i.e. graphs) and which are the rewrite moves

– and a program which takes these files and a graph as input and does things like: checking that the graph is of the kind described in file 1, finding occurrences of the patterns from file 2, doing a rewrite at a place chosen by the user, doing a sequence of rewrites until it forks, or, if the user wants, taking a lambda expression given by the user and transforming it into a graph.

– then there are the visualization program(s) for the graphs; that is the hard part, but it is already done in multiple places. This means that one only has to write the conversions of file formats from and to the viz tool.

That is the minimal configuration.
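To make this concrete, here is a minimal sketch in Python of how the two specification files and the well-formedness check could look. The formats, names and data structures are invented for illustration only; they are not part of chemlambda.

# Sketch only: invented formats and names.

# "File 1": the kinds of nodes, given by their arity (number of in and out ports).
NODE_TYPES = {
    "lambda":      {"in": 1, "out": 2},
    "application": {"in": 2, "out": 1},
    "fanin":       {"in": 2, "out": 1},
    "fanout":      {"in": 1, "out": 2},
    "termination": {"in": 1, "out": 0},
}

# "File 2": each move replaces a small left-hand pattern by a right-hand one.
# A pattern is a tiny graph: node ids with their types, plus a list of edges.
MOVES = {
    "beta": {
        "lhs": {"nodes": {"a": "application", "l": "lambda"},
                "edges": [("l", "right_out", "a", "left_in")]},
        "rhs": {"nodes": {},         # the two nodes disappear...
                "edges": "rewire"},  # ...and the dangling arrows are rewired
    },
}

def is_wellformed(graph):
    """The 'file 1' check: every node of the input graph has a declared type
    and the declared number of in and out ports."""
    for node in graph["nodes"].values():
        spec = NODE_TYPES.get(node["type"])
        if spec is None:
            return False
        if len(node["in_ports"]) != spec["in"] or len(node["out_ports"]) != spec["out"]:
            return False
    return True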

Decorations

There are various reasons why one wants to be able to decorate the graphs, locally, as a supplementary thing, but in no way is this needed for the basic process.

Concerning decorations, one needs a file which specifies how to decorate arrows and which are the relations coming from nodes. But this is not part of the computation.

If we want to make it more powerful then it gets more complex, because if we want to do symbolic computations with decorations (like the elimination of a relation coming from a node) then it is probably better to output a file of decorations of arrows and relations from nodes and input it into a symbolic package, like Mathematica or something free; there is no need to reinvent the wheel.

After the graph rewrite you lose the decoration; that’s part of the fun which makes decorations less interesting and supposedly makes the computation more secure.

But that depends on the choice of decoration rules.

For example, if you decorate with types then you don’t lose the decoration after the move. Yes, there are nodes and arrows which disappear, but outside of the site where the move was applied the decorations don’t change.

In the particular case of using types as decorations there is another phenomenon, though. If you use the decoration with types for graphs which don’t represent lambda terms then you will get relations between types, relations which are supplementary. A way of saying this is that some graphs are not well typed, meaning that the types form an algebra which is not free (you can’t eliminate all relations). But the algebra you get, albeit not free, is still an interesting object.

So the decoration procedure and the computation (reduction) procedure are orthogonal. You may decorate a fixed graph and do symbolic algebraic computations with the decorations, in an algebra generated by the graph, in the same way as a knot generates an algebra called a quandle. Or you may reduce the graph, irrespective of the decorations, and get another graph. Decorations of the first graph don’t persist, a priori, after the reduction.

An exception is decoration with types, which persists outside the place where the reduction is performed. But there is another problem: even if the decoration with types satisfies locally (i.e. at the arrows of each node) what is expected from types, many (most) of the graphs don’t generate a free algebra, as would be expected from algebras of types.
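To make the typing phenomenon concrete, here is a small worked example (an illustration only, with an arbitrary choice of conventions): decorate every arrow with a type; at an application node ask that the two inputs carry the types a -> b and a and that the output carries b; at a lambda node, if the body input carries b and the bound-variable output carries a, then the abstraction output carries a -> b. For a graph coming from a (typable) lambda term these constraints can be solved in the free algebra of types. But take a graph in which the output arrow of an application node is wired back into the function input of the same node, a configuration which does not come from a lambda term: the constraints force the relation b = a -> b, which cannot be eliminated, so the algebra of types generated by that graph is not free.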

The first chemical interpretation

There is the objection that the reductions can’t be like chemical reactions because the nodes (atoms) can’t appear or disappear, there should be a conservation law for them.

Correct! What then?

The reduction, let’s pick one, say the beta move, is a chemical reaction of the graph (molecule) with an enzyme which in the formalism appears only under the name “beta enzyme” but is not specified as a graph in chemlambda. Then, during the reaction, some nodes may disappear, in the sense that they bond to the beta enzyme and make it inactive from then on.

So, the reduction A –>beta B appears as the reaction

A + beta = B + garbage

How moves are performed

Let’s get a bit more detailed about what moves (graph rewrites) mean and how they are done. Every move says: replace graph_1 with graph_2, where graph_1 and graph_2 are graphs with a small number of nodes and arrows (a “graph” may well consist only of two arrows, as is the case for graph_2 of the beta move).

So, now you have a graph G. The program looks for graph_1 chunks in G and adds some annotation (perhaps in an annotation file it produces). Then there may be a script which inputs the graph G and the annotation file into the graph viz tool, with the effect, for example, that the graph_1 chunk appears phosphorescent on the screen. Or, say, when you hover with the mouse over the graph_1 chunk it changes color, or an ellipse encircles it with a tag saying “beta”.

Suppose that the user clicks, giving his OK for performing the move. Then on the screen the graph changes, but the previous version is kept in memory, in case the user wants to go back (the moves are all reversible, but sometimes, like in the case of the beta move, graph_2 is too common, it is everywhere, so using both senses of some moves is forbidden in the formalism, unless the move is used in a predefined sequence of moves, called a “macro move”).

Another example would be that the user clicks on a button which says “go along with the reductions as long as you can before you find a fork in the reduction process”. Then, of course, it would be good to keep the intermediate graphs in memory.

Yet another example would be that of a node or arrow of the graph G which turns out to belong to two different interesting chunks. Then the user should be able to choose which reduction to do.
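A sketch, in Python, of the bookkeeping behind these behaviours. The functions find_matches(graph) and apply_move(graph, move, site) are assumed to exist (they are not defined here); everything else is invented for illustration.

import copy

history = []  # previous versions of the graph, so the user can go back

def do_move(graph, move, site):
    """Perform one user-approved rewrite, remembering the old graph first."""
    history.append(copy.deepcopy(graph))
    return apply_move(graph, move, site)       # assumed to exist

def reduce_until_fork(graph):
    """Keep rewriting while exactly one match exists; stop at a fork
    (two or more candidate rewrites) or when nothing matches."""
    while True:
        matches = find_matches(graph)          # assumed to exist
        if len(matches) != 1:
            return graph, matches              # let the user choose at a fork
        move, site = matches[0]
        graph = do_move(graph, move, site)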

It would be good to have the possibility to perform each move upon request,

plus

the possibility to perform a sequence of moves which starts from a first one chosen by the user (or from the only one available in the graph, as is the case for many graphs coming from lambda terms which are obtained by repeated currying and nesting)

plus

the possibility to define new, composed moves at once. For example, you notice that there is a chunk graph_3 which contains graph_1 and, after reduction of graph_1 to graph_2 inside graph_3, graph_3 becomes graph_4; graph_4 now contains a chunk graph_1 of another move, which can be reduced, and graph_4 becomes graph_5. Now you may want to say: I save this sequence of two moves from graph_3 to graph_5 as a new move. The addition of this new move does not change the formalism, because you may always replace the new move with the sequence of two old moves.

Practically, the last possibility means the ability to add the new chunks graph_3 and graph_5 to the file which describes the moves and to define the new move under a name chosen by the user.

plus

Finally, you may want to be able either to select a chunk of the input graph by clicking on nodes and arrows, or to construct a graph and then say (i.e. click a button) that from now on that graph will be represented as a new type of node, with a certain arity. That means writing into the file which describes the types of nodes.

You may combine the last two procedures by saying that you select or construct a graph G. Then you notice that you may reduce it in an interesting way (for whatever further purposes) which looks like this:

– before the chain of reduction you may see the graph G as being made by two chunks A and B, with some arrows between some nodes from chunk A and some nodes from chunk B. After the reduction you look at the graph as being made by chunks C, D, E, say.

– Then you “save” your chunks A, B, C, D, E as new types of nodes (some of them may be of course just made by an old node, so no need to save them) and you define a new move which transforms the chunk AB into the chunk CDE (written like this only because of the 1D constraints of writing, but you see what I mean, right?).

The addition of these new nodes and moves doesn’t change the formalism, because there is a dictionary which transforms each new type of node into a graph made of old nodes and each new move into a succession of old moves.

How can this be done:

– use the definition of new nodes for the separation of G into A, B and for the separation of G’ (the graph after the sequence of reductions) into C, D, E

– save the sequence of moves from G to G’ as new composite move between G and G’

– produce a new move which replaces AB with CDE

It is interesting to work out how this should function properly; probably one should keep both the move AB to CDE and the move G to G’, as well as the translations of G into AB and G’ into CDE.
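One way to keep this bookkeeping straight, sketched in Python with invented structures: every new node type and every new (macro) move is stored together with its translation into old nodes and old moves, so that adding them never changes the formalism.

NODE_DEFS = {
    # new node name -> the graph of old nodes that it abbreviates, e.g.
    # "A": {"nodes": {...}, "edges": [...]},
}

MACRO_MOVES = {
    # new move name -> both forms are kept: the abbreviated form (AB -> CDE),
    # the expanded form (G -> G'), and the sequence of old moves realizing it, e.g.
    # "my_move": {"lhs": "AB", "rhs": "CDE",
    #             "expanded_lhs": "G", "expanded_rhs": "G_prime",
    #             "as_old_moves": ["beta", "fan_in"]},
}

def expand(macro_name):
    """Translate a macro move back into its sequence of old moves."""
    return MACRO_MOVES[macro_name]["as_old_moves"]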

We’re getting close to actors, but the first purpose of the gui is not to be a sandbox for the distributed computation; that would be another level on top of it.

The value of the sequence of moves saved as a composite move is multiple:

– the graph_3 which is the start of the sequence contains graph_1, which is the start of another move, so it always leads to forks: one may apply the whole sequence or only the first move

– there may be a possible fork after you do the first reduction in graph_3, in the sense that there may be another chunk of another move which could be applied

GLC actors

The actors are a special kind of decoration which transforms (some) moves (at the choice of the user) into interaction devices.

You decorate the nodes of a graph G with actor names (they are just names, for the moment, at your choice). As a convention, let’s say that we denote actor names by :a , :b , etc.

You also decorate arrows with pairs of names of actors, those coming from the decorations of nodes, with the convention that (:a, :a) is identified (in the user’s mind) with :a (nothing surprising here: think about the pair groupoid over a set X, whose set of “arrows” is X \times X; X appears as the set of objects of the groupoid and it identifies with the set of pairs (x,x) with x \in X).

Now, say you have a move from graph_1 to graph_2. Then, as in the previous boldfaced description, but somehow in the opposite sense, you define graphs A, B such that graph_1 is AB, and graphs C, D such that graph_2 is CD.

Then you say that you can perform the reduction from graph_1 to graph_2 only if the nodes of A are all decorated with :a and the nodes of B are all decorated with :b , a different name than :a .

After reduction you decorate the nodes of C with :a and the nodes of D with :b .

In this way the actors with identities :a and :b change their state during the reduction (i.e. the graph made of the nodes decorated with :a and the arrows decorated with (:a,:a) changes, and the same for :b).

The reduction can be done for the graph G only at chunks graph_1 which are decorated as explained.
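In code, the condition and the redecoration are short; here is a Python sketch with an invented representation, in which every node carries an "actor" field and a match site comes split into its chunks A, B (before) and C, D (after):

def actors_allow(graph, site):
    """The reduction is allowed only if chunk A sits entirely inside one actor
    and chunk B entirely inside a different actor."""
    actors_a = {graph["nodes"][n]["actor"] for n in site["A"]}
    actors_b = {graph["nodes"][n]["actor"] for n in site["B"]}
    return len(actors_a) == 1 and len(actors_b) == 1 and actors_a != actors_b

def redecorate(new_graph, site, actor_a, actor_b):
    """After the rewrite: the nodes of chunk C get A's actor name,
    the nodes of chunk D get B's actor name."""
    for n in site["C"]:
        new_graph["nodes"][n]["actor"] = actor_a
    for n in site["D"]:
        new_graph["nodes"][n]["actor"] = actor_b
    return new_graph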

To explain what actor :Bob is doing, it matters from which point of view. Also, what is the relation between actors and the chemical interpretation; how do they fit there?

So let’s take it methodically.

The point of view of the GUI

If we discuss from the point of view of playing with the gui, then the user of the gui has a global, God’s view over what happens. That means that the user of the gui can see the whole graph at any moment, and the user has a clock which is like a global notion of time. So from this point of view the user of the gui is the master of space and time. He sees the fates of :Bob, :Alice, :Claire, :Dan simultaneously. The user has the right, in the gui world, to talk about parallel stuff happening (i.e. “at the same time”) and sequential stuff happening (to the same actor or actors). The user may notice that some reductions are independent, in the sense that with respect to the user’s clock the result is the same if first :Bob interacts with :Alice and then :Claire interacts with :Dan, or conversely, which makes the user think that there is some notion more general than parallelism, i.e. concurrency.

If we discuss from the point of view of :Bob, it looks different. More later.

Let’s stay at the gui user’s point of view and think about what actors do. We shall use the user’s clock for reference to time and the user’s point of view about space (what is represented on the screen via the viz tool) to speak about states of actors.

What the user does:

– he defines the graph types and the rules of reduction

– he inputs a graph

– he decorates it with actors names

– he clicks some buttons from time to time ( deus ex machina quote : “is a plot device whereby a seemingly unsolvable problem is suddenly and abruptly resolved by the contrived and unexpected intervention of some new event, character, ability or object.” )

At any moment the actor :Bob has a state.

Definition: the state of :Bob at the moment t is the graph formed by the nodes decorated with the name :Bob, the arrows decorated by (:Bob, :Bob) and the arrows decorated by (:Bob, :Alice), etc .

Because each node is decorated by an actor name, it follows that there are never shared nodes between different actors, but there may be shared arrows, like an arrow decorated (:Bob, :Alice), which belongs to both :Bob and :Alice.

The user thinks about an arrow (:Bob, :Alice) as being made of two half arrows:

– one which starts at a node decorated with :Bob and has a free end, decorated with :Alice ; this half arrow belongs to the state of :Bob

– one which ends at a node decorated with :Alice and has a free start, decorated with :Bob ; this half arrow belongs to the state of :Alice

The user also thinks that the arrow decorated by (:Bob, :Alice) shows that :Bob and :Alice are neighbours there. What does “there” mean? It is like you, Bob, want to park your car (the state of :Bob) and your front left tyre is close to the concrete margin (i.e. :Alice), but you may also consider that your back is close to the trash bin (i.e. :Elvis).

We may represent the neighboring relations between actors as a new graph, obtained by thinking about :Bob, :Alice, … as nodes and by thinking that an arrow decorated (:Bob, :Alice) appears as an arrow from the node which represents :Bob to the node which represents :Alice (of course there may be several such arrows decorated (:Bob, :Alice)).

This new graph is called “actors diagram” and is something used by the gui user to put order in his head and to explain to others the stuff happening there.
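A sketch of how the actors diagram could be computed from the decorated graph (Python, invented representation: node decorations are stored in an "actor" field):

from collections import defaultdict

def actors_diagram(graph):
    """Nodes of the diagram are actor names; every arrow of the big graph
    decorated (:a, :b) with a different from b gives an arrow from :a to :b."""
    diagram = defaultdict(list)
    for edge in graph["edges"]:
        a = graph["nodes"][edge["source"]]["actor"]
        b = graph["nodes"][edge["target"]]["actor"]
        if a != b:
            diagram[a].append(b)
    return dict(diagram)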

The user calls the actors diagram “space”, because he thinks that space is nothing but the neighboring relation between actors at a moment in time (the user’s time). He is aware that there is a problem with this view, which supposes that there is a global notion of time and a global simultaneous view of the actors (states), but says “what the heck, I have to use some way to discuss with others about what’s happening in the gui world, but I will show great caution and restraint by trying to keep track of the effects of this global view on my explanation”.

Suppose now that there is an arrow decorated (:Bob, :Alice) and this arrow, along with the node from the start (decorated with :Bob) and the node from the end (decorated with :Alice) is part of the graph_1 of one of the graph rewrites which are allowed.

More generally, suppose that there is a graph_1 chunk which has the form AB, with the sub-chunk A belonging to :Alice and the sub-chunk B belonging to :Bob.

Then the reduction may happen there. (Who initiates it? Alice, Bob, the user’s click? Let’s not care about this for the moment, although if we use the user’s point of view then Alice and Bob are passive and the user has the decision to click or not to click.)

This is like a chemical reaction which takes into consideration also the space. How?

Denote by Alice(t) and Bob(t) the respective states of :Alice and :Bob at the moment t. Think about the states as being two chemical molecules, instead of one as previously.

Each molecule has a reaction site: for Alice(t) the reaction site is A and for Bob(t) the reaction site is B.

They enter in the reaction if two conditions are satisfied:

– there is an enzyme (say the beta enzyme, if the reduction is the beta) which can facilitate the reaction (by the user’s click)

– the molecules are close in space, i.e. there is an arrow from A to B, or from B to A

So you see that it may happen that Alice(t) has inside a chunk graph which looks like A and Bob(t) has a chunk graph which looks like B, but if the chunks A, B are not connected such that AB forms a chunk like the graph_1 of the beta move, then they can’t react, because (physical interpretation, say) they are not close in space.

The reaction sites of Alice(t) and Bob(t) may be close in space, but if the user does not click then they can’t react because there is no beta enzyme roaming around to facilitate the reaction.

If they are close and if there is a beta enzyme around then the reaction appears as

Alice(t) + Bob(t) + beta = Alice(t+1) + Bob(t+1) + garbage

Let’s see now who Alice(t+1) and Bob(t+1) are. The beta rewrite replaces graph_1 (which is AB) by graph_2 (which is CD). C will belong to Alice(t+1) and D will belong to Bob(t+1). The rest of Alice(t) and Bob(t) is inherited unchanged by Alice(t+1) and Bob(t+1).

Is this true? What about the actors diagram, will it change after the reaction?

Actually graph_1, which is AB, may have (and usually does) other arrows besides the ones decorated with (:Bob, :Alice). For example A may have arrows from A to the rest of Alice(t), i.e. decorated with (:Alice, :Alice), and the same for B, which may have other arrows from B to the rest of Bob(t), decorated by (:Bob, :Bob).

After the rewrite (chemical reaction) these arrows will be rewired by the replacement of AB by CD, but nevertheless the new arrows which replace them will be decorated by (:Alice, :Alice) (because they will become arrows from C to the rest of Alice(t+1)) and (:Bob, :Bob) (same argument). All in all we see that after the chemical reaction the molecule :Alice and the molecule :Bob may lose or gain some nodes (atoms) and they may suffer some internal rewiring (bonds), so it looks like :Alice and :Bob changed their chemical composition.

But they also moved as an effect of the reaction.

Indeed, graph_1, which is AB, may have other arrows besides the ones decorated with (:Bob, :Alice), (:Bob, :Bob) or (:Alice, :Alice). The chunk A (which belongs to Alice(t)) may have arrows which connect it with :Claire, i.e. there may be arrows from A to another actor, Claire, decorated with (:Alice, :Claire), for example.

After the reaction, which consists in the replacement of AB by CD, some rewiring happens, which may have as an effect the appearance of arrows decorated (:Bob, :Claire), for example. In such a case we say that Bob moved closer to Claire. The molecules move this way (i.e. in the sense that the neighboring relations change in this concrete way).

Pit stop

Let’s stop here for the moment, because there is already a lot. In the next message I hope to talk about why the idea of using a chemical reaction network image is good, but still global: it is a way to replace the user’s deus ex machina clicks by the random availability of enzymes, while still using a global time and a global space (i.e. the actors diagrams). The model will also be better than the usual CRN based model, where the molecules are supposed to be part of a “well stirred” solution (i.e. let’s neglect space effects on the reaction), or are supposed to diffuse in a fixed space (i.e. let’s make the space passive). The model will allow the introduction of global notions of entropy.

Such a CRN based model deserves a study for itself, because it is unusual in the way it describes the space and the chemical reactions of the molecules-actors as aspects of the same thing.

But we want to go even further, towards renouncing the global point of view.

Notes for gamification of chemlambda

Here are some obvious notes, filed here, about chemlambda graphs in GraphML. Hope they may be used with … ? … why not Liviz.js, to quickly get something open source to play with.

In the previous post The Quantomatic GUI may be useful for chemlambda (NTC vs TC, III) is mentioned the quantomatic gui, which can easily do all this, but is not free. Moreover, the goals for the gui proposed here are more modest and easily attainable. We don’t care about compatibility with this or that notion from category theory, because our approach is different and because the gui is just step 0. So any quick and dirty, but effective, code will do, I believe.

______________________________

Recall the idea: gamification of chemlambda.

  • quickly get a chemlambda GUI, as described in A user interface for GLC, and use it, with a collection of predefined goals, to make a gamification of chemlambda.
  • obviously that means choosing a format for representing the graphs and moves; there are many variants here, and in this post such a choice is described: GraphML. The work consists in transforming the definitions of graphs and moves into the chosen format.
  • the various buttons and options from the post describing the gui are simple scripts, perhaps in javascript?, of the kind: find and replace predefined patterns by others, all involving up to 4 nodes and 8 arrows, so almost a no-brainer, consisting in writing, in the chosen format, the patterns (i.e. the configurations of at most 4 nodes and 8 arrows) which are involved in the chemlambda moves and the replacement rules (i.e. transforming the definition of chemlambda moves into code)
  • open the possibility to define arbitrary patterns (named here “macros”) and arbitrary moves (named here “macro moves”)
  • transform into code the algorithm described in Conversion of lambda calculus terms into graphs, with the care to use chemlambda instead. This is good for having a button for importing lambda terms into chemlambda easily.
  • with all this done, pick a visualization tool which is open, looks good and  is easy to use, like the one previously mentioned.
  • write some scripts for converting to and from GraphML (or whatever other choice one likes) to  the input of the viz tool.
  • pick some examples, transform them into challenges and make public the game.

___________________________

For those with a non-functional right hemisphere, here is the GraphML description of chemlambda graphs and of what it would mean to do a move.

I use this source for GraphML.

A chemlambda graph is any directed graph with two types of 3-valent nodes and one type of 1-valent node, with the nodes having some attributes. [For mathematicians: it is an oriented graph made of trivalent, locally planar nodes and of some 1-valent nodes, with arrows which may have free starts or free ends, or even loops.]

Moreover, we accept arrows (i.e. directed edges) with free starts, free ends, or both, or with start=end and no node (i.e. loops with no node). For all of those we need, in GraphML, to use some “invisible” nodes [multiple variants are possible here; only one is described].

Here are the trivalent, locally planar nodes:

 

  • application node

<node id="n0" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>

  • fanin node

<node id="n3" parse.indegree="2" parse.outdegree="1">
<data key="d0">red</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>

  • lambda node

<node id="n1" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>

  • fanout node

<node id="n2" parse.indegree="1" parse.outdegree="2">
<data key="d0">green</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>

  • termination node

<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">term</data> </node>

(where “term” denotes a color we agree to use for the termination node; in drawings made with xfig this node appears like this:  -->--|  )
  • two 1-valent invisible nodes

<node id="n5" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data> </node>

<node id="n6" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data> </node>

(where “invisible” should be something we agree to use)

  • 2-valent invisible node

<node id="n7" parse.indegree="1" parse.outdegree="1">
<data key="d0">invisible</data> </node>

Uses of  invisible nodes:

  • in chemlambda graphs we admit arrows which start from one of the 3-valent nodes but whose end is free. Instead of the free end we may use the invisible node with id="n6" from before.

<edge source="n101" target="n6"/>

  • There are also accepted arrows which end in a 3-valent node or in a 1-valent termination node, but whose start is free. For those we may use the invisible node with id="n5" from before.

<edge source="n5" target="n102"/>

  • There are accepted arrows with free starts and ends, so we represent them with the help of the invisible nodes with id="n5" and id="n6".

<edge source="n5" target="n6"/>

  • Finally, we also accept loops with no nodes; these are represented with the help of 2-valent invisible nodes like the one with id="n7"

<edge source="n7" target="n7"/>

____________________________________________

 

Examples:
(a) – recognize pattern for beta move
(b) – perform (when called) the beta move

  • (a) recognize pattern for beta move.

– input is a chemlambda graph (in GraphML)

– output is the same graph and some supplementary file of annotations of the graph.

  • This program looks in the input graph for all patterns where the beta move can be applied.  This pattern looks like this in the GraphML convention:

<node id="n101" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n102" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<edge source="n102" target="n101" sourceport="right_out" targetport="left_in"/>
<edge source="n106" target="n102" sourceport="???_1" targetport="middle_in"/>
<edge source="n102" target="n103" sourceport="left_out" targetport="???_2"/>
<edge source="n105" target="n102" sourceport="???_3" targetport="right_in"/>
<edge source="n101" target="n104" sourceport="middle_out" targetport="???_4"/>

Here "???_i" means any port name.

If such a pattern is found, then the collection of id’s (of edges and nodes) from it is stored in some format in the annotations file which is the output.
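A sketch of this search in Python, over a plain in-memory representation instead of GraphML itself (parsing the GraphML into the nodes and edges structures below is assumed to be done elsewhere):

def find_beta_patterns(nodes, edges):
    """nodes: dict id -> {"kind": ...}; edges: list of dicts with
    source, target, sourceport, targetport. A beta pattern is an arrow
    from the right_out port of a lambda node to the left_in port of an
    application node."""
    matches = []
    for e in edges:
        if (nodes.get(e["source"], {}).get("kind") == "lambda"
                and nodes.get(e["target"], {}).get("kind") == "application"
                and e["sourceport"] == "right_out"
                and e["targetport"] == "left_in"):
            matches.append({"n102": e["source"], "n101": e["target"]})
    return matches  # to be written to the annotation file, in whatever format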

  • (b) perform (when called) the beta move

When called, this program takes as input the graph and the annotation file for beta move patterns and then waits for the user to pick one of these patterns (i.e. perhaps the patterns are numbered in the annotation file, the user interface shows them on screen and the user clicks on one of them).

When the instance of the pattern is chosen by the user, the program erases the pattern from the input graph (or just comments it out, or makes a copy first of the input graph and then works on the copy, …) and replaces it with the following:

<edge source="n106" target="n104" sourceport="???_1" targetport="???_4"/>
<edge source="n105" target="n103" sourceport="???_3" targetport="???_2"/>

This works only when the nodes n101 and n102 are different from the nodes n103, n104, n105, n106, because otherwise erasing them leads to trouble. See the post Graphic beta move with details.

As an alternative one may proceed by introducing invisible nodes which serve as connection points for the arrows from n106 to n104 and from n105 to n103, then erase those nodes among n101, n102 which are not among n103, n104, n105, n106.
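A sketch of the replacement step in Python, assuming the pattern search has already identified which concrete node ids play the roles of n101 … n106 and which ports play the roles of ???_1 … ???_4, and that the check that n101, n102 differ from n103 … n106 has been done:

def apply_beta(nodes, edges, site):
    """Remove the application node (n101) and the lambda node (n102),
    drop the five pattern edges, and add the two rewiring edges
    n106 -> n104 and n105 -> n103."""
    app, lam = site["n101"], site["n102"]
    kept = [e for e in edges
            if app not in (e["source"], e["target"])
            and lam not in (e["source"], e["target"])]
    kept.append({"source": site["n106"], "target": site["n104"],
                 "sourceport": site["port_1"], "targetport": site["port_4"]})
    kept.append({"source": site["n105"], "target": site["n103"],
                 "sourceport": site["port_3"], "targetport": site["port_2"]})
    new_nodes = {k: v for k, v in nodes.items() if k not in (app, lam)}
    return new_nodes, kept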

___________________________________________

A concrete example:

[Figure: reg_1]

  • The input graph is the one from the first row of the figure:

<node id="n1" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n2" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n3" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>
<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>

[comment: these are the four free ends of arrows which are numbered in the figure by "1", … , "4". So you have a graph with two inputs and two outputs, definitely not a graph of a lambda term! ]

<node id="n105" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n106" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n108" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>

[comment: these are 3 application nodes]

<node id="n107" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<node id="n109" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<node id="n110" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>

[comment: these are 3 lambda abstraction nodes]

<edge source="n1" target="n105" sourceport="out" targetport="right_in"/>
<edge source="n107" target="n105" sourceport="left_out" targetport="left_in"/>
<edge source="n105" target="n106" sourceport="middle_out" targetport="left_in"/>

<edge source="n2" target="n106" sourceport="out" targetport="right_in"/>
<edge source="n106" target="n107" sourceport="middle_out" targetport="middle_in"/>
<edge source="n107" target="n108" sourceport="right_out" targetport="left_in"/>

<edge source="n108" target="n109" sourceport="middle_out" targetport="middle_in"/>
<edge source="n110" target="n108" sourceport="right_out" targetport="right_in"/>
<edge source="n109" target="n110" sourceport="right_out" targetport="middle_in"/>

<edge source="n109" target="n4" sourceport="left_out" targetport="in"/>
<edge source="n110" target="n3" sourceport="left_out" targetport="in"/>

  • This graph reduces via 3 beta reductions to

<node id="n1" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n2" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n3" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>
<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>

<edge source="n1" target="n3" sourceport="out" targetport="in"/>
<edge source="n2" target="n4" sourceport="out" targetport="in"/>

____________________________________________

For this choice, which consists in using invisible nodes, a new program is needed (which may be useful for other purposes, later):

(c) arrow combing

The idea is that a sequence of arrows connected via 2-valent invisible nodes should count as one arrow in a chemlambda graph.

So this program does exactly this: if n1001 is different from n1002 then it replaces the pattern

<node id="n7" parse.indegree="1" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="in"/>
<port name="out"/>
</node>
<edge source="n1001" target="n7" sourceport="???_1" targetport="in"/>
<edge source="n7" target="n1002" sourceport="out" targetport="???_2"/>

by

<edge source="n1001" target="n1002" sourceport="???_1" targetport="???_2"/>
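A sketch of the arrow combing program in Python, over the same in-memory representation (2-valent invisible nodes are assumed to be tagged with the kind "invisible_2valent"):

def comb_arrows(nodes, edges):
    """Replace every chain  n1001 -> n7(invisible, 2-valent) -> n1002,
    with n1001 different from n1002, by a single arrow n1001 -> n1002,
    and repeat until no such invisible node is left."""
    changed = True
    while changed:
        changed = False
        for nid, node in list(nodes.items()):
            if node.get("kind") != "invisible_2valent":
                continue
            ins = [e for e in edges if e["target"] == nid]
            outs = [e for e in edges if e["source"] == nid]
            if len(ins) != 1 or len(outs) != 1 or ins[0] is outs[0]:
                continue
            e_in, e_out = ins[0], outs[0]
            if e_in["source"] == e_out["target"]:
                continue  # the n1001 == n1002 case from the text: leave it alone
            edges.remove(e_in)
            edges.remove(e_out)
            edges.append({"source": e_in["source"], "target": e_out["target"],
                          "sourceport": e_in["sourceport"],
                          "targetport": e_out["targetport"]})
            del nodes[nid]
            changed = True
    return nodes, edges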

 

_________________________________________

 

The Quantomatic GUI may be useful for chemlambda (NTC vs TC, III)

Since I discovered Quantomatic (mentioned in NTC vs TC I), I continued to look, and it seems that it already has, or almost, the GUI which is step 0 in the Distributed GLC project.

This is very good news for me, because I tend to consider that possibly complex tasks are simple, therefore, as any lazy mathematician will tell you, it is always good if some piece of work has been done before.

In the post A user interface for GLC I describe what we would need and this corresponds, apparently, to a part of what the Quantomatic GUI can do.

See Aleks Kissinger Pictures of Processes: Automated Graph Rewriting for Monoidal Categories and Applications to Quantum Computing, DPhil thesis, [arXiv:1203.0202] , Chapter 9.

Without much ado, I shall comment on the differences, with the hope that the similarities are clear.

Differences.  I shall use as an anchor for explaining the differences the homepage of Aleks Kissinger, because it is well written and straightforward to read. Of course, this will not mean much if you don’t know what I am talking about concerning NTC vs TC, chemlambda or distributed GLC.

1. Aleks explains

“But “box-and-wire” diagrams aren’t just for physics and multilinear algebra. In fact, they make sense whenever there is a notion of a “map”, a procedure for composing maps (i.e. vertical composition), and a procedure for putting two maps “side-by-side” (horizontal composition). That is, this notation makes sense in any monoidal category.”

Here is a first difference from chemlambda, because the chemlambda graphs are open graphs (in the sense that they also have arrows with one or both ends free, as well as loops), but otherwise a chemlambda graph does not have any preferred external order of examination. The arrows of the graph are not “wires” and the nodes are not “boxes”. There is no meaning to vertical or horizontal composition, because there is no vertical and no horizontal.

2. Another quote from Aleks page:

“In Open Graphs and Monoidal Theories, Lucas Dixon and I defined the category of open-graphs and described how double-pushout graph rewriting, as defined in e.g. Ehrig et al5, can be applied to open-graphs.”

This marks the second basic difference, which consists in the effort we make NOT to go global. It is a very delicate boundary between staying local and taking a God’s pov, and algebra very quickly passes that boundary. Not that it breaks a law, but it adds extra baggage to the formalism, only for the need to explain things in the algebraic way.

Mind that it is very interesting to algebraize from God’s pov, but it might also be interesting to see how far one can go without taking the global pov.

 

3.  Maybe as an effect of 1., they think in terms of processes, we think in terms of Actors. This is not the same, but OMG how hard it is to effect, only via exchanges of words, the brain rewiring which is needed to understand this!

But this is not directly related to the GUI, which is only step 0 for the distributed GLC.

_______________________________________

Putting these differences aside, it is still clear that:

  • chemlambda graphs are what they call “open graphs”
  • there are some basic tools in the Quantomatic gui which we need
  • mixing our minimal, purely local constraint on reasoning with their much more developed global point of view can be only fruitful, even if the goals and applications are different.

_______________________________________

What do you think about this?

_______________________________________

 

 

The example with the marbles

In a discussion about the possible advantages for secure computing with the GLC actors model, I came up with this analogy, which I want to file here, so that it does not get lost in the flow of exchanges:

Mind that this is only a thought experiment, which might not be accurate in all aspects in its representation of the kind of computation with GLC or, more accurately, with chemlambda.

Imagine a large pipe, with a diameter of 1 m say, and 3 m long, to have an image. It is full of marbles, all identical in shape. It is so full that if one forces a marble in at one end then a marble (or sometimes more) has to get out at the other end.

Say Alice is at one end of the pipe and Bob is at the other end. They agreed previously to communicate in the most primitive manner, namely by spilling a small (say ten) or a big (for example 50) number of marbles at their respective ends. The pipe contains maybe 10^5 or 10^6 marbles, so both these numbers are small.

There is also Claire who, for some reason, can’t see the ends of Alice and Bob, but the pipe has a window at the middle and Claire can see about 10% of the marbles from the pipe, those which are behind the window.

Let’s see how the marbles interact. Having the same shape, and because the pipe is full of them, they are in a local configuration which minimizes the volume (maybe not all of them, but here the analogy is mum about this). When a marble (or maybe several) is forced in at Alice’s end of the pipe, there are lots of movements which accommodate the new marbles with the old ones. The physics of marbles is known: it is the elastic contact between them, and there is a fact in the platonic sky which says that for any local portion of the pipe the momentum and energy are conserved, as well as the volume of the marbles. The global conservation of these quantities is an effect of the local ones (as anybody versed in the mechanics of continuous media can confirm).

Now, Claire can’t get anything from looking through the window. At best Claire notices complex small movements, but there is no clear way to see how this happens (other than, if she looks at a small number of them, she might figure out the local mechanical ballet imposed by the conservation laws), nor are Alice’s marbles marching towards Bob’s end.

Claire can easily destroy the communication, for example by opening her window and taking out some buckets of marbles, or even by breaking the pipe. But this does not get Claire closer to understanding what Alice and Bob are talking about.

Claire could of course claim that if the whole pipe were transparent, she could film the pipe and then reconstruct the communication. But in this case Claire would be the goddess of the pipe and nothing would be hidden from her. Alice and Bob would be her slaves, because Claire would be in a position which is equivalent to having a window at each end of the pipe.

__________________________

Comments:

  • each marble is a GLC actor
  • they interact locally, by known and simple rules
  • this is an example of signal transduction
  • which encrypts itself; more communication makes the decoding harder. It is the same problem which is encountered when observing a living system, for example a cell. You may freeze it (and therefore kill it) and look at it, but you won’t see how it functions. You can observe it alive, but it is complex by itself; you never see the meaning, or only rare glimpses of it.
  • the space (of the pipe) represents  an effect of the local, decentralized, asynchronous interactions.

Underneath there is just local interaction, via the moves which act on patterns of graphs which are split between actors. But this locality gives space, which is an emergent, global effect of these distinctions which communicate.

Two chemical molecules which react are one composite molecule which reduces itself, split between two actors (one per molecule). Saying that the molecules react when they are close is the same as saying that their associated actors interact when they are in the neighboring relation. And the reaction modifies not only the respective molecules, but also the neighboring relation between actors, i.e. the reaction makes the molecules move through space. The space is transformed, as well as the shape of the reactants, which looks, from an emergent perspective, as if the reactants move through some passive space.

Concretely, each actor has a piece of the big graph, two actors are neighbours if there is an arrow of the big graph which connects their respective pieces, the reduction moves can be applied only on patterns which are split between two actors and, as an effect, the reduction moves modify both the pieces and the arrows which connect the pieces, thus the neighbouring of actors.

What we do in the distributed GLC project is to use actors to transform the Net into a space. It works exactly because space is an effect of locality, on one side, and of universal simple interactions (moves on graphs) on the other side.

__________________________________________

The tone goes up on the OPEN front

This post has a collection of savory quotes and further comments about the psychological changes which are ongoing, around new ways of dissemination and communication of scientific research.

Aka OPEN …

  • access
  • peer review
  • data
  • notebooks

We are closing in on a change, a psychological change, from indifference and disdain from the majority of (more or less established) researchers to a public acknowledgement of the stupidity and immorality of the procedure which is still in force.

[Rant, jump over if not interested into personal stuff.

Please take into consideration that even if I embrace these changes with full heart, I don’t have any merit or real contribution to them, excepting modest posts here at chorasimilarity, under the tags cost of knowledge and open peer review. More than this, I suffered, like probably some of my colleagues, by choosing to publish mostly through arXiv and not playing the stupid game, which led to a very damaged career, but unfortunately I did not have the opportunity to create change through participation in teams which now are shaping the future of OPEN whatever. Bravo for them, my best wishes for them, why not sometimes an honest criticism from my small point of view, and thanks for the feeling of revenge which I have, the “I was right” feeling which I hope will grow and grow, because really the research world is damaged to the bones by this incredible stupidity, maybe cupidity, and surely lack of competence and care for the future manifested by a majority of leaders.

The second thing I want to mention is that even if I refer to “them”, to a “majority”, all these generalizations have to be nuanced by saying that, as always, as everywhere, the special ones, the creative ones, the salt and pepper of the research world, are either excused or completely innocent. They are also everywhere, maybe many of them not in any strong influence position (as in music, for example, where the most well known musicians are almost never the best, but surely they are among the most hard working), but creating their stuff and possibly not really caring about these social aspects, because they are too deep into the platonic realm. None of them are the subject or part of any “majority”; they are not “them” in any way.

The third point is that there may be a sloppy use of “young” and “old”. This has nothing to do with physical age. It is true that every old moron was a young moron before. Every old opportunist was a young one some years earlier. Their numbers are continually replenished and we find them everywhere, much more present than the salt and pepper of the research community, and more in the good, hard working, but not really, seriously creative part. No, young or old refers to the quality of the brain, not to physical age.

End of rant]

Back to the subject. From timid or rather lonely comments, now we passed to more strong ones.

And the words are harder.

From Causes of the persistence of impact factor mania, by Arturo Casadevall and Ferric C. Fang,

“Science and scientists are currently afflicted by an epidemic of mania manifested by associating the value of research with the journal where the work is published rather than the content of the work itself. The mania is causing profound distortions in the way science is done that are deleterious to the overall scientific enterprise. In this essay, we consider the forces responsible for the persistence of the mania and conclude that it is maintained because it disproportionately benefits elements of the scientific enterprise, including certain well-established scientists, journals, and administrative interests.”

I fully agree with them; besides this, I consider very interesting their explanation that we face a manifestation of the tragedy of the commons.

From Academic self-publishing: a not-so-distant-future, here is a big quote; it is too beautiful to crop:

A glimpse into the future
Erin is driving back home from the laboratory with a big smile on her face. After an exciting three-hour brainstorming session discussing the intracranial EEG data from her last experiment, she can’t wait to get her hands back on the manuscript. A new and unexpected interpretation of the findings seems to challenge a popular assumption about the role of sleep in declarative memory consolidation. She had been looking over the figures for more than a month without seeing a clear pattern. But now, thanks to a moment of insight by one of her colleagues, the pieces finally fit together and a new logic is emerging. She realizes it will be hard for the community to accept these new findings, but the methodology is solid and she is now convinced that this is the only reasonable explanation. She is so anxious to see what Axell’s group thinks about new evidence that refutes its theoretical model.

After a week’s hard work, the first draft is ready. All the figures and their long descriptive legends are in place, the literature review is exhaustive, the methodology is clear as a bell, and the conclusions situate the finding in the general context of the role of sleep in memory consolidation. Today, the group had a brief morning meeting to decide which colleagues they will ask to review their draft. Of course, they will ask Axell for his opinion and constructive criticism, but they also agree to invite Barber to confirm that the application of independent component analysis on the data was performed correctly, and Stogiannidis to comment on the modification of the memory consolidation scale. For a review of the general intracranial EEG methodology, the group decides to first approach Favril herself and, if she declines, they will ask Zhang, who recently reviewed the subject for Nature.

After the lunch break, Erin submits the manuscript to the university’s preprint repository that provides a DOI (digital object identifier) and an open attribution licence. When she hits the submit button, she feels a chill running down her spine. More than a year’s hard work is finally freely available to her peers and the public. The next important step is to invite the reviewers. She logs in to her LIBRE profile and inserts the metadata of the manuscript with a hyperlink to the repository version (see LIBRE, 2013). She then clicks the invite reviewer button and writes a quick personal message to Axell, briefly summarizing the main result of the study and why she thinks his opinion is vital for the debate this manuscript will spark. She then invites Stogiannidis to comment on the modification of the memory consolidation scale, and Barber, specifically asking him to check the application of independent component analysis, and also letting him know that all data are freely and openly available at Figshare. After finishing with the formal invitations, Erin tweets the LIBRE link to her followers and sends it as a personal message to specific colleagues from whom she would like to receive general comments. She can now relax. The word is out!

A couple of weeks later, Erin is back at work on the project. Both Favril and Zhang refused to review because of heavy work schedules, but Stogiannidis wrote an excellent report totally approving the modification of her scale. She even suggested a future collaboration to test the new version on a wider sample. Barber also submitted a brief review saying that he doesn’t find any caveats in the analysis and approves the methodology. As Erin expected, Axell didn’t take the new result lightly. He submitted a harsh critique, questioning both the methodology and the interpretation of the main findings. He even mentioned that there is a new paper by his group currently under journal review, reporting on a similar experiment with opposite results. Being pipped to the post and being second to report on this innovative experimental design, he must be really peeved, thinks Erin. She grins. Maybe he will learn the lesson and consider self-publishing next time. Anyway, Erin doesn’t worry too much as there are already two independent colleagues who have marked Axell’s review as biased on LIBRE. Last night, Xiu, Erin’s colleague, finished retouching one of the figures based on a very insightful comment by one of LIBRE’s readers, and today she will upload a new version of the manuscript, inviting some more reviewers.

Two months later, Erin’s paper is now in version number 4.0 and everyone in the group believes it is ready for submission to a journal and further dissemination. The issues raised by seven reviewers have now been adequately addressed, and Axell’s review has received six biased marks and two negative comments. In addition, the paper has attracted a lot of attention in the social media and has been downloaded dozens of times from the institutional repository and has been viewed just over 300 times in LIBRE. The International Journal for the Study of the Role of Sleep in Memory Consolidation has already been in touch with Erin and invited her to submit the paper to them, but everybody in the group thinks the work is of interest to an even wider audience and that it should be submitted to the International Journal for the Study of Memory Consolidation. It charges a little more – 200 euros – but it is slightly more esteemed in the field and well worth the extra outlay. The group is even considering sending the manuscript in parallel to other journals that embrace a broader neuroscience community, now that the group’s copyright and intellectual property rights have been protected. Anyway, what is important (and will count more in the grant proposal Erin plans to submit next year) is that the work has now been openly approved by seven experts in the field. She is also positive that this paper will attract ongoing reviews and that she may even be invited as an expert reviewer herself now that she is more visible in the field. A debate has started in her department about how much the reviewer’s track record should weigh in how future tenure decisions are evaluated, and she has been invited to give a talk on her experience with LIBRE and the versioning of the group’s manuscript, which has now become a dynamic paper (Perakakis et al., 2011).”

I love this, in all details! I consider it among the most well written apologies of, particularly, open peer review. [See, if you care, also my post Open peer review as a service.]

From Your university is paying too much for journals, by Bjorn Brembs:

“Why are we paying to block public access to research, when we could save billions by allowing access?”

Oh, I’m sure that those in charge with these decisions have their reasons.

From the excellent We have met the enemy: part I, pusillanimous editors, by Mark C. Wilson

“My conclusions, in the absence of further information: senior researchers by and large are too comfortable, too timid, too set in their ways, or too deluded to do what is needed for the good of the research enterprise as a whole. I realize that this may be considered offensive, but what else are the rest of us supposed to think, given everything written above? I have not even touched on the issue of hiring and promotions committees perpetuating myths about impact factors of journals, etc, which is another way in which senior researchers are letting the rest of us down”…

Read also the older, but great We have met the enemy and it is us by Mark Johnston.  I commented about it here.

What is your opinion about all this? It’s getting hotter.

_________________________________________

My first NSF experience and the future of GLC

Just learned that the project “Secure Distributed Computing with Graphic Lambda Calculus” will not be funded by NSF.

I read the reviews and my conclusion is that they are well done. The 6 reviewers all make good points and do a good job of detecting the strong points and weaknesses of the project.

Thank you NSF for this fair process. As the readers of this blog know, I don’t have the habit of hiding my opinions about bad reviews, which sometimes may be harsh. Seen from this point of view, my thanks look, I hope, even more sincere.

So, what was the project about? Distributed computing, as in “GLC actors, artificial chemical connectomes, topological issues and knots”, arXiv:1312.4333 [cs.DC], which was branded as useful for secure computing. The project was submitted in January to the Secure and Trustworthy Cyberspace (SaTC) NSF program.

The point was to get funding which would allow the study of Distributed GLC, which is for the moment fundamental research. There are reasons to believe that distributed GLC may be good for secure computing, principal among them being that GLC (and chemlambda, actually the main focus of research) is not based on the IT paradigm of gates and wires, but instead on something which can be described as signal transduction, see How is different signal transduction from information theory? There is another reason, now described by the words “no semantics“.

But basically, this is not naturally a project in secure computing. It may become one, later, but for the moment the project consists in understanding asynchronous, decentralized computations performed by GLC actors and their biological-like behaviour. See What is new in distributed GLC?

Together with Louis Kauffman, we are about to study this, he will present at the ALIFE 14 conference our paper  Chemlambda, universality and self-multiplication,   arXiv:1403.8046.

There is much more to tell about this, parts were told already here at chorasimilarity.

From this moment I believe that instead of thinking security and secrecy, the project should be open to anybody who wishes to contribute, to use or to criticize. That’s the future anyway.

______________________________________________________

 

 

How non-commutative geometry does not work well when applied to non-commutative analysis

I have expressed several times the belief that sub-riemannian geometry represents an example of a mathematically new phenomenon, which I call “non-commutative analysis”. It has repeatedly happened that apparently general results simply don’t work well when applied to sub-riemannian geometry. This “strange” (not for me) phenomenon leads to negative statements, like rigidity results (Mostow, Margulis), non-rectifiability results (like, for example, the failure of the theory of metric currents for Carnot groups). And now, to this adds the following, arXiv:1404.5494 [math.OA]

“the unexpected result that the theory of spectral triples does not apply to the Carnot manifolds in the way one would expect. [p. 11] ”

i.e.

“We will prove in this thesis that any horizontal Dirac operator on an arbitrary Carnot manifold cannot be hypoelliptic. This is a big difference to the classical case, where any Dirac operator is elliptic. [p. 12]”

It appears that the author reduces the problems to the Heisenberg groups. There is a solution, then: to use

R. Beals, P.C. Greiner, Calculus on Heisenberg manifolds, Princeton University Press, 1988

which gives something resembling spectral triples, but not quite everything works, still:

“and show how hypoelliptic Heisenberg pseudodifferential operators furnishing a spectral triple and detecting in addition the Hausdorff dimension of the Heisenberg manifold can be constructed. We will suggest a few concrete operators, but it remains unclear whether one can detect or at least estimate the Carnot-Caratheodory metric from them. [p. 12]”

______________________________

This seems to be an excellent article, more than that, because it is a PhD dissertation, so many things are written clearly.

I am not surprised at all by this; it just means that, as in the case of the metric currents, there is an ingredient in the spectral triples theory which introduces some commutativity by the backdoor, which then messes with the non-commutative analysis (or calculus).

Instead I am even more convinced than ever that the minimal (!) description of sub-riemannian manifolds, as models of a non-commutative analysis, is given by dilation structures, explained most recently in arXiv:1206.3093 [math.MG].

A corollary of this is: sub-riemannian geometry (i.e. non-commutative analysis of dilation structures)  is more non-commutative than non-commutative geometry .

I’m waiting for a negative result concerning the application of quantum groups to sub-riemannian geometry.

__________________________________________