This new version of chemlambda introduces two new DIST moves, DIST-FI and DIST-FO, as well as modified versions of FAN-IN and of the old DIST moves, which in some places use the FOE node instead of FO.
See it in action in the example on the correct self-multiplication of the S combinator, where the FOE node is yellow.
I am preparing a click and play tutorial on that.
You can go already to the gallery of examples. Look at them and play with the nice graphs!
But you can already play with the stuff which makes the graphs!
I shall explain in a moment how to do this. Before that, here is a very short description of what this is all about.
Chemlambda is an artificial chemistry like the Alchemy of Fontana and Buss, but with rather big differences. What they (Fontana and Buss) say is, basically, this: http://fontana.med.harvard.edu/www/Documents/WF/Papers/objects.pdf
The dream is the same, though: that if not all chemistry, then maybe some parts of organic chemistry are used in real life like that, and not like in the bits-and-boolean-expressions-run-by-a-TM-automaton model.
There is no need to see these graphs (molecules) in the plane or in 3D space (well, in 3D they embed anyway, and maybe there are real chemicals which behave like this!) because graphs don’t need to be embedded somewhere to make sense. In particular, these graphs are not constrained to be planar.
_______________________________________
How to play with the chemlambda visualizer right away. Follow these steps:
bash main_viral.sh
and look at what happens. You’ll be asked to choose a something.mol file. There are several in the tar.
You can play without being connected to the net.
The input files are called something.mol . I put some examples in the archive. You can write new ones like this; for that you have to read the post about the g-patterns notation.
In a something.mol file there is a list in plain text (with space as separator character within a line and \n as separator between lines). It is the list of the graphical elements of the g-pattern, but with the [ , ] deleted.
Thus instead of writing A[1,2,3] FO[3,4,5] you write
A 1 2 3
FO 3 4 5
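If you want to script around .mol files yourself, parsing them is easy. Here is a minimal sketch in Python (my own illustration; the actual scripts in the tar use awk and shell):

```python
# Read a .mol file: one graphical element per line, space-separated,
# the first field is the node type, the rest are port names.

def read_mol(path):
    elements = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields:                       # skip blank lines
                elements.append((fields[0], fields[1:]))
    return elements

# write the example from above, then read it back
with open('example.mol', 'w') as f:
    f.write('A 1 2 3\nFO 3 4 5\n')
print(read_mol('example.mol'))   # [('A', ['1', '2', '3']), ('FO', ['3', '4', '5'])]
```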
Say you write a new .mol file, which is called blabla.mol. Save it and then type
bash main_viral.sh
A text appears which asks you to choose a mol file. You type blabla.mol and hit enter. Then you look with look.html, as explained.
For more explanation of what it does, just open the file check_and_balance* in a text editor and read it. There are explanations inside.
If you look in the folder you will see several new files with names starting with temp_. Open them to discover answers to some questions you may have.
For the moment, there is really no replacement for reading the chemlambda formalism. No chit-chat will suffice, I tell you from experience.
That is why, although I look for creative and open people to discuss it, I shall not engage in any meaningless stuff.
If you are creative yourself then you’ll understand and you shall not think the following applies to you.
OK, here goes the other part of the post.
I shall ignore naive questions (because I saw that most of the naive questions come from people who don’t like to really understand, so probably they won’t read my answers). Moreover, I shall not respond to any question which contains the word “bot”, because WTF is a bot anyway and what that has to do with more than a hundred posts about chemlambda and several articles? Nothing at all! You want “bots” then don’t waste my time.
However…
- before asking the next most stupid question, let me answer: no man, the graphs are not processes. No! No chance! No, it’s not related to categories. No, it’s not ZX. No, it has nothing to do with spiders, quantum diagrams and all this stuff, and you know why? Because these graphs don’t represent processes.
- if you don’t know what a graph is or if you deeply feel that a graph has to be embedded in some external physical space, then refresh your reading of the definition of a graph. As well you may go and read this post.
- if you don’t know what a graph rewrite is then google it.
- if you don’t know what a “local move” or “local graph rewrite” is, then surely you have not read anything about chemlambda, but here is the answer: an N-local move is one which consists in replacing at most N nodes and edges. All moves of chemlambda are at most 10-local.
- if you don’t know what reduction strategy is used, then congrats, that’s the first intelligent question. For the moment, the strategy of reduction is the most stupid one (I call it like this, but it is brilliant compared to others), described here
Reduction strategy. For the moment I am using a sequential strategy of reduction, with priority choices, like this:
At each reduction step do the following:
The priority choice is called “viral”, meaning that DIST > BETA > LOC-PRUNING.
For the moment the moves CO-COMM and CO-ASSOC are not used.
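For the curious, here is how I would sketch the “viral” priority choice in Python, as an illustration of the idea only, not the actual awk implementation (where the conflict detection may differ). Two proposed moves are in conflict when they claim the same graphical element; the higher ranked move wins.

```python
# The "viral" priority: DIST > BETA > LOC-PR (lower rank = higher priority).
PRIORITY = {'DIST': 0, 'BETA': 1, 'LOC-PR': 2}

def resolve(proposed):
    """proposed: list of (move_type, set_of_graphical_elements_consumed).
    Greedily keep the highest priority moves whose elements are unclaimed."""
    chosen, claimed = [], set()
    for mtype, elems in sorted(proposed, key=lambda m: PRIORITY[m[0]]):
        if claimed.isdisjoint(elems):
            chosen.append((mtype, elems))
            claimed |= elems
    return chosen

# a BETA and a DIST move fighting over the element A[3,4,5]
moves = [('BETA', {'L[1,2,3]', 'A[3,4,5]'}),
         ('DIST', {'A[3,4,5]', 'FO[5,6,7]'})]
print(resolve(moves))            # only the DIST move survives
```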
This is not yet distributed GLC, there is a ladder of other strategies to explore. I personally think that they are not a big deal, but I see from experience that this is not obvious. Moreover it is very entertaining to see these strategies in action, all this gives me the occasion to learn new tricks, so in the future I shall add new strategies.
_________________________________
Tell me what you think, and most recommended is to play with it first.
____________________________________
If you want to make your own then go to the explanations page and download and follow the instructions.
There is a gallery of examples now!
UPDATE 2: …phew, the fact that the shell script which launches the gui is called “main_viral.sh” is related to the reduction strategy used; it has definitely nothing to do with the Shellshock vulnerability.
___________________________________
You can download the awk file
and play with chemlambda with the priority choice “viral”. [see the UPDATE!]
This priority choice privileges the moves which increase the number of nodes over those which decrease it.
More concretely DIST>BETA>LOC-PR. It is one of the priority choices from the post When priority matters.
How to use it:
Look at data_8.mol , which is the file for the initial pattern from the post When priority matters. Here is also data_7.mol, which is the file containing the initial pattern from the post When priority does not matter.
Type

awk -f check_and_balance.awk data_7.mol
to play with data_7.mol. Then type ls to find a number of files, each one starting with “temp_”.
The file temp_nodes_before is basically the same as the input file.
The file temp_proposed_moves has the proposed moves :) , before any priority choice and before any COMB moves.
The file temp_final_nodes has the result after one reduction step.
You may notice the appearance of new nodes, like
FRIN 17
which is an “invisible” node which has only one port (in this case named “17”), which is an “out” port. It signals that port 17 is free (it appears as a free “in” port, that is why FRIN, which caps it, has to have a paired “out” port).
FROUT 0
which is an invisible node with only one port (named “0” in this case), which is an “in” port. For similar reasons as before, it signals that 0 is a free “out” port.
This may slightly change the aspect of g-patterns, in the sense that arrow elements with both ends free are replaced by FRIN and FROUT pairs. For example, if
Arrow[ 17 , 0 ]
has both ports free, you shall see it in the temp_final_nodes as
FRIN 17
FROUT 0
Otherwise the FRIN-FROUT thing helps the understanding, in the sense that it makes the free ports visible.
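Here is how the capping can be computed, sketched in Python. Mind that the port direction table DIRS below is my assumption about the node signatures, not something taken from the script; read check_and_balance* for the real thing.

```python
# Port directions per node type (my assumption, for illustration).
DIRS = {'L': ('in', 'out', 'out'), 'A': ('in', 'in', 'out'),
        'FO': ('in', 'out', 'out'), 'FI': ('in', 'in', 'out'),
        'T': ('in',), 'Arrow': ('in', 'out')}

def cap_free_ports(gpattern):
    """Cap every port name used only once: FRIN on a free "in" port,
    FROUT on a free "out" port. An Arrow with both ends free is replaced
    by its FRIN/FROUT pair."""
    count = {}
    for op, ports in gpattern:
        for p in ports:
            count[p] = count.get(p, 0) + 1
    kept, caps = [], []
    for op, ports in gpattern:
        free = [(p, d) for p, d in zip(ports, DIRS[op]) if count[p] == 1]
        for p, d in free:
            caps.append(('FRIN', (p,)) if d == 'in' else ('FROUT', (p,)))
        if not (op == 'Arrow' and len(free) == 2):
            kept.append((op, ports))
    return kept + caps

print(cap_free_ports([('Arrow', ('17', '0'))]))
# [('FRIN', ('17',)), ('FROUT', ('0',))]
```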
Then type

awk -f check_and_balance.awk temp_final_nodes
and look again at
temp_nodes_before to see where you start in this reduction step
temp_proposed_moves to see the new moves proposed before any priority choice
temp_final_nodes to see the result.
And so on and so forth.
If you use data_7.mol or data_8.mol (or any g-pattern from this blog which is reduced by the “viral” priority choice) then you should see exactly what is described in the respective posts.
There is a small trick, namely that when DIST moves are done, the script has a way to choose new names for the new edges which appear. The trick is that it first computes the max over existing port names (that is the variable “tutext”) and then it baptizes the new ports with tutext concatenated with “a”, with “b”, with “c” and with “d”. This way one can be sure that the names of the new ports don’t conflict with those of the old ports.
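In Python the same trick looks like this (the name “tutext” and the suffixes “a” to “d” are from the awk script; the rest is my own sketch):

```python
def fresh_ports(gpattern, n=4):
    """Return n new port names which cannot clash with existing ones:
    the (string) max over existing names, suffixed with "a", "b", "c", "d"."""
    existing = {p for _, ports in gpattern for p in ports}
    tutext = max(existing)            # string comparison, as in awk
    return [tutext + s for s in 'abcd'[:n]]

g = [('A', ('1', '2', '3')), ('FO', ('3', '4', '5'))]
print(fresh_ports(g))                 # ['5a', '5b', '5c', '5d']
```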
I don’t have a visualizer for this yet, but I am working (mostly to understand how) to use d3 for it.
UPDATE (20.09.2014): I can see my first molecule during reduction, basically using this and the json file produced by the script.
It represents (Lx.y) Omega, where Omega = (Lx.xx) (Lx.xx).
I can move and play with it but I have to control the colors, the ports, oriented edges. Soon.
Enjoy! Criticize! Contribute!
_____________________________________________________________
Look again at the move R2 of graphic lambda calculus.
The epsilon and mu are, originally, elements of a commutative group. Suggestions have been made repeatedly that the commutative group can be anything.
Here, the epsilon and mu are just port names, like the red 1, 2, 3 from the figure.
In the more recent drawing conventions (not that that matters for the formalism) the port names are in blue.
Here is again the same move, but without the epsilon and mu.
Of course, the green node is the fanout, in the chemlambda version.
Yes, eventually, everything is related to everything in this open notebook.
In the next posts I shall take it step by step.
________________________________________________________________
Hey, everybody has a limited understanding, here is mine!
TL;DR> The crux of the matter is in this part of any recent CC 4.0 licence: in Section 2/Scope/a. Licence grant/5.
The new trend in academic publishing is:
It matters very much because that is what happens in the dispute between Amazon and Hachette: Hachette holds the copyright of the books, but Amazon puts downstream restrictions on them!
Conclusion: never forget about Doctorow’s first law and always ask for a CC licence from any publisher!
Doctorow’s first law:
“Any time someone puts a lock on something that belongs to you, and won’t give you the key, you can be sure that the lock isn’t there for your benefit.”
This is from the very clear explanation about the Amazon and Hachette dispute by Cory Doctorow in Locus.
______________________________________________________________
Evidence now.
I made this post on G+, asking for info. I collect here the stuff:
Other things:
UPDATE 16.09.2014: See the post AAAS vies for the title the “Darth Vadar of publishing” by longpd. “They claim to support open access. They redefine it to be a pay for publishing charge (APC) of $3,000 USD and that restricts the subsequent use of the information in the article preventing commercial reuses such as publication on some educational blogs, incorporation into educational material, as well the use of this information by small to medium enterprises. If you really meant open access, the way the rest of world defines it, you’ll have to pay a surcharge of an additional $1,000. But it gets worse.”
_____________________________________________________________
Principle. I am using the g-patterns formalism to separate the reduction of molecules from their visual representation. For those who want to know more, here is the definition of g-patterns and here is the definition of moves in terms of g-patterns.
Reduction strategy. For the moment I am using a sequential strategy of reduction, with priority choices, like this:
At each reduction step do the following:
There is a subtlety concerning the use of CO-COMM and CO-ASSOC moves, related to the fact that I don’t want to use them directly, and related to the goal which I have, which may be one of those:
Where I am now. I wrote some shell scripts, using awk and perl, to do the first strategy and I know what to do to have the second strategy as well.
Mind that I learn as I do, so probably the main shell script should be named frankenstein.sh.
What I need next. The format of g-patterns can be easily turned into a format (I lean towards json) which can then be visualized as a force directed d3 graph. I need help here. I know there are lots of things already done; the main idea is that it should be something which doesn’t use java, has to be free, and ideally needs only the program I prepare (which will be freely available) and a browser.
What I need after. Several things:
That’s it for the moment, I APPRECIATE USEFUL HELP, thank you.
______________________________________________________________
In that post we see two possible reductions, depending on the PRIORITY CHOICE, either BETA>DIST or DIST>BETA.
In the case BETA>DIST the reduction stops quickly.
On the contrary, in the case DIST>BETA the reduction does not stop, because it enters a cyclic process which produces an unbounded number of bubbles (i.e. loop graphical elements).
Moreover, we start from the g-pattern form of the combinator (Lx.xx)(Lx.xx).
Now, this may lead to the false impression that somehow this has something to do with the choice between normal order reduction and applicative order reduction from lambda calculus.
Yes, because a standard example of the difference between these reduction strategies is the following one.
Let’s denote by Omega the combinator (Lx.xx)(Lx.xx). Consider then the term
(Lx.z) Omega
Under the normal order reduction this term suffers one beta reduction
(Lx.z) Omega –BETA–> z
and that’s all, the reduction stops.
On the contrary, under the applicative order reduction strategy, the reduction never stops, because we first try to reduce Omega, leading to a cycle
(Lx.z) Omega –BETA–> (Lx.z) Omega
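To make the two lambda calculus strategies concrete, here is a small Python sketch (nothing chemlambda-specific; capture-avoiding substitution and reduction under lambdas are omitted because the closed terms used here don’t need them):

```python
# Terms: ('var', x), ('lam', x, body), ('app', f, a).

def subst(term, x, val):
    """Replace the variable x by val (no capture handling; enough here)."""
    tag = term[0]
    if tag == 'var':
        return val if term[1] == x else term
    if tag == 'lam':
        return term if term[1] == x else ('lam', term[1], subst(term[2], x, val))
    return ('app', subst(term[1], x, val), subst(term[2], x, val))

def step_normal(term):
    """One leftmost-outermost (normal order) beta step, or None."""
    if term[0] == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':                       # outermost redex first
            return subst(f[2], f[1], a)
        r = step_normal(f)
        if r is not None:
            return ('app', r, a)
        r = step_normal(a)
        return None if r is None else ('app', f, r)
    return None

def step_applicative(term):
    """One applicative order beta step: reduce the argument first."""
    if term[0] == 'app':
        f, a = term[1], term[2]
        r = step_applicative(f)
        if r is not None:
            return ('app', r, a)
        r = step_applicative(a)
        if r is not None:
            return ('app', f, r)
        if f[0] == 'lam':
            return subst(f[2], f[1], a)
    return None

w = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))      # Lx.xx
term = ('app', ('lam', 'x', ('var', 'z')), ('app', w, w))  # (Lx.z) Omega

print(step_normal(term))                 # ('var', 'z') : done in one step
print(step_applicative(term) == term)    # True : Omega reduces to itself, forever
```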
The question is: is there any connection between these two phenomena?
No, not the slightest.
In order to prove this, I shall reduce in chemlambda, with the sequential strategy, the g-pattern which represents the term (Lx.z) Omega. Let’s see what happens, but first let me recall what we do.
See the 1st part and 2nd part of the description of the conversion of lambda terms into g-patterns.
The sequential strategy is described by the following algorithm. I write it again because the g-pattern of Lx.z brings a termination node “T”, therefore we also have to consider the local pruning moves LOC-PR.
See the post about definition of moves with g-patterns.
The algorithm of the sequential reduction strategy is this. At each reduction step do the following:
The PRIORITY CHOICE means a predefined choice between doing one of the two moves in conflict. The conflict may be between BETA and DIST, between BETA and LOC-PR or between DIST and LOC-PR.
In the following we shall talk about the PRIORITY CHOICE only if needed.
In the first picture we see, in the upper side, the g-pattern which represents the term (Lx.z) Omega, then the first reduction step.
I kept the same names for the ports from the last post and I added new names for the ports of the new graphical elements.
First, remark that the g-pattern which represents (Lx.z) is
L[z,n2,n1] T[n2]
I named by “z” one of the ports of the lambda node L, the one which would correspond to the variable z of the term Lx.z. But recall that chemlambda does not use variable names, so the name “z” is there only by my choice of names for the port variables; it could be any name which was not used before among the g-pattern’s ports.
Then, A[n1,1,0] corresponds to the application of something linked to the port n1 (namely the g-pattern of (Lx.z), i.e. L[z,n2,n1] T[n2]) to something linked to the port 1 (i.e. the g-pattern of Omega, which was discussed in the post “When priority matters”).
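In g-pattern terms the beta move is the rewrite L[a,b,c] A[c,d,e] –BETA–> Arrow[a,e] Arrow[d,b], which matches when the out port of an L node is the first port of an A node. Here is a sketch of it in Python (my own illustration), applied to the beginning of the g-pattern above:

```python
def beta_step(gpattern):
    """Apply one BETA move, L[a,b,c] A[c,d,e] -> Arrow[a,e] Arrow[d,b],
    if a redex is present; otherwise return the g-pattern unchanged."""
    for i, (op1, p1) in enumerate(gpattern):
        for j, (op2, p2) in enumerate(gpattern):
            if op1 == 'L' and op2 == 'A' and p1[2] == p2[0]:
                rest = [e for k, e in enumerate(gpattern) if k not in (i, j)]
                a, b, _ = p1
                _, d, e = p2
                return rest + [('Arrow', (a, e)), ('Arrow', (d, b))]
    return gpattern

# the g-pattern of (Lx.z) Omega starts with L[z,n2,n1] T[n2] A[n1,1,0]
g = [('L', ('z', 'n2', 'n1')), ('T', ('n2',)), ('A', ('n1', '1', '0'))]
print(beta_step(g))
# [('T', ('n2',)), ('Arrow', ('z', '0')), ('Arrow', ('1', 'n2'))]
```

Notice the Arrow[z,0] element in the output, consistent with the arrow which survives the reduction step described in this post.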
Nice! What happened?
So, in a sense, this looks like the result of the normal order reduction, but no priority choice was involved!
However, the chemlambda sequential reduction continues, like explained in the picture of the 2nd reduction step.
OK, the Arrow[z,0] still exists after the reduction step, and a LOC-PR move appears.
Let’s see what happens in the 3rd reduction step.
The reduction stops here. There is nothing more to do, according to the sequential reduction strategy.
Differently from the reduction of Omega alone, explained in the previous post, this time there is NO PRIORITY CHOICE NEEDED.
Ergo, the priority choice plays no role here. The sequential chemlambda reduction of the g-pattern corresponding to (Lx.z) Omega stops after 3 steps, no matter which PRIORITY CHOICE was made before the start of the computation.
_________________________________________________________
The goal is to see how the g-pattern of the combinator
(Lx.xx) (Lx.xx)
reduces in chemlambda with the sequential strategy.
See the 1st part and 2nd part of the description of the conversion of lambda terms into g-patterns.
The simple sequential strategy is this: at each reduction step do the following:
The PRIORITY CHOICE means a predefined choice between doing one of the two moves in conflict.
In this post it will be about the priority between BETA and DIST moves.
Mind that the PRIORITY CHOICE is fixed before the start of the computation.
However, in the following I shall mention the choice when it will be needed.
OK, so let’s start with the g-pattern which represents the well known combinator (Lx.xx) (Lx.xx). It is clear that as a lambda term it has no normal form, because it transforms into itself by a beta reduction (so it is a sort of a quine, if quines had an interesting definition in lambda calculus).
As previously, you shall see that we quickly depart from the lambda calculus realm, and nevertheless head in some straightforward directions.
The first figure describes the first reduction step.
The g-pattern obtained after this first step is the one which appears as the starting point of the Metabolism of loops post.
The 2nd step is described in the next picture:
Technically we are already outside lambda calculus, because of the fanin node FI[15,12,6]. (We don’t split the computation into pure reduction and pure self-multiplication.)
Let’s see the 3rd step.
Look well at the g-pattern which we get after the 3rd step, you’ll see it again, maybe!
The 4th step is the one which will prepare the path to conflict.
In the 5th step we have conflict:
The 5th step finishes in a different manner, depending on the PRIORITY CHOICE (which is fixed from the beginning of the computation).
Let’s suppose that we choose DIST over BETA. Then the 5th step looks like this:
Wow, the g-pattern after the 5th step is the same as the g-pattern after the 3rd step, with a loop graphical element added.
This means that further on the computation will look like the 4th step, then the 5th step again (with the same priority choice, which is fixed!). A new loop will be generated and the computation will never stop, producing an endless string of loops.
Bubbles!
Now, let’s see what happens if the PRIORITY CHOICE is BETA over DIST.
Then the 5th step looks like this:
The 5th step produced 2 loops and the shortest ouroboros, a fanout node with one out port connected to the in port, namely FO[13,1,13].
The computation then stops!
______________________________________________________
So, depending on the priority choice, we have either a computation which produces bubbles without end, or a computation which stops.
It is logical. Indeed, the priority choice DIST over BETA is a choice to increase the number of nodes of the g-pattern. From here it may happen, as in this example, that a cyclic behaviour is induced.
On the other hand, the priority choice BETA over DIST decreases the number of nodes, thus increasing the chances of a computation which eventually stops.
Both choices are good, it depends on what we want to do with them. If we want to compute with graphs resembling chemlambda quines, because they look like living organisms with a metabolism, then the choice DIST over BETA is a good one.
If we want to have a computation which stops (dies, would say a biologist) then BETA over DIST seems better.
_____________________________________________________
In chemlambda with the sequential strategy, a quine is a g-pattern with the property that after one reduction step it transforms into another g-pattern which is the same as the initial one, up to renaming of the port variables.
Therefore: we start with a g-pattern “P”. Then
We obtain a g-pattern, let’s call it P’.
If there is a renaming of the port variables of P’ such that, after renaming, P’ is identical with P, then P is a chemlambda quine.
Otherwise said, if P’ is identical with P as graphs, then P is a quine.
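The “up to renaming of the port variables” condition can be made completely explicit. Here is a brute force sketch in Python (my own; fine only for small g-patterns, since it tries all node matchings):

```python
from itertools import permutations

def same_up_to_renaming(p, q):
    """p, q: g-patterns as lists of (node_type, (port, ...)).
    True iff some bijection of port names turns p into q."""
    if sorted(n for n, _ in p) != sorted(n for n, _ in q):
        return False
    for perm in permutations(range(len(q))):
        if any(p[i][0] != q[perm[i]][0] for i in range(len(p))):
            continue
        ren, ok = {}, True
        for i in range(len(p)):
            for a, b in zip(p[i][1], q[perm[i]][1]):
                if ren.setdefault(a, b) != b:
                    ok = False
                    break
            if not ok:
                break
        if ok and len(set(ren.values())) == len(ren):   # injective renaming
            return True
    return False

P  = [('A', ('1', '2', '3')), ('FO', ('3', '4', '5'))]
P2 = [('FO', ('c', 'd', 'e')), ('A', ('a', 'b', 'c'))]
print(same_up_to_renaming(P, P2))   # True
```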
___________________________________________
Let’s think a bit: a DIST move adds 2 nodes, while a BETA or a FAN-IN move removes 2 nodes; therefore, in order to hope for a quine, we need the possibility to do at least one DIST move. That means that a quine has to contain at least the RIGHT g-pattern of a DIST move. This implies that a quine must have at least 4 nodes.
A quick inspection shows that the two RIGHT g-patterns of the two DIST moves cannot be made into quines.
Therefore a quine must have at least 5 nodes. Among the nodes there have to be L, A, FO, FI. But in order to reconstruct the L node and the A node one needs two DIST moves, which gives a lower bound of 8 nodes for a quine.
I believe that there is no quine with less than 9 nodes, such that the reductions never involve a choice of priority of moves.
__________________________________________
Here is now a bigger quine:
It’s a walker from the ouroboros series, walking on a circular train track with only one pair of nodes L and FO.
It has 28 nodes and 42 edges.
Can you find a smaller quine?
_________________________________________________________
UPDATE: Here is a small quine with 9 nodes and 14 edges:
_________________________________________________________
The regularity of the train track is corrupted by a bit of food (appearing as an L node connected to a termination node), see the next (BIG) picture. It is at the right of the walker.
You can see (maybe if you click on the image to make it bigger) that the walker ingests the food. The ingested part travels through the walker organism and eventually is expelled as a pair L and A nodes.
Perhaps, by clever modifications of the walker (and some experiments with its food) one can make a Turing machine.
This would give a direct proof that chemlambda with the sequential strategy is universal as well. (Well, that’s only of academic interest, to build trust as well, before going to the really nice part, i.e. distributed, decentralized, alife computations.)
_____________________________________________
That is because there is a walking machine in those graphs.
Explanations follow.
Recall the reduction strategy:
In the drawings the COMB moves are not figured explicitly.
Let’s come back to the walking machine. You can see it in the following figure.
In the upper side of the figure we see one of the graphs from the reduction of the “ouroboros predecessor”, taken from the last post.
In the lower side there is a part of this graph which contains the walking machine, with the same port names as in the upper side graph.
What I claim is that in a single reduction step the machine “goes to the right” on the train track made by pairs of FO and A nodes. That is why some of the reduction steps from the last post look alike.
One reduction step will involve:
Let’s start afresh, with the walking machine on tracks, with new port names (numbers).
For the sake of explanations only, I shall do first the two BETA and the two FAN-IN moves, then will follow the four DIST moves. There is nothing restrictive here, because the moves are all independent, moreover, according to the reduction strategy, these are all the moves which can be done in this step, and they can be done at once.
OK, what do we see? In the upper side of this figure there is the walking machine on tracks, with a new numbering of ports. We notice some patterns:
In the lower part of the figure we see what the graph looks like after the application of the 2 BETA moves and the 2 FAN-IN moves which are possible.
Let’s look closer. In the next figure is taken the graph from the lower part of the previous figure. Beneath it is the same graph, only arranged on the page such that it becomes simpler to see the patterns. Here is this figure:
Recall that we are working with graphs (called g-patterns, or molecules), not with particular embeddings of the graphs in the plane. The two graphs are the same, only the drawings in the plane are different. Chemlambda neither cares about nor uses embeddings. They are only for you, the reader, to help you see things better.
OK, what do we see:
… but all these patterns are not the old ones, but new ones!
The 4 train cars made by DIST patterns are missing! Well, they appear again after we do the remaining 4 DIST moves.
In the next figure we see the result of these 4 DIST moves. I did not number the new edges which appear.
I also did the COMB moves; if you look closer you will see that now any arrow has either one number or none on it. The arrows without numbers are those which appeared after the DIST moves.
Let’s compare the initial and final graphs, in the next figure.
We see that indeed, the walking machine went to the right! It did not move, but instead the walking machine dismembered itself and reconstructed itself again.
This is of course like the guns from the Game of Life, but with a big difference: here there is no external grid!
Moreover, the machine destroyed 8 nodes and 16 arrows (by the BETA, FAN-IN and COMB moves) and reconstructed 8 nodes and 16 arrows by the DIST moves. But look, the old arrows and nodes migrated inside and outside of the machine, assembling in the same patterns.
This is like a metabolism…
____________________________________________________________
The signal for the healing is given by the beta reduction
L[59,59,23] A[23,27,14] –beta–>
Arrow[59,14] Arrow[27,59]
The COMB moves are not figured. But they go like this in this case:
Arrow [59,14] Arrow[27,59] –COMB–>
Arrow[27,14]
and then
A[52,54,27] Arrow[27,14] –COMB–>
A[52,54,14]
In the third graph we see the element:
A[19,54,27]
which comes from yet another COMB move
A[52,54,27] Arrow[19,52] –COMB–>
A[19,54,27]
where the Arrow[19,52] comes from the FAN-IN move
FI[6,19,2] FO[2,52,53] –FAN-IN–>
Arrow[6,53] Arrow[19,52]
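The COMB moves are mechanical enough to sketch in a few lines of Python (my own illustration, not the awk code): compose chained Arrow elements, then absorb an Arrow into the node port which carries one of its names.

```python
def comb(gpattern):
    """Do COMB moves until no Arrow with a connected end remains.
    A closed Arrow[x,x] is left alone: it is a loop element."""
    nodes  = [[op, list(ps)] for op, ps in gpattern if op != 'Arrow']
    arrows = [list(ps) for op, ps in gpattern if op == 'Arrow']
    changed = True
    while changed:
        changed = False
        for i, a in enumerate(arrows):
            # Arrow[x,y] Arrow[y,z] -COMB-> Arrow[x,z]
            for j, b in enumerate(arrows):
                if i != j and a[1] == b[0]:
                    arrows[i] = [a[0], b[1]]
                    del arrows[j]
                    changed = True
                    break
            if changed:
                break
            # absorb the Arrow into a node port carrying one of its names
            hit = next(((n, k) for n in nodes for k, p in enumerate(n[1])
                        if p in a), None)
            if hit is not None:
                n, k = hit
                n[1][k] = a[1] if n[1][k] == a[0] else a[0]
                del arrows[i]
                changed = True
                break
    return [(op, tuple(ps)) for op, ps in nodes] + \
           [('Arrow', tuple(ps)) for ps in arrows]

# the COMB moves above, done in one call
g = [('A', ('52', '54', '27')), ('Arrow', ('59', '14')), ('Arrow', ('27', '59'))]
print(comb(g))   # [('A', ('52', '54', '14'))]
```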
There are 8 rewrites per reduction step, starting from the 2nd figure. The repeating patterns are:
The number of nodes, from the 2nd to the 5th figure is the same.
What will happen next?
__________________________________________________________
I make an ouroboros from something like the Pred 8:
We’re in the middle of the computation, what will give eventually, can you guess?
Next time!
__________________________________________________________
In the post What reduction is this? I used chemlambda with the stupid sequential reduction strategy stated in Y again: conflict!, namely:
… And there is no conflict in the predecessor reduction.
In the post “What reduction is this?” I asked some questions, let me answer:
This is a streamlined version of the reduction hidden in
PRED(3) –> 2
where numbers appear as stacks of pairs of FO and A nodes. They are “bare” numbers, in the sense that all the currying has been eliminated.
Admire the mechanical, or should I say chemical, precision of the process of reduction (in chemlambda, with the stupid sequential strategy). In the following figure I eliminated all the unnecessary nodes and arrows and we are left now with the pure phenomenon.
I find it amazing that it works even with this stupidest strategy. It shows that chemlambda is much better than anything on the market.
Let me say it again: this is outside IT’s fundamental assumption that everything reduces to signals sent through wires, then processed by gates.
It is how nature works.
____________________________________________
“MU Panel 2. Future of Publishing
Date & Time : 18:00 – 19:30, August 19 (Tue), 2014
Moderator: Jean-Pierre Bourguignon, European Research Council, Belgium
Panelists:
Rajendra Bhatia, Indian Statistical Institute, New Delhi, India and Sungkyunkwan University, Suwon, Korea
Jean-Pierre Demailly, Institut Fourier, France
Chris Greenwell, Elsevier, The Netherlands
Thomas Hintermann, European Mathematical Society Publishing House, Switzerland
+Nalini Joshi, University of Sydney, Australia
Ravi Vakil, Stanford University, USA
======================================
http://youtu.be/RbIBrE0vepM"
I am extremely intrigued about this part:
“E[lsevier?] does pay its editors-in-chief (=academics) and sometimes associate editors – doesn’t go all the way to reimburse them for the time they spend. Q from floor: where are these figures published? A: “We don’t generally make that available, mostly because the individual editors probably don’t want their colleagues to know” (~http://youtu.be/RbIBrE0vepM?t=1h14m30s) Q: this is unfair A: depends on editors. There’s nothing in the contract stopping them from telling people. Most of them probably wouldn’t want to tell you. Averages out at about $100 per paper handled.”
This practice may be OK from the point of view of the publisher, but, in my opinion, the paid editors HAVE to tell in order to avoid a conflict of interest.
The conflict of interest appears when an editor is in a jury, or otherwise in any process which rewards publication in journals like the ones where the guy is a paid editor (hiring, phd supervising, grants dispensing). This is something which is worth discussing, I guess. It is not specific to math.
It is not a matter of the editor “wouldn’t want to tell you”, as cynically put by the E[lsevier?] speaker. It is a matter of being honest.
Recall in this context the post
We have met the enemy: part I, pusillanimous editors, by Mark C. Wilson
“My conclusions, in the absence of further information: senior researchers by and large are too comfortable, too timid, too set in their ways, or too deluded to do what is needed for the good of the research enterprise as a whole. I realize that this may be considered offensive, but what else are the rest of us supposed to think, given everything written above? I have not even touched on the issue of hiring and promotions committees perpetuating myths about impact factors of journals, etc, which is another way in which senior researchers are letting the rest of us down”…
Are we living in a research banana republic?
Apparently (some of) the publishers think we are morons, because they secured collaboration of (some of) the academic bosses.
I think there is no difference between this situation and the one of a medical professional who has to disclose payment by pharmaceutical companies.
What do you think?
_____________________________________________________
Can you guess what this is? (click on the big image to see it better)
As you see, you may ask:
_______________________________________________
Then, in the post Y again: compete! I took in parallel the two possible outcomes of the conflict. The contenders were branded as fast shooting cowboys, offering a show.
Surprisingly, both possible paths of reduction ended in a very simple version of the Y combinator.
Only that the very simple version is not one coming from lambda calculus!
Indeed, let’s recall what the Y combinator is, seen as a g-pattern in chemlambda.
In lambda calculus the Y combinator is
Lx.( (Ly.(x(yy))) (Ly.(x(yy))) )
As a molecule, it looks like this.
As g-pattern, it looks like this (see this post and this post for the conversion of lambda terms into g-patterns):
L[a,x,o] A[b,c,a] FO[x,y,z]
L[e,d,b] FO[d,f,g] A[f,g,h] A[y,h,e]
L[j,i,c] FO[i,l,m] A[l,m,k] A[z,k,j]
Applying it to something means we add to this g-pattern the following:
A[o,p,u]
with the meaning that Y applies to whatever links to the port “p”. (But mind that in chemlambda there is no variable or term passing, nor evaluation! So this is only a way to speak in the lambda calculus realm.)
The two mentioned posts about Y again led to the conclusion that the g-pattern “Y applied to something” behaves (eventually, after several reductions) like the far simpler g-pattern:
A[o,p,u] (i.e. “applied to something” at port “p”)
L[b,a,o]
FO[a,c,d] A[c,d,b]
Now, this means that the Y combinator g-pattern may be safely replaced in computations by
L[b,a,o]
FO[a,c,d] A[c,d,b]
or, in graphical version, by
But this is outside lambda calculus.
So what?
It is far simpler than the Y combinator from lambda calculus.
The same happens with other lambda terms and reductions (see, for example, the post Actors for the Ackermann machine. Incidentally, the analysis of the Ackermann machine, i.e. the graph which behaves like the Ackermann function, gave me the idea of using the actor model with GLC. This evolved into arXiv:1312.4333.)
This shows that chemlambda, even with the dumbest sequential reduction strategy (OK, enhanced in obvious ways so that it solves conflicts), can do more, with less fuss, than lambda calculus.
By looking around the net (recall that I’m a mathematician, so excuse my ignorance of what is well known to CS people; I’m working on it), I can’t help but wonder what chemlambda would give in relation with, for example:
Of course, the dream is to go much, much further. Why? Because of the List of Ayes/Noes of the artificial chemistry chemlambda.
__________________________________________________________
Conflict means that the same graphical element appears in two LEFT g-patterns (see, in the series of expository posts, part II for the g-patterns and part III for the moves).
In the next figure we see this conflict, in the upper part (that’s where we were left in the previous post), followed by a fork: in the lower left part of the figure we see what we get if we apply the beta move and in the lower right part we see what happens if we apply the DIST move.
Recall that (or look again at the upper side of the picture) the conflict was between LEFT patterns of a beta move and of a DIST move.
I rearranged the drawing of the g-patterns a bit (mind that this does not affect the graphs in any way, because the drawings on paper or screen are one thing and the graphs another; excuse me for being trivially obvious). In this pretty picture we see clearly that the Y gun has already shot a second pair of nodes, A and FO.
The differences now:
Which way is the best? What to do?
Let’s make them compete! Who shoots faster?
Imagine the scene: in the Lambda Saloon, somewhere in the Wide Wild West, enter two new cowboys.
“Who called these guys?” asks the sheriff of the Functional City, where the reputed saloon resides.
“They are strangers! They both look like Lucky Luke, though…”
The cowboys and cowgirls from the saloon nod in approval: they all remember what happened when Lucky Luke — the one, the single cowboy who shoots faster than his shadow — was put between the two parallel mirrors from the saloon stage. What a show! That day, the Functional City got a reputation, and everybody knows that reputation is something as good as gold. Let the merchants from Imperative County sell window panes for the whole US, nobody messes with a cowboy or cowgirl from the Functional City. Small, but elegant. Yes sir, style is the right word…
Let’s go back to the two strangers.
“I’m faster than Master Y” says the stranger from the right.
“I’m MUCH faster than Master Y” says the one from the left, releasing from his cigar a loop of smoke.
“Who the … is Master Y?” asks the sheriff.
“Why, it’s Lucky Luke. He trained us both, then sent us to Functional City. He says hello to you and tells you that he got new tricks to show” says one of the strangers.
“… things not learned from church…” says the other.
“I need to see how fast you are, or else I call you both liars” shouts one of the bearded, long haired cowboys.
The stranger from the right started first. What a show!
He first makes a clever DIST move (not that there was anything else to do). Then he is presented with 4 simultaneous moves to do (FAN-IN, 2 betas and a DIST). He shoots and freezes. Nothing moves, excepting two loops of smoke, rising from his gun.
“I could continue like this forever, but I stopped, to let my colleague show you what he’s good at.” said the stranger from the right.
“Anyway, I am a bit slow with the first shot, but after that I am faster.” he continued.
“Wait, said the sheriff, you sure you really shot?”
“Yep, sure, look better how I stand”, said the stranger from the right, only slightly modifying his posture, so that everybody could clearly see the shot:
“Wow, true!” said the cowboys.
“I’m fast from the first shot!” said the stranger from the left. “Look!”
“I only did a DIST move.” said the stranger from the left, freezing in his posture.
“Nice show, guys! Hey, look, I can’t tell now which is which; they look the same. I got it: the guy from the right is a bit slower at the first shot (though he is dazzlingly fast), but after that he is as good as his fellow from the left.”
“Hm, said the sheriff, true. Only one thing: I have never seen in the Lambda Saloon anything like this. It’s fast, but it does not seem to belong here.”
___________________________________________________________
This time let’s not care about staying in lambda calculus and let’s take the simplest reduction strategy, to see what happens.
We place ourselves in the frame of g-patterns from the expository posts: part I and part II (definition of g-patterns), part III (definition of moves), part IV (g-patterns for lambda terms, 1st part), part V (g-patterns for lambda terms, 2nd part), part VI (about self-multiplication of g-patterns) and part VII (an example of reduction).
We take the following reduction strategy:
What’s conflict? We shall see one.
Mind that this is a very stupid and simplistic strategy, which does not guarantee that if we start with a g-pattern which represents a lambda term then we end with a g-pattern of a lambda term.
It does have its advantages, though.
OK, so let us start with the g-pattern of Y applied to something.
In general, with g-patterns we can say many things. Like any combinator molecule, when represented by a g-pattern, the Y combinator has only one free port; let’s call it “b”. Thus Y appears as a g-pattern which we denote by Y[b].
Suppose we want to start the reduction from Y applied to something. This means that we shall start with the g-pattern
A[b,a,out] Y[b]
OK!
Look what happens when we apply the mentioned strategy.
(Advice: big picture, click on it to see it clearly and to travel along it.)
Here is a conflict: at one step we have two LEFT patterns, in this case
L[o,p,i] A[i,p,v], which is good for a beta move
and
A[i,p,v] FO[v,q,a1], which is good for a DIST move.
The patterns contain a common graphical element, in this case A[i,p,v], which will be deleted during the respective moves.
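For illustration, a conflict of this kind can be detected mechanically. The sketch below is hypothetical code with my own encoding (elements as (type, ports) tuples, ports listed in the order used in the posts): it looks for LEFT patterns of beta (the out port of an L wired to the first in port of an A) and of DIST (the out port of an A wired to the in port of an FO), and reports pairs of patterns sharing a graphical element.

```python
def beta_candidates(elements):
    """LEFT pattern of the beta move: L[_,_,k] A[k,_,_]."""
    return [(l, a) for l in elements if l[0] == "L"
            for a in elements if a[0] == "A" and l[1][2] == a[1][0]]

def dist_candidates(elements):
    """LEFT pattern of the DIST move: A[_,_,v] FO[v,_,_]."""
    return [(a, fo) for a in elements if a[0] == "A"
            for fo in elements if fo[0] == "FO" and a[1][2] == fo[1][0]]

def conflicts(elements):
    """Pairs of LEFT patterns which share a graphical element."""
    found = []
    for b in beta_candidates(elements):
        for d in dist_candidates(elements):
            shared = set(b) & set(d)
            if shared:
                found.append((b, d, shared))
    return found

# the conflicting patterns from the reduction above
G = [("L", ("o", "p", "i")), ("A", ("i", "p", "v")), ("FO", ("v", "q", "a1"))]
print(conflicts(G))  # one conflict, shared element A[i,p,v]
```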
CONCLUSION: with this strategy we have a gun which shoots one pair of FO and A nodes, but then gets wrecked.
What to do then?
The human way is to apply
When in trouble or in doubt
Run in circles, scream and shout
for a moment, then acknowledge that this is a stupid reduction strategy, then find some qualities of this strategy, then propose another which has those qualities but works better, then reformulate the whole problem and give it an unexpected turn.
The AI way is to wait for somebody to change the reduction strategy.
__________________________________________________________
There is no more here than what is in this ephemeral Google+ post, but it is enough to get the idea.
And it’s controversial, although obvious.
“I just got hooked by github.io. It has everything; it’s a dream come true. Publishing? arXiv? pfff…. I know, everybody knows this already, let me enjoy the thought for the moment. Then there will be some action.
- GitHub’s success is not just about openness, but also a prestige economy that rewards valuable content producers with credit and attention
-Open Science efforts like arXiv and PLoS ONE should follow GitHub’s lead and embrace the social web”
I am aware of the many efforts about publishing via GitHub; I only wonder if that’s not like putting a horse in front of a rocket.
On the other side, there is so much to do, now that I feel I’ve seen rock solid proof that academia, publishing and all that jazz are walking dead, with the last drops of arterial blood splattering around from the headless body.”
Negative Coase cost?
__________________________________________________
I hope to make this presentation self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)
_________________________________________________________
This series of posts may be used as a longer, more detailed version of sections
from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], presented by Louis Kauffman in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York. Here is a link to the published article, free, at MIT Press.
_________________________________________________________
Tags. I shall use the name “tag” instead of “actor” or “type”, because it is more generic (and because in future developments we shall talk more about actors and types, continuing from the post Actors as types in the beta move, tentative).
Every port of a graphical element (see part II) and the graphical element itself can have tags, denoted by :tagname.
There is a null tag “null” which can be omitted in the g-patterns.
As an example, we may see, in the most ornate way, graphical elements like this one:
L[x:a,y:b,z:c]:d
where of course
L[x:null,y:null,z:null]:null means L[x,y,z]
The port names are tags; in particular “in”, “out”, “middle”, “left” and “right” are tags.
Any concatenation of tags is a tag. Concatenation of tags is denoted by a dot, for example “left.right.null.left.in”. By the use of “null” we have
a.null –concat–> a
null.a –concat–> a
I shall not regard concat as a move in itself (maybe I should, but that is for later).
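The concatenation rule with the neutral tag “null” is small enough to sketch in one function (my own illustration, not part of any chemlambda tooling):

```python
def concat(*tags):
    """Concatenate tags with '.', dropping the neutral tag 'null'."""
    parts = [p for tag in tags for p in tag.split(".") if p != "null"]
    return ".".join(parts) or "null"

print(concat("left", "right.null.left", "in"))  # left.right.left.in
print(concat("a", "null"))                      # a
print(concat("null", "a"))                      # a
```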
Further in this post I shall not use tags for nodes.
Moves with tags. We can use tags in the moves, according to a predefined convention. I shall take several examples.
1. The FAN-IN move with tags. If the tags a and b are different then
FI[x:a, y:b, z:c] FO[z:c,u:b, v:a]
–FAN-IN–>
Arrow[x:a,v:a] Arrow[y:b,u:b]
Remark that the move is not reversible.
It means that you can do FAN-IN only if the right tags are there.
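As a sketch, the tag-guarded FAN-IN can be written as a partial function which produces the two Arrow elements only when the guard holds (hypothetical code; my own encoding with ports as (variable, tag) pairs):

```python
def fan_in(fi, fo):
    """Tagged FAN-IN: FI[x:a,y:b,z:c] FO[z:c,u:b,v:a] -> Arrow[x:a,v:a] Arrow[y:b,u:b].
    Allowed only when the tags a and b differ and the tags match crosswise."""
    (x, a), (y, b), (z, c) = fi[1]
    (z2, c2), (u, b2), (v, a2) = fo[1]
    if (z, c) != (z2, c2) or a == b or a2 != a or b2 != b:
        return None  # move not allowed
    return [("Arrow", [(x, a), (v, a)]), ("Arrow", [(y, b), (u, b)])]

fi = ("FI", [("x", "a"), ("y", "b"), ("z", "c")])
fo = ("FO", [("z", "c"), ("u", "b"), ("v", "a")])
print(fan_in(fi, fo))
```

Returning `None` when the tags do not line up is one way to capture the irreversibility remarked above: the move simply cannot fire.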
2. COMB with tags.
L[x:a, y:b, z:c] Arrow[y:b, u:d]
–COMB–>
L[x:a, u:d,z:c]
and so on for all the COMB moves which involve two graphical elements.
3. DIST with tags. There are two DIST moves, here with tags.
A[x:a,y:b,z:c] FO[z:c,u:d,v:e]
–DIST–>
FO[x:a, w:left.d, p:right.e] FO[y:b, s:left.d, t:right.e]
A[w:left.d, s:left.d, u:d] A[p:right.e, t:right.e, v:e]
In graphical version
and the DIST move for the L node:
L[y:b, x:a, z:c] FO[z:c, u:d, v:e]
–DIST–>
FI[p:right, w:left, x:a] FO[y:b, s:left, t:right]
L[s:left, w:left,u:d] L[t:right, p:right, v:e]
In graphical version:
4. SHUFFLE. This move replaces CO-ASSOC and CO-COMM. (It can be done as a sequence of CO-COMM and CO-ASSOC moves; conversely, CO-COMM and CO-ASSOC can be done by SHUFFLE and LOC PRUNING; explanations another time.)
FO[x:a, y:b, z:c] FO[y:b, w:left, p:right] FO[z:c, s:left, t:right]
–SHUFFLE–>
FO[x:a, y:left, z:right] FO[y:left, w, s] FO[z:right, p, t]
In graphical version:
____________________________________________________________
Marius Buliga, Gery de Saxce, A symplectic Brezis-Ekeland-Nayroles principle
You can find here the slides of two talks given in Lille and Paris a while ago, where the article has been announced.
UPDATE: The article appeared, as arXiv:1408.3102
This is, we hope, an important article! Here is why.
The Brezis-Ekeland-Nayroles principle appeared in two articles from 1976, the first by Brezis-Ekeland, the second by Nayroles. These articles appeared too early, compared to the computation power of the time!
We call the principle by the initials of the names of the inventors: the BEN principle.
The BEN principle asserts that the curve of evolution of an elasto-plastic body minimizes a certain functional, among all possible evolution curves which are compatible with the initial and boundary conditions.
This opens the possibility of finding, at once, the whole evolution curve, instead of constructing it incrementally in time.
In 1976 this was SF for the computers of the moment. Now it’s the right time!
Pay attention to the fact that a continuum mechanics system has states belonging to an infinite dimensional space (i.e. it has an infinite number of degrees of freedom), therefore we almost never hope to find, nor need, the exact solution of the evolution problem. We are happy, for all practical purposes, with approximate solutions.
We are not after the exact evolution curve, instead we are looking for an approximate evolution curve which has the right quantitative approximate properties, and all the right qualitative exact properties.
In elasto-plasticity (a hugely important class of materials for engineering applications) the evolution equations are moreover not smooth. Differential calculus is conveniently and beautifully replaced by convex analysis.
Another aspect is that elasto-plastic materials are dissipative, therefore there is no obvious hope to treat them with the tools of hamiltonian mechanics.
Our symplectic BEN principle does this: one principle covers the dynamical, dissipative evolution of a body, in a way which is reasonably amenable to numerical applications.
_______________________________________
Then we do emergent algebra moves instead.
Look, instead of the beta move (see here all moves with g-patterns)
L[a,d,k] A[k,b,c]
<–BETA–>
Arrow[a,c] Arrow[b,d]
let’s do, for an arbitrary epsilon, the epsilon beta move
Remark that I don’t really do the beta move. In g-patterns, the epsilon beta move does not replace the LEFT pattern by another; it ADDS TO IT.
L[a,d,k] A[k,b,c]
– epsilon BETA –>
FO[a,e,f] FO[b,g,h]
L[f,i,k] A[k,h,j]
epsilon[g,i,d] epsilon[e,j,c]
Here, of course, epsilon[g,i,d] is the new graphical element corresponding to a dilation node of coefficient epsilon.
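As an illustration, the epsilon beta rewrite can be sketched as a function which takes the two LEFT elements and returns the right-hand side listed above (my own code; the fresh port names are generated by a hypothetical `fresh` scheme, everything else follows the listing):

```python
import itertools

_fresh = itertools.count()

def fresh():
    """Generate a port variable not used elsewhere (hypothetical naming scheme)."""
    return "p%d" % next(_fresh)

def epsilon_beta(L, A):
    """Rewrite L[a,d,k] A[k,b,c] into the epsilon beta right-hand side."""
    a, d, k = L[1]
    k2, b, c = A[1]
    assert k == k2, "the two elements must be connected through port k"
    e, f, g, h, i, j = (fresh() for _ in range(6))
    return [("FO", (a, e, f)), ("FO", (b, g, h)),
            ("L", (f, i, k)), ("A", (k, h, j)),
            ("eps", (g, i, d)), ("eps", (e, j, c))]

print(epsilon_beta(("L", ("a", "d", "k")), ("A", ("k", "b", "c"))))
```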
Now, when epsilon=1 we may apply only the ext2 move and LOC pruning (i.e. emergent algebra moves)
and we get back the original g-pattern.
But if epsilon goes to 0 then, only by emergent algebra moves:
that’s it the BETA MOVE is performed!
What is the status of the first reduction from the figure? Hm, in the figure there appears a node decorated with “0”. I should have written instead a limit as epsilon goes to 0… For the meaning of the node with epsilon=0 see the post Towards qubits: graphic lambda calculus over conical groups and the barycentric move. However, I don’t take the barycentric move BAR, here, as being among the allowed moves. Also, I wrote “epsilon goes to 0”, not “epsilon=0”.
__________________________________________________________
epsilon can be a complex number…
__________________________________________________________
Questions/exercises:
__________________________________________________________
List of ayes
__________________________________________________________
Example: decorations of S,K,I combinators in simply typed GLC
In the chemlambda version, the decoration with types for the lambda and application graphical elements is this:
or with g-patterns:
L[x:b, y:a, z:a->b]
A[x:a->b, y:a, z:b]
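These two decoration rules are easy to check mechanically. Here is a hypothetical sketch (my own encoding: types as strings, ports as (variable, type) pairs):

```python
def well_decorated(element):
    """Check the simple-type decoration rules:
    L[x:b, y:a, z:a->b] and A[x:a->b, y:a, z:b]."""
    kind, ports = element
    (_, t1), (_, t2), (_, t3) = ports
    if kind == "L":   # out type is (in type) -> (body type)
        return t3 == "%s->%s" % (t2, t1)
    if kind == "A":   # function type is (argument type) -> (out type)
        return t1 == "%s->%s" % (t2, t3)
    return True  # other nodes are not constrained here

print(well_decorated(("L", [("x", "b"), ("y", "a"), ("z", "a->b")])))  # True
print(well_decorated(("A", [("x", "a->b"), ("y", "a"), ("z", "b")])))  # True
```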
Recall also that there is a magma associated to any graph (or g-pattern), which is easy to define. If the magma is free then we say that the g-pattern is well typed (not that we shall need “well” further).
Let’s mix this with actors. We attribute the port variables of a g-pattern to actors (IDs) and we write that the port variable x belongs to the actor a like this:
x:a
I don’t want to define an operation -> for actor IDs, as if they were types. Instead I shall use the Arrow graphical element and the COMB move (see the moves of chemlambda in terms of g-patterns here).
Here is a COMB move, a bit modified:
L[x:b, y:a, z:d] –COMB–> L[x:b, y:a, u:a] Arrow[u:b, z:d]
which says something like
:d should be :a->:b
The same for the application
A[z:d, v:a, w:b] –COMB–> Arrow[z:d, s:a] A[s:b , v:a, w:b]
which says something like
:d should be :a->:b
which, you agree, is totally compatible with the decorations from the first figure of the post.
Notice the appearance of port variables u:a, u:b and s:a, s:b, which play the role a->b.
We allow the usual COMB moves only if the repeated variables have the same actors.
What about the beta move?
The LEFT g-pattern of the beta move is, say with actors:
L[x:a, y:d, z:c] A[z:c, v:b, w:e]
Apply the two new COMB moves:
L[x:a, y:d, z:c] A[z:c, v:b, w:e]
–2 COMB–>
L[x:a, y:d, u:d]
Arrow[u:a, z:c] Arrow[z:c, s:b]
A[s:e , v:b, w:e]
A usual COMB move applies here:
L[x:a, y:d, u:d]
Arrow[u:a, z:c] Arrow[z:c, s:b]
A[s:e , v:b, w:e]
<–COMB–>
L[x:a, y:d, u:d]
Arrow[u:a, s:b]
A[s:e , v:b, w:e]
and now the new beta move would be:
L[x:a, y:d, u:d]
Arrow[u:a, s:b]
A[s:e , v:b, w:e]
–BETA–>
Arrow[x:a, w:e]
Arrow[v:b, y:d]
This form of the beta move resembles the combination of CLICK and ZIP from zipper logic.
Moreover the Arrow elements could be interpreted as message passing.
________________________________________________________
presented at ALIFE14.
Both articles look great and the ideas are very close to my current interests. Here is why:
The chemlambda and distributed GLC project also has a paper there: M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication see the better arXiv version because it has links inside: arXiv:1403.8046.
The resemblance between the mentioned papers and the chemlambda and distributed GLC project is that our (fully theoretical, alas) model is also about distributed computing, using actors, lambda calculus and space.
The differences are many, though, and I hope that these might lead to interesting interactions.
Further, I describe the main difference, with the understanding that all this is very new for me, a mathematician, so I might be wrong in my grasp of the MFM (please correct me if so).
In the MFM the actors are atoms in a passive (i.e. predefined) space. In the distributed GLC the actors have as states graphs called molecules (more precisely g-patterns).
[Here is the moment to thank, first, Stephen P. King, who notified me about the two articles. Second, Stephen works on something which may be very similar to the MFM, as far as I understand, but I have to stress strongly that distributed GLC does NOT use a botnet, nor are the actors nodes in a chemlambda graph!]
In distributed GLC the actors interact by message passing to other actors with a known ID. Such message passing provokes a change in the states of the actors which corresponds to one of the graph rewrites (moves of chemlambda). As an effect, the connectivities between the actors change (where connectivity between the actors :alice and :bob means that :alice has as state a g-pattern with one of the free ports decorated with :bob’s ID). Here the space is represented by these connectivities and it is not passive, but an effect of the computation.
In the future I shall use and cite, of course, this great research subject which was unknown to me. For example, the article Lance R. Williams, Robust Evaluation of Expressions by Distributed Virtual Machines, already uses actors! What more am I not aware of? Please tell, thanks!
_______________________________________________________________
This will NOT be made public, only by private mail messages.
If you want to hear more:
then mail me at chorasimilarity@gmail.com and let’s talk about parts you don’t get clearly.
Looking forward to hearing from you,
Marius Buliga
__________________________________________________________
Example: from this post
L[a,x,b] A[b,x,a] <–eta–>
Arrow[b,b] loop <–comb–>
loop loop
or
L[a,x,b] A[b,x,a] <–beta–>
Arrow[a,a] Arrow[x,x] <–2comb–>
loop loop
Then why not
L[a,x,b] A[u,y,a] <–eta–> Arrow[u,b] Arrow[x,y]
which is exactly like the FAN-IN
FO[a,x,b] FI[u,y,a] <–FAN-IN–> Arrow[u,b] Arrow[x,y]
Taking this seriously, the beta move should have a hidden companion
FO[a,x,b] FI[b,y,c] <–betahide–> Arrow[y,x] Arrow[a,c]
… which brings us to a symmetrized version of chemlambda which is very close to the interaction nets of Yves Lafont.
We present chemlambda (or the chemical concrete machine), an artificial chemistry with the following properties: (a) is Turing complete, (b) has a model of decentralized, distributed computing associated to it, (c) works at the level of individual (artificial) molecules, subject of reversible, but otherwise deterministic interactions with a small number of enzymes, (d) encodes information in the geometrical structure of the molecules and not in their numbers, (e) all interactions are purely local in space and time. This is part of a larger project to create computing, artificial chemistry and artificial life in a distributed context, using topological and graphical languages.
DOI: http://dx.doi.org/10.7551/978-0-262-32621-6-ch079
Pages 490-497
Supplementary material:
____________________________________________________________
I hope to make this presentation self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)
_________________________________________________________
This series of posts may be used as a longer, more detailed version of sections
from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which is accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York, (go see the presentation of Louis Kauffman if you are near the event.) Here is a link to the published article, free, at MIT Press.
_________________________________________________________
In this post I take a simple example which contains beta reduction and self-multiplication.
Maybe self-multiplication is too long a word. A short one would be “dup”; any tacit programming language has it. However, chemlambda only superficially resembles tacit programming (and it’s arguably not a language but a GRS, never mind).
Or “self-dup” because chemlambda has no “dup”, but a mechanism of self-multiplication, as explained in part VI.
Enough with the problem of the right name, because
“A rose by any other name would smell as sweet”
as somebody wrote, clearly not believing that the limit of his world is the limit of his language.
Let’s consider the lambda term (Lx.xx)(Ly.yz). In lambda calculus there is the following string of reductions:
(Lx.xx)(Ly.yz) -beta-> (Ly.yz) (Lu.uz) -beta-> (Lu.uz) z -beta-> zz
What do we see? Let’s take it slower. Denote C = xx and B = Ly.yz. Then:
(Lx.C)B -beta-> C[x:=B] = (xx)[x:=B] = (x)[x:=B] (x)[x:=B] = BB = (Ly.yz) B -beta-> (yz)[y:=B] = (y)[y:=B] (z)[y:=B] = Bz = (Lu.uz)z -beta-> (uz)[u:=z] = (u)[u:=z] (z)[u:=z] = zz
Now, with chemlambda and its moves performed only from LEFT to RIGHT.
The g-pattern which represents (Lx.xx)(Ly.yz) is
L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b] L[w,y,c] A[y,z,w]
We can only do a beta move:
L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b] L[w,y,c] A[y,z,w]
<–beta–>
Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]
We can do two COMB moves
Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]
2 <–COMB–>
FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]
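The COMB moves are mechanical enough to sketch in a few lines of Python (my own illustration, not official tooling): absorb each Arrow element by renaming one of its two port variables into the other. Applied to the g-pattern before the two COMB moves, it reproduces the result above.

```python
def comb(elements):
    """Repeatedly absorb Arrow[p,q] elements by renaming ports, as in the COMB moves.
    If q is used by another element, rename q to p there; otherwise rename p to q."""
    elements = list(elements)
    while True:
        arrows = [e for e in elements if e[0] == "Arrow"]
        if not arrows:
            return elements
        _, (p, q) = arrows[0]
        rest = [e for e in elements if e is not arrows[0]]
        used_q = any(q in ports for _, ports in rest)
        old, new = (q, p) if used_q else (p, q)
        elements = [(kind, tuple(new if x == old else x for x in ports))
                    for kind, ports in rest]

G = [("Arrow", ("a1", "b")), ("Arrow", ("c", "x")),
     ("FO", ("x", "u", "v")), ("A", ("u", "v", "a1")),
     ("L", ("w", "y", "c")), ("A", ("y", "z", "w"))]
print(comb(G))  # the four elements of the g-pattern above, no Arrows left
```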
Now look: this is not a representation of a lambda term, because FO[c,u,v] is “in the middle”, i.e. the middle.in port of FO[c,u,v] is the out port of B, i.e. the right.out port of the lambda node L[w,y,c]. At the same time, the out ports of FO[c,u,v] are the in ports of A[u,v,b].
The only move which can be performed is DIST, which starts the self-dup or self-multiplication of B = L[w,y,c] A[y,z,w] :
FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]
<–DIST–>
FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]
This is still not a representation of a lambda term. Notice also that the g-pattern which represents B has not yet self-multiplied. However, we can already perform a beta move for L[g,f,u] A[u,v,b] and we get (after 2 COMB moves as well)
FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]
<–beta–>
FI[e,f,y] FO[w,g,h] L[h,e,v] Arrow[g,b] Arrow[v,f] A[y,z,w]
2 <–COMB–>
FI[e,f,y] FO[w,b,h] L[h,e,f] A[y,z,w]
This looks like a weird g-pattern. Clearly it is not a g-pattern coming from a lambda term, because it contains the fanin node FI[e,f,y]. Let’s write the g-pattern again as
L[h,e,f] FI[e,f,y] A[y,z,w] FO[w,b,h]
(for our own pleasure, the order of the elements in the g-pattern does not matter) and remark that A[y,z,w] is “conjugated” by the FI[e,f,y] and FO[w,b,h].
We can apply another DIST move
L[h,e,f] FI[e,f,y] A[y,z,w] FO[w,b,h]
<–DIST–>
A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]
and now there is only one move which can be done, namely a FAN-IN:
A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]
<–FAN-IN–>
Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]
which gives after 2 COMB moves:
Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]
2 <–COMB–>
A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]
The g-pattern
A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]
is a representation of a lambda term, finally: the representation of (Le.ez)z. Great!
From here, though, we can apply only a beta move at the g-pattern A[f,k,b] L[h,e,f]
A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]
<–beta–>
Arrow[h,b] Arrow[k,e] A[e,l,h] FO[z,k,l]
2 <–COMB–>
FO[z,k,l] A[k,l,b]
which represents zz.
_____________________________________________________
Indeed, compare the non-combat stance of Episciences.org
The project proposes an alternative to existing economic models, without competing with traditional publishers.
with the one of EPI-IAM:
The driving force for this project is the take-over of the best journals in the field by the scientific communities, organised in thematic executive committees (so-called epicommittees) gathering international experts.
This project is intended for:
existing journals wishing to be liberated from a commercial editorial environment or already open-access journals in search of shared support services
newly created journals looking for a simple and highly visible editing environment
“IAM” stands for “Informatics and Applied Mathematics”, great! This is perhaps the first initiative towards new styles of communication of research, among those from mathematics and the hard sciences (well, arXiv excluded, of course), which has a chance to compare with the much more advanced, already functioning ones from biology and medicine.
In a previous post I wrote that, in particular, the episciences project looked dead to me. I am happy to be proven wrong!
This is what we need (a dire need in math), not any of the flawed projects which involve gold OA, friends recommendations networks, opaque peer-review and dislike of comments on articles, authority medals dispensed by journals.
It is a revolution, very much alike to the one 100 years ago in art, which led to an explosion of creativity.
The ball is on our side (and recall that we are not going to get any help from the academic management and colleagues adapted to the old ways).
Congrats EPI-IAM, a development to follow!
_________________________________________________________
How can this be done? Here is a sketch; mind that I propose things which I believe are possible from a chemical perspective, but I don’t have any chemistry knowledge. If you do, and if you are interested in making a chemical concrete machine for graphic lambda calculus, then please contact me.
(1) What has been achieved in one year? (2) What will happen next?
(1) More than 100 posts in the chorasimilarity open notebook cover, in great detail, everything which will be mentioned further.
I am most grateful for the collaboration with Louis Kauffman. This had been a dream of mine since I wrote Computing with space: a tangle formalism for chora and difference. Via the continuously enthusiastic social web connector Stephen P. King, we started to work together and we are now in a position, after a year, to take a big leap. We wrote two articles: GLC actors, artificial chemical connectomes, topological issues and knots, which is for the moment a not very well understood hidden treasure of a distributed computing model, and Chemlambda, universality and self-multiplication, which will be presented at ALIFE 14, concentrating on the self-multiplication phenomenon (see the last post of the thread of expository posts on this here). These works are embedded into hundreds of hours of discussions with many people. These discussions helped, at the least, as motivation for explaining things well.
In parallel, the chemlambda paper was published on figshare: Chemical concrete machine. Followed by Zipper logic, another piece of the puzzle.
We had an NSF proposal which was centered around cybersecurity, perhaps too early in the development of the project. However, the theoretical part of the project has been appreciated beyond my expectations; what is needed now is the practical implementation.
(2) More and more I become convinced that the distributed, decentralized computing project based on chemlambda would be possible today, provided it is done in the right place and frame. The most recent thoughts are about the use of semantic web tools like RDF and N3Logic for this (although I strongly believe in the no semantics slogan).
I shall write much more in a part II post, right now I have a very bad connection…
UPDATE: … so, imagine that chemlambda molecules are RDF datasets, accessible via their respective URIs. If you want to run a computation then you need to impersonate the actors (because the initial actor diagram is already in the structure of the RDF dataset) and to specify a model of computation (i.e. to specify the reduction rules decorated with actors, along with the actors’ behaviours, all in N3).
Well designed computations could then have their URIs.
Then, imagine that you want to endow your computer with a microbiome OS, just follow the links.
Another, related direction of future research concerns the IoT, things and space ….
__________________________________________________________
separation of form from content: The principle that one should represent separately the essence of a document and the style with which it is presented.
Applied to decentralized computing, this means no semantics.
[One more confirmation of my impression that logic is something from the 21st century disguised in 19th century clothes.]
___________________________________________________________
I hope to make this presentation self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)
_________________________________________________________
This series of posts may be used as a longer, more detailed version of sections
from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which is accepted in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York, (go see the presentation of Louis Kauffman if you are near the event.) Here is a link to the published article, free, at MIT Press.
_________________________________________________________
In this post I want to concentrate on the mechanism of self-multiplication for g-patterns coming from lambda terms (see part IV where the algorithm of translation from lambda terms to g-patterns is explained).
Before that, please note that there is an important problem which shall be described later in detail. Here it is, so you can keep an eye on it.
Chemlambda in itself is only a graph rewriting system. In part V it is explained that the beta reduction from lambda calculus needs an evaluation strategy in order to be used. We noticed that in chemlambda self-multiplication is needed in order to prove that one can do beta reduction via the beta move.
We are led to the obvious conclusion that in chemlambda reduction (i.e. the beta move) and self-multiplication are just names for parts of the computation. Indeed, a computation done with chemlambda has some parts where we use the beta move (possibly together with COMB, CO-ASSOC, CO-COMM, LOC PRUNING) and other parts where we use DIST and FAN-IN (again possibly with COMB, CO-ASSOC, CO-COMM, LOC PRUNING). These two kinds of parts are named reduction and self-multiplication respectively, but in the big computation they mix into a whole. There are only moves: graph rewrites applied to a molecule.
Which brings the problem: chemlambda in itself is not sufficient for having a model of computation. We need to specify how, where and when the reductions apply to molecules.
There may be many variants, roughly described as: sequential, parallel, concurrent, decentralized, random, based on chemical reaction network models, etc.
Each model of computation (which can be made compatible with chemlambda) gives a different whole when used with chemlambda. Until now, in this series there has been no mention of a model of computation.
There is another aspect of this. It is obvious that chemlambda graphs form a larger class than lambda terms, and also that the graph rewrites apply to more general situations than beta reduction (together, eventually, with an evaluation strategy). It means that the important problem of defining a model of computation over chemlambda will influence the way chemlambda molecules “compute” in general.
The model of computation which I prefer is not based on chemical reaction networks, nor on process calculi, but on a new model, inspired by the Actor Model, called distributed GLC. I shall explain why I believe that the Actor Model of Hewitt is superior to those mentioned previously (with respect to decentralized, asynchronous computation in the real Internet, and also in the real world), I shall explain my understanding of that model, and eventually the distributed GLC proposal by me and Louis Kauffman will be exposed in full detail.
4. Self-multiplication of a g-pattern coming from a lambda term.
For the moment we concentrate on the self-multiplication phenomenon for g-patterns which represent lambda terms. The following is a departure from the ALIFE 14 article: I shall not take the path which consists in passing to combinator patterns, nor shall I discuss in this post why the self-multiplication phenomenon is not confined to the world of g-patterns coming from lambda terms. This is for a future post.
In this post I want to give an image of how these g-patterns self-multiply, in the sense that most of the self-multiplication process can be explained independently of the computing model. Later on we shall come back to this, look outside lambda calculus as well, and also explore the combinator molecules.
OK, let’s start. In part V it was noticed that after an application of the beta rule to the g-pattern
L[a,x,b] A[b,c,d] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]
we obtain (via COMB moves)
C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]
and the problem is that we have a g-pattern which does not come from a lambda term, because it has a FOTREE in the middle of it. It looks like this (recall that FOTREEs are figured in yellow and the syntactic trees in light blue):
The question is: what can happen next? Let’s simplify the setting by taking the FOTREE in the middle as a single fanout node, then we ask what moves can be applied further to the g-pattern
C[x] FO[x,a,b]
Clearly we can apply DIST moves. There are two DIST moves, one for the application node, the other for the lambda node.
There is a chain of propagation of DIST moves through the syntactic tree of C, which is independent of the model of computation chosen (i.e. of the rules about which moves are used, and when and where), because the syntactic tree is a tree.
Look what happens. First we have the propagation of DIST moves (for the application nodes, say), which produces two copies of a part of the syntactic tree which contains the root.
At some point we arrive at a pattern which allows the application of a DIST move for a lambda node. We apply the move:
We see that fanins appear! … and then the propagation of DIST moves through the syntactic tree continues until eventually we get this:
So the syntactic tree has self-multiplied, but the two copies are still connected by FOTREEs, which connect to the left.out ports of the lambda nodes that are part of the syntactic tree (only one is figured in the previous image).
Notice that now (or even earlier, it does not matter; it will be explained rigorously why once we talk about the computing model, for the moment we only want to see whether it is possible) we are in a position to apply the FAN-IN move. Also, it is clear that by using CO-COMM and CO-ASSOC moves we can shuffle the arrows of the FOTREE, which is “conjugated” with a fanin at the root and with fanouts at the leaves, so that eventually we get this.
The self-multiplication is achieved! It looks strikingly like anaphase [source]
followed by telophase [source]
____________________________________________________
2. Lambda calculus terms as seen in chemlambda continued.
Let’s look at the structure of a molecule coming from the process of translation of a lambda term described in part IV.
Then I shall make some comments which should be obvious after the fact, but useful later when we discuss the relation between the graphic beta move (i.e. the beta rule for g-patterns) and the beta reduction with its evaluation strategies.
That will be a central point in the exposition; it is very important to understand it!
So, a molecule (i.e. a pattern with the free port names erased, see part II for the terminology) which represents a lambda term looks like this:
In light blue is the part of the molecule which is essentially the syntactic tree of the lambda term. The only peculiarity is in the orientation of the arrows of lambda nodes.
Practically this part of the molecule is a tree, which has as nodes the lambda and application nodes, but no fanouts or fanins.
The arrows are directed towards the top of the figure. There is no need to draw it like this, i.e. there is no global rule for edge orientations, contrary to the ZX calculus, where the edge orientations are deduced from the global down-to-up orientation.
A lambda node is figured, which is part of the syntactic tree. It has the right.out port connecting to the rest of the syntactic tree and the left.out port connecting to the yellow part of the figure.
The yellow part of the figure is a FOTREE (fanout tree). There might be many FOTREEs; only one appears in the figure. By looking at the algorithm of conversion of a lambda term into a g-pattern, we notice that in the g-patterns which represent lambda terms the FOTREEs may appear in two places:
As a consequence of this observation, here are two configurations of nodes which NEVER appear in a molecule which represents a lambda term:
Notice that these two patterns are EXACTLY those which appear as the LEFT side of the DIST moves! More about this later.
Remark also the position of the insertion points of the FOTREE which comes out of the left.out port of the figured lambda node: the out ports of the FOTREE connect with the syntactic tree somewhere lower than where the lambda node is. This is typical for molecules which represent lambda terms. For example the following molecule, which can be described as the g-pattern L[a,b,c] A[c,b,d]
(but with the port variables deleted) cannot appear in a molecule which corresponds to a lambda term.
Let’s go back to the first image and continue with “TERMINATION NODE (1)”. Recall that termination nodes are used to cap the left.out port of a lambda node which corresponds to a term Lx.A with x not occurring in A.
Finally, “FREE IN PORTS (2)” represents free in ports which correspond to the free variables of the lambda term. As observed earlier, but not figured in the picture, we MAY have free in ports which are the in ports of FOTREEs.
I collect here some obvious, in retrospect, facts:
_______________________________________________________
3. The beta move. Reduction and evaluation.
I explain now in what sense the graphic beta move, or beta rule from chemlambda, corresponds to the beta reduction in the case of molecules which correspond to lambda terms.
Recall from part III the definition of the beta move
“
L[a1,a2,x] A[x,a4,a3] <–beta–> Arrow[a1,a3] Arrow[a4,a2]
or graphically
If we use the visual trick from the pedantic rant, we may depict the beta move as:
i.e. we use as free port variables the relative positions of the ports in the doodle. Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graphical level. The chemlambda graphs are not planar graphs.”
The beta reduction in lambda calculus looks like this:
(Lx.B) C –beta reduction–> B[x:=C]
Here B and C are lambda terms and B[x:=C] denotes the term which is obtained from B after we replace all the occurrences of x in B by the term C.
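For comparison with what the beta move does (and does not do), here is a minimal Python sketch of beta reduction with explicit substitution. The tuple encoding of terms is my own choice, not from the article, and it assumes bound variables have been renamed apart (as required above), so variable capture cannot occur.

```python
# Lambda terms encoded as nested tuples (an illustrative choice):
#   ("var", x) | ("app", A, B) | ("lam", x, A)

def substitute(term, x, c):
    """Replace every occurrence of variable x in term by the term c.
    Assumes bound variables are renamed apart, so no capture is possible."""
    tag = term[0]
    if tag == "var":
        return c if term[1] == x else term
    if tag == "app":
        return ("app", substitute(term[1], x, c), substitute(term[2], x, c))
    # tag == "lam": under the renaming-apart assumption, term[1] != x
    return ("lam", term[1], substitute(term[2], x, c))

def beta_reduce(term):
    """One beta step at the root: (Lx.B) C  -->  B[x:=C]."""
    assert term[0] == "app" and term[1][0] == "lam"
    _, (_, x, body), c = term
    return substitute(body, x, c)

# (Lx.xx) y  -->  yy
redex = ("app", ("lam", "x", ("app", ("var", "x"), ("var", "x"))), ("var", "y"))
print(beta_reduce(redex))  # prints ('app', ('var', 'y'), ('var', 'y'))
```

Note that the substitution walks the whole of B; it is exactly this global traversal that the beta move avoids, as explained below.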
I want to make clear the relation between the beta move and the beta reduction. Several things deserve to be mentioned.
It is of course expected that if we translate (Lx.B)C and B[x:=C] into g-patterns, then the beta move transforms the g-pattern of (Lx.B)C into the g-pattern of B[x:=C]. This is not exactly true; in fact it is true in a more detailed and interesting sense.
Before that, it is worth mentioning that the beta move applies even to patterns which don’t correspond to lambda terms. Hence the beta move has a greater range of application than the beta reduction!
Indeed, look at the third figure from this post, which can’t be a pattern coming from a lambda term. Written as a g-pattern this is L[a,b,c] A[c,b,d]. We can apply the beta move and it gives:
L[a,b,c] A[c,b,d] <-beta-> Arrow[a,d] Arrow[b,b]
which can be followed by a COMB move
Arrow[a,d] Arrow[b,b] <-comb-> Arrow[a,d] loop
Graphically it looks like this.
In particular this explains the need to have the loop and Arrow graphical elements.
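The two steps above can be simulated on g-patterns directly. This is a minimal sketch in Python (my own encoding: each graphical element is a tuple of its name and its port variables, with `("loop",)` standing for the loop element), not the actual chemlambda scripts.

```python
def beta(pattern):
    """Apply the beta move once: L[a1,a2,x] A[x,a4,a3] -> Arrow[a1,a3] Arrow[a4,a2]."""
    for lam in pattern:
        if lam[0] != "L":
            continue
        for app in pattern:
            if app[0] == "A" and app[1] == lam[3]:  # A's first in port = L's right.out
                a1, a2, a4, a3 = lam[1], lam[2], app[2], app[3]
                rest = [e for e in pattern if e is not lam and e is not app]
                return rest + [("Arrow", a1, a3), ("Arrow", a4, a2)]
    return pattern

def loops(pattern):
    """The comb move Arrow[x,x] <--comb--> loop, applied in the + direction."""
    return [("loop",) if e[0] == "Arrow" and e[1] == e[2] else e for e in pattern]

# The molecule L[a,b,c] A[c,b,d], which cannot come from a lambda term:
result = loops(beta([("L", "a", "b", "c"), ("A", "c", "b", "d")]))
print(result)  # [('Arrow', 'a', 'd'), ('loop',)]
```

The output is exactly the Arrow[a,d] plus the loop computed above.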
In chemlambda we make no effort to stay inside the collection of graphs which represent lambda terms. This is very important!
Another reason for this is related to the fact that we can’t check whether a pattern comes from a lambda term in a local way, in the sense that there is no local criterion (i.e. one involving an a priori bound on the number of graphical elements used) which describes the patterns coming from lambda terms. This is obvious from the previous observation that FOTREEs connect to the syntactic tree lower than their roots.
By contrast, chemlambda is a purely local graph rewrite system, in the sense that there is a bound on the number of graphical elements involved in any move.
This has as a consequence: there is no notion of a correct graph in chemlambda. Hence there is no correctness enforcement in the formalism. In this respect chemlambda differs from any other graph rewriting system which is used in relation to lambda calculus or, more generally, to functional programming.
Let’s go back to the beta reduction
(Lx.B) C –beta reduction–> B[x:=C]
Translated into g-patterns the term from the LEFT looks like this:
L[a,x,b] A[b,c,d] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]
where
The beta move does not need all this context, but we need it in order to explain in what sense the beta move does what the beta reduction does.
The beta move needs only the piece L[a,x,b] A[b,c,d]. It is a local move!
Look how the beta move acts:
L[a,x,b] A[b,c,d] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]
<-beta->
Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]
and then 2 comb moves:
Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]
<-2 comb->
C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]
Graphically this is:
The graphic beta move, as it looks on syntactic trees of lambda terms, was discovered in
Wadsworth, Christopher P. (1971). Semantics and Pragmatics of the Lambda Calculus. PhD thesis, Oxford University
This work is the origin of lazy, or call-by-need, evaluation in lambda calculus!
Indeed, the result of the beta move is not B[x:=C], because no substitution x:=C is performed in the reduction step.
In the lambda calculus world, as is well known, one has to supplement the lambda calculus with an evaluation strategy. The call-by-need evaluation explains how to perform the substitution x:=C in B in an optimized way.
From the chemlambda point of view on lambda calculus, a very interesting thing happens. The g-pattern obtained after the beta move (and obvious comb moves) is
C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]
or graphically
As you can see this is not a g-pattern which corresponds to a lambda term. That is because it has a FOTREE in the middle of it!
Thus the beta move applied to a g-pattern which represents a lambda term gives a g-pattern which can’t represent a lambda term.
The g-pattern which represents the lambda term B[x:=C] is
C[a1] …. C[aN] B[a1,...,aN,d]
or graphically
In graphic lambda calculus, or GLC, which is the parent of chemlambda, we pass from the graph which corresponds to the g-pattern
C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]
to the g-pattern of B[x:=C]
C[a1] …. C[aN] B[a1,...,aN,d]
by a GLOBAL FAN-OUT move, i.e. a graph rewrite which looks like this:
if C[x] is a g-pattern with no other free port than “x”, then
C[x] FOTREE[x, a1, ..., aN]
<-GLOBAL FAN-OUT->
C[a1] …. C[aN]
As you can see this is not a local move, because there is no a priori bound on the number of graphical elements involved in the move.
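The non-locality is easy to see in code. Here is a sketch of GLOBAL FAN-OUT in Python (my own tuple encoding of graphical elements; fresh internal names are made by suffixing, assuming such names are unused in the pattern):

```python
def global_fan_out(c_pattern, x, leaves):
    """GLOBAL FAN-OUT sketch: replace C[x] FOTREE[x,a1,...,aN] by the copies
    C[a1] ... C[aN].  Every internal port variable of C gets a fresh name in
    each copy.  Note the non-locality: the number of elements rewritten is
    the size of C times N, with no a priori bound."""
    copies = []
    for i, leaf in enumerate(leaves):
        ren = lambda p: leaf if p == x else f"{p}_{i}"  # fresh names, per copy
        copies += [(e[0],) + tuple(ren(p) for p in e[1:]) for e in c_pattern]
    return copies

# C = the identity combinator Lz.z as a g-pattern with free out port x:
identity = [("L", "v", "v", "x")]
print(global_fan_out(identity, "x", ["a1", "a2"]))
```

The loop over the whole of C, repeated N times, is exactly what a local move is forbidden to do.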
That is why I invented chemlambda, which has only local moves!
The evaluation strategy needed in lambda calculus to know when and how to do the substitution x:=C in B is replaced in chemlambda by SELF-MULTIPLICATION.
Indeed, this is because the g-pattern
C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]
surely has places where we can apply DIST moves (and perhaps later FAN-IN moves).
That is for the next post.
___________________________________________________
2. Lambda calculus terms as seen in chemlambda.
In this post it is explained how to associate to any untyped lambda calculus term a g-pattern.
Important. Not every g-pattern, i.e. not every pattern, and not every molecule from chemlambda is associated to a lambda term!
Recall first what an (untyped) lambda term is.
<lambda term> ::= <variable> | ( <lambda term> <lambda term> ) | ( L <variable> . <lambda term>)
The operation which associates to a pair of lambda terms A and B the term AB is called application.
The operation which associates to a variable x and a term A the term Lx.A is called (lambda) abstraction.
Every variable which appears in a term A is either bound or free. The variable x is bound if it appears under the scope of an abstraction, i.e. if there is a part of A of the form Lx.B.
It is allowed to rename the bound variables of a term. This is called alpha renaming or alpha conversion. Two terms which differ only by alpha renaming are considered to be the same.
It is then possible to rename the bound variables of a term such that if x is a bound variable then it appears under the scope of only one abstraction and moreover does not also appear as a free variable.
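This renaming-apart step can be sketched in a few lines of Python. The tuple encoding of terms below is my own choice (not part of the formalism), and fresh variables are drawn from a counter, assuming the names x1, x2, … are not already used as free variables.

```python
import itertools

def rename_apart(term, env=None, fresh=None):
    """Rename bound variables so that each abstraction binds its own variable,
    distinct from all others.  Terms are encoded as
    ("var", x) | ("app", A, B) | ("lam", x, A)."""
    env = {} if env is None else env
    fresh = fresh if fresh is not None else (f"x{i}" for i in itertools.count(1))
    tag = term[0]
    if tag == "var":
        return ("var", env.get(term[1], term[1]))
    if tag == "app":
        return ("app", rename_apart(term[1], env, fresh),
                       rename_apart(term[2], env, fresh))
    x = next(fresh)  # a fresh variable for this abstraction
    return ("lam", x, rename_apart(term[2], {**env, term[1]: x}, fresh))

# Lx.(x (Lx.x))  becomes  Lx1.(x1 (Lx2.x2))
term = ("lam", "x", ("app", ("var", "x"), ("lam", "x", ("var", "x"))))
print(rename_apart(term))
```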
What follows is an algorithm which transforms a lambda term, in this form which eliminates the ambiguities of the names of bound variables, into a g-pattern. See the post Conversion of lambda calculus terms into graphs for an algorithm which transforms a general lambda term into a GLC graph.
In this algorithm, a variable is said to be “fresh” if it does not appear before the step of the algorithm in question.
We start by declaring that we shall use (lambda term) variables as port variables.
Let Trans[a,A] be the translation operator, which has as input a variable and a lambda term and as output a mess (see part II for the definition of a mess: “A mess is any finite multiset of graphical elements in grammar version.”)
The algorithm defines Trans.
We start from an initial pair a0, A0 , such that a0 does not occur in A0.
Then we define Trans recursively by
Practically, Trans gives a version of the syntactic tree of the term, with some peculiarities related to the use of the grammar version of the graphical elements instead of the usual gates notation for the two operations, and also the strange orientation of the arrow of the lambda node which is decorated by the respective bound variable.
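The recursive clauses of Trans can be read off from the worked examples below. Here is my own reconstruction as a Python sketch (the tuple encoding of terms and graphical elements is an illustrative choice; fresh port variables are drawn from a counter):

```python
import itertools

fresh = (f"a{i}" for i in itertools.count(1))  # fresh port variables a1, a2, ...

def trans(a, term):
    """Trans[a, term]: outputs a mess, i.e. a list of graphical elements.
    term is ("var", x) | ("app", A, B) | ("lam", x, A)."""
    tag = term[0]
    if tag == "var":                # Trans[a, x] = Arrow[x, a]
        return [("Arrow", term[1], a)]
    if tag == "app":                # Trans[a, BC] = A[a1,a2,a] Trans[a1,B] Trans[a2,C]
        a1, a2 = next(fresh), next(fresh)
        return [("A", a1, a2, a)] + trans(a1, term[1]) + trans(a2, term[2])
    a1 = next(fresh)                # Trans[a, Lx.B] = L[a1,x,a] Trans[a1,B]
    return [("L", a1, term[1], a)] + trans(a1, term[2])

# Trans[a, Lx.xx]  (compare with the worked example below)
mess = trans("a", ("lam", "x", ("app", ("var", "x"), ("var", "x"))))
print(mess)
```

Running it reproduces L[a1,x,a] A[a2,a3,a1] Arrow[x,a2] Arrow[x,a3], the mess computed by hand below.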
Trans[a0,A0] is a mess and not a g-pattern because there may be (port) variables which occur more than twice. There are two possible cases for this:
Let’s see examples:
As you see, the port variable x appears 3 times: once as an out port variable, in L[a1,x,a], and twice as an in port variable, in Arrow[x,a2] Arrow[x,a3].
In this case the port variable z does not occur as an out port variable, but it appears twice as an in port variable, in Arrow[z,a4] Arrow[z,a6].
To pass from a mess to a g-pattern is easy now: we shall introduce fanout nodes.
Indeed, an FO tree with free in port a and free out ports a1, a2, …, aN is, by definition, ANY g-pattern formed by the rules:
Remark that by a sequence of CO-COMM and CO-ASSOC moves we can pass from any FO tree with free in port variable a and free out port variables a1, …, aN to any other FO tree with the same free in and out port variables.
For this reason we shall not choose a canonical FO tree associated to a pair formed by one free in port variable and a finite set of free out port variables. (However, in any algorithm where FO trees have to be constructed, such a choice will be embedded in the respective algorithm.)
In order to transform the mess which is output by the Trans operator, we have to solve the cases (a), (b) explained previously.
(a) Suppose that there is a port variable x which satisfies the description for (a), namely that x occurs once as an out port variable and more than once as an in port variable. Remark that, because of the definition of the Trans operator, the port variable x will appear at least twice in a list Arrow[x,a1] … Arrow[x,aN] and only once somewhere in a node L[b,x,c].
Pick then an FO tree FOTREE[x,a1,...,aN] with the only free in port variable x and the only free out port variables a1, …, aN. Erase then from the mess output by Trans the collection Arrow[x,a1] … Arrow[x,aN] and replace it by FOTREE[x,a1,...,aN].
In this way the port variable x will occur only once in an out port, namely in L[b,x,c], and only once in an in port, namely the first FO[x,...] element of the FO tree FOTREE[x,a1,...,aN].
Let’s see, for our example we have
Trans[a, Lx.xx] = L[a1,x,a] A[a2,a3,a1] Arrow[x,a2] Arrow[x,a3]
so the variable x appears at an out port in the node L[a1,x,a] and at in ports in the list Arrow[x,a2] Arrow[x,a3] .
There is only one FO tree with the free in port x and the free out ports a2, a3, namely FO[x,a2,a3]. Delete the list Arrow[x,a2] Arrow[x,a3] and replace it by FO[x,a2,a3]. This gives
L[a1,x,a] A[a2,a3,a1] FO[x,a2,a3]
which is a g-pattern! Here is what we do, graphically:
(b) Suppose that there is a port variable x which satisfies the description for (b), namely that x does not occur as an out port variable but it occurs more than once as an in port variable. Because of the definition of the Trans operator, it must be that x appears at least twice in a list Arrow[x,a1] … Arrow[x,aN] and nowhere else.
Pick then an FO tree FOTREE[x,a1,...,aN] with the only free in port variable x and the only free out port variables a1, …, aN.
Delete Arrow[x,a1] … Arrow[x,aN] and replace it by FOTREE[x,a1,...,aN] .
In this way the variable x will appear only once, as a free in port variable.
For our example, we have
Trans[a,(xz)(yz)] = A[a1,a2,a] A[a3,a4,a1] Arrow[x,a3] Arrow[z,a4] A[a5,a6,a2] Arrow[y,a5] Arrow[z,a6]
and the problem is with the port variable z which does not occur in any out port, but it does appear twice as an in port variable, namely in Arrow[z,a4] Arrow[z,a6] .
We delete Arrow[z,a4] Arrow[z,a6] and replace it by FO[z,a4,a6] and we get the g-pattern
A[a1,a2,a] A[a3,a4,a1] Arrow[x,a3] FO[z,a4,a6] A[a5,a6,a2] Arrow[y,a5]
In graphical version, here is what has been done:
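Both cases (a) and (b) amount to the same manipulation of the mess, which can be sketched as follows (my own encoding of graphical elements as tuples; internal port variables of the FO tree are made fresh by priming, assuming primed names are unused):

```python
def insert_fo_tree(mess, x):
    """Replace the list Arrow[x,b1] ... Arrow[x,bN] (N >= 2) by one FO tree
    with free in port x and free out ports b1,...,bN.  This builds one
    (left-combed) choice of FO tree; any other choice differs from it only
    by CO-COMM and CO-ASSOC moves."""
    outs = [e[2] for e in mess if e[0] == "Arrow" and e[1] == x]
    rest = [e for e in mess if not (e[0] == "Arrow" and e[1] == x)]
    tree, root = [], x
    while len(outs) > 2:
        u = root + "'"            # a fresh internal port variable
        tree.append(("FO", root, outs.pop(0), u))
        root = u
    tree.append(("FO", root, outs[0], outs[1]))
    return rest + tree

# The mess of Trans[a,(xz)(yz)] from the example above, with z duplicated:
mess = [("A", "a1", "a2", "a"), ("A", "a3", "a4", "a1"), ("Arrow", "x", "a3"),
        ("Arrow", "z", "a4"), ("A", "a5", "a6", "a2"), ("Arrow", "y", "a5"),
        ("Arrow", "z", "a6")]
print(insert_fo_tree(mess, "z"))
```

For N = 2 this inserts the single node FO[z,a4,a6], reproducing the g-pattern above.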
OK, we are almost done.
It may happen that there are out port variables which come from a subterm Lx.A with x not occurring in A. For example let’s start with a0=a and A0 = Lx.(Ly.x). Then Trans[a,Lx.(Ly.x)] = L[a1,x,a] Trans[a1, Ly.x] = L[a1,x,a] L[a2,y,a1] Trans[a2,x] = L[a1,x,a] Arrow[x,a2] L[a2,y,a1].
The port variable y appears only as an out port variable in an L node, here L[a2,y,a1], and nowhere else.
For those port variables x which appear only in an L[a,x,b] we add a termination node T[x].
In our example L[a1,x,a] Arrow[x,a2] L[a2,y,a1] becomes L[a1,x,a] Arrow[x,a2] L[a2,y,a1] T[y]. Graphically this is
We may still have Arrow elements which can be absorbed into the node ports; therefore we close the conversion algorithm by:
Apply the COMB moves (see part III) in the + direction and repeat until there is no place to apply them any more.
Exercise: Consider the Y combinator
Y = Lf.( (Lx. f(xx)) (Ly. f(yy)) )
Find its conversion as a g-pattern.
________________________________________________________________
Here is the portrait of the ideal collaborator:
Oh, and they can discuss across the border with quick-learning mathematicians.
ALTERNATIVELY:
If interested please call and let’s make stuff that counts!
______________________________________________________________
1. The chemlambda formalism continued: graph rewrites.
Now we have all we need to talk about graph rewrites.
For clarity, see part II for the notion of “pattern”. Its meaning depends on which version we use: the graphical version or the grammar version. In the graphical version a pattern is a chemlambda graph with the free ports and invisible nodes decorated with port variables. In the grammar version we have the equivalent notion of a g-pattern, which is a way to write a pattern as a multiset of graphical elements.
It is allowed to rename the port variables in a g-pattern, such that after the renaming we still get a g-pattern. That means that if M is a g-pattern and f is any one-to-one function from V(M) to another set of port variables A, then we may replace any port variable x from V(M) by f(x). We shall not think about this (sort of alpha renaming for g-patterns) as being a graph rewrite.
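This renaming of port variables is a one-liner once g-patterns are encoded as data; a minimal sketch (the tuple encoding of graphical elements is my own choice, not part of the formalism):

```python
def rename_ports(pattern, f):
    """Alpha renaming for g-patterns: f is a dict, assumed one-to-one on the
    port variables of the pattern; ports not in f are left unchanged."""
    return [(e[0],) + tuple(f.get(p, p) for p in e[1:]) for e in pattern]

# Rename a -> u and x -> v in L[a,x,b]:
print(rename_ports([("L", "a", "x", "b")], {"a": "u", "x": "v"}))  # [('L', 'u', 'v', 'b')]
```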
I shall use the following equivalent names further:
In simple words, a graph rewrite is a rule which says: “replace this pattern by that pattern”.
Let’s see more precisely what a graph rewrite is. (Technically this is a simple form of graph rewrite, which does not depend on context; later we may speak about more involved forms. First let’s understand exactly this simple form!)
In order to define a graph rewrite, or move, we need two g-patterns, call them the LEFT and the RIGHT pattern, such that (perhaps after a renaming of port variables):
A move is a pair of such g-patterns. The first is called the LEFT pattern of the move, the second is called the RIGHT pattern of the move.
The move can be performed from LEFT to RIGHT, called sometimes the “+” direction: replace the LEFT pattern by the RIGHT pattern.
Likewise, the move can be performed from RIGHT to LEFT, called sometimes the “-” direction: replace the RIGHT pattern with the LEFT pattern.
Technically, what I describe here can be made fully explicit as DPO graph rewriting.
Even if the moves are reversible (they can be performed in the + or – direction), there is a preference for using only the “+” direction (and for embedding, if needed, a move performed in the “-” direction into a sequence of moves, called a “macro”; more about this later).
The “+” direction is not arbitrarily defined.
_________________________________________________________
OK, enough with these preparations, let’s see the moves.
We shall write the moves in two ways, which are equivalent.
When expressed with g-patterns, they are written as
LEFT pattern <–name of move–> RIGHT pattern
When expressed with patterns (i.e. graphical), they appear as
The port names appear in blue. The name of the move appears in blue, the LEFT is on the left, the RIGHT is on the right, the move is figured by a blue arrow.
Pedantic, but perhaps useful rant. For some reason, there are people who confuse graphs (which are clearly defined mathematical objects) with their particular representations (i.e. doodles), taking them “literally”. Graphs are graphs and doodles are doodles. When people use doodles for reasoning with graphs, this is for economy of words reasons, the famous “a picture is worth a thousand words”. There is nothing wrong with using doodles for reasoning with graphs, as long as you know the convention used. Perhaps the convention is so intuitive that it would need 1000000 words to make it clear (for a machine), but however there is a simple criterion which helps those who don’t trust their sight: you got it right if you understand what the doodle means at the graph level.
Look again at the previous picture, which shows you what a generic move looks like. The move (from LEFT to RIGHT) consists of:
How simple is that?
To make it even simpler, we use the following visual trick: use the relative placements of the free ports in the doodle as the port variables.
If you look carefully at the previous picture, then you notice that you may redraw it (without affecting what the doodle means at the graph level) by representing the free ports of the RIGHT in the same relative positions as the free ports of the LEFT.
The drawing would then look like this:
Then you may notice that you don’t need to write the port variables on the doodles, because they have the same relative positions, so you may as well describe the move as:
This is the convention used everywhere in the doodles from this blog (and it’s nothing special, it’s used everywhere).
I shall close the pedantic rant by saying that there is a deep hypocrisy in the claim that there is ANY need to spend so many words to make clear things clear, like the distinction between graphs and doodles, and relative positions and so on. I ask those who think that text on a page is clear and a (well done) doodle is vague: do you attach to your text a perhaps sound introduction which explains that you are going to use latin letters, that no, a letter and its image in the mirror are not the same, that words are to be read from left to right, that space is the character which separates two words, that if you hit the end of a text line then you should pass to the next line, that a line is a sequence of characters separated by an invisible character eol, …? All this is good info for making a text editor, but you don’t need to program a text editor first in order to read a book (or to program a text editor). It would be just crazy, right? Our brains use exactly the same mechanisms to parse a doodle as a text page and a doodle as a depiction of a graph. Our brains understand very well that if you change the text fonts then you don’t change the text, and so on. A big hypocrisy, which I believe has big effects in the divide between various nerd subcultures, like IT and geometers, with a handicapping effect which manifests in real life, in the form of the products IT is offering. Well, end of rant.
Combing moves. These moves are not present in the original chemlambda formalism, because they are needed only at the level of the g-patterns. Recall from part I that Louis Kauffman proposed to use commutative polynomials as graphical elements, which brings the need to introduce the Arrow element Arrow[x,y]. This is the same as introducing invisible nodes in the chemlambda molecules (hence the passage from molecules to patterns). The combing moves are moves which eliminate (or add) invisible nodes in patterns. This corresponds in the graphical version to decorations (of those invisible nodes) on arrows of the molecules.
A combing move eliminates an invisible node (in the + direction) or adds an invisible node (in the – direction).
A first combing move is this:
Arrow[x,y] Arrow[y,z] <–comb–> Arrow[x,z]
or graphically remove (or add) a (decoration of an) invisible node :
Another combing move is:
Arrow[x,x] <–comb–> loop
or graphically: an arrow with the in and out ports connected is a loop.
Another family of combing moves says that if you connect an arrow to a port of a node then you can absorb the arrow into the port:
L[x,y,z] Arrow[u,x] <–comb–> L[u,y,z]
L[x,y,z] Arrow[y,u] <–comb–> L[x,u,z]
L[x,y,z] Arrow[z,u] <–comb–> L[x,y,u]
______________________________________
FO[x,y,z] Arrow[u,x] <–comb–> FO[u,y,z]
FO[x,y,z] Arrow[y,u] <–comb–> FO[x,u,z]
FO[x,y,z] Arrow[z,u] <–comb–> FO[x,y,u]
______________________________________
A[x,y,z] Arrow[u,x] <–comb–> A[u,y,z]
A[x,y,z] Arrow[u,y] <–comb–> A[x,u,z]
A[x,y,z] Arrow[z,u] <–comb–> A[x,y,u]
______________________________________
FI[x,y,z] Arrow[u,x] <–comb–> FI[u,y,z]
FI[x,y,z] Arrow[u,y] <–comb–> FI[x,u,z]
FI[x,y,z] Arrow[z,u] <–comb–> FI[x,y,u]
______________________________________
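For those who prefer code to doodles, here is a toy sketch of the combing moves in Python. The encoding of a g-pattern as a list of (element name, port variable list) pairs is my own ad-hoc convention, not part of the formalism. Since in a well-formed g-pattern a port variable appears at most twice, matching any other occurrence of an Arrow’s port variable finds the element it connects to:

```python
# Toy encoding (mine): a g-pattern is a list of (element name, port list) pairs,
# e.g. L[a,b,e] Arrow[e,f] A[f,d,c] is
# [("L", ["a", "b", "e"]), ("Arrow", ["e", "f"]), ("A", ["f", "d", "c"])]

def comb_absorb(pattern):
    """Apply +comb moves: absorb Arrow elements into adjacent ports.
    This also composes consecutive Arrows (the first combing move)."""
    pattern = list(pattern)
    changed = True
    while changed:
        changed = False
        for i, (name, ports) in enumerate(pattern):
            if name != "Arrow":
                continue
            x, y = ports
            for j, (name2, ports2) in enumerate(pattern):
                if j == i or name2 == "loop":
                    continue
                if x in ports2:    # some out port is tagged x: rename it to y
                    pattern[j] = (name2, [y if p == x else p for p in ports2])
                elif y in ports2:  # some in port is tagged y: rename it to x
                    pattern[j] = (name2, [x if p == y else p for p in ports2])
                else:
                    continue
                del pattern[i]
                changed = True
                break
            if changed:
                break
    return pattern
```

Applied to the example above it returns L[a,b,f] A[f,d,c], i.e. the invisible node between "e" and "f" is combed away.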
Now, more interesting moves.
The beta move. The name is inspired by the beta reduction of lambda calculus (explanations later).
L[a1,a2,x] A[x,a4,a3] <–beta–> Arrow[a1,a3] Arrow[a4,a2]
or graphically
If we use the visual trick from the pedantic rant, we may depict the beta move as:
i.e. we use as free port variables the relative positions of the ports in the doodle. Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graphical level. The chemlambda graphs are not planar graphs.
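In a toy Python encoding (mine, not part of the formalism) where a g-pattern is a list of (element name, port list) pairs, the beta move is a plain pattern match and replace:

```python
def beta(pattern):
    """One application of the beta move:
    L[a1,a2,x] A[x,a4,a3]  -->  Arrow[a1,a3] Arrow[a4,a2].
    Returns the rewritten pattern, or None if no match."""
    for i, (n1, p1) in enumerate(pattern):
        if n1 != "L":
            continue
        a1, a2, x = p1
        for j, (n2, p2) in enumerate(pattern):
            # the lambda's right.out must be wired to the application's left.in
            if n2 == "A" and p2[0] == x:
                _, a4, a3 = p2
                rest = [e for k, e in enumerate(pattern) if k not in (i, j)]
                return rest + [("Arrow", [a1, a3]), ("Arrow", [a4, a2])]
    return None
```

The two Arrow elements produced here would then be eaten by the combing moves.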
The FAN-IN move. This is a move which resembles the beta move.
FI[a1,a4,x] FO[x,a2,a3]
<–FAN-IN–>
Arrow[a1,a3] Arrow[a4,a2]
(I wrote it like this because it does not fit in one line)
Graphically, with the obvious convention from the pedantic rant, the move is this:
The FAN-OUT moves. There are two moves: CO-COMM (because it resembles a diagram which expresses co-commutativity) and CO-ASSOC (same reason, but for co-associativity).
FO[x,a1,a2] <–CO-COMM–> FO[x,a2,a1]
and
FO[a1,u,a2] FO[u,a3,a4]
<-CO-ASSOC->
FO[a1,a3,v] FO[v,a4,a2]
or graphically:
The DIST moves. These are called distributivity moves. Remark that the LEFT pattern is simpler than the RIGHT pattern in both moves.
A[a1,a4,u] FO[u,a2,a3]
<–DIST–>
FO[a1,a,b] FO[a4,c,d] A[a,c,a2] A[b,d,a3]
and
L[a1,a4,u] FO[u,a2,a3]
<–DIST–>
FI[a1,a,b] FO[a4,c,d] L[c,b,a2] L[d,a,a3]
or graphically:
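The DIST moves are the first ones whose right-hand side needs fresh port variables (a, b, c, d above). Here is a toy Python sketch of the application-fanout DIST, in an ad-hoc encoding of mine (a g-pattern as a list of (element name, port list) pairs), with fresh variables drawn from a counter:

```python
import itertools

_fresh = itertools.count()  # supply of fresh port variables v0, v1, ...

def dist_A(pattern):
    """One application of the DIST move for application over fanout:
    A[a1,a4,u] FO[u,a2,a3] --> FO[a1,a,b] FO[a4,c,d] A[a,c,a2] A[b,d,a3]."""
    for i, (n1, p1) in enumerate(pattern):
        if n1 != "A":
            continue
        a1, a4, u = p1
        for j, (n2, p2) in enumerate(pattern):
            if n2 == "FO" and p2[0] == u:
                _, a2, a3 = p2
                a, b, c, d = (f"v{next(_fresh)}" for _ in range(4))
                rest = [e for k, e in enumerate(pattern) if k not in (i, j)]
                return rest + [("FO", [a1, a, b]), ("FO", [a4, c, d]),
                               ("A", [a, c, a2]), ("A", [b, d, a3])]
    return None
```

The lambda-fanout DIST would be written the same way, only with the right-hand side FI[a1,a,b] FO[a4,c,d] L[c,b,a2] L[d,a,a3].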
The LOCAL PRUNING moves. These are used with the termination node. There are four moves:
FO[a1,a2,x] T[x] <–LOC-PR–> Arrow[a1,a2]
L[a1,x,y] T[x] T[y] <–LOC-PR–> T[a1]
FI[a1,a2,x] T[x] <–LOC-PR–> T[a1] T[a2]
A[a1,a2,x] T[x] <–LOC-PR–> T[a1] T[a2]
or graphically
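All four LOC-PR moves fit in one function, in the same style of toy Python sketch (g-pattern as a list of (element name, port list) pairs, an encoding of my own):

```python
def loc_prune(pattern):
    """Apply one LOC-PR move, for a node whose out port(s) feed a T node.
    Only the orientations listed above are handled in this sketch."""
    termed = {ports[0]: i for i, (n, ports) in enumerate(pattern) if n == "T"}
    for i, (n, p) in enumerate(pattern):
        # L[a1,x,y] T[x] T[y] --> T[a1]
        if n == "L" and p[1] in termed and p[2] in termed:
            drop = {i, termed[p[1]], termed[p[2]]}
            rest = [e for k, e in enumerate(pattern) if k not in drop]
            return rest + [("T", [p[0]])]
        # FO/FI/A with the third port terminated
        if n in ("FO", "FI", "A") and p[2] in termed:
            rest = [e for k, e in enumerate(pattern) if k not in (i, termed[p[2]])]
            if n == "FO":
                return rest + [("Arrow", [p[0], p[1]])]
            return rest + [("T", [p[0]]), ("T", [p[1]])]
    return None
```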
____________________________________________________________
“The proceedings of ALIFE 14 are now available from MIT Press. The full proceedings, as well as individual papers, are freely available under Creative Commons licenses.
http://mitpress.mit.edu/books/artificial-life-14
“
Great!
Here is a link to our published article.
______________________________________________________
I hope to make this presentation self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)
_________________________________________________________
This series of posts may be used as a longer, more detailed version of sections
from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], accepted at the ALIFE 14 conference, July 30 – August 2, 2014, Javits Center / SUNY Global Center, New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.
_________________________________________________________
1. The chemlambda formalism continued: molecules, patterns and g-patterns.
Chemlambda works with graphs which are called “molecules”. In the last post a grammar version of molecules was also proposed, based on the idea that a molecule is made of a finite number of graphical elements, each graphical element having ports, and the ports marked with “port variables”; connecting two ports (in the graphical version) is the same as repeating a port variable (in the grammar version of a molecule).
Here are the graphical elements, along with their grammar versions:
There is only one loop element. The orientation of the loop, as represented in a drawing, is not relevant!
_________________________________________________________
A chemlambda graph is any graph made by a finite number of graphical elements, such that there is no conflict of orientation of arrows (i.e. the ports named “in” may connect only with the ports named “out”).
By convention, an arrow graphical element which has no free port (i.e. which has the middle.in port and the middle.out port connected) is represented as an arrow between “invisible nodes”. The ports of an arrow element which are connected are called “invisible ports”.
A molecule is a chemlambda graph without invisible nodes.
By convention, we identify a chemlambda graph with the molecule obtained by erasing the invisible nodes.
A pattern is a chemlambda graph with the free ports and invisible nodes decorated with different port variables.
Let’s give a name for the grammar version of a pattern.
A mess is any finite multiset of graphical elements in grammar version. The port variables of a mess are the port variables which appear as arguments in the graphical elements of the mess. The set of port variables of a mess M is denoted by V(M).
A g-pattern is a mess with the following properties: every port variable appears at most twice and, if it appears twice, then it appears once in an “in” port and once in an “out” port.
Simple examples:
The set of free variables of a g-pattern M is the set of port variables which appear only once. This set is denoted by FV(M) and it decomposes into a disjoint union of FV_in(M) and FV_out(M): the free port variables which appear in an “in” port and the free port variables which appear in an “out” port.
There are g-patterns which have an empty set of port variables: for example V(loop) is the empty set.
The set of invisible variables of a g-pattern M, denoted by Inv(M), is made of those variables of M which are not free and which appear in one of the ports of an arrow element.
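These definitions translate directly into code. Here is a toy Python version (the encoding of a mess as a list of (element name, port list) pairs is mine, as is the IN_PORTS table recording which port positions are “in” ports):

```python
from collections import Counter

# which port positions of each element are "in" ports (my convention)
IN_PORTS = {"L": [0], "FO": [0], "A": [0, 1], "FI": [0, 1],
            "Arrow": [0], "T": [0], "loop": []}

def port_vars(mess):
    """V(M): the set of port variables appearing in the mess."""
    return {v for _, ports in mess for v in ports}

def free_vars(mess):
    """FV(M): the port variables which appear exactly once."""
    counts = Counter(v for _, ports in mess for v in ports)
    return {v for v, c in counts.items() if c == 1}

def free_in_vars(mess):
    """FV_in(M): the free port variables sitting in an "in" port."""
    fv = free_vars(mess)
    return {ports[k] for name, ports in mess
            for k in IN_PORTS[name] if ports[k] in fv}
```

FV_out(M) is then just FV(M) minus FV_in(M).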
As an illustration, consider this:
_________________________________________________________
Now, here is possibly a better idea, to explore, one which connects to a thread which is not developed for the moment (anybody interested? contact me): neural type computation with chemlambda and GLC.
The idea is this: once the initial configuration of actors and their initial states are set, why not move the actors around and make the possible reductions only if the actors :Alice and :Bob are in the same synapse server?
Because the actor IS the state of the actor, the rest of the stuff a GLC actor knows how to do is so trivially easy that it is not worth dedicating one program per actor, running in some fixed place. This way, a synapse server can do thousands of reductions on different actor datagrams (see further) at the same time.
Instead:
There is so much place for the artificial chemistry chemlambda at the bottom of the Internet layers that one can then add some learning mechanisms to the synapse servers. One is for example this: suppose that a synapse server matches two actor datagrams and finds that there is more than one possible reduction between them. Then the synapse server asks its neighbour synapse servers (which perhaps correspond to a virtual neuroglia) whether they encountered this configuration. It then chooses (according to a simple algorithm) which reduction to make, based on the info coming from its neighbours in the same glial domain, and tags the packets which result after the reduction (i.e. adds to them, in some field, a code for the move which was made). Successful choices are those which have descendants which are active, say after more than n reductions.
Plenty of possibilities, plenty of room at the bottom.
I hope to make this presentation as easy to follow as possible, particularly by trying to make it self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)
_________________________________________________________
1. The chemlambda formalism.
Chemlambda has been introduced with the name “chemical concrete machine” (as an allusion to Berry and Boudol CHAM) in the article M. Buliga, Chemical concrete machine, arXiv:1309.6914 [cs.FL] , also available on figshare here: (doi).
Chemlambda is a graph-rewrite system. From the linked wiki page, only slightly edited:
“Graph transformation, or graph rewriting, concerns the technique of creating a new graph out of an original graph algorithmically.
Graph transformations can be used as a computation abstraction. The basic idea is that the state of a computation can be represented as a graph, further steps in that computation can then be represented as transformation rules on that graph. Such rules consist of an original graph, which is to be matched to a subgraph in the complete state, and a replacing graph, which will replace the matched subgraph.
Formally, a graph rewriting system usually consists of a set of graph rewrite rules of the form L → R, with L being called the pattern graph (or left-hand side) and R being called the replacement graph (or right-hand side of the rule). A graph rewrite rule is applied to the host graph by searching for an occurrence of the pattern graph (pattern matching) and by replacing the found occurrence by an instance of the replacement graph.”
In order to define chemlambda we need two ingredients:
A chemlambda graph (aka a molecule) is any graph made by a finite number of the following graphical elements:
A BNF form would be:
<graphical-element> ::= <lambda> | <fanout> | <appl> | <fanin> | <arrow> | <loop> | <termin>
<middle.in>::= port variable
<middle.out>::= port variable
<left.in> ::= port variable
<left.out> ::= port variable
<right.in> ::= port variable
<right.out>::= port variable
<lambda> ::= L[<middle.in>,<left.out>,<right.out>]
<fanout>::= FO[<middle.in>,<left.out>,<right.out>]
<appl>::= A[<left.in>,<right.in>,<middle.out>]
<fanin>::=FI[<left.in>,<right.in>,<middle.out>]
<arrow>::= Arrow[<middle.in>,<middle.out>]
<loop>::= loop
<termin>::= T[<middle.in>]
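A hypothetical Python validator can serve as a sanity check on this BNF: it verifies the arities and the condition that a repeated port variable pairs an “in” port with an “out” port (no orientation conflict). The list-of-pairs encoding of a molecule is my own convention:

```python
ARITY = {"L": 3, "FO": 3, "A": 3, "FI": 3, "Arrow": 2, "T": 1, "loop": 0}
IN_PORTS = {"L": [0], "FO": [0], "A": [0, 1], "FI": [0, 1],
            "Arrow": [0], "T": [0], "loop": []}

def well_formed(mol):
    """Check element arities, that each port variable appears at most twice,
    and that a repeated variable connects an "in" port with an "out" port."""
    seen = {}  # port variable -> list of directions "in"/"out"
    for name, ports in mol:
        if name not in ARITY or len(ports) != ARITY[name]:
            return False
        for k, v in enumerate(ports):
            seen.setdefault(v, []).append("in" if k in IN_PORTS[name] else "out")
    return all(len(d) <= 2 and (len(d) < 2 or sorted(d) == ["in", "out"])
               for d in seen.values())
```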
This notation is inspired by Louis Kauffman’s proposal to use commutative polynomials for the graphical elements (then the variables of the polynomials play the role of the ports). On July 3rd 2014 Louis hacked the Mathematica symbolic reduction for commutative polynomial algebra in order to reduce the fixed point combinator in GLC automatically. I take his approach and use it here for the grammar notation of chemlambda molecules.
Let’s see first the names of the elements, then let’s discuss more details about them.
A shorter, geometrical way to say all this is that the 3valent nodes are all locally planar.
A chemlambda graph is called “molecule”, and it is made by a finite number of graphical elements, i.e.
<molecule> ::= <graphical-element> | <molecule> <molecule>
with the convention that:
If we use a grammar for this (some might like the 1000 words instead of the picture) that means that a <molecule> is well written if:
Question: why in graphical notation are needed only two colours for the 3valent nodes?
Answer: because a colour together with the port structure determines the node. The lambda node and the fanout node have the same port type (one in port, two out ports), therefore we colour the first red and the second green. Likewise, the fanin node and the application node have the same port type (two in ports, one out port), so we colour the first red and the second green.
Let’s see some simple examples. (You can click on the figure to make it bigger.)
First row: at the left is a molecule in graphical notation; at the right is the same molecule in grammar notation. Look at the arrow which appears vertical in the graphical notation at the left: where is it in the grammar notation? Well, look at the port variable “e”. It appears in the right.out port of the lambda node L[a,b,e] and also in the left.in port of the application node A[e,d,c].
In the grammar notation the same graph may be written as
L[a,b,e] Arrow[e,f] A[f,d,c]
but this is for next time, when we shall talk about the combing rewrites. (What will happen is that Arrow[e,f] will disappear, because the out port tagged with the variable “e” connects to the middle.in port of Arrow[e,f], which makes the Arrow element redundant.)
Second row: At the left a graph made by two arrows and two loops. Notice two things:
At the right we see the grammar notation of the same graph.
Third row: here is a more substantial graph (at the left) and its grammar notation at the right. There are 3 nodes: a fanout, a lambda and a termination node. Notice that the port variable “a” appears only in L[a,b,a], which corresponds to the fact that the right.out port and the middle.in port of that lambda node are connected (both being tagged with the port variable “a”).
_________________________________________________________
Here are the 1000 words which are needed to properly explain the notion of a molecule in chemlambda (i.e. if you don’t trust your sight). Such a description is of course useful for writing programs.
After this straightforward description, perhaps an extremely boring one because it can be immediately deduced from the short definition, next time we shall see the graph rewrites of chemlambda.
_________________________________________________________
Here are some quotes:
“Jean-Claude Bradley was one of the most influential open scientists of our time. He was an innovator in all that he did, from Open Education to bleeding edge Open Science; in 2006, he coined the phrase Open Notebook Science. His loss is felt deeply by friends and colleagues around the world.“
“Science, and science communication is in crisis. We need bold, simple visions to take us out of this, and Open Notebook Science (ONS) does exactly that. It:
Every word is true!
This is the future of research communication. Or at least the beginning of it. ONS has open, perpetual peer review as a subset.
Personal notes. Look at the left upper corner of this page, it reads:
chorasimilarity | computing with space | open notebook.
Yay! the time is coming! the weirdos who write on arXiv, now figshare, who use open notebooks, all as a replacement for legacy publication, will soon be mainstream :)
Now, seriously, let’s put some gamification into it, so those who ask “what IS a notebook?” can play too. They ARE the future. Hope that soon the Game of Research and Review, aka playing MMORPG games at the knowledge frontier, will emerge.
There are obvious reasons for that:
See also Notes for “Internet of Things not Internet of Objects”.
_________________________________________________
Please contribute, contradict and dispute me here or by private communications, of course under the constraints of long attention span and reasonable understanding of the subject.
_________________________________
If you want a category for chemlambda, then it is the category with the graph rewrites as arrows and the chemlambda graphs as objects.
That is the sloppy formulation. Here is one more precise. But before, let’s be sure you know what a chemlambda graph is and what a move (or graph rewrite) is.
A chemlambda graph (aka a molecule) is any graph made by a finite number of the following graphical elements:
The 3valent nodes are all locally planar. What is the meaning of this? Read the following comic strip (click on the image to make it bigger).
The list of moves (graph rewrites) of chemlambda is given in several places, for a short clear description read for example
M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 , accepted in the
ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems, July 30th – Aug 2nd 2014, New York
Let’s take one move, the graphic beta move: how does it function? See the following image.
________________________________
OK then, what is a category theory approach to chemlambda?
… (and, more importantly, to asynchronous decentralized computations with chemlambda?)
One possible formulation is the following.
1. You know that you can define a groupoid only in terms of arrows: as a family of items (arrows) with a partially defined composition operation, which takes a pair of arrows from the domain of definition and gives an arrow, and with a totally defined inverse operation, which takes an arrow and gives its inverse.
There are some axioms satisfied by these two operations.
You identify the objects of the groupoid with the unit arrows, i.e. those of the form a a^{-1}.
2. Take then a graph in chemlambda (call it G) and associate to it the groupoid R(G) which is generated by all the graph rewrites which can be done on G. It is a groupoid, because all graph rewrites are reversible (this gives the inverse) and because the composition of graph rewrites is the composition of arrows.
3. Remark that the objects of this groupoid are not simply the graphs which can be related to G by a finite sequence of reductions, but more. An object appears to be a graph with a selected subgraph which is the subject of the move.
4. There is more structure. Because each arrow is a graph rewrite, and each graph rewrite has a name, it follows that you have a decorated groupoid, with arrows decorated by words in the free group generated by the graph rewrite names.
5. You may want to privilege some directions of moves (move is a shorter name for graph rewrite) over the others. For example the beta move in the usual direction over the beta move in the opposite direction. But you may want to keep other directions as likely to be used, for example like in the
CO-COMM (co-commutativity move) and CO-ASSOC (co-associativity move).
All in all this may give several variants of supplementary structures, all coming actually from supplementary structure over the free group generated by the moves names. Among them:
5.1. Partial order relations, like: an object A is smaller than an object B if there is an arrow from A to B decorated only by positive or neutral moves.
5.2. Give to each move name a probability such that the probability for the move in the positive direction is greater than the probability for the move in the opposite direction, and 50-50 for the neutral moves. Then define random walks on the groupoid.
6. Now, there is more structure, but it may be misleading. Add a lattice structure to the objects, coming from the fact that a disjoint union of two chemlambda graphs is a chemlambda graph. This will produce more objects than before, because until now the objects are a graph with one place selected for a rewrite. You get a bigger groupoid, which admits as objects chemlambda graphs with more than one place selected for rewrites and new arrows which can be described as parallel composition of other arrows.
From this point on it matters very much what you choose to do, because it starts to suck. Here is why.
At point 6 is already introduced a global point of view, namely that there is any meaning to parallel composition (the problem is that in the real world there is no meaning of this composition in the absence of a global point of view).
Another criticism I have against such an approach is that the groupoid R(G) and most of the other structure introduced is global, in the sense that it is needed only for explaining the model by following the category fashion. As an outcome we obtain a God’s view of the model.
You don’t need this structure to define what a local move is.
Even worse, let’s make an analogy between chemlambda graphs and you, me, any other user of the net or any living being in the world, and between moves and any interactions between you, me and anybody else.
Then you don’t need to know how, in China, that donkey stumbled upon that rock while we have a meaningful conversation. Heck, you can’t even define what means that as we have this conversation that donkey in China stumbled upon that rock, unless you use some external, unneeded reference.
It makes even less sense to model this by saying that it does not matter, because our conversation is the same whether the donkey stumbled before you read this post or after you read this post.
Because such an explanation of a local, asynchronous interaction introduces by the back door the idea that a global reference is needed, and that “independent” only means that the succession relations with respect to this global reference do not matter.
_______________________________________________
Read carefully. It has a fractal structure.
Basics about chemlambda graphs and the GUI
The chemlambda graphs (molecules) are not flowcharts. One just has to specify certain (known) graphs with at most 4 nodes and how to replace them with other known simple graphs. That is all.
That means that one needs:
- a file which specifies what kind of graphs are used (by giving the type of nodes and arrows)
- a file which specifies which are the patterns (i.e. graphs) and which are the rewrite moves
- and a program which takes these files and a graph as input and does things: checks whether the graph is of the kind described in file 1, finds the patterns from file 2, does a rewrite in a place the user chooses, does a sequence of rewrites until it forks (if the user wants), or takes as input a lambda expression given by the user and transforms it into a graph.
- then there is the graph visualization program(s); that is the hard part, but it is already done in multiple places. It means that one only has to write the conversions of file formats from and to the viz tool.
That is the minimal configuration.
Decorations
There are various reasons why one wants to be able to decorate the graphs, locally, as a supplementary thing, but in no way is this needed for the basic process.
Concerning decorations, one needs a file which specifies how to decorate arrows and which are the relations coming from nodes. But this is not part of the computation.
If we want to make it more powerful then it gets more complex, because if we want to do symbolic computations with the decorations (like the elimination of a relation coming from a node) then it is probably better to output a file of decorations of arrows and relations from nodes and input it into symbolic software, like Mathematica or something free; there is no need to reinvent the wheel.
After the graph rewrite you lose the decoration; that’s part of the fun, which makes decorations less interesting and supposedly makes the computation more secure.
But that depends on the choice of decoration rules.
For example, if you decorate with types then you don’t lose the decoration after the move. Yes, there are nodes and arrows which disappear, but outside of the site where the move was applied the decorations don’t change.
In the particular case of using types as decorations, there is another phenomenon though. If you use the decoration with types for graphs which don’t represent lambda terms then you will get supplementary relations between types. A way of saying this is that some graphs are not well typed, meaning that the types form an algebra which is not free (you can’t eliminate all relations). But the algebra you get, albeit not free, is still an interesting object.
So the decoration procedure and the computation (reduction) procedure are orthogonal. You may decorate a fixed graph and do symbolic algebraic computations with the decorations, in an algebra generated by the graph, in the same way as a knot generates an algebra called a quandle. Or you may reduce the graph, irrespective of the decorations, and get another graph. Decorations of the first graph don’t persist, a priori, after the reduction.
An exception is decoration with types, which persists outside the place where the reduction is performed. But there is another problem, that even if the decoration with types satisfies locally (i.e. at the arrows of each node) what is expected from types, many (most) of the graphs don’t generate a free algebra, as it would be expected from algebras of types.
The first chemical interpretation
There is the objection that the reductions can’t be like chemical reactions because the nodes (atoms) can’t appear or disappear, there should be a conservation law for them.
Correct! What then?
The reduction, let’s pick one – say the beta move – is a chemical reaction of the graph (molecule) with an enzyme which in the formalism appears only under the name “beta enzyme” and is not specified as a graph in chemlambda. Then, during the reaction, some nodes may disappear, in the sense that they bond to the beta enzyme and make it inactive from then on.
So, the reduction A –>beta B appears as the reaction
A + beta = B + garbage
How moves are performed
Let’s get a bit detailed about what moves (graph rewrites) mean and how they are done. Every move says: replace L with R, where L and R are graphs with a small number of nodes and arrows (and such a “graph” may well be made of only two arrows, as is the case for the right-hand side of the beta move).
So, now you have a graph G. Then the program looks for L chunks in G and adds some annotation (perhaps in an annotation file it produces). Then there may be a script which inputs the graph G and the annotation file into the graph viz tool, with the effect, for example, that the L chunk appears phosphorescent on the screen. Or, say, when you hover with the mouse over the chunk it changes colour, or there is an ellipse which encircles it and a tag saying “beta”.
Suppose that the user clicks, giving his OK for performing the move. Then on the screen the graph changes, but the previous version is kept in memory, in case the user wants to go back (the moves are all reversible, but sometimes, as in the case of the beta move, the right-hand side R is too common, it is everywhere, so the use of both senses of some moves is forbidden in the formalism, unless they appear in a predefined sequence of moves, called a “macro move”).
Another example would be that the user clicks on a button which says “go along with the reductions as long as you can do it before you find a fork in the reduction process”. Then, of course it would be good to keep the intermediate graphs in memory.
Yet another example would be that of a node or arrow of the graph G which turn out to belong to two different interesting chunks. Then the user should be able to choose which reduction to do.
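The “go along with the reductions until a fork” button might be sketched as the following control loop. This is only a guess at the control flow; find_matches and apply_move stand for the pattern matcher and the rewriter, whatever their real implementations are:

```python
def reduce_until_fork(graph, find_matches, apply_move):
    """Keep reducing while exactly one move is available; stop at a fork
    (more than one match) or when fully reduced (no match). All
    intermediate graphs are kept, so the user can go back."""
    history = []
    while True:
        matches = find_matches(graph)
        if len(matches) != 1:
            return graph, history, matches  # [] = done, 2+ = fork: user chooses
        history.append(graph)
        graph = apply_move(graph, matches[0])
```

The loop works for any graph representation, since it only calls the two callbacks.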
It would be good to have the possibility to perform each move upon request,
plus
the possibility to perform a sequence of moves which starts from a first one chosen by the user (or from the only one available in the graph, as is the case for many graphs coming from lambda terms which are obtained by repeated currying and nesting)
plus
the possibility to define new, composed moves at once. For example, you notice that there is a chunk D which contains the left-hand side of a move; after performing that reduction inside D, the chunk D becomes D’; D’ now contains the left-hand side of another move, which can be reduced, so D’ becomes D”. Now, you may want to say: I save this sequence of two moves from D to D” as a new move. The addition of this new move does not change the formalism, because you may always replace the new move with the sequence of the two old moves.
Practically, the last possibility means the ability to add the new chunks D and D” to the file which describes the moves and to define the new move, with a name chosen by the user.
plus
Finally, you may want to be able to either select a chunk of the input graph by clicking on nodes and arrows, or to construct a graph and then say (i.e. click a button) that from now on that graph will be represented as a new type of node, with a certain arity. That means writing in the file which describes the type of nodes.
You may combine the last two procedures by saying that you select or construct a graph G. Then you notice that you may reduce it in an interesting way (for whatever further purposes) which looks like this:
- before the chain of reduction you may see the graph G as being made by two chunks A and B, with some arrows between some nodes from chunk A and some nodes from chunk B. After the reduction you look at the graph as being made by chunks C, D, E, say.
- Then you “save” your chunks A, B, C, D, E as new types of nodes (some of them may be of course just made by an old node, so no need to save them) and you define a new move which transforms the chunk AB into the chunk CDE (written like this only because of the 1D constraints of writing, but you see what I mean, right?).
The addition of these new nodes and moves don’t change the formalism, because there is a dictionary which transforms each new type of node into a graph made of old nodes and each new move into a succession of old moves.
How can this be done:
- use the definition of new nodes for the separation of G into A, B and for the definition of G’ (the graph after the sequence of reductions) into C,D,E
- save the sequence of moves from G to G’ as new composite move between G and G’
- produce a new move which replaces AB with CDE
It is interesting how this should work properly; probably one should keep both the move AB to CDE and the move G to G’, as well as the translations of G into AB and of G’ into CDE.
We’re getting close to actors, but the first purpose of the gui is not to be a sandbox for the distributed computation, that would be another level on top of that.
The value of the sequence of moves saved as a composite move is multiple:
- the chunk which starts the sequence contains the left-hand side of the first move, so it always leads to forks: one may apply the whole sequence or only the first move
- there may be a possible fork after you do the first reduction of the sequence, in the sense that there may be another chunk, of another move, which could be applied
GLC actors
The actors are a special kind of decoration which transform (some) moves (at the choice of the user) into interaction devices.
You decorate the nodes of a graph G with actors names (they are just names, for the moment, at your choice). As a convention let’s say that we denote actor names by :a , :b , etc
You also decorate arrows with pairs of names of actors, those coming from the decorations of nodes, with the convention that (:a, :a) is identified (in the user’s mind) with :a (nothing surprising here: think about the pair groupoid over a set X, whose arrows are the pairs of elements of X; the set X appears as the set of objects of the groupoid and it identifies with the set of pairs (x,x) with x in X).
Now, say you have a move from L to R. Then, as in the boldfaced previous description, but somehow in the opposite sense, you define graphs A and B such that L is AB, and graphs C and D such that R is CD.
Then you say that you can perform the reduction from L to R only if the nodes of A are all decorated with :a and the nodes of B are all decorated with :b, a name different from :a.
After reduction you decorate the nodes of C with :a and the nodes of D with :b .
In this way the actors with identities :a and :b change their state during the reduction (i.e. the graph made by the nodes decorated with :a and the arrows decorated with (:a,:a) change, same for :b).
The reduction can be done for the graph G only at chunks which are decorated as explained.
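The decoration condition is easy to state in code. A minimal sketch (names and encoding hypothetical): node_actor maps each node to the actor name decorating it, and chunk_a, chunk_b are the node lists of the chunks A and B:

```python
def can_reduce(node_actor, chunk_a, chunk_b):
    """The move fires only when all of chunk A belongs to one actor and
    all of chunk B belongs to a single, different actor."""
    actors_a = {node_actor[n] for n in chunk_a}
    actors_b = {node_actor[n] for n in chunk_b}
    return len(actors_a) == 1 and len(actors_b) == 1 and actors_a != actors_b
```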
To explain what actor :Bob is doing it matters from which point of view. Also, what is the relation between actors and the chemical interpretation, how they fit there?
So let’s take it methodically.
The point of view of the GUI
If we discuss from the point of view of playing with the gui, then the user of the gui has a global, God’s view over what happens. That means that the user of the gui can see the whole graph at one moment, and the user has a clock which is like a global notion of time. So from this point of view the user of the gui is the master of space and time. He sees the fates of :Bob, :Alice, :Claire, :Dan simultaneously. The user has the right, in the gui world, to talk about parallel stuff happening (i.e. “at the same time”) and sequential stuff happening (to the same actor or actors). The user may notice that some reductions are independent, in the sense that with respect to the user’s clock the result is the same whether first :Bob interacts with :Alice and then :Claire interacts with :Dan, or conversely, which makes the user think that there is some notion more general than parallelism, i.e. concurrency.
If we discuss from the point of view of :Bob, it looks different. More later.
Let’s stay at the user of the gui point of view and think about what actors do. We shall use the user’s clock for reference to time and the user’s point of view about space (what is represented on the screen via the viz tool) to speak about states of actors.
What the user does:
- he defines the graph types and the rules of reduction
- he inputs a graph
- he decorates it with actors names
- he clicks some buttons from time to time (deus ex machina quote: “is a plot device whereby a seemingly unsolvable problem is suddenly and abruptly resolved by the contrived and unexpected intervention of some new event, character, ability or object.”)
At any moment the actor :Bob has a state.
Definition: the state of :Bob at the moment t is the graph formed by the nodes decorated with the name :Bob, the arrows decorated by (:Bob, :Bob) and the arrows decorated by (:Bob, :Alice), etc .
Because each node is decorated by an actor name, it follows that different actors never share nodes, but they may share arrows: an arrow decorated (:Bob, :Alice) belongs to both :Bob and :Alice.
The user thinks about an arrow (:Bob, :Alice) as being made of two half arrows:
- one which starts at a node decorated with :Bob and has a free end, decorated with :Alice ; this half arrow belongs to the state of :Bob
- one which ends at a node decorated with :Alice and has a free start, decorated with :Bob ; this half arrow belongs to the state of :Alice
The user also thinks that the arrow decorated by (:Bob, :Alice) shows that :Bob and :Alice are neighbours there. What does “there” mean? It is like when you, Bob, park your car (the state of Bob): your front left tyre is close to the concrete kerb (i.e. :Alice), but you may consider also that your back is close to the trash bin (i.e. :Elvis).
We may represent the neighboring relations between arrows as a new graph, which is obtained by thinking about :Bob, :Alice, … as being nodes and by thinking that an arrow decorated (:Bob, :Alice) appears as an arrow from the node which represents :Bob to the node which represents :Alice (of course there may be several such arrows decorated (:Bob, :Alice) ).
This new graph is called “actors diagram” and is something used by the gui user to put order in his head and to explain to others the stuff happening there.
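For concreteness, here is a minimal sketch of how such an actors diagram could be computed from the arrow decorations (the pair-of-names representation as Python tuples is my assumption):

```python
# Each arrow of the big graph is decorated with a pair of actor names
# (owner of the start node, owner of the end node). The actors diagram
# has one node per actor and one arrow per decorated pair between
# distinct actors.

def actors_diagram(arrow_decorations):
    """Count, for each ordered pair of distinct actors, how many arrows
    of the big graph connect their pieces."""
    diagram = {}
    for a, b in arrow_decorations:
        if a != b:  # (:Bob, :Bob) arrows are internal to one actor
            diagram[(a, b)] = diagram.get((a, b), 0) + 1
    return diagram

arrows = [(":Bob", ":Alice"), (":Bob", ":Bob"),
          (":Bob", ":Alice"), (":Alice", ":Claire")]
print(actors_diagram(arrows))
# {(':Bob', ':Alice'): 2, (':Alice', ':Claire'): 1}
```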
The user calls the actors diagram “space”, because he thinks that space is nothing but the neighboring relation between actors at a moment in time (user’s time). He is aware that there is a problem with this view, which supposes that there is a global time notion and a global simultaneous view on the actors (states), but says “what the heck, I have to use a way to discuss with others about what’s happening in the gui world, but I will show great caution and restraint by trying to keep track of the effects of this global view on my explanation”.
Suppose now that there is an arrow decorated (:Bob, :Alice) and this arrow, along with the node at its start (decorated with :Bob) and the node at its end (decorated with :Alice), is part of the left pattern of one of the allowed graph rewrites.
Even more generally, suppose that there is a chunk of the form AB, with the sub-chunk A belonging to :Alice and the sub-chunk B belonging to :Bob.
Then the reduction may happen there. (Who initiates it? Alice, Bob, the user’s click ? let’s not care about this for a moment, although if we use the user’s point of view then Alice, Bob are passive and the user has the decision to click or not to click.)
This is like a chemical reaction which takes into consideration also the space. How?
Denote by Alice(t) and Bob(t) the respective states of :Alice and :Bob at the moment t. Think about the states as being two chemical molecules, instead of one as previously.
Each molecule has a reaction site: for Alice(t) the reaction site is A and for Bob(t) the reaction site is B.
They enter in the reaction if two conditions are satisfied:
- there is an enzyme (say the beta enzyme, if the reduction is the beta) which can facilitate the reaction (by the user’s click)
- the molecules are close in space, i.e. there is an arrow from A to B, or from B to A
So you see that it may happen that Alice(t) has inside a chunk which looks like A and Bob(t) has a chunk which looks like B, but if the chunks A, B are not connected such that AB forms a chunk like the left pattern of the beta move, then they can’t react, because (in the physical interpretation, say) they are not close in space.
The reaction sites of Alice(t) and Bob(t) may be close in space, but if the user does not click then they can’t react because there is no beta enzyme roaming around to facilitate the reaction.
If they are close and if there is a beta enzyme around then the reaction appears as
Alice(t) + Bob(t) + beta = Alice(t+1) + Bob(t+1) + garbage
Let’s see now who Alice(t+1) and Bob(t+1) are. The beta rewrite replaces the left pattern (which is AB) by the right pattern (which is CD). C will belong to Alice(t+1) and D will belong to Bob(t+1). The rest of Alice(t) and Bob(t) is inherited unchanged by Alice(t+1) and Bob(t+1).
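A sketch of this inheritance, treating each state as a plain set of node ids (my own toy representation, not a spec):

```python
# Alice(t+1) = (Alice(t) minus the reaction site A) plus C; same for Bob
# with B and D. Everything outside the reaction sites is inherited as-is.

def react(alice_t, bob_t, site_a, site_b, new_c, new_d):
    alice_next = (alice_t - site_a) | new_c
    bob_next = (bob_t - site_b) | new_d
    return alice_next, bob_next

alice_t, bob_t = {1, 2, 3}, {4, 5}
alice_next, bob_next = react(alice_t, bob_t, {2}, {5}, {9}, {10})
print(sorted(alice_next), sorted(bob_next))  # [1, 3, 9] [4, 10]
```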
Is this true? What about the actors diagram, will it change after the reaction?
Actually the left pattern, which is AB, may have (and usually does have) other arrows besides the ones decorated with (:Bob, :Alice). For example, A may have arrows from A to the rest of Alice(t), i.e. decorated with (:Alice, :Alice); the same goes for B, which may have other arrows from B to the rest of Bob(t), decorated by (:Bob, :Bob).
After the rewrite (chemical reaction) these arrows will be rewired by the replacement of AB by CD, but nevertheless the new arrows which replace them will be decorated by (:Alice, :Alice) (because they will become arrows from C to the rest of Alice(t+1)) and (:Bob, :Bob) (same argument). All in all we see that after the chemical reaction the molecule :Alice and the molecule :Bob may lose or gain some nodes (atoms) and they may suffer some internal rewiring (bonds), so it looks as if :Alice and :Bob changed their chemical composition.
But they also moved as an effect of the reaction.
Indeed, the left pattern, which is AB, may have other arrows besides the ones decorated with (:Bob, :Alice), (:Bob, :Bob) or (:Alice, :Alice). The chunk A (which belongs to Alice(t)) may have arrows which connect it with :Claire, i.e. there may be arrows from A to another actor, Claire, decorated with (:Alice, :Claire), for example.
After the reaction, which consists in the replacement of AB by CD, some rewirings happen, which may have as an effect the appearance of arrows decorated (:Bob, :Claire), for example. In such a case we say that Bob moved closer to Claire. The molecules move this way (i.e. in the sense that the neighboring relations change in this concrete way).
Pit stop
Let’s stop here for the moment, because there is already a lot. In the next message I hope to talk about why the idea of using a chemical reaction network (CRN) image is good, but still global: it is a way to replace the user’s deus ex machina clicks by the random availability of enzymes, while still using a global time and a global space (i.e. the actors diagrams). The model will also be better than the usual CRN-based models, where the molecules are supposed to be part of a “well stirred” solution (i.e. let’s neglect the effects of space on the reaction), or to diffuse in a fixed space (i.e. let’s make the space passive). The model will allow the introduction of global notions of entropy.
Such a CRN based model deserves a study for itself, because it is unusual in the way it describes the space and the chemical reactions of the molecules-actors as aspects of the same thing.
But we want to go even further, towards renouncing the global point of view.
In the previous post The Quantomatic GUI may be useful for chemlambda (NTC vs TC, III) I mentioned the Quantomatic GUI, which can easily do all this, but is not free. Moreover, the goals for the GUI proposed here are more modest and easily attainable. Don’t care about compatibility with this or that notion from category theory, because our approach is different and because the GUI is just step 0. So any quick and dirty, but effective, code will do, I believe.
______________________________
Recall the idea: gamification of chemlambda.
___________________________
For those with a non functional right hemisphere, here is the GraphML description of chemlambda graphs and what would mean to do a move.
I use this source for GraphML.
A chemlambda graph is any directed graph with two types of 3-valent nodes and one type of 1-valent node, the nodes having some attributes. [For mathematicians: it is an oriented graph made of trivalent, locally planar nodes and some 1-valent nodes, with arrows which may have free starts or free ends, or even be loops.]
Moreover, we accept arrows (i.e. directed edges) with free starts, free ends, or both, or with start = end and no node (i.e. loops with no node). For all of those we need, in GraphML, to use some “invisible” nodes [there are multiple variants here; only one is described].
Here are the trivalent, locally planar nodes:
<node id="n0" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n3" parse.indegree="2" parse.outdegree="1">
<data key="d0">red</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n1" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<node id="n2" parse.indegree="1" parse.outdegree="2">
<data key="d0">green</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">term</data> </node>
<node id="n5" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data> </node>
<node id="n6" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data> </node>
(where “invisible” should be a value we agree to use)
<node id="n7" parse.indegree="1" parse.outdegree="1">
<data key="d0">invisible</data> </node>
Uses of invisible nodes:
<edge source="n101" target="n6"/>
<edge source="n5" target="n102"/>
<edge source="n5" target="n6"/>
<edge source="n7" target="n7"/>
____________________________________________
Examples:
(a) – recognize pattern for beta move
(b) – perform (when called) the beta move
- input is a chemlambda graph (in GraphML)
- output is the same graph and some supplementary file of annotations of the graph.
<node id="n101" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n102" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<edge source="n102" target="n101" sourceport="right_out" targetport="left_in"/>
<edge source="n106" target="n102" sourceport="???_1" targetport="middle_in"/>
<edge source="n102" target="n103" sourceport="left_out" targetport="???_2"/>
<edge source="n105" target="n101" sourceport="???_3" targetport="right_in"/>
<edge source="n101" target="n104" sourceport="middle_out" targetport="???_4"/>
Here “???_i” means any port name.
If such a pattern is found, then the collection of id’s (of edges and nodes) from it is stored in some format in the annotations file which is the output.
When called, this program takes as input the graph and the annotation file for beta-move patterns, and then waits for the user to pick one of these patterns (perhaps the patterns are numbered in the annotation file, the user interface shows them on screen and the user clicks on one of them).
When the instance of the pattern is chosen by the user, the program erases the pattern from the input graph (or just comments it out, or first makes a copy of the input graph and then works on the copy, …) and replaces it by the following:
<edge source="n106" target="n104" sourceport="???_1" targetport="???_4"/>
<edge source="n105" target="n103" sourceport="???_3" targetport="???_2"/>
It works only when the nodes n101 and n102 are different from the nodes n103, n104, n105, n106, because otherwise the erasure leads to trouble. See the post Graphic beta move with details.
As an alternative one may proceed by introducing invisible nodes which serve as connection points for the arrows from n106 to n104 and from n105 to n103, then erase those nodes among n101, n102 which are not among n103, n104, n105, n106.
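As a sketch of steps (a) and (b) together, here is how the pattern search and the rewiring could look on a plain edge list (the tuple format and all function names are my assumptions; a real implementation would of course parse the GraphML):

```python
# Rough sketch of beta pattern recognition and rewriting. Edges are
# (source, sourceport, target, targetport) tuples; node types ('green'
# for application, 'red' for lambda) are given separately.

def find_beta(edges, node_type):
    """Look for a red (lambda) node whose right_out enters the left_in
    of a green (application) node."""
    for (s, sp, t, tp) in edges:
        if node_type.get(s) == "red" and sp == "right_out" \
           and node_type.get(t) == "green" and tp == "left_in":
            return (s, t)  # (n102, n101) in the text's numbering
    return None

def beta_rewrite(edges, red, green):
    """Erase the matched pair of nodes and rewire: whatever fed the red
    node's middle_in now connects to the green node's middle_out target,
    and whatever fed the green node's right_in now connects to the red
    node's left_out target."""
    into_red = next(e for e in edges if e[2] == red and e[3] == "middle_in")
    out_of_green = next(e for e in edges if e[0] == green and e[1] == "middle_out")
    left_of_red = next(e for e in edges if e[0] == red and e[1] == "left_out")
    right_of_green = next(e for e in edges if e[2] == green and e[3] == "right_in")
    kept = [e for e in edges if red not in (e[0], e[2]) and green not in (e[0], e[2])]
    kept.append((into_red[0], into_red[1], out_of_green[2], out_of_green[3]))
    kept.append((right_of_green[0], right_of_green[1], left_of_red[2], left_of_red[3]))
    return kept

node_type = {"n101": "green", "n102": "red"}
edges = [
    ("n102", "right_out", "n101", "left_in"),
    ("n106", "p1", "n102", "middle_in"),
    ("n102", "left_out", "n103", "p2"),
    ("n105", "p3", "n101", "right_in"),
    ("n101", "middle_out", "n104", "p4"),
]
match = find_beta(edges, node_type)
print(match)  # ('n102', 'n101')
print(beta_rewrite(edges, *match))
# [('n106', 'p1', 'n104', 'p4'), ('n105', 'p3', 'n103', 'p2')]
```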
___________________________________________
A concrete example:
<node id="n1" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n2" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n3" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>
<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>
[comment: these are the four free ends of arrows, numbered in the figure by "1", ..., "4". So you have a graph with two inputs and two outputs, definitely not a graph of a lambda term!]
<node id="n105" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n106" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
<node id="n108" parse.indegree="2" parse.outdegree="1">
<data key="d0">green</data>
<port name="middle_out"/>
<port name="left_in"/>
<port name="right_in"/>
</node>
[comment: these are 3 application nodes]
<node id="n107" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<node id="n109" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
<node id="n110" parse.indegree="1" parse.outdegree="2">
<data key="d0">red</data>
<port name="middle_in"/>
<port name="left_out"/>
<port name="right_out"/>
</node>
[comment: these are 3 lambda abstraction nodes]
<edge source="n1" target="n105" sourceport="out" targetport="right_in"/>
<edge source="n107" target="n105" sourceport="left_out" targetport="left_in"/>
<edge source="n105" target="n106" sourceport="middle_out" targetport="left_in"/>
<edge source="n2" target="n106" sourceport="out" targetport="right_in"/>
<edge source="n106" target="n107" sourceport="middle_out" targetport="middle_in"/>
<edge source="n107" target="n108" sourceport="right_out" targetport="left_in"/>
<edge source="n108" target="n109" sourceport="middle_out" targetport="middle_in"/>
<edge source="n110" target="n108" sourceport="right_out" targetport="right_in"/>
<edge source="n109" target="n110" sourceport="right_out" targetport="middle_in"/>
<edge source="n109" target="n4" sourceport="left_out" targetport="in"/>
<edge source="n110" target="n3" sourceport="left_out" targetport="in"/>
[comment: the result after performing the beta moves: the four free ends are now connected by just two arrows]
<node id="n1" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n2" parse.indegree="0" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="out"/>
</node>
<node id="n3" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>
<node id="n4" parse.indegree="1" parse.outdegree="0">
<data key="d0">invisible</data>
<port name="in"/>
</node>
<edge source="n1" target="n3" sourceport="out" targetport="in"/>
<edge source="n2" target="n4" sourceport="out" targetport="in"/>
____________________________________________
For this choice, which consists in using invisible nodes, a new program is needed (and it may be useful for other purposes later):
(c) arrow combing
The idea is that a sequence of arrows connected via 2-valent invisible nodes should count as an arrow in a chemlambda graph.
So this program does exactly this: if n1001 is different from n1002, it replaces the pattern
<node id="n7" parse.indegree="1" parse.outdegree="1">
<data key="d0">invisible</data>
<port name="in"/>
<port name="out"/>
</node>
<edge source="n1001" target="n7" sourceport="???_1" targetport="in"/>
<edge source="n7" target="n1002" sourceport="out" targetport="???_2"/>
by
<edge source="n1001" target="n1002" sourceport="???_1" targetport="???_2"/>
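A sketch of the combing program on a plain edge-list representation (the tuple format and names are illustrative assumptions, not a spec):

```python
# Splice out 2-valent invisible nodes: an in-arrow and an out-arrow
# through such a node are replaced by one direct arrow, repeatedly,
# until no spliceable invisible node remains.

def comb(edges, invisible):
    """edges: list of (source, sourceport, target, targetport);
    invisible: set of 2-valent invisible node ids."""
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        for n in list(invisible):
            ins = [e for e in edges if e[2] == n]
            outs = [e for e in edges if e[0] == n]
            if len(ins) == 1 and len(outs) == 1:
                (s, sp, _, _), (_, _, t, tp) = ins[0], outs[0]
                # the text requires n1001 different from n1002; also skip
                # loops through the invisible node itself
                if s != t and s != n and t != n:
                    edges.remove(ins[0])
                    edges.remove(outs[0])
                    edges.append((s, sp, t, tp))
                    invisible.discard(n)
                    changed = True
    return edges

chain = [("a", "out", "x", "in"), ("x", "out", "y", "in"), ("y", "out", "b", "in")]
print(comb(chain, {"x", "y"}))  # [('a', 'out', 'b', 'in')]
```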
_________________________________________
This is very good news for me, because I tend to consider that possibly complex tasks are simple; therefore, as any lazy mathematician will tell you, it is always good if some piece of work has been done before.
In the post A user interface for GLC I describe what we would need, and this corresponds, apparently, to a part of what the Quantomatic GUI can do.
See Aleks Kissinger Pictures of Processes: Automated Graph Rewriting for Monoidal Categories and Applications to Quantum Computing, DPhil thesis, [arXiv:1203.0202] , Chapter 9.
Without much ado, I shall comment on the differences, with the hope that the similarities are clear.
Differences. I shall use as an anchor for explaining the differences the homepage of Aleks Kissinger, because it is well written and straightforward to read. Of course, this will not mean much if you don’t know what I am talking about concerning NTC vs TC, chemlambda or distributed GLC.
1. Aleks explains
“But “box-and-wire” diagrams aren’t just for physics and multilinear algebra. In fact, they make sense whenever there is a notion a “map”, a procedure for composing maps (i.e. vertical composition), and a procedure for putting two maps “side-by-side” (horizontal composition). That is, this notation makes sense in any monoidal category.”
Here is a first difference from chemlambda: the chemlambda graphs are open graphs (in the sense that they also have arrows with one or both ends free, as well as loops), but otherwise a chemlambda graph does not have any preferred external order of examination. The arrows of the graph are not “wires” and the nodes are not “boxes”. There is no meaning to vertical or horizontal composition, because there is no vertical and no horizontal.
2. Another quote from Aleks page:
“In Open Graphs and Monoidal Theories, Lucas Dixon and I defined the category of open-graphs and described how double-pushout graph rewriting, as defined in e.g. Ehrig et al^{5}, can be applied to open-graphs.”
This marks the second basic difference, which consists in the effort we make NOT to go global. There is a very delicate boundary between staying local and taking a God’s point of view, and algebra crosses that boundary very quickly. Not that it breaks a law, but it adds extra baggage to the formalism, only for the need to explain things in the algebraic way.
Mind that it is very interesting to algebraize from God’s point of view, but it might also be interesting to see how far one can go without taking the global point of view.
3. Maybe as an effect of 1., they think in terms of processes, we think in terms of actors. This is not the same thing, but OMG how hard it is to effect, only via exchanges of words, the brain rewiring which is needed to understand this!
But this is not directly related to the GUI, which is only step 0 for the distributed GLC.
_______________________________________
Putting these differences aside, it is still clear that:
_______________________________________
What do you think about this?
_______________________________________
Mind that this is only a thought experiment, which might not be accurate in all aspects in its representation of the kind of computation with GLC or, more accurately, with chemlambda.
Imagine a large pipe, with a diameter of 1 m say, and 3 m long, to have an image. It is full of marbles, all identical in shape. It is so full that if one forces a marble in at one end then a marble (or sometimes more) has to get out at the other end.
Say Alice is at one end of the pipe and Bob is at the other end. They agreed previously to communicate in the most primitive manner, namely by spilling a small number (say ten) or a big number (say fifty) of marbles at their respective ends. The pipe contains maybe 10^5 or 10^6 marbles, so both these numbers are small.
There is also Claire who, for some reason, can’t see the ends of Alice and Bob, but the pipe has a window at the middle and Claire can see about 10% of the marbles from the pipe, those which are behind the window.
Let’s see how the marbles interact. Having the same shape, and because the pipe is full of them, they are in a local configuration which minimizes the volume (maybe not all of them, but here the analogy is mum about this). When a marble (or maybe several) is forced in at Alice’s end of the pipe, there are lots of movements which accommodate the new marbles with the old ones. The physics of marbles is known: it is the elastic contact between them, and there is a fact in the platonic sky which says that for any local portion of the pipe the momentum and energy are conserved, as well as the volume of the marbles. The global conservation of these quantities is an effect of the local ones (as anybody versed in the mechanics of continuous media can confirm).
Now, Claire can’t get anything from looking through the window. At best Claire notices complex small movements, but there is no clear way to tell how this happens (other than, if she looks at a small number of them, she might figure out the local mechanical ballet imposed by the conservation laws); nor are Alice’s marbles marching towards Bob’s end.
Claire can easily destroy the communication, for example by opening her window and taking out some buckets of marbles, or even by breaking the pipe. But this does not get Claire any closer to understanding what Alice and Bob are talking about.
Claire could of course claim that if the whole pipe were transparent, she could film the pipe and then reconstruct the communication. But in this case Claire would be the goddess of the pipe and nothing would be hidden from her. Alice and Bob would be her slaves, because Claire would be in a position equivalent to having a window at each end of the pipe.
__________________________
Comments:
Underneath there is just local interaction, via the moves which act on patterns of graphs split between actors. But this locality gives space, which is an emergent, global effect of these distinctions which communicate.
Two chemical molecules which react are one composite molecule which reduces itself, split between two actors (one per molecule). Saying that the molecules react when they are close is the same as saying that their associated actors interact when they are in the neighboring relation. And the reaction modifies not only the respective molecules, but also the neighboring relation between actors, i.e. the reaction makes the molecules move through space. The space is transformed, as well as the shape of the reactants, which from an emergent perspective looks as if the reactants move through some passive space.
Concretely, each actor has a piece of the big graph; two actors are neighbours if there is an arrow of the big graph which connects their respective pieces; the reduction moves can be applied only on patterns which are split between two actors; and as an effect, the reduction moves modify both the pieces and the arrows which connect the pieces, thus the neighbouring relation of the actors.
What we do in the distributed GLC project is to use actors to transform the Net into a space. It works exactly because space is an effect of locality, on one side, and of universal simple interactions (moves on graphs) on the other side.
__________________________________________
Aka OPEN …
We are getting close to a change, a psychological change: from indifference and disdain from the majority of (more or less established) researchers to a public acknowledgement of the stupidity and immorality of the procedure which is still in force.
[Rant, jump over if not interested into personal stuff.
Please take into consideration that even if I embrace with full heart these changes, I don't have any merit or real contribution to them, excepting modest posts here at chorasimilarity, under the tags cost of knowledge and open peer review. More than this, I suffered, like probably some of my colleagues, by choosing to publish mostly through arXiv and not playing the stupid game, which led to a very damaged career; but unfortunately I did not have the opportunity to create change through participation in the teams which are now shaping the future of OPEN whatever. Bravo for them, my best wishes for them, why not sometimes an honest criticism from my small point of view, and thanks for the feeling of revenge which I have, the "I was right" feeling which I hope will grow and grow, because really the research world is damaged to the bones by this incredible stupidity, maybe cupidity and surely lack of competence and care for the future manifested by a majority of leaders.
The second thing I want to mention is that even if I refer to "them", to a "majority", all these generalizations have to be nuanced by saying that, as always, as everywhere, the special ones, the creative ones, the salt and pepper of the research world are either excused or completely innocent. They are also everywhere, maybe many of them not in any position of strong influence (as in music, for example, the most well known musicians are almost never the best, but surely they are among the hardest working), but creating their stuff and possibly not really caring about these social aspects, because they are too deep into the platonic realm. None of them are the subject of, or part of, any "majority"; they are not "them" in any way.
The third point is that there may be a sloppy use of "young" and "old". This has nothing to do with physical age. It is true that every old moron was a young moron before. Every old opportunist was a young one some years earlier. Their numbers are continually replenished and we find them everywhere, much more present than the salt and pepper of the research community, and more among the good hard workers who are not really, seriously creative. No, young or old refers to the quality of the brain, not to physical age.
End of rant]
Back to the subject. From timid or rather lonely comments, we have now passed to stronger ones.
And the words are harder.
From Causes of the persistence of impact factor mania, by Arturo Casadevall and Ferric C. Fang,
“Science and scientists are currently afflicted by an epidemic of mania manifested by associating the value of research with the journal where the work is published rather than the content of the work itself. The mania is causing profound distortions in the way science is done that are deleterious to the overall scientific enterprise. In this essay, we consider the forces responsible for the persistence of the mania and conclude that it is maintained because it disproportionately benefits elements of the scientific enterprise, including certain well-established scientists, journals, and administrative interests.”
I fully agree with them; besides this, I consider very interesting their explanation that we face a manifestation of the tragedy of the commons.
From Academic self-publishing: a not-so-distant future, here is a big quote; it is too beautiful to crop:
“
A glimpse into the future
Erin is driving back home from the laboratory with a big smile on her face. After an exciting three-hour brainstorming session discussing the intracranial EEG data from her last experiment, she can’t wait to get her hands back on the manuscript. A new and unexpected interpretation of the findings seems to challenge a popular assumption about the role of sleep in declarative memory consolidation. She had been looking over the figures for more than a month without seeing a clear pattern. But now, thanks to a moment of insight by one of her colleagues, the pieces finally fit together and a new logic is emerging. She realizes it will be hard for the community to accept these new findings, but the methodology is solid and she is now convinced that this is the only reasonable explanation. She is so anxious to see what Axell’s group thinks about new evidence that refutes its theoretical model.
After a week’s hard work, the first draft is ready. All the figures and their long descriptive legends are in place, the literature review is exhaustive, the methodology is clear as a bell, and the conclusions situate the finding in the general context of the role of sleep in memory consolidation. Today, the group had a brief morning meeting to decide which colleagues they will ask to review their draft. Of course, they will ask Axell for his opinion and constructive criticism, but they also agree to invite Barber to confirm that the application of independent component analysis on the data was performed correctly, and Stogiannidis to comment on the modification of the memory consolidation scale. For a review of the general intracranial EEG methodology, the group decides to first approach Favril herself and, if she declines, they will ask Zhang, who recently reviewed the subject for Nature.
After the lunch break, Erin submits the manuscript to the university’s preprint repository that provides a DOI (digital object identifier) and an open attribution licence. When she hits the submit button, she feels a chill running down her spine. More than a year’s hard work is finally freely available to her peers and the public. The next important step is to invite the reviewers. She logs in to her LIBRE profile and inserts the metadata of the manuscript with a hyperlink to the repository version (see LIBRE, 2013). She then clicks the invite reviewer button and writes a quick personal message to Axell, briefly summarizing the main result of the study and why she thinks his opinion is vital for the debate this manuscript will spark. She then invites Stogiannidis to comment on the modification of the memory consolidation scale, and Barber, specifically asking him to check the application of independent component analysis, and also letting him know that all data are freely and openly available at Figshare. After finishing with the formal invitations, Erin tweets the LIBRE link to her followers and sends it as a personal message to specific colleagues from whom she would like to receive general comments. She can now relax. The word is out!
A couple of weeks later, Erin is back at work on the project. Both Favril and Zhang refused to review because of heavy work schedules, but Stogiannidis wrote an excellent report totally approving the modification of her scale. She even suggested a future collaboration to test the new version on a wider sample. Barber also submitted a brief review saying that he doesn’t find any caveats in the analysis and approves the methodology. As Erin expected, Axell didn’t take the new result lightly. He submitted a harsh critique, questioning both the methodology and the interpretation of the main findings. He even mentioned that there is a new paper by his group currently under journal review, reporting on a similar experiment with opposite results. Being pipped to the post and being second to report on this innovative experimental design, he must be really peeved, thinks Erin. She grins. Maybe he will learn the lesson and consider self-publishing next time. Anyway, Erin doesn’t worry too much as there are already two independent colleagues who have marked Axell’s review as biased on LIBRE. Last night, Xiu, Erin’s colleague, finished retouching one of the figures based on a very insightful comment by one of LIBRE’s readers, and today she will upload a new version of the manuscript, inviting some more reviewers.
Two months later, Erin’s paper is now in version number 4.0 and everyone in the group believes it is ready for submission to a journal and further dissemination. The issues raised by seven reviewers have now been adequately addressed, and Axell’s review has received six biased marks and two negative comments. In addition, the paper has attracted a lot of attention in the social media and has been downloaded dozens of times from the institutional repository and has been viewed just over 300 times in LIBRE. The International Journal for the Study of the Role of Sleep in Memory Consolidation has already been in touch with Erin and invited her to submit the paper to them, but everybody in the group thinks the work is of interest to an even wider audience and that it should be submitted to the International Journal for the Study of Memory Consolidation. It charges a little more – 200 euros – but it is slightly more esteemed in the field and well worth the extra outlay. The group is even considering sending the manuscript in parallel to other journals that embrace a broader neuroscience community, now that the group’s copyright and intellectual property rights have been protected. Anyway, what is important (and will count more in the grant proposal Erin plans to submit next year) is that the work has now been openly approved by seven experts in the field. She is also positive that this paper will attract ongoing reviews and that she may even be invited as an expert reviewer herself now that she is more visible in the field. A debate has started in her department about how much the reviewer’s track record should weigh in how future tenure decisions are evaluated, and she has been invited to give a talk on her experience with LIBRE and the versioning of the group’s manuscript, which has now become a dynamic paper (Perakakis et al., 2011).”
I love this, in all its details! I consider it among the best-written apologias for open peer review in particular. [See also, if you care, my post Open peer review as a service.]
From Your university is paying too much for journals, by Bjorn Brembs:
“Why are we paying to block public access to research, when we could save billions by allowing access?”
Oh, I’m sure that those in charge of these decisions have their reasons.
From the excellent We have met the enemy: part I, pusillanimous editors, by Mark C. Wilson:
“My conclusions, in the absence of further information: senior researchers by and large are too comfortable, too timid, too set in their ways, or too deluded to do what is needed for the good of the research enterprise as a whole. I realize that this may be considered offensive, but what else are the rest of us supposed to think, given everything written above? I have not even touched on the issue of hiring and promotions committees perpetuating myths about impact factors of journals, etc, which is another way in which senior researchers are letting the rest of us down”…
Read also the older but great We have met the enemy and it is us by Mark Johnston. I commented about it here.
What is your opinion about all this? It’s getting hotter.
_________________________________________
I read the reviews and my conclusion is that they are well done. The six reviewers all make good points and do a good job of identifying the strengths and weaknesses of the project.
Thank you, NSF, for this fair process. As the readers of this blog know, I am not in the habit of hiding my opinions about bad reviews, opinions which can sometimes be harsh. Seen from this point of view, my thanks look, I hope, all the more sincere.
So, what was the project about? Distributed computing, as in “GLC actors, artificial chemical connectomes, topological issues and knots”, arXiv:1312.4333 [cs.DC], which was branded as useful for secure computing. The project was submitted in January to the NSF Secure and Trustworthy Cyberspace (SaTC) program.
The point was to get funding that would allow the study of Distributed GLC, which is for the moment fundamental research. There are reasons to believe that distributed GLC may be good for secure computing, chief among them being that GLC (and chemlambda, actually the main focus of the research) is not based on the IT paradigm of gates and wires, but instead on something that can be described as signal transduction; see How is signal transduction different from information theory? There is another reason, now described by the words “no semantics“.
But basically, this is not naturally a project in secure computing. It may become one later, but for the moment the project consists in understanding asynchronous, decentralized computations performed by GLC actors and their biology-like behaviour. See What is new in distributed GLC?
Louis Kauffman and I are about to study this; he will present our paper Chemlambda, universality and self-multiplication, arXiv:1403.8046, at the ALIFE 14 conference.
There is much more to tell about this; parts have already been told here at chorasimilarity.
From this moment on, I believe that instead of thinking about security and secrecy, the project should be open to anybody who wishes to contribute, to use, or to criticize it. That’s the future anyway.
______________________________________________________
“the unexpected result that the theory of spectral triples does not apply to the Carnot manifolds in the way one would expect. [p. 11] “
i.e.
“We will prove in this thesis that any horizontal Dirac operator on an arbitrary Carnot manifold cannot be hypoelliptic. This is a big difference to the classical case, where any Dirac operator is elliptic. [p. 12]“
It appears that the author reduces the problem to Heisenberg groups. One solution, then, is to use
R. Beals, P.C. Greiner, Calculus on Heisenberg manifolds, Princeton University Press, 1988
which gives something resembling spectral triples, though not everything quite works; still:
“and show how hypoelliptic Heisenberg pseudodifferential operators furnishing a spectral triple and detecting in addition the Hausdorff dimension of the Heisenberg manifold can be constructed. We will suggest a few concrete operators, but it remains unclear whether one can detect or at least estimate the Carnot-Caratheodory metric from them. [p. 12]“
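For readers unfamiliar with the terminology, here is a brief reminder (my own paraphrase of the standard Connes definition, not taken from the thesis) of what a spectral triple is, which is the structure the thesis tries, and partly fails, to construct in the Carnot setting:

```latex
% A spectral triple $(\mathcal{A}, H, D)$ consists of an involutive algebra
% $\mathcal{A}$ represented by bounded operators on a Hilbert space $H$,
% together with a self-adjoint operator $D$ on $H$ such that:
\text{(i)}\quad [D, a] \text{ extends to a bounded operator on } H
  \text{ for all } a \in \mathcal{A},
\qquad
\text{(ii)}\quad (1 + D^2)^{-1/2} \text{ is compact.}
```

In the classical case one takes $\mathcal{A} = C^\infty(M)$, $H = L^2(M,S)$ and $D$ the Dirac operator on a compact spin manifold; ellipticity of $D$ is what yields the compact-resolvent condition (ii). The quoted result says that on a Carnot manifold the horizontal Dirac operator is never hypoelliptic, which is precisely what blocks the naive analogue of this construction.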
______________________________
This seems to be an excellent article; more than that, because it is a PhD dissertation, many things are written out clearly.
I am not surprised at all by this; it just means that, as in the case of metric currents, there is an ingredient in the theory of spectral triples which introduces some commutativity through the back door, which then interferes with the non-commutative analysis (or calculus).
Instead, I am more convinced than ever that the minimal (!) description of sub-riemannian manifolds, as models of a non-commutative analysis, is given by dilation structures, explained most recently in arXiv:1206.3093 [math.MG].
A corollary of this is: sub-riemannian geometry (i.e. the non-commutative analysis of dilation structures) is more non-commutative than non-commutative geometry.
I’m waiting for a negative result concerning the application of quantum groups to sub-riemannian geometry.
__________________________________________