The picture is a screenshot of the chemlambda gui available for download from this page.
The .mol file of the walker is available at this link.
What is this? It is a walker, as described in the ouroboros predecessor post, which walks on a train track generated by a Y combinator pair A-FOE.
But the strange thing which makes me stare at the picture is that I see a hexagonal structure connected to a quadrilateral one, or I recall that I have seen such structures, see for example the following crop from this source.
So it’s like the thymine-adenine pair (well, there is no pure adenine side, only half of it).
Then, I recall seeing all these structures before with the space view, including the one representing the connection between guanine and cytosine, but I have to look at the files to recover them (I have plenty of mol files already, hundreds, I need to classify all the interesting stuff I learned).
Is this a coincidence or not? I don’t know; it is tempting to make the connection between quines in chemlambda, as the ouroboros is, and self-maintaining molecules like DNA.
______________________________________________
http://www.youtube.com/watch?v=ZPzsSpkl8Fw
At this link you can download the tar archive which contains the gui I play with.
I would be grateful if as many people as possible downloaded it and spread it in as many places as possible, not only Google or other clouds.
Please send me a mail, or post a comment here if you did it, thank you! (Mind that if it is your first comment then it is moderated, so don’t worry if it does not appear immediately; maybe I am asleep or something like that.)
It surely works with Firefox. I have been told that it does not work with Safari or Chromium because, I learned, those browsers don’t allow pages to open local files.
The knitting is made of abstraction (lambda) nodes, in the middle, bordered by fanin nodes.
The knitting molecule appears after 3 steps; it can then be seen at the top of the figure.
The white small molecules are loops.
This pattern appeared after playing a bit more with switchers. Here is a video about switchers:
________________________________
because it gives many ideas, among them the following:
Ergo, the physical brain we see is the concrete embodiment of a universal computing seed, taking physical shape according to physical constraints.
That’s why I like this picture, because it gives ideas.
Here is the first realisation of a (mutant) Turing neuron in chemlambda
which can be seen as a continuation of the old, but open thread which started with this teaser and continued up to this post on B type neurons.
UPDATE: compare with
which uses the same mechanism for another purpose, the one of assembling a DNA like tape.
I know people hate seeing two ideas in the same post, but here is the second one:
If you combine the two ideas from this post, what gives?
_________________________________
_______________________
I put this stuff on G+ some days ago and now I can find it only if I look for it on purpose. Older and newer posts, those I can see. I can see colored lobsters, funny nerd jokes, propaganda pro and con legacy publishing, propaganda hidden behind grandpa’s way of communicating science, half baked philosophical ideas, but not my post which I made only two days ago. On my page, I mean, not elsewhere.
Thank you G+ for this, once again. (Note not for humans: this was ironic.) Don’t forget to draw another box next time when you think about a new algorithm.
A non-ironic thanks though for the very rare occasions when I did meet very interesting people and very interesting ideas there.
OK, to the matter now. But really, G+, what kind of algorithm do you use which keeps a lobster on my page but deletes a post about the Ackermann function?
That which provokes the disease is a lack of balance in the use of the various ingredients of the cartesian method. The imbalance is provoked by the effort to fit what is, what we understand and what we communicate about it into the mould of the era when the method was invented. It just does not fit, therefore it overflows in unexpected ways.
What is. Better to say: what is more, compared with the time when the cartesian method was invented. There is a new virtual world in the making. Huge quantities of structured data, alternative worlds evolved from seeds created by programmers, a whole new world of the Internet of Things in the making and, further away but really close nonetheless, a unification of the real world (defined as the one where Descartes lived and died) with these new, emerging ones.
The territory, suddenly, got much bigger.
What we understand. Better to say: what we understand more than before. A huge body of scientific facts and discoveries which don’t quite fit with one another. Quantum mechanics versus General Relativity has become an example for old boys and girls. We have models of the parts, but we don’t understand C. elegans with its 302 neurons. There is, though, a growing understanding of the fact that we do understand much more, but the tools each offer only a limited point of view. There are looming suspicions that data alone (what is) has more to tell us than data filtered by a theory. We do understand, or are starting to understand, that semantics is only a tool itself, a map maker, not the territory of what is.
The many maps we have don’t fit, we understand.
What we communicate. Better to say: we communicate today in ways unparalleled before. But what we communicate is a very small, rigidly formatted part of what we understand. It is hard to communicate science, and the channel constraints are damaging the message.
We have so much to communicate and the semantic maps don’t serve well this purpose.
Going back to the time of the cartesian method, we see that it was made as a tool for isolated minds and a very limited data inflow. More than this, the cartesian method is a collection of prescriptions about how to better understand and how to better communicate that understanding, which made it the rational choice for those times, but an irrational one for ours:
The book is then supposed to be distributed and multiplied by means which are not of interest to the author, and then to hit other minds in a unidirectional way.
Descartes writes a book, then somebody else writes another book where he challenges or supports the ideas of a Dead Descartes Book, then yet another one writes a new book which contains references to the Dead Somebody Else Book. That’s the way of the science and Descartes proposed a wonderful path which ensures that the various Books are well written and contain Text as a sort of a program which can be easily debugged.
Descartes’ rules apply in an indiscriminate way to what is, what we understand and what we communicate about it.
Evidence and details in these two posts:
UPDATE: The site is up again here and I made a second video with a gun which shoots in two directions
_______________________________________________
UPDATE: Here is the first video about it:
Continuing with the post
The first is a screenshot of the initial tape with bits on it (the tape contains “1010” as an example):
And here is what you get after 6 reductions done by the algorithm:
I’ll explain in a moment what I did.
First I wrote the tape.mol file which represents the initial molecule.
Then I used bash_main_viral_foe_bubbles.sh which can be downloaded from the explanations/downloads page of the visualizer.
The script waits for me to choose a .mol file, which I did by typing tape.mol at the prompt.
Then I typed firefox look.html & (use whatever browser with javascript enabled) to see the results.
UPDATE: Attention, I just found out that in some versions of Safari there is a problem with working with local files. I suspect, but if you know more then please tell me, that even if Safari does handle the file:// protocol and opens look.html, it does not handle well the part where look.html opens the json files file_0.json … file_10.json.
These json files are produced by the scripts, then they are turned into d3 force graphs by look.html.
So it may happen that Safari opens the file look.html, but when you click on the buttons to see the molecules in action, Safari fails to open the json files which look.html needs, so nothing happens further.
I don’t know yet any solution for this, other than “use firefox” for example. There should be an elegant one.
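One workaround I can think of (an assumption on my part, not something from the gui’s instructions): serve the folder over HTTP instead of opening look.html via file://, so that the browser is allowed to fetch the json files. A minimal sketch, assuming python3 is installed; GUI_DIR is a hypothetical variable standing for wherever you unpacked the archive:

```shell
# serve the directory that contains look.html and the file_*.json files;
# GUI_DIR is a hypothetical placeholder, replace it with your actual path
GUI_DIR="${GUI_DIR:-.}"
cd "$GUI_DIR"
python3 -m http.server 8000 >/dev/null 2>&1 &
SERVER_PID=$!
URL="http://localhost:8000/look.html"
echo "open $URL in the browser"
kill "$SERVER_PID" 2>/dev/null || true   # stop the server when done viewing
```

Served this way, the json files arrive over http:// rather than file://, which the stricter browsers accept.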
UPDATE 2: solved! just download playall.tar.gz .
_____
Now, everything (the scripts, libraries, the file look.html) is available at the said downloads page. This specific .mol file can be saved from the link provided.
I clicked on the “initial” button and I got a whirling molecule. I let it settle a bit and then used click and drag to position some of the atoms in a fashion more understandable in a still picture.
Then I took the screenshot with some generic software.
The same for the second picture, which shows what you get at step 6. I combed the two replicas of the tape (by click and drag) so that it is obvious that the replication went well.
Took a screenshot again.
And that’s it!
This is not yet part of the gallery of examples, which I recommend in particular for getting other mol files.
The bit which I used (i.e. the green-atoms molecule which sits on the “tape” in some places) appears in the example named “the bit propagation”.
__________________________________________________
Today I took it methodically and found two bugs which created that behaviour. One was that the priority choice was not covering all the possibilities in a correct way; the other was a pure programming bug.
So I corrected everything and checked that it works as expected on the examples from the gallery, and I also checked it on many other examples which I have, and it works well.
Of course, checking a program on examples alone means nothing without a proof that the algorithm works well. I have done that; now the priority choice is indeed well implemented.
What is funny is that by trying to correct the priority choice part I have found the other stupid bug.
OK, so now all the new tar files are marked with 06_10_2014.
And anyway, as everything is open here, you may compare the versions and arrive at your own conclusions.
Mind that in a sense there is still a bug in the main_viral from the “play” archive: it may happen that at some point the json file which is seen with look.html may be empty. This happens because in the “play” algorithm the loops are erased, and it is possible that a molecule reduces to a collection of loops, hence an empty json file eventually.
No problem with that, by using the “add_new” tar you have an algorithm which does not remove the loops (and also has the FOE node and its new moves). This one works perfectly now.
On a different subject: artificialagora.wordpress.com will launch soon. I don’t know yet if chorasimilarity and artificialagora will coexist, or if artificialagora will be the luxury version of chorasimilarity (I hope not).
But soon enough we’ll see this.
The chorasimilarity site is still read in a strange way for a blog, because old posts receive hits comparable with new ones. Not all posts are good, for example posts like this one, where I indulge in telling about updates and preparations.
______________________________
There are two different possible applications of chemlambda, each having a different answer for this question. By confusing these two applications we arrive at the confusion about the conception of space in chemlambda.
Application 1 concerns real chemistry and biology. It is this: suppose there exist real chemical molecules which behave like chemlambda molecules in reaction with other real substances (which play the role of the enzymes for the moves, invisible in chemlambda). Then, from the moment these real molecules and real enzymes are identified, we get *for free* a chemical computer, if we think small. If we think big, then we may hope that the real molecules are ubiquitous in biochemistry and, once the chemical reactions which represent the chemlambda moves are identified, we get for free a computational interpretation of big parts of biochemistry. Thinking big, this would mean that we come to grasp a fundamental manifestation of computation in biochemistry, which has nothing at all to do with numbers, or bits, or boolean gates, or channels and processes, all this garbage we carry from the (historically very limited) experience we have had with computation until now.
In this application 1, space is no mystery: it is the well-known 3D space, the vessel where real molecules roam. The interest here is not in “what is space?”, but “is life in some definite, clear way a computational thing?”.
Application 2 resembles physics more than biochemistry. It aims to answer the question: what is space? Ironically, from neuroscience we know clearly that living brains don’t relate to space in any way which involves coordinates and crunching numbers. However, even the most fundamental physics never escaped the realm of coordinates and implicit assumptions about backgrounds.
Until now. The idea proposed by application 2 of chemlambda is that space is nothing but a sort of a program.
I try to make this clear by using emergent algebras, and will continue this path, but here is the quick and dirty argument, which appears not to use emergent algebras, that chemlambda can explain space as a program.
(it does use them but this is a detail, pay attention to the main line.)
OK, so the artificial molecules in chemlambda are graphs. As graphs, they don’t need any space to exist, because everybody knows that a graph can be described in various ways (it is a data structure) and only embeddings of a graph in a space need, ahem… space.
Just graphs, encoded in .mol files, as used by the chemlambda visualiser I work on these days.
What you see on the screen when you use the visualiser is chemlambda as the main engine and some javascript salt and pepper, in order to impress our visually based monkey brains.
But, you see, chemlambda can do any computation, because it can do combinatory logic. The precise statement is that chemlambda with the reduction strategy which I call “the most stupid” is a universal computer.
That means that for any chain of reductions of a chemlambda molecule, there is another chemlambda molecule whose reductions describe the first-mentioned reductions AND the javascript (and whatnot) computation which represents the said chain of reductions on the screen.
What do you think about this bootstrapping?
__________________________
I hope that in the near future it will become THE SOUP. The distributed soup. The decentralized living soup.
Bookmark the page because content will be added on a daily basis!
_____________________________________________________
In this new version of chemlambda two new DIST moves appear, DIST-FI and DIST-FO, as well as modifications of FAN-IN and of the old DIST moves, which use the FOE node instead of FO in some places.
See it in action in the example on the correct self-multiplication of the S combinator, where the FOE node is yellow.
I am preparing a click and play tutorial on that.
You can go already to the gallery of examples. Look at them and play with the nice graphs!
But you can already play with the stuff which makes the graphs!
I shall explain in a moment how to do this. Before that, I write a very short description of what this is all about.
Chemlambda is an artificial chemistry like the Alchemy of Fontana and Buss, but with rather big differences. They (Fontana and Buss) basically say this: http://fontana.med.harvard.edu/www/Documents/WF/Papers/objects.pdf
The dream is the same, though, namely that if not all chemistry, maybe some parts of organic chemistry are used in real life like that, and not like in the bits-and-boolean-expressions-run-by-a-TM-automaton model.
There is no need for seeing these graphs (molecules) in the plane or in 3D space (well, in 3D they embed anyway, and maybe there are real chemicals which behave like this!) because graphs don’t need to be embedded somewhere to make sense. In particular, these graphs are not constrained to be planar.
_______________________________________
How to play with the chemlambda visualizer right away. Follow these steps:
bash main_viral.sh
and look what happens. You’ll be asked to choose a something.mol file. There are several in the tar.
You can play without being connected to the net.
The input files are called something.mol . I put some examples in the archive. You can write new ones like this; for this you have to read the post about the g-patterns notation.
In a something.mol file there is a list in plain text (with space as the separator character within a line and \n as the separator character between lines). It is the list of the graphical elements of the g-pattern, but with the [ , ] deleted.
Thus instead of writing A[1,2,3] FO[3,4,5] you write
A 1 2 3
FO 3 4 5
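Since that translation is purely mechanical, it can be scripted. Here is a small sketch in awk (my own quick hack, not one of the distributed scripts) which strips the [ , ] from a line of g-pattern elements and prints one graphical element per line, whatever the arity of each node:

```shell
# split each line at the closing brackets, then turn the remaining
# [ and , separators into spaces, printing one element per line
echo 'A[1,2,3] FO[3,4,5] T[6]' | awk '{
  n = split($0, toks, /\] */)
  for (i = 1; i <= n; i++)
    if (toks[i] != "") { gsub(/[][,]/, " ", toks[i]); print toks[i] }
}'
```

This prints the three mol lines A 1 2 3, FO 3 4 5 and T 6, ready to be saved into a something.mol file.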
Say you write a new .mol file, which is called blabla.mol. Save it and then type
bash main_viral.sh
A text will appear which asks you to choose a mol file. You type blabla.mol and hit enter. Then you look with look.html, as explained.
For more explanations of what it does, just open the file check_and_balance* in a text editor and read there. There are explanations inside.
If you look in the folder you will see the appearance of several files with names starting with temp_*. Open them to discover answers to some questions you may have.
For the moment, there is really no replacement for reading the chemlambda formalism. No chit-chat will suffice, I tell you from experience.
That is why, although I look for creative and open people to discuss it, I shall not engage in any meaningless stuff.
If you are creative yourself then you’ll understand and you shall not think the following applies to you.
OK, here goes the other part of the post.
I shall ignore naive questions (because I saw that most of the naive questions come from people who don’t like to really understand, so probably they won’t read my answers). Moreover, I shall not respond to any question which contains the word “bot”, because WTF is a bot anyway and what that has to do with more than a hundred posts about chemlambda and several articles? Nothing at all! You want “bots” then don’t waste my time.
However…
- before asking the next most stupid question, let me answer: no man, the graphs are not processes. No! No chance! No, it’s not related to categories. No, it’s not ZX. No, it has nothing to do with spiders, quantum diagrams and all this stuff; you know why? Because these graphs don’t represent processes.
- if you don’t know what a graph is, or if you deeply feel that a graph has to be embedded in some external physical space, then refresh your reading of the definition of a graph. You may as well go and read this post.
- if you don’t know what a graph rewrite is, then google it.
- if you don’t know what a “local move” or “local graph rewrite” is, it surely means that you have not read anything about chemlambda, but here is the answer: an N-local move is one which consists in replacing at most N nodes and edges. All moves of chemlambda are at most 10-local.
- if you don’t know what reduction strategy is used, then congrats, that’s the first intelligent question. For the moment, the strategy of reduction is the most stupid one (I call it like this, but it is brilliant compared to others), described here
Reduction strategy. For the moment I am using a sequential strategy of reduction, with priority choices, like this:
At each reduction step do the following:
The priority choice is called “viral”, meaning DIST > BETA > LOC-PR (local pruning).
For the moment the moves CO-COMM and CO-ASSOC are not used.
This is not yet distributed GLC; there is a ladder of other strategies to explore. I personally think that they are not a big deal, but I see from experience that this is not obvious. Moreover, it is very entertaining to see these strategies in action; all this gives me the occasion to learn new tricks, so in the future I shall add new strategies.
_________________________________
Tell me what you think, and most recommended is to play with it first.
____________________________________
If you want to make your own then go to the explanations page and download and follow the instructions.
There is a gallery of examples now!
UPDATE 2: …phew, the fact that the shell script which launches the gui is called “main_viral.sh” is related to the reduction strategy used; it has definitely nothing to do with the shellshock vulnerability.
___________________________________
You can download the awk file
and play with chemlambda with the priority choice “viral”. [see the UPDATE!]
This priority choice privileges the moves which increase the number of nodes over those which decrease it.
More concretely DIST>BETA>LOC-PR. It is one of the priority choices from the post When priority matters.
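To make the “viral” choice concrete, here is a toy awk sketch (mine, for illustration only; the real resolution inside check_and_balance.awk is more involved) which, given the types of conflicting proposed moves, keeps the one ranked highest by DIST > BETA > LOC-PR:

```shell
# rank the move types and keep the highest ranked among those in conflict
echo "BETA DIST LOC-PR" | awk '
  BEGIN { rank["DIST"] = 3; rank["BETA"] = 2; rank["LOC-PR"] = 1 }
  { best = $1
    for (i = 2; i <= NF; i++) if (rank[$i] > rank[best]) best = $i
    print best }'
# under the viral choice this prints DIST
```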
How to use it:
Look at data_8.mol , which is the file for the initial pattern from the post When priority matters. Here is also data_7.mol, which is the file containing the initial pattern from the post When priority does not matter.
awk -f check_and_balance.awk data_7.mol
to play with data_7.mol. Then type ls to find a number of files, each one starting with “temp_”.
The file temp_nodes_before is basically the same as the input file.
The file temp_proposed_moves has the proposed moves :) , before any priority choice and before any COMB moves.
The file temp_final_nodes has the result after one reduction step.
You may remark the appearance of new nodes, like
FRIN 17
which is an “invisible” node which has only one port (in this case named “17”), which is an “out” port. It signals that port 17 is free (it appears as a free “in” port; that is why FRIN, which caps it, has to have a paired “out” port).
FROUT 0
which is an invisible node with only one port (named “0” in this case), which is an “in” port. For similar reasons as before, it signals that 0 is a free “out” port.
This may change slightly the aspect of g-patterns, only in the sense that arrow elements with both ends free are replaced by pairs of FRIN and FROUT; for example, if
Arrow[ 17 , 0 ]
has both ports free, you shall see it in the temp_final_nodes as
FRIN 17
FROUT 0
Otherwise the FRIN-FROUT thing helps the understanding, in the sense that it makes the free ports visible.
awk -f check_and_balance.awk temp_final_nodes
and look again at
temp_nodes_before to see where you start in this reduction step
temp_proposed_moves to see the new moves proposed before any priority choice
temp_final_nodes to see the result.
And so on and so forth.
If you use data_7.mol or data_8.mol (or any g-pattern from this blog which is reduced by the “viral” priority choice) then you should see exactly what is described in the respective posts.
There is a small trick: when DIST moves are done, the script has a way to choose new names for the new edges which appear. The trick is that first it computes the max over existing port names (that is the variable “tutext”) and then it baptizes the new ports with tutext concatenated with “a”, with “b”, with “c” and with “d”. This way one can be sure that the new ports don’t have names which conflict with the old ports.
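In awk the trick looks roughly like this (a simplified sketch of my own; it assumes the port names are numeric, while in the real script the max is taken over the port names as they come):

```shell
# compute the max over the (numeric) port names in a mol file, then
# derive four fresh, conflict-free names for the edges created by DIST
printf 'A 1 2 3\nFO 3 4 5\n' | awk '
  { for (i = 2; i <= NF; i++) if ($i + 0 > max) max = $i + 0 }
  END { n = split("a b c d", suf, " ")
        for (k = 1; k <= n; k++) print max suf[k] }'
# prints 5a 5b 5c 5d, one per line
```

Because every fresh name starts with the current maximum, none of them can collide with an existing port name.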
I don’t have a visualizer for this yet, but I am working (mostly to understand it) on using d3 for this.
UPDATE (20.09.2014): I can see my first molecule during reduction, basically using this and the json file produced by the script.
It represents (Lx.y) Omega, where Omega = (Lx.xx)(Lx.xx).
I can move it and play with it, but I still have to control the colors, the ports, the oriented edges. Soon.
Enjoy! Criticize! Contribute!
_____________________________________________________________
Look again at the move R2 of graphic lambda calculus.
The epsilon and mu are, originally, elements of a commutative group. Suggestions have been made repeatedly that the commutative group can be anything.
The epsilon and mu are port names, just like the red 1, 2, 3 from the figure.
In the more recent drawing conventions (not that that matters for the formalism) the port names are in blue.
Here is again the same move, but without the epsilon and mu.
Of course, the green node is the fanout, in the chemlambda version.
Yes, eventually, everything is related to everything in this open notebook.
In the next posts I shall take it step by step.
________________________________________________________________
Hey, everybody has a limited understanding, here is mine!
TL;DR> The crux of the matter is in this part of any recent CC 4.0 licence: in Section 2/Scope/a. Licence grant/5.
The new trend in academic publishing is:
It matters very much because that is what happens in the dispute between Amazon and Hachette, namely Hachette has the copyright of books but Amazon puts downstream restrictions!
Conclusion: never forget about Doctorow’s first law and always ask for a CC licence from any publisher!
Doctorow’s first law:
“Any time someone puts a lock on something that belongs to you, and won’t give you the key, you can be sure that the lock isn’t there for your benefit.”
This is from the very clear explanation about the Amazon and Hachette dispute by Cory Doctorow in Locus.
______________________________________________________________
Evidence now.
I made this post on G+, asking for info. I collect here the stuff:
Other things:
UPDATE 16.09.2014: See the post AAAS vies for the title the “Darth Vadar of publishing” by longpd. “They claim to support open access. They redefine it to be a pay for publishing charge (APC) of $3,000 USD and that restricts the subsequent use of the information in the article preventing commercial reuses such as publication on some educational blogs, incorporation into educational material, as well the use of this information by small to medium enterprises. If you really meant open access, the way the rest of world defines it, you’ll have to pay a surcharge of an additional $1,000. But it gets worse.”
_____________________________________________________________
Principle. I am using the g-patterns formalism to separate the reduction of molecules from their visual representation. For those who want to know more: here is the definition of g-patterns and here is the definition of moves in terms of g-patterns.
Reduction strategy. For the moment I am using a sequential strategy of reduction, with priority choices, like this:
At each reduction step do the following:
There is a subtlety concerning the use of CO-COMM and CO-ASSOC moves, related to the fact that I don’t want to use them directly, and related to the goal which I have, which may be one of those:
Where I am now. I wrote some shell scripts, using awk and perl, to do the first strategy and I know what to do to have the second strategy as well.
Mind that I learn as I do, so probably the main shell script should be named frankenstein.sh.
What I need next. The format of g-patterns can be easily turned into a format (I lean towards json) which can then be visualized as a force-directed d3 graph. I need help here; I know there are lots of things already done. The main idea is that it should be something which doesn’t use java, has to be free, and ideally needs only the program I prepare (which will be freely available) and a browser.
What I need after. Several things:
That’s it for the moment, I APPRECIATE USEFUL HELP, thank you.
______________________________________________________________
In that post we see two possible reductions, depending on the PRIORITY CHOICE, either BETA>DIST or DIST>BETA.
In the case BETA>DIST the reduction stops quickly.
On the contrary, in the case DIST>BETA the reduction does not stop, because it enters a cyclic process which produces an unbounded number of bubbles (i.e. loop graphical elements).
Moreover, we start from the g-pattern form of the combinator (Lx.xx)(Lx.xx).
Now, this may lead to the false impression that somehow this has something to do with the choice between normal order reduction and applicative order reduction in lambda calculus.
Yes, because a standard example of the difference between these reduction strategies is the following one.
Let’s denote by Omega the combinator (Lx.xx)(Lx.xx). Consider then the term
(Lx.z) Omega
Under the normal order reduction this term suffers one beta reduction
(Lx.z) Omega –BETA–> z
and that’s all, the reduction stops.
On the contrary, under the applicative order reduction strategy, the reduction never stops, because we first try to reduce Omega, leading to a cycle
(Lx.z) Omega –BETA–> (Lx.z) Omega
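The difference between the two strategies can be mimicked even in the shell, where an argument is passed as an inert string unless you explicitly run it, so it behaves like a call-by-name thunk (my analogy only, nothing official):

```shell
omega() { omega; }        # (Lx.xx)(Lx.xx): recurses forever if ever invoked
const_z() { echo "z"; }   # Lx.z: ignores its argument entirely
# normal order: hand over the name "omega" without running it
const_z omega             # prints z; omega is never called
# applicative order would be const_z "$(omega)": it forces omega first and never terminates
```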
The question is: is there any connection between these two phenomena?
No, not the slightest.
In order to prove this I shall reduce in chemlambda, with the sequential strategy, the g-pattern which represents the term (Lx.z) Omega. Let’s see what happens, but first let me recall what we do.
See the 1st part and 2nd part of the description of the conversion of lambda terms into g-patterns.
The sequential strategy is described by the following algorithm. I write it again because the g-pattern of Lx.z brings a termination node “T”, therefore we have to consider also the local pruning moves LOC-PR.
See the post about definition of moves with g-patterns.
The algorithm of the sequential reduction strategy is this. At each reduction step do the following:
The PRIORITY CHOICE means a predefined choice between doing one of the two moves in conflict. The conflict may be between BETA and DIST, between BETA and LOC-PR or between DIST and LOC-PR.
In the following we shall talk about the PRIORITY CHOICE only if needed.
In the first picture we see, in the upper side, the g-pattern which represents the term (Lx.z) Omega, then the first reduction step.
I kept the same names for the ports from the last post and I added new names for the ports of the new graphical elements.
First, remark that the g-pattern which represents (Lx.z) is
L[z,n2,n1] T[n2]
I named by “z” one of the ports of the lambda node L, the one which would correspond to the variable z of the term Lx.z. But recall that chemlambda does not use variable names, so the name “z” is there only by my choice of names for the port variables; it could be anything (not used before as one of the g-pattern’s port names).
Then, A[n1,1,0] corresponds to the application of something linked to the port n1 (namely the g-pattern of (Lx.z), i.e. L[z,n2,n1] T[n2]) to something linked to the port 1 (i.e. the g-pattern of Omega, which was discussed in the post “When priority matters”).
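For the record, the same thing in the mol notation from earlier (only the (Lx.z) part together with its application node; the Omega part, linked at port 1, is the one from the previous post) reads:

L z n2 n1
T n2
A n1 1 0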
Nice! What happened?
So, in a sense, this looks like the result of the normal order reduction, but no priority choice was involved!
However, the chemlambda sequential reduction continues, like explained in the picture of the 2nd reduction step.
OK, the Arrow[z,0] still exists after the reduction step, and a LOC-PR move appears.
Let’s see what happens in the 3rd reduction step.
The reduction stops here. There is nothing more to do, according to the sequential reduction strategy.
Differently from the reduction of Omega alone, explained in the previous post, this time there is NO PRIORITY CHOICE NEEDED.
Ergo, the priority choice has nothing to do here. The sequential chemlambda reduction of the g-pattern corresponding to (Lx.z) Omega stops after 3 steps, no matter which was the PRIORITY CHOICE made before the start of the computation.
_________________________________________________________
The goal is to see how the g-pattern of the combinator
(Lx.xx) (Lx.xx)
reduces in chemlambda with the sequential strategy.
See the 1st part and 2nd part of the description of the conversion of lambda terms into g-patterns.
The simple sequential strategy is this: at each reduction step do the following:
The PRIORITY CHOICE means a predefined choice between doing one of the two moves in conflict.
In this post it will be about the priority between BETA and DIST moves.
Mind that the PRIORITY CHOICE is fixed before the start of the computation.
However, in the following I shall mention the choice when it will be needed.
OK, so let’s start with the g-pattern which represents the well known combinator (Lx.xx) (Lx.xx). It is clear that as a lambda term it has no normal form, because it transforms into itself by a beta reduction (so it is a sort of quine, if quines had an interesting definition in lambda calculus).
As previously, you shall see that we quickly depart from the lambda calculus realm, going nevertheless in some straightforward directions.
The first figure describes the first reduction step.
The g-pattern obtained after this first step is the one which appears as the starting point of the Metabolism of loops post.
The 2nd step is described in the next picture:
Technically we are already outside lambda calculus, because of the fanin node FI[15,12,6]. (We don’t split the computation into pure reduction and pure self-multiplication.)
Let’s see the 3rd step.
Look well at the g-pattern which we get after the 3rd step, you’ll see it again, maybe!
The 4th step is the one which will prepare the path to conflict.
In the 5th step we have conflict:
The 5th step finishes in a different manner, depending on the PRIORITY CHOICE (which is fixed from the beginning of the computation).
Let’s suppose that we choose DIST over BETA. Then the 5th step looks like this:
Wow, the g-pattern after the 5th step is the same as the g-pattern after the 3rd step, with a loop graphical element added.
This means that from here on the computation will look like the 4th step, then the 5th step again (with the same priority choice, which is fixed!). A new loop will be generated each cycle and the computation will never stop, producing an endless string of loops.
Bubbles!
Now, let’s see what happens if the PRIORITY CHOICE is BETA over DIST.
Then the 5th step looks like this:
The 5th step produced 2 loops and the shortest ouroboros, a fanout node with one out port connected to the in port, namely FO[13,1,13].
The computation then stops!
______________________________________________________
So, depending on the priority choice, we have either a computation which produces bubbles without end, or a computation which stops.
It is logical. Indeed, if the priority choice is DIST over BETA, this induces the choice of increasing the number of nodes of the g-pattern. From here it may happen, as is the case in this example, that a cyclic behaviour is induced.
On the other hand, the priority choice BETA over DIST decreases the number of nodes, thus increasing the chances that the computation eventually stops.
Both choices are good; it depends on what we want to do with them. If we want to compute with graphs resembling chemlambda quines, because they look like living organisms with a metabolism, then DIST over BETA is a good choice.
If we want to have a computation which stops (dies, would say a biologist) then BETA over DIST seems better.
_____________________________________________________
In chemlambda with the sequential strategy, a quine is a g-pattern with the property that after one reduction step it transforms into another g-pattern which is the same as the initial one, up to renaming of the port variables.
Therefore: we start with a g-pattern “P”. Then
We obtain a g-pattern, let’s call it P’.
If there is a renaming of the port variables of P’ such that, after renaming, P’ is identical with P, then P is a chemlambda quine.
Otherwise said, if P’ is identical with P as graphs, then P is a quine.
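This check can be sketched in Python. Mind that the sketch below (my own encoding) detects sameness-up-to-renaming only when the two g-patterns list their nodes in corresponding order; it is weaker than full graph identity, but it is enough to illustrate the definition:

```python
def canonical(g):
    """Rename port variables in order of first occurrence, so that two
    g-patterns differing only by port names get the same form (assuming
    the nodes are enumerated in corresponding order)."""
    names = {}
    out = []
    for node, ports in g:
        renamed = []
        for p in ports:
            names.setdefault(p, "p%d" % len(names))
            renamed.append(names[p])
        out.append((node, renamed))
    return out

def quine_step(P, P_next):
    """True if P_next is P up to a renaming of the port variables."""
    return canonical(P) == canonical(P_next)
```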
___________________________________________
Let’s think a bit: a DIST move adds 2 nodes, a BETA or a FAN-IN move removes 2 nodes; therefore, in order to hope for a quine, we need the possibility to do at least one DIST move. That means a quine has to contain at least the RIGHT g-pattern of a DIST move, which implies that a quine must have at least 4 nodes.
A quick inspection shows that the two RIGHT g-patterns of the two DIST moves cannot be made into quines.
Therefore a quine must have at least 5 nodes. Among the nodes there have to be L, A, FO and FI. But in order to reconstruct the L node and the A node one needs two DIST moves, which gives a lower bound of 8 nodes for a quine.
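The counting behind this lower bound can be made explicit; here is a tiny Python sketch (the table of node-count deltas just restates the argument above, nothing more):

```python
# Node-count change per move: DIST adds 2 nodes, BETA and FAN-IN remove 2.
# (COMB moves only splice Arrow elements; they do not change the node count.)
DELTA = {"DIST": +2, "BETA": -2, "FAN-IN": -2}

def node_delta(moves):
    """Net change in the number of nodes after a sequence of moves."""
    return sum(DELTA[m] for m in moves)

# A quine must leave the node count unchanged after one reduction step,
# so its DIST moves must exactly balance its BETA and FAN-IN moves.
```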
I believe that there is no quine with less than 9 nodes, such that the reductions never involve a choice of priority of moves.
__________________________________________
Here is now a bigger quine:
It’s a walker from the ouroboros series, walking on a circular train track with only one pair of nodes L and FO.
It has 28 nodes and 42 edges.
Can you find a smaller quine?
_________________________________________________________
UPDATE: Here is a small quine with 9 nodes and 14 edges:
_________________________________________________________
The regularity of the train track is corrupted by a bit of food (appearing as a L node connected to a termination node), see the next (BIG) picture. It is at the right of the walker.
You can see (maybe if you click on the image to make it bigger) that the walker ingests the food. The ingested part travels through the walker organism and eventually is expelled as a pair L and A nodes.
Perhaps, by clever modifications of the walker (and some experiments with its food) one can make a Turing machine.
This would give a direct proof that chemlambda with the sequential strategy is universal as well. (Well, that’s only of academic interest, to build trust as well, before going to the really nice part, i.e. distributed, decentralized, alife computations.)
_____________________________________________
That is because there is a walking machine in those graphs.
Explanations follow.
Recall the reduction strategy:
In the drawings the COMB moves are not figured explicitly.
Let’s come back to the walking machine. You can see it in the following figure.
In the upper side of the figure we see one of the graphs from the reduction of the “ouroboros predecessor”, taken from the last post.
In the lower side there is a part of this graph which contains the walking machine, with the same port names as in the upper side graph.
What I claim is that in a single reduction step the machine “goes to the right” on the train track made by pairs of FO and A nodes. That is why some of the reduction steps from the last post look alike.
One reduction step will involve:
Let’s start afresh, with the walking machine on tracks, with new port names (numbers).
For the sake of explanation only, I shall first do the two BETA and the two FAN-IN moves; the four DIST moves will follow. There is nothing restrictive here, because the moves are all independent; moreover, according to the reduction strategy, these are all the moves which can be done in this step, and they can be done at once.
OK, what do we see? In the upper side of this figure there is the walking machine on tracks, with a new numbering of ports. We notice some patterns:
In the lower part of the figure we see what the graph looks like after the application of the 2 BETA moves and the 2 FAN-IN moves which are possible.
Let’s look closer. In the next figure is taken the graph from the lower part of the previous figure. Beneath it is the same graph, only arranged on the page such that it becomes simpler to see the patterns. Here is this figure:
Recall that we are working with graphs (called g-patterns, or molecules), not with particular embeddings of the graphs in the plane. The two graphs are the same; only the drawings on the plane are different. Chemlambda neither cares about nor uses embeddings. The drawings are only for you, the reader, to help you see things better.
OK, what do we see:
… but all these patterns are not the old ones, but new ones!
The 4 train cars made by DIST patterns are missing! Well, they appear again after we do the remaining 4 DIST moves.
In the next figure we see the result of these 4 DIST moves. I did not number the new edges which appear.
I also did the COMB moves; if you look closer you will see that now any arrow has either one number or none on it. The arrows without numbers are those which appeared after the DIST moves.
Let’s compare the initial and final graphs, in the next figure.
We see that indeed, the walking machine went to the right! It did not move, but instead the walking machine dismembered itself and reconstructed itself again.
This is of course like the guns from the Game of Life, but with a big difference: here there is no external grid!
Moreover, the machine destroyed 8 nodes and 16 arrows (by the BETA, FAN-IN and COMB moves) and reconstructed 8 nodes and 16 arrows by the DIST moves. But look: the old arrows and nodes migrated inside and outside of the machine, assembling into the same patterns.
This is like a metabolism…
____________________________________________________________
The signal for the healing is given by the beta reduction
L[59,59,23] A[23,27,14] –beta–>
Arrow[59,14] Arrow[27,59]
The COMB moves are not figured, but in this case they go like this:
Arrow [59,14] Arrow[27,59] –COMB–>
Arrow[27,14]
and then
A[52,54,27] Arrow[27,14] –COMB–>
A[52,54,14]
In the third graph we see the element:
A[19,54,27]
which comes from yet another COMB move
A[52,54,27] Arrow[19,52] –COMB–>
A[19,54,27]
where the Arrow[19,52] comes from the FAN-IN move
FI[6,19,2] FO[2,52,53] –FAN-IN–>
Arrow[6,53] Arrow[19,52]
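The COMB splicing above is mechanical enough to replay by machine; here is a small Python sketch of it (my own encoding of a node as a (type, ports) pair, just for illustration):

```python
def comb(node, arrow):
    """COMB move: splice an Arrow element out of a node's port.
    node = (type, ports), arrow = ("Arrow", [x, y])."""
    x, y = arrow[1]
    t, ports = node
    if y in ports:  # Arrow[x, y] feeds the node's port y
        return (t, [x if p == y else p for p in ports])
    if x in ports:  # the node's port x feeds Arrow[x, y]
        return (t, [y if p == x else p for p in ports])
    return node     # no common port: the move does not apply
```

Replaying the chain above, comb(("A", [52, 54, 27]), ("Arrow", [19, 52])) gives A[19,54,27], exactly the element seen in the third graph.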
There are 8 rewrites per reduction step, starting from the 2nd figure. The repeating patterns are:
The number of nodes, from the 2nd to the 5th figure is the same.
What will happen next?
__________________________________________________________
I make an ouroboros from something like the Pred 8:
We’re in the middle of the computation, what will give eventually, can you guess?
Next time!
__________________________________________________________
In the post What reduction is this? I used chemlambda with the stupid sequential reduction strategy stated in Y again: conflict!, namely:
… And there is no conflict in the predecessor reduction.
In the post “What reduction is this?” I asked some questions, let me answer:
This is a streamlined version of the reduction hidden in
PRED(3) –> 2
where numbers appear as stacks of pairs of FO and A nodes. They are “bare” numbers, in the sense that all the currying has been eliminated.
Admire the mechanical, or should I say chemical, precision of the process of reduction (in chemlambda, with the stupid sequential strategy). In the following figure I eliminated all the unnecessary nodes and arrows, and we are left now with the pure phenomenon.
I find it amazing that it works even with this stupidest strategy. It shows that chemlambda is much better than anything on the market.
Let me say it again: this is outside the fundamental IT assumption that everything reduces to signals sent through wires, then processed by gates.
It is how nature works.
____________________________________________
“MU Panel 2. Future of Publishing
Date & Time : 18:00 – 19:30, August 19 (Tue), 2014
Moderator: Jean-Pierre Bourguignon, European Research Council, Belgium
Panelists:
Rajendra Bhatia, Indian Statistical Institute, New Delhi, India and Sungkyunkwan University, Suwon, Korea
Jean-Pierre Demailly, Institut Fourier, France
Chris Greenwell, Elsevier, The Netherlands
Thomas Hintermann, European Mathematical Society Publishing House, Switzerland
+Nalini Joshi, University of Sydney, Australia
Ravi Vakil, Stanford University, USA
======================================
http://youtu.be/RbIBrE0vepM“
I am extremely intrigued about this part:
“E[lsevier?] does pay its editors-in-chief (=academics) and sometimes associate editors – doesn’t go all the way to reimburse them for the time they spend. Q from floor: where are these figures published? A: “We don’t generally make that available, mostly because the individual editors probably don’t want their colleagues to know” (~http://youtu.be/RbIBrE0vepM?t=1h14m30s) Q: this is unfair A: depends on editors. There’s nothing in the contract stopping them from telling people. Most of them probably wouldn’t want to tell you. Averages out at about $100 per paper handled.”
This practice may be OK from the point of view of the publisher, but, in my opinion, the paid editors HAVE to tell in order to avoid a conflict of interest.
The conflict of interest appears when an editor is in a jury, or otherwise in any process which rewards publication in journals like the ones where he is a paid editor (hiring, PhD supervising, grant dispensing). This is something worth discussing, I guess. It is not specific to math.
It is not a matter of the editor “wouldn’t want to tell you”, as cynically put by the E[lsevier?] speaker. It is a matter of being honest.
Recall in this context the post
We have met the enemy: part I, pusillanimous editors, by Mark C. Wilson
“My conclusions, in the absence of further information: senior researchers by and large are too comfortable, too timid, too set in their ways, or too deluded to do what is needed for the good of the research enterprise as a whole. I realize that this may be considered offensive, but what else are the rest of us supposed to think, given everything written above? I have not even touched on the issue of hiring and promotions committees perpetuating myths about impact factors of journals, etc, which is another way in which senior researchers are letting the rest of us down”…
Are we living in a research banana republic?
Apparently (some of) the publishers think we are morons, because they secured collaboration of (some of) the academic bosses.
I think there is no difference between this situation and the one of a medical professional who has to disclose payment by pharmaceutical companies.
What do you think?
_____________________________________________________
Can you guess what is this? (click on the big image to see it better)
As you see, you may ask:
_______________________________________________
Then, in the post Y again:compete! I took in parallel the two possible outcomes of the conflict. The contenders have been branded as fast shooting cowboys, offering a show.
Surprisingly, both possible paths of reduction ended in a very simple version of the Y combinator.
Only that the very simple version is not one coming from lambda calculus!
Indeed, let’s recall what the Y combinator is, seen as a g-pattern in chemlambda.
In lambda calculus the Y combinator is
Lx.( (Ly.(x(yy))) (Ly.(x(yy))) )
As a molecule, it looks like this.
As g-pattern, it looks like this (see this post and this post for the conversion of lambda terms into g-patterns):
L[a,x,o] A[b,c,a] FO[x,y,z]
L[e,d,b] FO[d,f,g] A[f,g,h] A[y,h,e]
L[j,i,c] FO[i,l,m] A[l,m,k] A[z,k,j]
Applied to something means we add to this g-pattern the following:
A[o,p,u]
with the meaning that Y applies to whatever links to the port “p”. (But mind that in chemlambda there is no variable or term passing, nor evaluation! So this is only a way of speaking in the lambda calculus realm.)
The two mentioned posts about Y again led to the conclusion that the g-pattern “Y applied to something” behaves (eventually, after several reductions) as the far more simple g-pattern:
A[o,p,u] (i.e. “applied to something at port “p”)
L[b,a,o]
FO[a,c,d] A[c,d,b]
Now, this means that the Y combinator g-pattern may be safely replaced in computations by
L[b,a,o]
FO[a,c,d] A[c,d,b]
or, in graphical version, by
But this is outside lambda calculus.
So what?
It is far simpler than the Y combinator from lambda calculus.
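One sanity check which is easy to automate: both g-patterns have exactly one free port, “o”. A free port is one which occurs exactly once in the g-pattern; here is a Python sketch (my own encoding of the two g-patterns from above):

```python
from collections import Counter

def free_ports(g):
    """Ports occurring exactly once in a g-pattern are free."""
    counts = Counter(p for _, ports in g for p in ports)
    return sorted(p for p, n in counts.items() if n == 1)

# The Y combinator g-pattern from above...
Y = [("L", ["a", "x", "o"]), ("A", ["b", "c", "a"]), ("FO", ["x", "y", "z"]),
     ("L", ["e", "d", "b"]), ("FO", ["d", "f", "g"]), ("A", ["f", "g", "h"]),
     ("A", ["y", "h", "e"]),
     ("L", ["j", "i", "c"]), ("FO", ["i", "l", "m"]), ("A", ["l", "m", "k"]),
     ("A", ["z", "k", "j"])]

# ...and its far simpler replacement.
Y_simple = [("L", ["b", "a", "o"]), ("FO", ["a", "c", "d"]), ("A", ["c", "d", "b"])]
```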
The same happens with other lambda terms and reductions (see for example the post Actors for the Ackermann machine). Incidentally, the analysis of the Ackermann machine, i.e. the graph which behaves like the Ackermann function, gave me the idea of using the actor model with GLC. This evolved into arXiv:1312.4333.
This shows that chemlambda, even with the dumbest sequential reduction strategy (OK, enhanced in obvious ways so that it solves conflicts), can do more with less fuss than lambda calculus.
By looking on the net (recall that I’m a mathematician, so excuse my ignorance of well known CS results; I’m working on it), I can’t help but wonder what chemlambda would give in relation with, for example:
Of course, the dream is to go much, much further. Why? Because of the List of Ayes/Noes of the artificial chemistry chemlambda.
__________________________________________________________
Conflict means that the same graphical element appears in two LEFT g-patterns (see, in the series of expository posts, part II for the g-patterns and part III for the moves).
In the next figure we see this conflict, in the upper part (that’s where we were left in the previous post), followed by a fork: in the lower left part of the figure we see what we get if we apply the beta move and in the lower right part we see what happens if we apply the DIST move.
Recall that (or look again at the upper side of the picture) the conflict was between LEFT patterns of a beta move and of a DIST move.
I rearranged the drawing of the g-patterns a bit (mind that this does not affect the graphs in any way, because the drawings on paper or screen are one thing and the graphs another; excuse me for being trivially obvious). In this pretty picture we see well that the Y gun has already shot a second pair of A and FO nodes.
The differences now:
Which way is the best? What to do?
Let’s make them compete! Who shoots faster?
Imagine the scene: in the Lambda Saloon, somewhere in the Wide Wild West, enter two new cowboys.
“Who called these guys?” asks the sheriff of the Functional City, where the reputed saloon resides.
“They are strangers! They both look like Lucky Luke, though…”
The cowboys and cowgirls from the saloon nod in approval: they all remember what happened when Lucky Luke — the one, the single cowboy who shoots faster than his shadow — was put between the two parallel mirrors from the saloon stage. What a show! That day, the Functional City got a reputation, and everybody knows that reputation is something as good as gold. Let the merchants from Imperative County sell window panes for the whole US; nobody messes with a cowboy, or cowgirl, from the Functional City. Small, but elegant. Yes sir, style is the right word…
Let’s go back to the two strangers.
“I’m faster than Master Y” says the stranger from the right.
“I’m MUCH faster than Master Y” says the one from the left, releasing from his cigar a loop of smoke.
“Who the … is Master Y?” asks the sheriff.
“Why, it’s Lucky Luke. He trained us both, then sent us to Functional City. He says hello to you and tells you that he got new tricks to show” says one of the strangers.
“… things not learned from church…” says the other.
“I need to see how fast you are, or else I call you both liars” shouts one of the bearded, long-haired cowboys.
The stranger from the right started first. What a show!
He first makes a clever DIST move (not that there was anything else to do). Then he is presented with 4 simultaneous moves to do (FAN-IN, 2 betas and a DIST). He shoots and freezes. Nothing moves, excepting two loops of smoke, rising from his gun.
“I could continue like this forever, but I stopped, to let my colleague show you what he’s good at.” said the stranger from the right.
“Anyway, I am a bit slow with the first shot, but after that I am faster.” he continued.
“Wait, said the sheriff, you sure you really shot?”
“Yep, sure, look better how I stand”, said the stranger from the right, only slightly modifying his posture, so that everybody could clearly see the shot:
“Wow, true!” said the cowboys.
“I’m fast from the first shot!” said the stranger from the left. “Look!”
“I only did a DIST move.” said the stranger from the left, freezing in his posture.
“Nice show, guys! Hey, look, I can’t tell now which is which, they look the same. I got it: the guy from the right is a bit slower at the first shot (however he is dazzlingly fast), but then he is as good as his fellow from the left.”
“Hm, said the sheriff, true. Only one thing: I have never seen in the Lambda Saloon anything like this. It’s fast, but it does not seem to belong here.”
___________________________________________________________
This time let’s not care about staying in lambda calculus and let’s take the simplest reduction strategy, to see what happens.
We work within the frame of g-patterns from the expository posts: part I, part II (definition of g-patterns), part III (definition of moves), part IV (g-patterns for lambda terms, 1st part), part V (g-patterns for lambda terms, 2nd part), part VI (about self-multiplication of g-patterns) and part VII (an example of reduction).
We take the following reduction strategy:
What’s conflict? We shall see one.
Mind that this is a very stupid and simplistic strategy, which does not guarantee that if we start with a g-pattern which represents a lambda term then we end with a g-pattern of a lambda term.
It does have its advantages, though.
OK, so let us start with the g-pattern of Y applied to something.
In general, with g-patterns we can say many things. Like any combinator molecule, when represented by a g-pattern, the Y combinator has only one free port; let’s call it “b”. Thus Y appears as a g-pattern which we denote by Y[b].
Suppose we want to start the reduction from Y applied to something. This means that we shall start with the g-pattern
A[b,a,out] Y[b]
OK!
Look what happens when we apply the mentioned strategy.
(Advice: big picture, click on it to see it clearly and to travel along it.)
Here is a conflict: at one step we have two LEFT patterns, in this case
L[o,p,i] A[i,p,v], which is good for a beta move,
and
A[i,p,v] FO[v,q,a1], which is good for a DIST move.
The patterns contain a common graphical element, in this case A[i,p,v], which would be deleted by either of the respective moves.
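Conflict detection is itself mechanical. Here is a Python sketch which finds the LEFT patterns of the BETA and DIST moves and reports pairs sharing a graphical element (my own hypothetical encoding and function names, covering only these two moves):

```python
def left_patterns(g):
    """Find LEFT patterns: BETA needs L[.,.,k] A[k,.,.],
    DIST needs A[.,.,v] FO[v,.,.]. Returns (move, node indices) pairs."""
    found = []
    for i, (t1, p1) in enumerate(g):
        for j, (t2, p2) in enumerate(g):
            if i == j:
                continue
            if t1 == "L" and t2 == "A" and p1[2] == p2[0]:
                found.append(("BETA", (i, j)))
            if t1 == "A" and t2 == "FO" and p1[2] == p2[0]:
                found.append(("DIST", (i, j)))
    return found

def conflicts(g):
    """Pairs of possible moves whose LEFT patterns share an element."""
    pats = left_patterns(g)
    return [(m1, m2) for k, (m1, e1) in enumerate(pats)
            for (m2, e2) in pats[k + 1:] if set(e1) & set(e2)]
```

On the g-pattern above it reports exactly the BETA/DIST conflict over the shared element A[i,p,v].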
CONCLUSION: with this strategy we have a gun which shoots one pair of FO and A nodes, but then it gets wrecked.
What to do then?
The human way is to apply
When in trouble or in doubt
Run in circles, scream and shout
for a moment, then acknowledge that this is a stupid reduction strategy, then find some qualities of this strategy, then propose another which has those qualities but works better, then reformulate the whole problem and give it an unexpected turn.
The AI way is to wait for somebody to change the reduction strategy.
__________________________________________________________
There is no more here than in this ephemeral google+ post, but it is enough to get the idea.
And it’s controversial, although obvious.
“I just got hooked by github.io. It has everything, it’s a dream come true. Publishing? arXiv? pfff…. I know, everybody knows this already; let me enjoy the thought for the moment. Then there will be some action.
- GitHub’s success is not just about openness, but also a prestige economy that rewards valuable content producers with credit and attention
-Open Science efforts like arXiv and PLoS ONE should follow GitHub’s lead and embrace the social web”
I am aware about the many efforts about publishing via github, I only wonder if that’s not like putting a horse in front of a rocket.
On the other side, there is so much to do, now that I feel I’ve seen rock solid proof that academia, publishing and all that jazz are walking dead, with the last drops of arterial blood splattering around from the headless body. “
Negative Coase cost?
__________________________________________________
I hope to make this presentation self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)
_________________________________________________________
This series of posts may be used as a longer, more detailed version of sections
from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], presented by Louis Kauffman in the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York. Here is a link to the published article, free, at MIT Press.
_________________________________________________________
Tags. I shall use the name “tag” instead of “actor” or “type”, because it is more generic (and because in future developments we shall talk more about actors and types, continuing from the post Actors as types in the beta move, tentative).
Every port of a graphical element (see part II) and the graphical element itself can have tags, denoted by :tagname.
There is a null tag “null” which can be omitted in the g-patterns.
As an example, we may see, in the most ornate way, graphical elements like this one:
L[x:a,y:b,z:c]:d
where of course
L[x:null,y:null,z:null]:null means L[x,y,z]
The port names are tags; in particular “in”, “out”, “middle”, “left” and “right” are tags.
Any concatenation of tags is a tag. Concatenation of tags is denoted by a dot, for example “left.right.null.left.in”. By the use of “null” we have
a.null –concat–> a
null.a –concat–> a
I shall not regard concat as a move in itself (maybe I should, but that is for later).
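In Python, tag concatenation with its neutral element “null” is one small function (a sketch of the rules above, nothing more):

```python
def concat(a, b):
    """Concatenation of tags; "null" is the neutral element."""
    if a == "null":
        return b
    if b == "null":
        return a
    return a + "." + b
```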
Further in this post I shall not use tags for nodes.
Moves with tags. We can use tags in the moves, according to a predefined convention. I shall take several examples.
1. The FAN-IN move with tags. If the tags a and b are different then
FI[x:a, y:b, z:c] FO[z:c,u:b, v:a]
–FAN-IN–>
Arrow[x:a,v:a] Arrow[y:b,u:b]
Remark that the move is not reversible.
It means that you can do FAN-IN only if the right tags are there.
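A sketch of this guarded move in Python, with ports encoded as (name, tag) pairs (my own encoding, for illustration); the function returns None when the tags forbid the move:

```python
def fan_in_tagged(fi, fo):
    """FI[x:a,y:b,z:c] FO[z:c,u:b,v:a] -> Arrow[x:a,v:a] Arrow[y:b,u:b],
    allowed only when the tags a and b differ and the tags match up
    as in the move's LEFT pattern."""
    (x, a), (y, b), (z, c) = fi
    (z2, c2), (u, b2), (v, a2) = fo
    if a == b or (z, c) != (z2, c2) or a != a2 or b != b2:
        return None  # the right tags are not there: no FAN-IN
    return [("Arrow", [(x, a), (v, a)]), ("Arrow", [(y, b), (u, b)])]
```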
2. COMB with tags.
L[x:a, y:b, z:c] Arrow[y:b, u:d]
–COMB–>
L[x:a, u:d,z:c]
and so on for all the comb moves which involve two graphical elements.
3. DIST with tags. There are two DIST moves, here with tags.
A[x:a,y:b,z:c] FO[z:c,u:d,v:e]
–DIST–>
FO[x:a, w:left.d, p:right.e] FO[y:b, s:left.d, t:right.e]
A[w:left.d, s:left.d, u:d] A[p:right.e, t:right.e, v:e]
In graphical version
and the DIST move for the L node:
L[y:b, x:a, z:c] FO[z:c, u:d, v:e]
–DIST–>
FI[p:right, w:left, x:a] FO[y:b, s:left, t:right]
L[s:left, w:left,u:d] L[t:right, p:right, v:e]
In graphical version:
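Either of these DIST moves can be sketched in Python; here is the one for the A node, producing the four new graphical elements, with the fresh port names supplied by the caller (hypothetical encoding, ports as (name, tag) pairs):

```python
def dist_a(a_node, fo_node, fresh):
    """A[x:a,y:b,z:c] FO[z:c,u:d,v:e] -> two FO and two A nodes,
    with new ports w,p,s,t from `fresh` and new tags left.d / right.e."""
    (x, ta), (y, tb), (z, tc) = a_node
    (z2, tc2), (u, td), (v, te) = fo_node
    assert (z, tc) == (z2, tc2)  # the two nodes must share port z:c
    w, p, s, t = fresh
    ld, re = "left." + td, "right." + te
    return [("FO", [(x, ta), (w, ld), (p, re)]),
            ("FO", [(y, tb), (s, ld), (t, re)]),
            ("A",  [(w, ld), (s, ld), (u, td)]),
            ("A",  [(p, re), (t, re), (v, te)])]
```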
4. SHUFFLE. This move replaces CO-ASSOC and CO-COMM. (It can be done as a sequence of CO-COMM and CO-ASSOC moves; conversely, CO-COMM and CO-ASSOC can be done by SHUFFLE and LOC PRUNING; explanations another time.)
FO[x:a, y:b, z:c] FO[y:b, w:left, p:right] FO[z:c, s:left, t:right]
–SHUFFLE–>
FO[x:a, y:left, z:right] FO[y:left, w, s] FO[z:right, p, t]
In graphical version:
____________________________________________________________
Marius Buliga, Gery de Saxce, A symplectic Brezis-Ekeland-Nayroles principle
You can find here the slides of two talks given in Lille and Paris a while ago, where the article has been announced.
UPDATE: The article appeared, as arXiv:1408.3102
This is, we hope, an important article! Here is why.
The Brezis-Ekeland-Nayroles principle appeared in two articles from 1976, the first by Brezis-Ekeland, the second by Nayroles. These articles appeared too early, compared to the computation power of the time!
We call the principle by the initials of the names of the inventors: the BEN principle.
The BEN principle asserts that the evolution curve of an elasto-plastic body minimizes a certain functional, among all possible evolution curves which are compatible with the initial and boundary conditions.
This opens the possibility of finding the evolution curve at once, instead of constructing it incrementally with respect to time.
In 1976 this was SF for the computers of the moment. Now it’s the right time!
Pay attention to the fact that a continuous mechanics system has states belonging to an infinite dimensional space (i.e. it has an infinite number of degrees of freedom); therefore we almost never hope to find, nor need, the exact solution of the evolution problem. For all practical purposes we are happy with approximate solutions.
We are not after the exact evolution curve, instead we are looking for an approximate evolution curve which has the right quantitative approximate properties, and all the right qualitative exact properties.
In elasto-plasticity (a hugely important class of materials for engineering applications) the evolution equations are moreover not smooth. Differential calculus is conveniently and beautifully replaced by convex analysis.
Another aspect is that elasto-plastic materials are dissipative, therefore there is no obvious hope to treat them with the tools of hamiltonian mechanics.
Our symplectic BEN principle does this: one principle covers the dynamical, dissipative evolution of a body, in a way which can reasonably easily be made amenable to numerical applications.
_______________________________________
Then we do emergent algebra moves instead.
Look, instead of the beta move (see here all moves with g-patterns)
L[a,d,k] A[k,b,c]
<–BETA–>
Arrow[a,c] Arrow[b,d]
let’s do, for an arbitrary epsilon, the epsilon beta move
Remark that I don’t really do the beta move. In g-patterns the epsilon beta move does not replace the LEFT pattern by another; it only ADDS TO IT.
L[a,d,k] A[k,b,c]
– epsilon BETA –>
FO[a,e,f] FO[b,g,h]
L[f,i,k] A[k,h,j]
epsilon[g,i,d] epsilon[e,j,c]
Here, of course, epsilon[g,i,d] is the new graphical element corresponding to a dilation node of coefficient epsilon.
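One thing easy to verify is that the epsilon beta move preserves the free ports a, b, c, d of the LEFT pattern. Here is a Python check (my own encoding, with “eps” standing for the dilation node):

```python
from collections import Counter

def free_ports(g):
    """Ports occurring exactly once in a g-pattern are free."""
    counts = Counter(p for _, ports in g for p in ports)
    return sorted(p for p, n in counts.items() if n == 1)

# LEFT pattern of the beta move and the epsilon beta move's result:
left = [("L", ["a", "d", "k"]), ("A", ["k", "b", "c"])]
right = [("FO", ["a", "e", "f"]), ("FO", ["b", "g", "h"]),
         ("L", ["f", "i", "k"]), ("A", ["k", "h", "j"]),
         ("eps", ["g", "i", "d"]), ("eps", ["e", "j", "c"])]
```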
Now, when epsilon=1 we may apply only the ext2 move and LOC pruning (i.e. emergent algebra moves)
and we get back the original g-pattern.
But if epsilon goes to 0 then, only by emergent algebra moves:
that’s it the BETA MOVE is performed!
What is the status of the first reduction from the figure? Hm, in the figure appears a node which has a “0” as decoration. I should have written instead a limit when epsilon goes to 0… For the meaning of the node with epsilon=0 see the post Towards qubits: graphic lambda calculus over conical groups and the barycentric move. However, I don’t take the barycentric move BAR, here, as being among the allowed moves. Also, I wrote “epsilon goes to 0”, not “epsilon=0”.
__________________________________________________________
epsilon can be a complex number…
__________________________________________________________
Questions/exercises:
__________________________________________________________
List of ayes
__________________________________________________________
Example: decorations of S,K,I combinators in simply typed GLC
In the chemlambda version, the decoration with types for the lambda and application graphical elements is this:
or with g-patterns:
L[x:b, y:a, z:a->b]
A[x:a->b, y:a, z:b]
Recall also that there is a magma associated to any graph (or g-pattern) which is easy to define. If the magma is free then we say that the g-pattern is well typed (not that we need “well” further).
Let’s mix this with actors. We attribute the port variables of a g-pattern to actors (id’s), and we write that the port variable x belongs to the actor a like this:
x:a
I don’t want to define an operation -> for actor id’s, as if they were types. Instead I shall use the Arrow graphical element and the COMB move (see the moves of chemlambda in terms of g-patterns here).
Here is a COMB move, a bit modified:
L[x:b, y:a, z:d] –COMB–> L[x:b, y:a, u:a] Arrow[u:b, z:d]
which says something like
:d should be :a->:b
The same for the application
A[z:d, v:a, w:b] –COMB–> Arrow[z:d, s:a] A[s:b , v:a, w:b]
which says something like
:d should be :a->:b
which, you agree, is totally compatible with the decorations from the first figure of the post.
Notice the appearance of port variables u:a, u:b and s:a, s:b, which play the role a->b.
We allow the usual COMB moves only if the repeating variables have the same actors.
What about the beta move?
The LEFT g-pattern of the beta move is, say with actors:
L[x:a, y:d, z:c] A[z:c, v:b, w:e]
Apply the two new COMB moves:
L[x:a, y:d, z:c] A[z:c, v:b, w:e]
–2 COMB–>
L[x:a, y:d, u:d]
Arrow[u:a, z:c] Arrow[z:c, s:b]
A[s:e , v:b, w:e]
A usual COMB move applies here:
L[x:a, y:d, u:d]
Arrow[u:a, z:c] Arrow[z:c, s:b]
A[s:e , v:b, w:e]
<–COMB–>
L[x:a, y:d, u:d]
Arrow[u:a, s:b]
A[s:e , v:b, w:e]
and now the new beta move would be:
L[x:a, y:d, u:d]
Arrow[u:a, s:b]
A[s:e , v:b, w:e]
–BETA–>
Arrow[x:a, w:e]
Arrow[v:b, y:d]
This form of the beta move resembles the combination of CLICK and ZIP from zipper logic.
Moreover the Arrow elements could be interpreted as message passing.
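As an illustration of the derived move (a Python sketch in my own tuple encoding, with ports as (variable, actor) pairs; not part of the formalism), the new beta move can be checked mechanically:

```python
# Sketch of the actor-decorated beta move derived above:
# L[x:a, y:d, u:d] Arrow[u:a, s:b] A[s:e, v:b, w:e]
#   --BETA--> Arrow[x:a, w:e] Arrow[v:b, y:d]

def beta_with_actors(L_node, arrow, A_node):
    _, (x, a), (y, d), (u, d2) = L_node
    _, (u2, a2), (s, b) = arrow
    _, (s2, e), (v, b2), (w, e2) = A_node
    # the middle port variables must match for the move to apply
    assert u == u2 and s == s2
    return [("Arrow", (x, a), (w, e2)),
            ("Arrow", (v, b2), (y, d))]

result = beta_with_actors(
    ("L", ("x", "a"), ("y", "d"), ("u", "d")),
    ("Arrow", ("u", "a"), ("s", "b")),
    ("A", ("s", "e"), ("v", "b"), ("w", "e")))
print(result)  # two Arrow elements, readable as messages between actors
```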
________________________________________________________
presented at ALIFE14.
Both articles look great and the ideas are very close to my actual interests. Here is why:
The chemlambda and distributed GLC project also has a paper there: M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication see the better arXiv version because it has links inside: arXiv:1403.8046.
The resemblance between the mentioned papers and the chemlambda and distributed GLC project is that our (fully theoretical, alas) model is also about distributed computing, using actors, lambda calculus and space.
The differences are many, though, and I hope that these might lead to interesting interactions.
Further I describe the main difference, with the understanding that all this is very new for me, a mathematician, so I might be wrong in my grasp of the MFM (please correct me if so).
In the MFM the actors are atoms in a passive (i.e. predefined) space. In the distributed GLC the actors have as states graphs called molecules (more precisely g-patterns).
[Here is the moment to thank, first, Stephen P. King, who pointed me to the two articles. Second, Stephen works on something which may be very similar to the MFM, as far as I understand, but I have to strongly stress that the distributed GLC does NOT use a botnet, nor are the actors nodes in a chemlambda graph!]
In distributed GLC the actors interact by message passing to other actors with a known ID. Such message passing provokes a change in the states of the actors which corresponds to one of the graph rewrites (moves of chemlambda). As an effect, the connectivities between the actors change (where connectivity between an actor :alice and an actor :bob means that :alice has as state a g-pattern with one of the free ports decorated with the ID of :bob). Here the space is represented by these connectivities, and it is not passive, but an effect of the computation.
In the future I shall use and cite, of course, this great research subject which was unknown to me. For example, the article Lance R. Williams, Robust Evaluation of Expressions by Distributed Virtual Machines, already uses actors! What else am I not aware of? Please tell, thanks!
_______________________________________________________________
This will NOT be made public, only by private mail messages.
If you want to hear more:
then mail me at chorasimilarity@gmail.com and let’s talk about parts you don’t get clearly.
Looking forward to hear from you,
Marius Buliga
__________________________________________________________
Example: from this post
L[a,x,b] A[b,x,a] <–eta–>
Arrow[b,b] loop <–comb–>
loop loop
or
L[a,x,b] A[b,x,a] <–beta–>
Arrow[a,a] Arrow[x,x] <–2comb–>
loop loop
Then why not
L[a,x,b] A[u,y,a] <–eta–> Arrow[u,b] Arrow[x,y]
which is exactly like the FAN-IN move
FO[a,x,b] FI[u,y,a] <–FAN-IN–> Arrow[u,b] Arrow[x,y]
Taking this seriously, the beta move should have a hidden companion
FO[a,x,b] FI[b,y,c] <–betahide–> Arrow[y,x] Arrow[a,c]
… which brings us to a symmetrized version of chemlambda which is very close to the interaction nets of Yves Lafont.
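This shared shape of the generalized eta and FAN-IN moves can be checked with a few lines of code (a sketch; the tuple encoding of nodes is mine, not part of the formalism):

```python
# The generalized eta and FAN-IN moves have the same right-hand side,
# which is the symmetry noted above.  Nodes are (kind, p1, p2, p3)
# tuples; illustrative encoding only.

def pair_move(left, right):
    # X[a,x,b] Y[u,y,a]  -->  Arrow[u,b] Arrow[x,y]
    # with (X, Y) either ("L", "A") for eta or ("FO", "FI") for FAN-IN
    _, a, x, b = left
    _, u, y, a2 = right
    assert a == a2  # the two nodes must share the connecting port
    return [("Arrow", u, b), ("Arrow", x, y)]

eta    = pair_move(("L",  "a", "x", "b"), ("A",  "u", "y", "a"))
fan_in = pair_move(("FO", "a", "x", "b"), ("FI", "u", "y", "a"))
print(eta == fan_in)  # True: the two rewrites act identically on ports
```

The code treats the pair (L, A) exactly as the pair (FO, FI), which is the symmetrization mentioned in the text.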
We present chemlambda (or the chemical concrete machine), an artificial chemistry with the following properties: (a) is Turing complete, (b) has a model of decentralized, distributed computing associated to it, (c) works at the level of individual (artificial) molecules, subject of reversible, but otherwise deterministic interactions with a small number of enzymes, (d) encodes information in the geometrical structure of the molecules and not in their numbers, (e) all interactions are purely local in space and time. This is part of a larger project to create computing, artificial chemistry and artificial life in a distributed context, using topological and graphical languages.
DOI: http://dx.doi.org/10.7551/978-0-262-32621-6-ch079
Pages 490-497
Supplementary material:
____________________________________________________________
I hope to make this presentation self-contained. (However, look up this page, there are links to online tutorials, as well as already many posts on the general subjects, which you may discover either by clicking on the tag cloud at left, or by searching by keywords in this open notebook.)
_________________________________________________________
This series of posts may be used as a longer, more detailed version of sections
from the article M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046 [cs.AI], which was accepted at the ALIFE 14 conference, 7/30 to 8/2 – 2014 – Javits Center / SUNY Global Center – New York (go see the presentation of Louis Kauffman if you are near the event). Here is a link to the published article, free, at MIT Press.
_________________________________________________________
In this post I take a simple example which contains beta reduction and self-multiplication.
Maybe self-multiplication is too long a word. A short one would be "dup"; any tacit programming language has it. However, chemlambda only superficially resembles tacit programming (and it's not a language, arguably, but a GRS, nevermind).
Or “self-dup” because chemlambda has no “dup”, but a mechanism of self-multiplication, as explained in part VI.
Enough with the problem of the right denomination, because
“A rose by any other name would smell as sweet”
as somebody wrote, clearly not believing that the limit of his world is the limit of his language.
Let’s consider the lambda term (Lx.xx)(Ly.yz). In lambda calculus there is the following string of reductions:
(Lx.xx)(Ly.yz) -beta-> (Ly.yz) (Lu.uz) -beta-> (Lu.uz) z -beta-> zz
What do we see? Let's take it more slowly. Denote by C=xx and by B=Ly.yz. Then:
(Lx.C)B -beta-> C[x:=B] = (xx)[x:=B] = (x)[x:=B] (x)[x:=B] = BB = (Ly.yz) B -beta-> (yz)[y:=B] = (y)[y:=B] (z)[y:=B] = Bz = (Lu.uz)z -beta-> (uz)[u:=z] = (u)[u:=z] (z)[u:=z] = zz
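The chain of reductions can be checked with a tiny interpreter (a naive Python sketch; the term encoding is mine, and the textual substitution is safe here only because the bound variables in the example are distinct):

```python
# A naive interpreter for the reduction chain above.  Terms are
# tuples: ("var", x), ("lam", x, body), ("app", f, a).  Substitution
# is textual and assumes no variable capture can occur, which holds
# for this example.

def subst(term, x, value):
    kind = term[0]
    if kind == "var":
        return value if term[1] == x else term
    if kind == "lam":
        return ("lam", term[1], subst(term[2], x, value))
    return ("app", subst(term[1], x, value), subst(term[2], x, value))

def reduce_once(term):
    # reduce a top-level redex if present, else recurse into subterms
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, x, body), arg = term
        return subst(body, x, arg)
    if term[0] == "app":
        return ("app", reduce_once(term[1]), reduce_once(term[2]))
    if term[0] == "lam":
        return ("lam", term[1], reduce_once(term[2]))
    return term

# (Lx.xx)(Ly.yz)
t = ("app",
     ("lam", "x", ("app", ("var", "x"), ("var", "x"))),
     ("lam", "y", ("app", ("var", "y"), ("var", "z"))))
for _ in range(3):
    t = reduce_once(t)
print(t)  # ('app', ('var', 'z'), ('var', 'z')), i.e. zz
```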
Now, with chemlambda and its moves performed only from LEFT to RIGHT.
The g-pattern which represents (Lx.xx)(Ly.yz) is
L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b] L[w,y,c] A[y,z,w]
We can only do a beta move:
L[a1,x,a] FO[x,u,v] A[u,v,a1] A[a,c,b] L[w,y,c] A[y,z,w]
<–beta–>
Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]
We can do two COMB moves
Arrow[a1,b] Arrow[c,x] FO[x,u,v] A[u,v,a1] L[w,y,c] A[y,z,w]
2 <–COMB–>
FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]
Now look: this is not a representation of a lambda term, because FO[c,u,v] is "in the middle", i.e. the middle.in port of FO[c,u,v] is the out port of B, i.e. the right.out port of the lambda node L[w,y,c]. At the same time, the out ports of FO[c,u,v] are the in ports of A[u,v,b].
The only move which can be performed is DIST, which starts the self-dup or self-multiplication of B = L[w,y,c] A[y,z,w] :
FO[c,u,v] A[u,v,b] L[w,y,c] A[y,z,w]
<–DIST–>
FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]
This is still not a representation of a lambda term. Notice also that the g-pattern which represents B has not yet self-multiplied. However, we can already perform a beta move for L[g,f,u] A[u,v,b] and we get (after 2 COMB moves as well)
FI[e,f,y] FO[w,g,h] L[h,e,v] L[g,f,u] A[u,v,b] A[y,z,w]
<–beta–>
FI[e,f,y] FO[w,g,h] L[h,e,v] Arrow[g,b] Arrow[v,f] A[y,z,w]
2 <–COMB–>
FI[e,f,y] FO[w,b,h] L[h,e,f] A[y,z,w]
This looks like a weird g-pattern. Clearly it is not a g-pattern coming from a lambda term, because it contains the fanin node FI[e,f,y]. Let's write the g-pattern again as
L[h,e,f] FI[e,f,y] A[y,z,w] FO[w,b,h]
(for our own pleasure, the order of the elements in the g-pattern does not matter) and remark that A[y,z,w] is “conjugated” by the FI[e,f,y] and FO[w,b,h].
We can apply another DIST move
L[h,e,f] FI[e,f,y] A[y,z,w] FO[w,b,h]
<–DIST–>
A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]
and now there is only one move which can be done, namely a FAN-IN:
A[i,k,b] A[j,l,h] FO[y,i,j] FO[z,k,l] FI[e,f,y] L[h,e,f]
<–FAN-IN–>
Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]
which gives after 2 COMB moves:
Arrow[e,j] Arrow[f,i] A[i,k,b] A[j,l,h] FO[z,k,l] L[h,e,f]
2 <–COMB–>
A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]
The g-pattern
A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]
is a representation of a lambda term, finally: the representation of (Le.ez)z. Great!
From here, though, we can apply only a beta move, to the pair A[f,k,b] L[h,e,f]
A[f,k,b] A[e,l,h] FO[z,k,l] L[h,e,f]
<–beta–>
Arrow[h,b] Arrow[k,e] A[e,l,h] FO[z,k,l]
2 <–COMB–>
FO[z,k,l] A[k,l,b]
which represents zz.
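The whole reduction above can be replayed mechanically. Here is a sketch of a small rewrite engine (in Python; the tuple encoding, the move-priority loop and the fresh-name scheme are my choices, so the internal port names differ from the hand trace above, but the final molecule is the same):

```python
# A small rewrite engine replaying the reduction of (Lx.xx)(Ly.yz).
# The g-pattern is a list of tuples like ("L", "a1", "x", "a"); the
# moves are transcribed from this series of posts.

from itertools import count

fresh = ("n%d" % i for i in count())

def find(g, kind):
    return [e for e in g if e[0] == kind]

def rename(g, old, new):
    return [tuple(new if p == old else p for p in e) for e in g]

def try_comb(g):
    for e in g:
        if e[0] == "Arrow":
            _, a, b = e
            rest = [x for x in g if x is not e]
            flat = [p for x in rest for p in x[1:]]
            # keep the name of a free port of the pattern, if any
            old, new = (a, b) if b not in flat else (b, a)
            return rename(rest, old, new)

def try_beta(g):   # L[a1,a2,x] A[x,a4,a3] --> Arrow[a1,a3] Arrow[a4,a2]
    for l in find(g, "L"):
        for a in find(g, "A"):
            if l[3] == a[1]:
                rest = [x for x in g if x is not l and x is not a]
                return rest + [("Arrow", l[1], a[3]), ("Arrow", a[2], l[2])]

def try_fanin(g):  # FO[a,x,b] FI[u,y,a] --> Arrow[u,b] Arrow[y,x]
    for fo in find(g, "FO"):
        for fi in find(g, "FI"):
            if fo[1] == fi[3]:
                rest = [x for x in g if x is not fo and x is not fi]
                return rest + [("Arrow", fi[1], fo[3]), ("Arrow", fi[2], fo[2])]

def try_dist(g):   # DIST for a lambda or application node feeding a FO
    for n in g:
        if n[0] in ("L", "A"):
            for fo in find(g, "FO"):
                if n[3] == fo[1]:
                    _, p, q, r = n
                    _, _, s, t = fo
                    e, f, h, k = (next(fresh) for _ in range(4))
                    rest = [x for x in g if x is not n and x is not fo]
                    if n[0] == "L":
                        return rest + [("FI", e, f, q), ("FO", p, h, k),
                                       ("L", k, e, t), ("L", h, f, s)]
                    return rest + [("A", e, h, s), ("A", f, k, t),
                                   ("FO", p, e, f), ("FO", q, h, k)]

# the g-pattern of (Lx.xx)(Ly.yz), as above
g = [("L", "a1", "x", "a"), ("FO", "x", "u", "v"), ("A", "u", "v", "a1"),
     ("A", "a", "c", "b"), ("L", "w", "y", "c"), ("A", "y", "z", "w")]
while True:
    nxt = try_comb(g) or try_beta(g) or try_fanin(g) or try_dist(g)
    if nxt is None:
        break
    g = nxt
print(g)  # two elements: FO[z,_,_] and A[_,_,b], the g-pattern of zz
```

The priority comb > beta > FAN-IN > DIST happens to reproduce the order of moves used in the hand trace; it is one possible scheduling, not "the" model of computation.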
_____________________________________________________
Indeed, compare the non-combat stance of Episciences.org
The project proposes an alternative to existing economic models, without competing with traditional publishers.
with the one of EPI-IAM:
The driving force for this project is the take-over of the best journals in the field by the scientific communities, organised in thematic executive committees (so-called epicommittees) gathering international experts.
This project is intended for:
existing journals wishing to be liberated from a commercial editorial environment or already open-access journals in search of shared support services
newly created journals looking for a simple and highly visible editing environment
“IAM” stands for “Informatics and Applied Mathematics”, great! perhaps the first initiative towards new styles of communication of research, among those from mathematics and hard sciences (well, arXiv excluded, of course), which has a chance to compare with the much more advanced, already functioning ones, from biology and medicine.
In a previous post I wrote that in particular the episciences project looks dead to me. I am happy to be proven wrong!
This is what we need (a dire need in math), not any of the flawed projects which involve gold OA, friends recommendations networks, opaque peer-review and dislike of comments on articles, authority medals dispensed by journals.
It is a revolution, very much like the one 100 years ago in art, which led to an explosion of creativity.
The ball is on our side (and recall that we are not going to get any help from the academic management and colleagues adapted to the old ways).
Congrats EPI-IAM, a development to follow!
_________________________________________________________
How can this be done? Here is a sketch; mind you that I propose things which I believe are possible from a chemical perspective, but I don't have any chemistry knowledge. If you do, and if you are interested to make a chemical concrete machine for graphic lambda calculus, then please contact me.
(1) What has been achieved in one year? (2) What will happen next?
(1) More than 100 posts in the chorasimilarity open notebook cover, with lots of details, everything which will be mentioned further.
I am most grateful for the collaboration with Louis Kauffman. This was a dream for me since I wrote Computing with space: a tangle formalism for chora and difference. Via the continuous enthusiastic social web connector Stephen P. King, we started to work together and we are now in position, after a year, to take a big leap. We wrote two articles GLC actors, artificial chemical connectomes, topological issues and knots , which is for the moment a not very well understood hidden treasure of a distributed computing model, and Chemlambda, universality and self-multiplication, which will be presented at ALIFE 14, concentrating on the self-multiplication phenomenon (see the last post of the thread of expository posts on this here). These works are embedded into hundreds of hours of discussions with many people. These discussions helped at least as motivations for well explaining things.
In parallel the chemlambda paper was published on figshare: Chemical concrete machine. Followed by Zipper logic, another piece of the puzzle.
We had an NSF proposal which was centered around cybersecurity, perhaps too early in the stage of development of the project. However, the theoretical part of the project has been appreciated beyond my expectations; what is needed is the practical implementation.
(2) More and more I become convinced that the distributed, decentralized computing project based on chemlambda would be possible today, provided it is done in the right place and frame. The most recent thoughts are about the use of semantic web tools like RDF and N3logic for this (although I strongly believe in the no semantics slogan).
I shall write much more in a part II post, right now I have a very bad connection…
UPDATE: … so, imagine that chemlambda molecules are RDF datasets, accessible via their respective URIs. If you want to run a computation then you need to impersonate the actors (because the initial actor diagram is already in the structure of the RDF dataset) and to specify a model of computation (i.e. to specify the reduction rules decorated with actors, along with the actors' behaviours, all in N3).
Well designed computations could then have their URIs.
Then, imagine that you want to endow your computer with a microbiome OS, just follow the links.
Another, related direction of future research concerns the IoT, things and space ….
__________________________________________________________
separation of form from content: The principle that one should represent separately the essence of a document and the style with which it is presented.
Applied to decentralized computing, this means no semantics.
[One more confirmation of my impression that logic is something from the 21st century disguised in 19th century clothes.]
___________________________________________________________
In this post I want to concentrate on the mechanism of self-multiplication for g-patterns coming from lambda terms (see part IV where the algorithm of translation from lambda terms to g-patterns is explained).
Before that, please notice that there is a lot to say about an important problem which shall be described later in detail. But here it is, to keep an eye on it.
Chemlambda in itself is only a graph rewriting system. In part V it is explained that the beta reduction from lambda calculus needs an evaluation strategy in order to be used. We noticed that in chemlambda self-multiplication is needed in order to prove that one can do beta reduction as the beta move.
We go towards the obvious conclusion that in chemlambda, reduction (i.e. the beta move) and self-multiplication are just names used for parts of the computation. Indeed, there is a computation which can be done with chemlambda, with some parts where we use the beta move (and possibly some COMB, CO-ASSOC, CO-COMM, LOC PRUNING moves) and other parts where we use DIST and FAN-IN (again possibly with some COMB, CO-ASSOC, CO-COMM, LOC PRUNING moves). These two parts are named reduction and self-multiplication respectively, but in the big computation they mix into a whole. There are only moves, graph rewrites applied to a molecule.
Which brings the problem: chemlambda in itself is not sufficient for having a model of computation. We need to specify how, where, when the reductions apply to molecules.
There may be many variants, roughly described as: sequential, parallel, concurrent, decentralized, random, based on chemical reaction network models, etc
Each model of computation (which can be made compatible with chemlambda) gives a different whole when used with chemlambda. Until now, in this series there has been no mention of a model of computation.
There is another aspect of this. It is obvious that chemlambda graphs form a larger class than lambda terms, and also that the graph rewrites apply to more general situations than beta reduction (and eventually an evaluation strategy). It means that the important problem of defining a model of computation over chemlambda will have influences over the way chemlambda molecules “compute” in general.
The model of computation which I prefer is not based on chemical reaction networks, nor on process calculi, but on a new model, inspired from the Actor Model, called the distributed GLC. I shall explain why I believe that the Actor Model of Hewitt is superior to those mentioned previously (with respect to decentralized, asynchronous computation in the real Internet, and also in the real world), I shall explain what is my understanding of that model and eventually the distributed GLC proposal by me and Louis Kauffman will be exposed in all details.
4. Self-multiplication of a g-pattern coming from a lambda term.
For the moment we concentrate on the self-multiplication phenomenon for g-patterns which represent lambda terms. In the following there is a departure from the ALIFE 14 article. I shall not use the path which consists of passing to combinator patterns, nor shall I discuss in this post why the self-multiplication phenomenon is not confined to the world of g-patterns coming from lambda terms. This is for a future post.
In this post I want to give an image of how these g-patterns self-multiply, in the sense that most of the self-multiplication process can be explained independently of the computing model. Later on we shall come back to this, we shall look outside lambda calculus as well, and we shall also explore the combinator molecules.
OK, let's start. In part V it was noticed that after an application of the beta rule to the g-pattern
L[a,x,b] A[b,c,d] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]
we obtain (via COMB moves)
C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]
and the problem is that we have a g-pattern which is not coming from a lambda term, because it has a FOTREE in the middle of it. It looks like this (recall that FOTREEs are figured in yellow and the syntactic trees are figured in light blue)
The question is: what can happen next? Let’s simplify the setting by taking the FOTREE in the middle as a single fanout node, then we ask what moves can be applied further to the g-pattern
C[x] FO[x,a,b]
Clearly we can apply DIST moves. There are two DIST moves, one for the application node, the other for the lambda node.
There is a chain of propagation of DIST moves through the syntactic tree of C which is independent of the model of computation chosen (i.e. of the rules about which moves are used, and when and where), because the syntactic tree is a tree.
Look what happens. We have the propagation of DIST moves (for the application nodes say) first, which produce two copies of a part of the syntactic tree which contains the root.
At some point we arrive to a pattern which allows the application of a DIST move for a lambda node. We do the rule:
We see that fanins appear! … and then the propagation of DIST moves through the syntactic tree continues until eventually we get this:
So the syntactic tree self-multiplied, but the two copies are still connected by FOTREEs which connect to left.out ports of the lambda nodes which are part of the syntactic tree (figured only one in the previous image).
Notice that now (or even earlier; it does not actually matter, and why will be explained rigorously when we talk about the computing model; for the moment we only want to see that it is possible) we are in position to apply the FAN-IN move. Also, it is clear that by using CO-COMM and CO-ASSOC moves we can shuffle the arrows of the FOTREE, which is "conjugated" with a fanin at the root and with fanouts at the leaves, so that eventually we get this.
The self-multiplication is achieved! It looks strikingly like the anaphase [source]
followed by telophase [source]
____________________________________________________
2. Lambda calculus terms as seen in chemlambda continued.
Let’s look at the structure of a molecule coming from the process of translation of a lambda term described in part IV.
Then I shall make some comments which should be obvious after the fact, but useful later when we discuss the relation between the graphic beta move (i.e. the beta rule for g-patterns) and beta reduction with evaluation strategies.
That will be a central point in the exposition, it is very important to understand it!
So, a molecule (i.e. a pattern with the free ports names erased, see part II for the denominations) which represents a lambda term looks like this:
In light blue is the part of the molecule which is essentially the syntactic tree of the lambda term. The only peculiarity is in the orientation of the arrows of lambda nodes.
Practically this part of the molecule is a tree, which has as nodes the lambda and application ones, but not fanouts, nor fanins.
The arrows are directed towards the up side of the figure. There is no need to draw it like this, i.e. there is no global rule for the edge orientations, contrary to the ZX calculus, where the edge orientations are deduced from the global down-to-up orientation.
We see a lambda node figured, which is part of the syntactic tree. It has the right.out port connecting to the rest of the syntactic tree and the left.out port connecting to the yellow part of the figure.
The yellow part of the figure is a FOTREE (fanout tree). There might be many FOTREEs, in the figure appears only one. By looking at the algorithm of conversion of a lambda term into a g-pattern, we notice that in the g-patterns which represent lambda terms the FOTREEs may appear in two places:
As a consequence of this observation, here are two configurations of nodes which NEVER appear in a molecule which represents a lambda term:
Notice that these two patterns are EXACTLY those which appear as the LEFT side of the moves DIST! More about this later.
Remark also the position of the insertion points of the FOTREE which comes out of the left.out port of the figured lambda node: the out ports of the FOTREE connect with the syntactic tree somewhere lower than where the lambda node is. This is typical for molecules which represent lambda terms. For example the following molecule, which can be described as the g-pattern L[a,b,c] A[c,b,d]
(but with the port variables deleted) cannot appear in a molecule which corresponds to a lambda term.
Let's go back to the first image and continue with "TERMINATION NODE (1)". Recall that termination nodes are used to cap the left.out port of a lambda node which corresponds to a term Lx.A with x not occurring in A.
Finally, "FREE IN PORTS (2)" represents free in ports which correspond to the free variables of the lambda term. As observed earlier, but not figured in the picture, we MAY have free in ports which are in ports of a FOTREE.
I collect here some obvious, in retrospect, facts:
_______________________________________________________
3. The beta move. Reduction and evaluation.
I explain now in what sense the graphic beta move, or beta rule from chemlambda, corresponds to the beta reduction in the case of molecules which correspond to lambda terms.
Recall from part III the definition of the beta move
“
L[a1,a2,x] A[x,a4,a3] <–beta–> Arrow[a1,a3] Arrow[a4,a2]
or graphically
If we use the visual trick from the pedantic rant, we may depict the beta move as:
i.e. we use as free port variables the relative positions of the ports in the doodle. Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graphical level. The chemlambda graphs are not planar graphs.”
The beta reduction in lambda calculus looks like this:
(Lx.B) C –beta reduction–> B[x:=C]
Here B and C are lambda terms and B[x:=C] denotes the term which is obtained from B after we replace all the occurrences of x in B by the term C.
I want to make clear what is the relation between the beta move and the beta reduction. Several things deserve to be mentioned.
It is of course expected that if we translate (Lx.B)C and B[x:=C] into g-patterns, then the beta move transforms the g-pattern of (Lx.B)C into the g-pattern of B[x:=C]. This is not exactly true, in fact it is true in a more detailed and interesting sense.
Before that it is worth mentioning that the beta move applies even to patterns which don't correspond to lambda terms. Hence the beta move has a range of application greater than the beta reduction!
Indeed, look at the third figure from this post, which can’t be a pattern coming from a lambda term. Written as a g-pattern this is L[a,b,c] A[c,b,d]. We can apply the beta move and it gives:
L[a,b,c] A[c,b,d] <-beta-> Arrow[a,d] Arrow[b,b]
which can be followed by a COMB move
Arrow[a,d] Arrow[b,b] <-comb-> Arrow[a,d] loop
Graphically it looks like that.
In particular this explains the need to have the loop and Arrow graphical elements.
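This degenerate beta move is easy to check mechanically (a Python sketch; the tuple encoding of nodes is mine, not part of the formalism):

```python
# The degenerate example above, in tuple notation: beta on
# L[a,b,c] A[c,b,d] leaves an Arrow whose two ends carry the same
# variable, which the COMB move turns into a loop element.

def beta(l, a):
    # L[a1,a2,x] A[x,a4,a3] --beta--> Arrow[a1,a3] Arrow[a4,a2]
    assert l[3] == a[1]
    return [("Arrow", l[1], a[3]), ("Arrow", a[2], l[2])]

def comb_loops(g):
    # an Arrow with both ports equal is exactly a loop
    return [("loop",) if e[0] == "Arrow" and e[1] == e[2] else e
            for e in g]

g = beta(("L", "a", "b", "c"), ("A", "c", "b", "d"))
print(comb_loops(g))  # [('Arrow', 'a', 'd'), ('loop',)]
```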
In chemlambda we make no effort to stay inside the collection of graphs which represent lambda terms. This is very important!
Another reason for this is related to the fact that we can’t check if a pattern comes from a lambda term in a local way, in the sense that there is no local (i.e. involving an a priori bound on the number of graphical elements used) criterion which describes the patterns coming from lambda terms. This is obvious from the previous observation that FOTREEs connect to the syntactic tree lower than their roots.
Or, chemlambda is a purely local graph rewrite system, in the sense that there is a bound on the number of graphical elements involved in any move.
This has as consequence: there is no correct graph in chemlambda. Hence there is no correctness enforcement in the formalism. In this respect chemlambda differs from any other graph rewriting system which is used in relation to lambda calculus or, more generally, to functional programming.
Let’s go back to the beta reduction
(Lx.B) C –beta reduction–> B[x:=C]
Translated into g-patterns the term from the LEFT looks like this:
L[a,x,b] A[b,c,d] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]
where
The beta move does not need all this context, but we need it in order to explain in what sense the beta move does what the beta reduction does.
The beta move needs only the piece L[a,x,b] A[b,c,d]. It is a local move!
Look how the beta move acts:
L[a,x,b] A[b,c,d] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]
<-beta->
Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]
and then 2 comb moves:
Arrow[a,d] Arrow[c,x] C[c] FOTREE[x,a1,...,aN] B[a1,...,aN, a]
<-2 comb->
C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]
Graphically this is:
The graphic beta move, as it looks on syntactic trees of lambda terms, has been discovered in
Wadsworth, Christopher P. (1971). Semantics and Pragmatics of the Lambda Calculus. PhD thesis, Oxford University
This work is the origin of the lazy, or call-by-need evaluation in lambda calculus!
Indeed, the result of the beta move is not B[x:=C], because in the reduction step no substitution x:=C is performed.
In the lambda calculus world, as is well known, one has to supplement lambda calculus with an evaluation strategy. The call-by-need evaluation explains how to do the substitution x:=C in B in an optimized way.
From the chemlambda point of view on lambda calculus, a very interesting thing happens. The g-pattern obtained after the beta move (and obvious comb moves) is
C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]
or graphically
As you can see this is not a g-pattern which corresponds to a lambda term. That is because it has a FOTREE in the middle of it!
Thus the beta move applied to a g-pattern which represents a lambda term gives a g-pattern which can't represent a lambda term.
The g-pattern which represents the lambda term B[x:=C] is
C[a1] ... C[aN] B[a1,...,aN,d]
or graphically
In graphic lambda calculus, or GLC, which is the parent of chemlambda, we pass from the graph which corresponds to the g-pattern
C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]
to the g-pattern of B[x:=C]
C[a1] ... C[aN] B[a1,...,aN,d]
by a GLOBAL FAN-OUT move, i.e. a graph rewrite which looks like this:
if C[x] is a g-pattern with no other free ports than “x” then
C[x] FOTREE[x, a1, ..., aN]
<-GLOBAL FAN-OUT->
C[a1] ... C[aN]
As you can see this is not a local move, because there is no a priori bound on the number of graphical elements involved in the move.
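To see the non-locality concretely, here is GLOBAL FAN-OUT as list copying (a Python sketch; the tuple encoding, the example C and the "_copy" renaming scheme are mine): the number of elements involved grows with N, with no a priori bound.

```python
# Sketch of the non-local GLOBAL FAN-OUT move of GLC: given a
# g-pattern C with a single free port x, produce one copy of C per
# leaf of the FOTREE, renaming the internal port variables per copy.

def global_fan_out(C, x, leaves):
    copies = []
    for i, leaf in enumerate(leaves):
        for e in C:
            copies.append(tuple(
                [e[0]] + [leaf if p == x else "%s_copy%d" % (p, i)
                          for p in e[1:]]))
    return copies

# C[x] here is the g-pattern of Ly.yy, with free out port x
C = [("L", "a", "y", "x"), ("FO", "y", "u", "v"), ("A", "u", "v", "a")]
out = global_fan_out(C, "x", ["a1", "a2", "a3"])
print(len(out))  # 3 copies of the 3 elements of C: 9 elements in all,
                 # so the move touches an unbounded number of elements
```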
That is why I invented chemlambda, which has only local moves!
The evaluation strategy needed in lambda calculus, to know when and how to do the substitution x:=C in B, is replaced in chemlambda by SELF-MULTIPLICATION.
Indeed, this is because the g-pattern
C[x] FOTREE[x,a1,...,aN] B[a1,...,aN,d]
surely has places where we can apply DIST moves (and perhaps later FAN-IN moves).
That is for the next post.
___________________________________________________
2. Lambda calculus terms as seen in chemlambda .
In this post it is explained how to associate a g-pattern to any untyped lambda calculus term.
Important: not every g-pattern (i.e. not every pattern, and not every molecule from chemlambda) is associated to a lambda term!
Recall first what an (untyped) lambda term is.
<lambda term> ::= <variable> | ( <lambda term> <lambda term> ) | ( L <variable> . <lambda term>)
The operation which associates to a pair of lambda terms A and B the term AB is called application.
The operation which associates to a variable x and a term A the term Lx.A is called (lambda) abstraction.
Every variable which appears in a term A is either bound or free. The variable x is bound if it appears under the scope of an abstraction, i.e. there is a part of A of the form Lx.B .
It is allowed to rename the bound variables of a term. This is called alpha renaming or alpha conversion. Two terms which differ only by alpha renaming are considered to be the same.
It is then possible to rename the bound variables of a term such that if x is a bound variable then it appears under the scope of only one abstraction and moreover it does not appear as a free variable.
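For readers who like to experiment, the grammar above can be written down in a few lines of Python. This is only an illustrative encoding: the constructor names Var, App, Lam and the tuple representation are my choices, not chemlambda notation.

```python
# A minimal encoding of the lambda term grammar, as plain Python
# tuples. The names "Var", "App", "Lam" are illustrative choices,
# not part of the chemlambda formalism.

def Var(x):          # <variable>
    return ("Var", x)

def App(A, B):       # ( <lambda term> <lambda term> )
    return ("App", A, B)

def Lam(x, A):       # ( L <variable> . <lambda term> )
    return ("Lam", x, A)

# Example: the term Lx.xx, used later in this post
term = Lam("x", App(Var("x"), Var("x")))
```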
What follows is an algorithm which transforms a lambda term, written in this form which eliminates the ambiguities of the names of bound variables, into a g-pattern. See the post Conversion of lambda calculus terms into graphs for an algorithm which transforms a general lambda term into a GLC graph.
In this algorithm, a variable is said to be “fresh” if it does not appear before the step of the algorithm in question.
We start by declaring that we shall use (lambda term) variables as port variables.
Let Trans[a,A] be the translation operator, which has as input a variable and a lambda term and as output a mess (see part II for the definition of a mess: “A mess is any finite multiset of graphical elements in grammar version.”)
The algorithm defines Trans.
We start from an initial pair a0, A0 , such that a0 does not occur in A0.
Then we define Trans recursively by
Practically, Trans gives a version of the syntactic tree of the term, with some peculiarities related to the use of the grammar version of the graphical elements instead of the usual gates notation for the two operations, and also the strange orientation of the arrow of the lambda node which is decorated by the respective bound variable.
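The recursive definition of Trans is given in the figure; since it does not travel well as text, here is a hedged reconstruction in Python, inferred from the examples in this post. The tuple encodings of terms and of graphical elements are my illustrative choices.

```python
import itertools

# A hedged reconstruction of the Trans operator, inferred from the
# examples in this post (the recursive definition itself is in the
# figure). Terms use the illustrative encoding ("Lam", x, A) for
# Lx.A, etc.; graphical elements are tuples, e.g. ("L","a1","x","a")
# stands for L[a1,x,a].

_count = itertools.count(1)

def fresh():
    # fresh port variables a1, a2, ... never used before
    return "a%d" % next(_count)

def Trans(a, term):
    tag = term[0]
    if tag == "Var":
        # Trans[a, x] = Arrow[x,a]
        return [("Arrow", term[1], a)]
    if tag == "Lam":
        # Trans[a, Lx.A] = L[a1,x,a] Trans[a1, A], with a1 fresh
        x, A = term[1], term[2]
        a1 = fresh()
        return [("L", a1, x, a)] + Trans(a1, A)
    if tag == "App":
        # Trans[a, (A B)] = A[a1,a2,a] Trans[a1, A] Trans[a2, B]
        A, B = term[1], term[2]
        a1, a2 = fresh(), fresh()
        return [("A", a1, a2, a)] + Trans(a1, A) + Trans(a2, B)

# The example below: Trans[a, Lx.xx]
mess = Trans("a", ("Lam", "x", ("App", ("Var", "x"), ("Var", "x"))))
# mess = L[a1,x,a] A[a2,a3,a1] Arrow[x,a2] Arrow[x,a3]
```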
Trans[a0,A0] is a mess and not a g-pattern because there may be (port) variables which occur more than twice. There are two possible cases for this:
Let’s see examples:
As you see the port variable x appears 3 times, once as an out port variable, in L[a1,x,a] , and twice as an in port variable, in Arrow[x,a2] Arrow[x,a3] .
In this case the port variable z does not occur as an out port variable, but it appears twice as an in port variable, in Arrow[z,a4] Arrow[z,a6].
To pass from a mess to a g-pattern is easy now: we shall introduce fanout nodes.
Indeed, an FO tree with free in port a and free out ports a1, a2, …, aN is, by definition, ANY g-pattern formed by the rules:
Remark that by a sequence of CO-COMM and CO-ASSOC moves we can pass from any FO tree with free in port variable a and free out port variables a1, …, aN to any other FO tree with the same free in or out port variables.
We shall not choose a canonical FO tree associated to a pair formed by one free in port variable and a finite set of free out port variables, for this reason. (However, in any algorithm where FO trees have to be constructed, such a choice will be embedded in the respective algorithm.)
In order to transform the mess which is outputted by the Trans operator, we have to solve the cases (a), (b) explained previously.
(a) Suppose that there is a port variable x which satisfies the description for (a), namely that x occurs once as an out port variable and more than once as an in port variable. Remark that, because of the definition of the Trans operator, the port variable x will appear at least twice in a list Arrow[x,a1] … Arrow[x,aN] and only once somewhere in a node L[b,x,c].
Pick then an FO tree FOTREE[x,a1,...,aN] with the only free in port variable x and the only free out port variables a1, …, aN. Erase then from the mess outputted by Trans the collection Arrow[x,a1] … Arrow[x,aN] and replace it by FOTREE[x,a1,...,aN].
In this way the port variable x will occur only once at an out port, namely in L[b,x,c], and only once at an in port, namely at the first FO[x,...] element of the FO tree FOTREE[x,a1,...,aN].
Let’s see this in our example; we have
Trans[a, Lx.xx] = L[a1,x,a] A[a2,a3,a1] Arrow[x,a2] Arrow[x,a3]
so the variable x appears at an out port in the node L[a1,x,a] and at in ports in the list Arrow[x,a2] Arrow[x,a3] .
There is only one FO tree with the free in port x and the free out ports a2, a3, namely FO[x,a2,a3]. Delete the list Arrow[x,a2] Arrow[x,a3] and replace it by FO[x,a2,a3]. This gives
L[a1,x,a] A[a2,a3,a1] FO[x,a2,a3]
which is a g-pattern! Here is what we do, graphically:
(b) Suppose that there is a port variable x which satisfies the description for (b), namely that x does not occur as an out port variable but it occurs more than once as an in port variable. Because of the definition of the Trans operator, it must be that x will appear at least twice in a list Arrow[x,a1] … Arrow[x,aN] and nowhere else.
Pick then an FO tree FOTREE[x,a1,...,aN] with the only free in port variable x and the only free out port variables a1, …, aN.
Delete Arrow[x,a1] … Arrow[x,aN] and replace it by FOTREE[x,a1,...,aN] .
In this way the variable x will appear only once, as a free in port variable.
For our example, we have
Trans[a,(xz)(yz)] = A[a1,a2,a] A[a3,a4,a1] Arrow[x,a3] Arrow[z,a4] A[a5,a6,a2] Arrow[y,a5] Arrow[z,a6]
and the problem is with the port variable z which does not occur in any out port, but it does appear twice as an in port variable, namely in Arrow[z,a4] Arrow[z,a6] .
We delete Arrow[z,a4] Arrow[z,a6] and replace it by FO[z,a4,a6] and we get the g-pattern
A[a1,a2,a] A[a3,a4,a1] Arrow[x,a3] FO[z,a4,a6] A[a5,a6,a2] Arrow[y,a5]
In graphical version, here is what has been done:
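Both cases (a) and (b) can be sketched in code: every port variable which is the source of more than one Arrow element gets its Arrows replaced by one FO tree. The tuple encoding of g-patterns, and the helper names fo_tree, insert_fo_trees and the internal ports u1, u2, …, are my illustrative choices; the left-leaning chain is one arbitrary choice of FO tree, which as remarked above works up to CO-COMM and CO-ASSOC moves.

```python
from collections import defaultdict

# A sketch of the mess -> g-pattern step, covering both case (a)
# and case (b). G-patterns are written, for illustration, as lists
# of tuples, e.g. ("Arrow","x","a2") for Arrow[x,a2]. The helper
# names and the internal port variables u1, u2, ... are hypothetical.

def fo_tree(x, outs, fresh):
    # an FO tree with free in port x and free out ports outs,
    # built as a left-leaning chain (one arbitrary choice)
    if len(outs) == 2:
        return [("FO", x, outs[0], outs[1])]
    u = fresh()
    return [("FO", x, outs[0], u)] + fo_tree(u, outs[1:], fresh)

def insert_fo_trees(mess):
    counter = [0]
    def fresh():
        counter[0] += 1
        return "u%d" % counter[0]
    # collect, for each port variable, the targets of its Arrows
    sources = defaultdict(list)
    for el in mess:
        if el[0] == "Arrow":
            sources[el[1]].append(el[2])
    # keep everything except the repeated Arrow lists
    kept = [el for el in mess
            if not (el[0] == "Arrow" and len(sources[el[1]]) > 1)]
    # replace each repeated list by an FO tree
    trees = []
    for x, outs in sources.items():
        if len(outs) > 1:
            trees += fo_tree(x, outs, fresh)
    return kept + trees

# Case (a) example: L[a1,x,a] A[a2,a3,a1] Arrow[x,a2] Arrow[x,a3]
mess = [("L", "a1", "x", "a"), ("A", "a2", "a3", "a1"),
        ("Arrow", "x", "a2"), ("Arrow", "x", "a3")]
# becomes L[a1,x,a] A[a2,a3,a1] FO[x,a2,a3], which is a g-pattern
```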
OK, we are almost done.
It may happen that there are out port variables which come from a subterm Lx.A with x not occurring in A (i.e. the bound variable is never used). For example let’s start with a0=a and A0 = Lx.(Ly. x) . Then Trans[a,Lx.(Ly.x)] = L[a1,x,a] Trans[a1, Ly.x] = L[a1,x,a] L[a2,y,a1] Trans[a2,x] = L[a1,x,a] Arrow[x,a2] L[a2,y,a1].
There is the port variable y which appears only as an out port variable in an L node, here L[a2,y,a1], and not elsewhere.
For those port variables x which appear only in an L[a,x,b] we add a termination node T[x].
In our example L[a1,x,a] Arrow[x,a2] L[a2,y,a1] becomes L[a1,x,a] Arrow[x,a2] L[a2,y,a1] T[y]. Graphically this is
We may still have Arrow elements which can be absorbed into the ports of the nodes, therefore we close the conversion algorithm by:
Apply the COMB moves (see part III) in the + direction and repeat until there is no place to apply them any more.
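A sketch of this closing step, on the same illustrative tuple encoding of g-patterns. Only one side of the COMB family is implemented here: an Arrow[u,v] whose target v is an in port of some node is absorbed by renaming that port to u; Arrows ending at free ports are left alone. This is enough for the example above (part III lists the full family of COMB moves).

```python
# Absorb Arrow elements into the in ports of adjacent nodes,
# repeating until no COMB move in the "+" direction applies.
# G-patterns are Python lists of tuples, e.g. ("L","a1","x","a")
# for L[a1,x,a]; this encoding is an illustrative choice.

# which tuple positions are in ports, per node type
NODE_IN_PORTS = {"L": [1], "FO": [1], "A": [1, 2], "FI": [1, 2], "T": [1]}

def comb_plus(gpattern):
    pattern = list(gpattern)
    while True:
        absorbed = False
        for i, el in enumerate(pattern):
            if el[0] != "Arrow":
                continue
            u, v = el[1], el[2]
            for j, node in enumerate(pattern):
                if i == j or node[0] == "Arrow":
                    continue
                ins = NODE_IN_PORTS.get(node[0], [])
                if any(node[k] == v for k in ins):
                    new = list(node)
                    for k in ins:
                        if new[k] == v:
                            new[k] = u
                    pattern[j] = tuple(new)
                    del pattern[i]     # the invisible node disappears
                    absorbed = True
                    break
            if absorbed:
                break
        if not absorbed:
            return pattern

# The example above: L[a1,x,a] Arrow[x,a2] L[a2,y,a1] T[y]
gp = [("L", "a1", "x", "a"), ("Arrow", "x", "a2"),
      ("L", "a2", "y", "a1"), ("T", "y")]
# comb_plus(gp) gives L[a1,x,a] L[x,y,a1] T[y]
```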
Exercise: Consider the Y combinator
Y = Lf.( (Lx. f(xx)) (Ly. f(yy)) )
Find its conversion as a g-pattern.
________________________________________________________________
Here is the portrait of the ideal collaborator:
Oh, and one who can discuss across borders with quick-learning mathematicians.
ALTERNATIVELY:
If interested please call and let’s make stuff that counts!
______________________________________________________________
1. The chemlambda formalism continued: graph rewrites.
Now we have all we need to talk about graph rewrites.
For clarity, see part II for the notion of “pattern”. Its meaning depends on what we use: the graphical version or the grammar version. In the graphical version a pattern is a chemlambda graph with the free ports and invisible nodes decorated with port variables. In the grammar version we have the equivalent notion of a g-pattern, which is a way to write a pattern as a multiset of graphical elements.
It is allowed to rename the port variables in a g-pattern, such that after the renaming we still get a g-pattern. That means that if M is a g-pattern and f is any one-to-one function from V(M) to another set of port variables A, then we may replace any port variable x from V(M) by f(x). We shall not think about this (sort of alpha renaming for g-patterns) as being a graph rewrite.
I shall use the following equivalent names further:
In simple words, a graph rewrite is a rule which says: “replace the LEFT pattern by the RIGHT pattern”.
Let’s see more precisely then what is a graph rewrite. (Technically this is a simple form of graph rewrite, which is not dependent on context, later we may speak about more involved forms. First let’s understand exactly this simple form!)
In order to define a graph rewrite, or move, we need two g-patterns, call them LEFT and RIGHT, such that (perhaps after a renaming of port variables):
A move is a pair of such g-patterns. The first is called the LEFT pattern of the move, the second is called the RIGHT pattern of the move.
The move can be performed from LEFT to RIGHT, called sometimes the “+” direction: replace the LEFT pattern by the RIGHT pattern.
Likewise, the move can be performed from RIGHT to LEFT, called sometimes the “-” direction: replace the RIGHT pattern with the LEFT pattern.
Technically, what I describe here can be made fully explicit as a DPO (double pushout) graph rewrite.
Even if the moves are reversible (they can be performed in the + or – direction), there is a preference to use only the “+” direction (and to embed, if needed, a move performed in the “-” direction into a sequence of moves, called “macro”, more about this later).
The “+” direction is not arbitrarily defined.
_________________________________________________________
OK, enough with these preparations, let’s see the moves.
We shall write the moves in two ways, which are equivalent.
When expressed with g-patterns, they are written as
LEFT pattern <–name of move–> RIGHT pattern
When expressed with patterns (i.e. graphically), they appear as
The port names appear in blue. The name of the move appears in blue, the LEFT is on the left, the RIGHT is on the right, the move is figured by a blue arrow.
Pedantic, but perhaps useful rant. For some reason, there are people who confuse graphs (which are clearly defined mathematical objects) with their particular representations (i.e. doodles), taking them “literally”. Graphs are graphs and doodles are doodles. When people use doodles for reasoning with graphs, this is for economy of words, the famous “a picture is worth a thousand words”. There is nothing wrong with using doodles for reasoning with graphs, as long as you know the convention used. Perhaps the convention is so intuitive that it would need 1000000 words to make it clear (for a machine), but there is a simple criterion which helps those who don’t trust their sight: you got it right if you understand what the doodle means at the graph level.
Look again at the previous picture, which shows you what a generic move looks like. The move (from LEFT to RIGHT) consists of:
How simple is that?
To make it even more simple, we use the following visual trick: use the relative placements of the free ports in the doodle as the port variables.
If you look carefully at the previous picture, then you notice that you may redraw it (without affecting what the doodle means at the graph level) by representing the free ports of the RIGHT in the same relative positions as the free ports from the LEFT.
The drawing would then look like this:
Then you may notice that you don’t need to write the port variables on the doodles, because they have the same relative positions, so you may as well describe the move as:
This is the convention used everywhere in the doodles from this blog (and it’s nothing special, it’s used everywhere).
I shall close the pedantic rant by saying that there is a deep hypocrisy in the claim that there is ANY need to spend so many words to make clear things clear, like the distinction between graphs and doodles, and relative positions and so on. I ask those who think that text on a page is clear and (a well done) doodle is vague: do you attach to your text an introduction which explains that you are going to use latin letters, that no, a letter and its image in the mirror are not the same, that words are to be read from left to right, that space is the character which separates two words, that if you hit the end of a text line then you should pass to the next line, that a line is a sequence of characters terminated by an invisible character eol, …? All this is good info for making a text editor, but you don’t need to program a text editor first in order to read a book (or to program a text editor). It would be just crazy, right? Our brains use exactly the same mechanisms to parse a page of text as to parse a doodle depicting a graph. Our brains understand very well that if you change the text fonts then you don’t change the text, and so on. A big hypocrisy, which I believe has big effects in the divide between various nerd subcultures, like IT and geometers, with a handicapping effect which manifests in real life, in the form of the products IT is offering. Well, end of rant.
Combing moves. These moves are not present in the original chemlambda formalism, because they are needed at the level of the g-patterns. Recall from part I that Louis Kauffman proposed to use commutative polynomials as graphical elements, which brings the need to introduce the Arrow element Arrow[x,y]. This is the same as introducing invisible nodes in the chemlambda molecules (hence the passage from molecules to patterns). The combing moves are moves which eliminate (or add) invisible nodes in patterns. This corresponds in the graphical version to decorations (of those invisible nodes) on arrows of the molecules.
A combing move eliminates an invisible node (in the + direction) or adds an invisible node (in the – direction).
A first combing move is this:
Arrow[x,y] Arrow[y,z] <–comb–> Arrow[x,z]
or graphically remove (or add) a (decoration of an) invisible node :
Another combing move is:
Arrow[x,x] <–comb–> loop
or graphically an arrow with the in and out ports connected is a loop.
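These first two combing moves can be sketched as one normalization pass. G-patterns are written, for illustration only, as Python lists of tuples, e.g. ("Arrow","x","y") for Arrow[x,y]; the function name comb_arrows is my choice.

```python
# Apply the two combing moves above in the "+" direction until
# neither applies: compose chained Arrows, and turn Arrow[x,x]
# into a loop.

def comb_arrows(pattern):
    pattern = list(pattern)
    changed = True
    while changed:
        changed = False
        for i, a in enumerate(pattern):
            if a[0] != "Arrow":
                continue
            if a[1] == a[2]:
                # Arrow[x,x] <--comb--> loop
                pattern[i] = ("loop",)
                changed = True
                break
            for j, b in enumerate(pattern):
                if i != j and b[0] == "Arrow" and a[2] == b[1]:
                    # Arrow[x,y] Arrow[y,z] <--comb--> Arrow[x,z]
                    pattern[i] = ("Arrow", a[1], b[2])
                    del pattern[j]
                    changed = True
                    break
            if changed:
                break
    return pattern

# A closed chain of three Arrows combs down to a loop:
# Arrow[x,y] Arrow[y,z] Arrow[z,x] --> Arrow[x,z] Arrow[z,x]
#                                  --> Arrow[x,x] --> loop
```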
Another family of combing moves is that if you connect an arrow to a port of a node then you can absorb the arrow into the port:
L[x,y,z] Arrow[u,x] <–comb–> L[u,y,z]
L[x,y,z] Arrow[y,u] <–comb–> L[x,u,z]
L[x,y,z] Arrow[z,u] <–comb–> L[x,y,u]
______________________________________
FO[x,y,z] Arrow[u,x] <–comb–> FO[u,y,z]
FO[x,y,z] Arrow[y,u] <–comb–> FO[x,u,z]
FO[x,y,z] Arrow[z,u] <–comb–> FO[x,y,u]
______________________________________
A[x,y,z] Arrow[u,x] <–comb–> A[u,y,z]
A[x,y,z] Arrow[u,y] <–comb–> A[x,u,z]
A[x,y,z] Arrow[z,u] <–comb–> A[x,y,u]
______________________________________
FI[x,y,z] Arrow[u,x] <–comb–> FI[u,y,z]
FI[x,y,z] Arrow[u,y] <–comb–> FI[x,u,z]
FI[x,y,z] Arrow[z,u] <–comb–> FI[x,y,u]
______________________________________
Now, more interesting moves.
The beta move. The name is inspired by the beta reduction of lambda calculus (explanations later).
L[a1,a2,x] A[x,a4,a3] <–beta–> Arrow[a1,a3] Arrow[a4,a2]
or graphically
If we use the visual trick from the pedantic rant, we may depict the beta move as:
i.e. we use as free port variables the relative positions of the ports in the doodle. Of course, there is no node at the intersection of the two arrows, because there is no intersection of arrows at the graphical level. The chemlambda graphs are not planar graphs.
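A sketch of the beta move in the + direction, on g-patterns written for illustration as Python lists of tuples (the function name beta_plus is my choice). It looks for an L node whose third port is wired to the first port of an A node and replaces the pair by two Arrow elements, exactly as written above.

```python
# The beta move in the "+" direction:
# L[a1,a2,x] A[x,a4,a3] --> Arrow[a1,a3] Arrow[a4,a2]

def beta_plus(pattern):
    """Rewrite the first redex found; return the g-pattern
    unchanged if there is no redex."""
    for i, l in enumerate(pattern):
        if l[0] != "L":
            continue
        for j, a in enumerate(pattern):
            if a[0] == "A" and a[1] == l[3]:
                a1, a2 = l[1], l[2]          # ports of L[a1,a2,x]
                a4, a3 = a[2], a[3]          # ports of A[x,a4,a3]
                rest = [el for k, el in enumerate(pattern)
                        if k not in (i, j)]
                return rest + [("Arrow", a1, a3), ("Arrow", a4, a2)]
    return pattern

redex = [("L", "a1", "a2", "x"), ("A", "x", "a4", "a3")]
# beta_plus(redex) gives Arrow[a1,a3] Arrow[a4,a2]
```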
The FAN-IN move. This is a move which resembles the beta move.
FI[a1,a4,x] FO[x,a2,a3]
<–FAN-IN–>
Arrow[a1,a3] Arrow[a4,a2]
(I wrote it like this because it does not fit in one line)
Graphically, with the obvious convention from the pedantic rant, the move is this:
The FAN-OUT moves. There are two moves: CO-COMM (because it resembles a diagram which expresses co-commutativity) and CO-ASSOC (same reason, but for co-associativity).
FO[x,a1,a2] <–CO-COMM–> FO[x,a2,a1]
and
FO[a1,u,a2] FO[u,a3,a4]
<-CO-ASSOC->
FO[a1,a3,v] FO[v,a4,a2]
or graphically:
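The two FAN-OUT moves can be sketched as functions applied at chosen nodes of a g-pattern, again in the illustrative tuple encoding. CO-ASSOC needs a fresh port variable v, supplied here by the caller; the function names are my choices.

```python
# The two FAN-OUT moves, applied at chosen node indices of a
# g-pattern written as a Python list of tuples.

def co_comm(pattern, i):
    # FO[x,a1,a2] <--CO-COMM--> FO[x,a2,a1]
    fo = pattern[i]
    assert fo[0] == "FO"
    out = list(pattern)
    out[i] = ("FO", fo[1], fo[3], fo[2])
    return out

def co_assoc(pattern, i, j, v):
    # FO[a1,u,a2] FO[u,a3,a4] <-CO-ASSOC-> FO[a1,a3,v] FO[v,a4,a2]
    p, q = pattern[i], pattern[j]
    assert p[0] == "FO" and q[0] == "FO" and p[2] == q[1]
    a1, a2 = p[1], p[3]
    a3, a4 = q[2], q[3]
    out = list(pattern)
    out[i] = ("FO", a1, a3, v)
    out[j] = ("FO", v, a4, a2)
    return out
```

Both functions return a new list; note that applying co_comm twice at the same node gives back the original g-pattern, as expected from a reversible move.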
The DIST moves. These are called distributivity moves. Remark that the LEFT pattern is simpler than the RIGHT pattern in both moves.
A[a1,a4,u] FO[u,a2,a3]
<–DIST–>
FO[a1,a,b] FO[a4,c,d] A[a,c,a2] A[b,d,a3]
and
L[a1,a4,u] FO[u,a2,a3]
<–DIST–>
FI[a1,a,b] FO[a4,c,d] L[c,b,a2] L[d,a,a3]
or graphically:
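The two DIST moves can be sketched in the same illustrative tuple encoding, applied at chosen node indices; the four fresh port variables a, b, c, d needed by the RIGHT pattern are supplied by the caller, and the function names are my choices.

```python
# The two DIST moves in the "+" direction, at node indices i
# (the A or L node) and j (the FO node) of a g-pattern written
# as a Python list of tuples.

def dist_A(pattern, i, j, a, b, c, d):
    # A[a1,a4,u] FO[u,a2,a3] --> FO[a1,a,b] FO[a4,c,d] A[a,c,a2] A[b,d,a3]
    app, fo = pattern[i], pattern[j]
    assert app[0] == "A" and fo[0] == "FO" and app[3] == fo[1]
    a1, a4 = app[1], app[2]
    a2, a3 = fo[2], fo[3]
    rest = [el for k, el in enumerate(pattern) if k not in (i, j)]
    return rest + [("FO", a1, a, b), ("FO", a4, c, d),
                   ("A", a, c, a2), ("A", b, d, a3)]

def dist_L(pattern, i, j, a, b, c, d):
    # L[a1,a4,u] FO[u,a2,a3] --> FI[a1,a,b] FO[a4,c,d] L[c,b,a2] L[d,a,a3]
    lam, fo = pattern[i], pattern[j]
    assert lam[0] == "L" and fo[0] == "FO" and lam[3] == fo[1]
    a1, a4 = lam[1], lam[2]
    a2, a3 = fo[2], fo[3]
    rest = [el for k, el in enumerate(pattern) if k not in (i, j)]
    return rest + [("FI", a1, a, b), ("FO", a4, c, d),
                   ("L", c, b, a2), ("L", d, a, a3)]
```

Note how the LEFT pattern (two nodes) grows into a RIGHT pattern of four nodes, which is the remark made above about DIST being the only moves whose RIGHT side is more complex.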
The LOCAL PRUNING moves. These are used with the termination node. There are four moves:
FO[a1,a2,x] T[x] <–LOC-PR–> Arrow[a1,a2]
L[a1,x,y] T[x] T[y] <–LOC-PR–> T[a1]
FI[a1,a2,x] T[x] <–LOC-PR–> T[a1] T[a2]
A[a1,a2,x] T[x] <–LOC-PR–> T[a1] T[a2]
or graphically
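The four LOCAL PRUNING moves can be sketched as one function in the same illustrative tuple encoding. Each branch matches exactly one of the listed LEFT patterns (e.g. the FO move fires only when the second out port is terminated; apply CO-COMM first otherwise); the name loc_pr is my choice.

```python
# The four LOCAL PRUNING moves in the "+" direction, on g-patterns
# written as Python lists of tuples. The first applicable move is
# performed; the pattern is returned unchanged when no node is
# wired to a termination node T.

def loc_pr(pattern):
    # index of the T node terminating each port variable
    t_at = {t[1]: k for k, t in enumerate(pattern) if t[0] == "T"}
    def drop(idxs):
        return [e for k, e in enumerate(pattern) if k not in idxs]
    for i, n in enumerate(pattern):
        if n[0] == "FO" and n[3] in t_at:
            # FO[a1,a2,x] T[x] --> Arrow[a1,a2]
            return drop({i, t_at[n[3]]}) + [("Arrow", n[1], n[2])]
        if n[0] == "L" and n[2] in t_at and n[3] in t_at:
            # L[a1,x,y] T[x] T[y] --> T[a1]
            return drop({i, t_at[n[2]], t_at[n[3]]}) + [("T", n[1])]
        if n[0] in ("FI", "A") and n[3] in t_at:
            # FI[a1,a2,x] T[x] --> T[a1] T[a2]  (same for A)
            return drop({i, t_at[n[3]]}) + [("T", n[1]), ("T", n[2])]
    return pattern
```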
____________________________________________________________
“The proceedings of ALIFE 14 are now available from MIT Press. The full proceedings, as well as individual papers, are freely available under Creative Commons licenses.
http://mitpress.mit.edu/books/artificial-life-14
“
Great!
Here is a link to our published article.
______________________________________________________