… opens when I look at graphic lambda calculus as a graph rewriting system (see also the foundational Term graph rewriting by Barendregt et al.). The first, most obvious one, is that by treating graphic lambda calculus as a particular GRS, I might USE already-written software for applications, like the chemical concrete machine or the B-type neural networks (more on this below). There are other possibilities, much, much more interesting from my point of view.
The reason for writing this post is that I feel a bit like the character Lawrence Pritchard Waterhouse from Neal Stephenson’s Cryptonomicon, more specifically as described by the character Alan Turing in a discussion with Rudolf von Hacklheber. Turing (the character in the book) describes science as a train with several locomotives, named “Newton” and so on (Hacklheber suggests there’s a “Leibniz” locomotive as well), with today’s scientists in the railroad cars, and finally with Lawrence running after the train with all his might, trying to keep pace.
When you change research subjects as much as I have, this feeling is natural, right? So, as for the lucky Lawrence of the book (lucky because he has the chance to be friends with Turing), there is only one way to keep pace: collaboration. Why run after the GRS train when there is already amazing research done? My graphic lambda calculus is a particular GRS, designed so that it has applications in real (non-silicon) life, like biological vision (hopefully), chemistry, etc. In real life, I believe, geometry rules, not term rewriting, not types, not any form of the arbor porphyriana. These are extremely useful tools for getting predictions out of models on (silicon) computers. Nature has a way of being massively (and geometrically) parallel, extremely fast, and unpreoccupied with the Cartesian method. On the other hand, in order to get predictions from geometrically interesting models (like graphic lambda calculus and eventually emergent algebras), there is no better tool for simulations than the computer.
Graphic lambda calculus is not just any GRS, but one which has very interesting properties, as I hope is shown in this open notebook/blog. So, I am not especially interested in the way graphic lambda calculus falls into the general formalism of GRS, for various reasons, among them (I might be naive to think this) the heavy underlying machinery which seems to be needed to reduce the geometrically beautiful formalism to some linear writing formalism dear to logicians. But how would one write a (silicon) computer program without this reduction? Mind you, Nature does not need this step; for example, a chemical concrete machine may (I hope) be implemented in reality just by mixing the right choice of substances well (a gross simplification which serves to describe the fact that reality is effortlessly parallel).
All this is to call the attention of potential GRS specialists to the subject of graphic lambda calculus and its software implementation (in particular), which is surely more productive than becoming such a specialist myself and then using GRS for graphic lambda calculus.
Please don’t let me run after this train 🙂 The king character from Brave says it better than I do (at 00:40 in this clip):
Now, back to the other possibilities. I’ll give you evidence for these, so that you can judge for yourself. I started from the following idea, which is related to the use of graphic lambda calculus for neural networks. I wrote previously that the NNs which I propose have the strange property that there’s nothing circulating through the wires of the network. Indeed, a more correct view is the following: say you have a network of real neurons which is doing something. Whatever the network does, it does by physical and chemical mechanisms. Imagine the network, then imagine, superimposed over the network, a dynamical chemical reaction network which explains what the real network does. The same idea governs the NNs which I began to describe in some of the previous posts. Instead of the neurons there are graphs, which link to others through synapses, which are also graphs. The “computation” consists of graph rewriting moves, so at the end of the computation the initial graphs and synapses are “consumed”. This image fits well not with the picture of the physical neural network, but with that of the superimposed chemical reaction network.
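To make the “graphs consumed by moves” picture concrete, here is a toy sketch in Python (my own hypothetical illustration, not any existing GLC software, and with invented node/port names): a graph is a dictionary of labelled nodes plus a set of wires between (node, port) endpoints, and a single beta-like move deletes a matched lambda/application pair while rewiring the surrounding wires, so the matched subgraph is literally consumed by the computation.

```python
def neighbour(edges, end):
    """Return the endpoint wired to `end`, or None if the port is free."""
    for a, b in edges:
        if a == end:
            return b
        if b == end:
            return a
    return None

def beta_move(kinds, edges):
    """Apply one beta-like graph rewriting move, if a redex exists.

    kinds: dict node -> "lambda" | "app" | "free"
    edges: set of ((node, port), (node, port)) wires

    The move finds a lambda node whose "out" port is wired to the "fun"
    port of an application node, deletes both nodes, and rewires: the
    application's argument goes where the bound variable was, and the
    lambda's body goes to the application's output.  (This sketch
    assumes the redex's remaining four ports are all wired.)
    """
    for lam, kind in kinds.items():
        if kind != "lambda":
            continue
        fun = neighbour(edges, (lam, "out"))
        if fun is None or fun[1] != "fun" or kinds.get(fun[0]) != "app":
            continue
        app = fun[0]
        # Endpoints that survive the move.
        body = neighbour(edges, (lam, "body"))
        var = neighbour(edges, (lam, "var"))
        arg = neighbour(edges, (app, "arg"))
        out = neighbour(edges, (app, "out"))
        # The matched pair is consumed: drop every wire touching it ...
        edges = {e for e in edges
                 if e[0][0] not in (lam, app) and e[1][0] not in (lam, app)}
        # ... and reconnect the loose ends.
        edges.add((arg, var))
        edges.add((body, out))
        kinds = {n: k for n, k in kinds.items() if n not in (lam, app)}
        return kinds, edges
    return kinds, edges  # no redex: the graph is left unchanged

# A tiny redex wired to four "free" boundary nodes.
kinds = {"L": "lambda", "A": "app",
         "B": "free", "V": "free", "N": "free", "R": "free"}
edges = {(("L", "out"), ("A", "fun")),
         (("L", "body"), ("B", "p")),
         (("L", "var"), ("V", "p")),
         (("A", "arg"), ("N", "p")),
         (("A", "out"), ("R", "p"))}
kinds, edges = beta_move(kinds, edges)
# After the move, L and A are gone and only the rewired boundary wires remain.
```

Nothing “flows” through the wires here: the state of the computation is the graph itself, and each move consumes part of it, which is exactly the chemical-reaction-network reading of the network above.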
I imagine this idea is not new, so I started to google for it. I have not found exactly the same idea yet (thank you in advance for any hints), but here is what I have found:
- google scholar for “chemical reaction network” “neural network” gives: Chemical implementation of neural networks and Turing machines, A new approach to decoding life: systems biology, and Generic properties of chemical networks: Artificial chemistry based on graph rewriting
- jump to the wiki page about graph rewriting systems, where we find “Term graphs are also used as abstract machines capable of modelling chemical and biological computations as well as graphical calculi such as concurrency models”. Jump to TERMGRAPH2013, where we find “the modelling of biological or chemical abstract machines”,
- among other “implementations and applications” of GRS we find Ben Goertzel’s OpenCog.
That’s already too much to have on one’s plate, right? But I think I have made my point: a sea of possibilities. I can think about them, instead of “running after the train”.
Research is so much fun!