Seriously! It’s done. An actor-like model, which uses graphic lambda calculus for distributed computing (i.e. parallel, asynchronous, no controller). No evaluation needed, everything is local. There is no signal circulating far through the wires, only small local exchanges. In this respect it resembles what happens in chemical reaction networks, but it goes much further than that.
Looks perfectly feasible, needs angel investor.
UPDATE: Let me add a few more details about this. The awesome thing is not in the actor model side alone, nor in graphic lambda calculus or chemlambda alone. It is in the combination of the two. Graphic lambda calculus brings the possibility of doing distributed computing (and even far away from the usual realm of logic, into the field of computing with space, see A me for space). The space which graphic lambda calculus needs is, of course, the network. But the network lacks an essential ingredient of space: the proximity relations. This is where the actor model is useful.
Secondly, it is well known that, in the usual sense, computation needs two ingredients: reduction and evaluation. They appear in one form or another in every computation model I am aware of, except graphic lambda calculus + actors! Here only local reduction moves are used (if we rely purely on chemlambda instead of graphic lambda calculus), after an initial stage of preparation of the computation.
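To make “only local reduction moves, no evaluation” concrete, here is a toy sketch in Python. The encoding (nodes as a set, wires as frozensets of (node, port) endpoints, the port names) is invented for this post, not the official GLC notation; but the rewrite itself is the graphic beta move: it touches only the two nodes of the redex and their five incident wires.

```python
# A toy encoding of the graphic beta move (names and data layout invented
# for this sketch; official GLC graphs are oriented trivalent graphs).
# A graph is a set of nodes plus a set of wires, each wire being a
# frozenset of two (node, port) endpoints.

def beta_move(nodes, wires, lam, app):
    """Apply the graphic beta move to a lambda node `lam` whose output is
    wired to the function port of an application node `app`.  Only these
    two nodes and their five incident wires are touched: the move is
    local, and no evaluation happens anywhere."""
    def peer(node, port):
        # follow the wire attached at (node, port) to its other endpoint
        for w in wires:
            if (node, port) in w:
                (other,) = w - {(node, port)}
                return w, other
        raise KeyError((node, port))

    w0, _    = peer(lam, "out")    # the redex wire lam.out -- app.fun
    w1, body = peer(lam, "body")   # where the lambda's body comes from
    w2, var  = peer(lam, "var")    # where the bound variable goes
    w3, arg  = peer(app, "arg")    # the argument of the application
    w4, res  = peer(app, "out")    # where the application's result goes

    # delete the two nodes and the five wires, then reconnect:
    # body -> result, argument -> variable
    new_nodes = nodes - {lam, app}
    new_wires = (wires - {w0, w1, w2, w3, w4}) | {
        frozenset({body, res}),
        frozenset({arg, var}),
    }
    return new_nodes, new_wires

# a redex: lambda node "L" applied, via application node "A", to "G"
nodes = {"L", "A"}
wires = {
    frozenset({("L", "out"), ("A", "fun")}),
    frozenset({("L", "body"), ("B", "in")}),
    frozenset({("L", "var"), ("V", "in")}),
    frozenset({("A", "arg"), ("G", "in")}),
    frozenset({("A", "out"), ("R", "in")}),
}
nodes, wires = beta_move(nodes, wires, "L", "A")
# both nodes are gone; B is now wired to R, and G to V
```

Note that the function never looks at what B, V, G or R are: that is the whole point, the move is pure rewiring of a bounded neighbourhood.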
This has implications for understanding how brains “compute”. Indeed, there are (at least, but let’s simplify) two things which happen in a brain: one is (electrical) signal transmission (chemically mediated at the synapses), the other is neural network rewiring (as a form of “learning”). The computation model I propose may be used in two places:
- in the understanding of the computational aspects of the chemical connectome of a brain (by looking for real embodiments of the chemlambda formalism, i.e. by identifying real chemical molecules which interact like the molecules of chemlambda)
- in the understanding of the rewiring mechanism. Graphic lambda calculus + actors proposes that the rewiring alone is a form of computation, one related to pure local reduction, while the signal transmissions are related more to evaluation aspects. In this respect, neurons (or maybe families of neurons nurtured by one glial cell) are actors and the physical neural network is like an actors diagram.
Continues from Actors for the Ackermann machine. In this post we see the first interaction between the actors.
Notations: each actor has an address and a stack.
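Since “address”, “stack” and “mask” carry real weight in what follows, here is a minimal data sketch of how I use these words (all names are invented for illustration; the precise decoration is in the posts linked below):

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """An actor as used informally in this post: it has a unique address,
    a mask (the boundary of its piece of the graph, visible to other
    actors) and a private stack (hidden structure, reachable only
    through the mask)."""
    address: str
    mask: list = field(default_factory=list)
    stack: list = field(default_factory=list)

    def communicate(self, other: "Actor", edge):
        # pure communication in the sense of the graphic beta move:
        # only the masks get rewired; other.stack is never inspected
        other.mask.append((self.address, edge))

a, b = Actor("alice"), Actor("bob", stack=["hidden"])
a.communicate(b, "e1")
# b's mask now records the new connection; b's stack is untouched
```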
Let the play begin. The cast is the following:
At the beginning of the performance, the actors are in the following configuration:
Only with can interact (i.e. the only moves in GLC which may happen between actors are those between the mentioned ones). Their interaction is a form of pure communication (via the graphic beta move). Two such moves are possible, in succession. This is described in the next figure, along with the changes in the actors configuration.
The performance is thrilling: the actor is almost exhausted after forcing the actor to drop down his mask. In the process lost his friends and (with his buddy ) in favour of . Only is still taking the side of . What will happen next?
At this stage, no other interaction is possible without revealing what is really thinking (what’s in his stack). The first act is over.
Continues A machine for computing the Ackermann function in graphic lambda calculus, by a preliminary analysis of the proposed Ackermann machine in GLC from the Actor Model point of view.
Very brief notes:
- see Hewitt Actor Model, lambda calculus and graphic lambda calculus for the actor notation (or decoration)
- binary fission is an apt name for the replacement of the GLOBAL FAN-OUT from GLC by the DIST moves from chemlambda (because eventually I am interested in using the purely local formalism of chemlambda for the Ackermann machine).
- “mask” is a new name instead of the “pacman” name used in the first post on the Ackermann machine.
- “stack” might correspond to the “mailbox” in the actor model, but it’s internal to the actor and interacts only with the mask.
All this is on a working-hypothesis basis, I’m thinking out loud. By the way, if you are reading, you can also talk here. Everything is subject to change; remember this is an open notebook (which can also be cited, as explained at the right side of the post).
Recently I became aware of the research line started by Wadsworth (Christopher P. Wadsworth, Semantics and Pragmatics of the Lambda Calculus, DPhil thesis, Oxford, 1971) and continued by Lamping (John Lamping, An algorithm for optimal lambda calculus reduction, POPL ’90 Proceedings of the 17th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, p. 16-30) on lambda graphs and sharing graphs (see for example this article by Stefano Guerrini). In Guerrini’s article, at page 5, I see a figure which is almost identical to the graphic beta move (and, once I really understand what a “mux” is, maybe some of the moves from the middle of page 5 will turn out to be related to the distributivity moves from the chemical concrete machine).
There are several differences between GLC and Wadsworth-Lamping kind of lambda graphs (research line denoted by “WL” further):
- the graphs used in GLC are all oriented, made of trivalent or univalent nodes (the termination node), while those from WL are a mix of oriented and non-oriented nodes,
- GLC does not use any names, neither of variables nor of terms, while in WL names are used everywhere. This looks like a minor difference but it is in fact very important, and here is why. If we live by the “no names” rule, then forget about evaluations, in particular forget about evaluation strategies and about using the result of an evaluation for deciding the next computational step. Moreover, forget about representing the state of a computation by a “value”, whatever a value might mean. Does it start to look strange? It gets even weirder: forget about thinking in terms of flowcharts with nodes which represent operations (like application and lambda abstraction) and which have inputs and outputs, like wires which allow propagation of signals. In fact, forget about this signal idea altogether. Can you? GLC can.
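To make the “no names” point concrete, here is a small sketch (Python, with an invented encoding, restricted to abstractions and single-occurrence variables; multiple occurrences would need the fan-out node). Two alpha-equivalent named terms collapse to one and the same nameless graph, so renaming, and with it a whole layer of evaluation machinery, simply never arises:

```python
def to_graph(term):
    """Translate a named lambda term into a nameless port graph.
    Sketch only: abstractions and single-occurrence variables; a binder
    is not a name but a wire from the lambda node's `var` port.  Node
    ids follow the tree shape, so alpha-equivalent terms yield
    identical graphs.  (An unused variable leaves the `var` port
    dangling; in full GLC it would be capped by a termination node.)"""
    nodes, wires, counter = {}, set(), [0]

    def go(t, env):
        if isinstance(t, str):                 # variable occurrence:
            return env[t]                      # just the binder's var port
        _, name, body = t                      # ("lam", name, body)
        counter[0] += 1
        node = "L%d" % counter[0]
        nodes[node] = "lambda"
        inner = go(body, {**env, name: (node, "var")})
        wires.add(frozenset({(node, "body"), inner}))
        return (node, "out")

    root = go(term, {})
    return nodes, wires, root

# the K combinator, spelled with two different sets of bound names
t1 = ("lam", "x", ("lam", "y", "x"))
t2 = ("lam", "a", ("lam", "b", "a"))
# as named syntax trees they differ; as nameless graphs they coincide
```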
- GLC has among its nodes a “fan-out” node which satisfies moves (graph rewrites) called CO-ASSOC and CO-COMM, as a replacement for the use of names in WL; thus the only place where GLC resembles the “sharing” from WL is in the use of the GLOBAL FAN-OUT move (see, however, how this move is eliminated in the fully local version of GLC, called the chemical concrete machine),
Let’s now pass to bigger differences:
- there is a sector of GLC which is made of graphs resembling the lambda graphs from WL; however, no effort is made in GLC to work only with this sector, while a great deal of effort in the WL approach goes into finding ways to select the “valid” lambda graphs. In GLC there is no need to restrict to well-constructed lambda graphs.
- this gives GLC an advantage over WL, namely that GLC also has other sectors, different from the one of lambda graphs, where the graphic beta move can be applied outside lambda calculus. One of these sectors is particularly interesting: the one containing knot and tangle diagrams, which allows GLC to interact with Kauffman’s Knot Logic and knot automata. Thus GLC is a formalism which can be used for lambda calculus but also for other types of nonstandard computing models, like Kauffman’s Knot Logic.
Finally, the most important difference comes from the motivation of GLC, which is the need for a visual representation of certain finite differential computations in spaces with dilations and emergent algebras. This is possible in another sector of GLC, called the “emergent algebra sector”, and it is completely out of reach of WL; see:
- Dictionary from emergent algebra to graphic lambda calculus (I)
- Dictionary from emergent algebra to graphic lambda calculus (II)
- Dictionary from emergent algebra to graphic lambda calculus (III)
This leads to a rigorous definition of what “computing with space” might be. It can also have very concrete applications, evoked here.
Otherwise, there are many things for a geometer like me to learn from previous work. There are certainly many geometric things to be learned by CS experts as well, because it is more and more clear that computation needs some notion of space besides the boringly trivial one-dimensional one.
In the posts WWW with Metabolism and The chemical connectome of the internet I evoked the idea of using the chemical concrete machine not on biological, wet networks (although I still find that line of research very promising), but on the www. I am very happy to see that efforts concerning the use of chemical programming and bio-inspired models of programming already exist!
Tell me more about this, if you read this and you are an expert! (And, of course, excuse my ignorance, I am a geometer, not a CS expert, therefore I shall mention your previous work here, as I learn about it.)
Fraglets is an extremely promising direction: here is the fraglets site and here is a fascinating article by Christian F. Tschudin. The title of the article is “Fraglets – a Metabolistic Execution Model for Communication Protocols” and the abstract reads:
In this paper we introduce a molecular biology inspired execution model for computer communications. The goal is to lay the ground for automatic network adaption and optimization processes as well as the synthesis and evolution of protocol implementations. Our execution model is based on the unification of code and data, featuring a single unit called “fraglets” that are operands as well as operators. We have built a simulator and started to program classical communication tasks with fraglets that show metabolistic pathways patterns like the ones found in biological cells. We give an example of a fraglet implementation for a non-trivial flow-control-with-reordering protocol and briefly discuss how to search the fraglet program space with genetic algorithms.
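To get a feel for what “operands as well as operators” means, here is a drastically simplified toy soup in Python (the rule shapes are loosely inspired by the paper, but the exact instruction set and semantics below are my invention for illustration, not the real fraglets language):

```python
def step(soup):
    """Perform one reaction in a toy fraglet-like soup.  A fraglet is a
    list of symbols; its head symbol doubles as the operator, so code
    and data are the same kind of object.  Two made-up rules:
      [dup, t, rest...]             ->  [t, t, rest...]      (unimolecular)
      [match, s, P...] + [s, Q...]  ->  [P..., Q...]         (bimolecular)
    Returns True if a reaction fired, False if the soup is inert."""
    for f in list(soup):
        if f and f[0] == "dup" and len(f) > 1:
            soup.remove(f)
            soup.append([f[1], f[1]] + f[2:])
            return True
        if f and f[0] == "match" and len(f) > 1:
            for g in soup:
                if g is not f and g and g[0] == f[1]:
                    soup.remove(f)
                    soup.remove(g)
                    soup.append(f[2:] + g[1:])
                    return True
    return False

# a two-fraglet "protocol": a rule waiting for a ping, plus a ping
# carrying a payload; run reactions until the soup is inert
soup = [["match", "ping", "pong"], ["ping", "data"]]
while step(soup):
    pass
# both fraglets were consumed, producing the single fraglet ["pong", "data"]
```

The point carried over from the abstract is that nothing here evaluates anything: the “program” is just molecules reacting by local pattern matching.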
I like very much two things here: “unification of code and data” and the “metabolistic” word in the title.
Lidia Yamamoto is another researcher who worked in the (finished?) BIONETS collaboration; see Biochemically inspired emergent computation, co-authored with Thomas Meyer, and also, with Christian Tschudin, “A metabolic approach to protocol resilience” (here, from p. 191).
Of course, Banâtre was the first to introduce the concept of “chemical programming”, and Berry and Boudol introduced the CHAM, the chemical abstract machine. (Recall that the “chemical concrete machine” denomination points to the CHAM, but it is “concrete” because it really works with “concrete” molecules, involved in “concrete” chemical reactions, without using any names (or name management) and without the need for evaluation in the computation.)
1. For Unlimited Detail (UD), euclideon-like or not, go see the blog of bcmpinc, until now the only contributor here able to give a UD solution (see also the UD projects page, where his solution is listed, along with other proposals which were not visibly successful). Let me know, please, from time to time, if really new things appear. I keep my point of view, namely that bcmpinc’s solution is non-euclideon, because the main thing about the euclideon UD is the way it manages realtime: it first produces a reformatting of the database (before any rendering, without any time constraints), and this reformatted database is then exploited for realtime rendering. My primary interest in UD lies in the database reformatting.
2. I posted less than usual lately, because of many discussions around graphic lambda calculus (GLC) and the chemical concrete machine (chemlambda). I am grateful for all these discussions and perspectives and I shall report here, some time, about them. The main idea is that we might be onto a great thing or, as usual in research, not. I believe we are; even more, that there is something genuinely new in this pure syntax approach, which might be (a proof of principle for) a solution to the great mystery of how it is possible for groups of simple (compared to us) cells, neurons, to implement arbitrarily high-level semantics by purely physical and chemical phenomena. I am very far, in real life, from being a socialite, therefore these extremely interesting discussions take a bit from my motivation to write here.
3. The series on the vision theater is not over yet; in fact there is a very intriguing development of point 2, namely that there may be nontrivial links between distributed computing (as a purely local phenomenon) and the homunculus fallacy. This would also be a natural continuation from, and reformulation of, the post Theatron as an eye.