Actor model distributed computing with graphic lambda calculus, angel needed

Seriously! It’s done. An actor-like model, which uses graphic lambda calculus for doing distributed computing (i.e. parallel, asynchronous, no controller). No evaluation is needed, everything is local. There is no signal circulating far through the wires, only small local exchanges. In this respect it resembles what happens in chemical reaction networks, but it goes much further than that.

Looks perfectly feasible, needs angel investor.

UPDATE: Let me add a few more details about this. The awesome thing is not in the actor model side alone, nor in graphic lambda calculus or chemlambda alone. It is in the combination of the two. Graphic lambda calculus brings the possibility to do distributed computing (and even far away from the usual logic realm, into the field of computing with space, see A me for space). The space which is needed by graphic lambda calculus is, of course, the network. But the network lacks an essential ingredient of space: the proximity relations. This is where the actor model is useful.

Secondly, it is well known that, in the usual sense, computation needs two ingredients: reduction and evaluation. They appear in one form or another in any computation model I am aware of, except for graphic lambda calculus + actors! Here only reduction moves are used (and they are purely local if we rely on chemlambda instead of graphic lambda calculus), after an initial stage of preparation of the computation.
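To make the “no evaluation” point concrete, here is a toy sketch in Python (my own illustration, in the spirit of the graphic beta move; all port names are invented). A reduction move is just a local rewiring of a handful of wires, and no value is ever computed anywhere.

```python
# Toy sketch: a beta-like reduction move as a purely local rewiring.
# A wire connects two ports; the move deletes a lambda/application pair of
# nodes and reconnects the loose ends. Nothing is evaluated, no value appears.

wires = {
    "w1": ("body_A.out",      "lambda.in"),     # the body A enters the lambda node
    "w2": ("lambda.term_out", "app.left_in"),   # the lambda term is plugged into an application
    "w3": ("arg_B.out",       "app.right_in"),  # the argument B
    "w4": ("app.out",         "rest.in"),       # where the result is expected
    "w5": ("lambda.var_out",  "x_use.in"),      # where the bound variable is used
}

def beta_like_move(wires):
    """Remove the lambda/application pair and rewire: the body goes straight
    to where the result was expected, the argument goes to where the variable
    was used. Only these wires are touched; the rest of the graph is unknown."""
    new = dict(wires)
    new["w1"] = ("body_A.out", "rest.in")    # A now feeds the rest of the graph directly
    new["w3"] = ("arg_B.out",  "x_use.in")   # B now feeds the former variable wire
    for w in ("w2", "w4", "w5"):             # the two nodes and their connecting wires vanish
        del new[w]
    return new

print(beta_like_move(wires))
```

The point of the sketch is only that the “state” after the move is again a graph, not a value.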

This absence of evaluation has implications for understanding how brains “compute”. Indeed, there are (at least, but let’s simplify) two things which happen in a brain: one is (electrical) signal transmission (chemically mediated in the synapses), the other is neural network rewiring (as a form of “learning”). The computation model I propose may be used in two places:

  • in the understanding of the computational aspects of the chemical connectome of a brain (by looking for real embodiments of the chemlambda formalism, i.e. by identifying real chemical molecules which interact like the molecules of chemlambda)
  • in the understanding of the rewiring mechanism. The graphic lambda calculus + actors model proposes that the rewiring alone is a form of computation, one related to pure local reduction, while the signal transmissions are related more to evaluation aspects. In this respect, neurons (or maybe families of neurons nurtured by one glial cell) are actors and the physical neural network is like an actors diagram.

Un-currying with Actors

This is an illustration of a sequential computation with actors and graphic lambda calculus, based on the ideas from Actors for the Ackermann machine. It describes the inverse of the currying process described in Currying by using zippers and an allusion to the cartesian theater.

Without further ado, let’s watch the play.

The preparation consists of introducing the cast and the initial actors diagram.

[image: actor_curry_prep]

The play starts. The actor c introduces a to b and dies, or maybe goes playing elsewhere.

[image: actor_curry_1]

We see the states of the actors a and b; the other main character, d, is waiting for his line, while the extras b1, ..., bN wait in the back of the stage, in artful positions.

The rest of the process consists of unzipping a graphic lambda calculus zipper. Zippers are very useful, because the unzipping process is sequential, i.e. it can happen in only one way. Therefore, by using zippers we can force a naturally distributed, parallel computation to happen in a sequential way!
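As a toy illustration of this forcing of the order (my own sketch, with the zipper modelled as numbered “teeth”; the representation is invented and much cruder than the actual GLC zipper graphs), at every step exactly one move is enabled, so the unzipping can happen in only one way:

```python
# Toy sketch: sequentiality of unzipping. Tooth i+1 only comes into reducible
# position once tooth i has been removed, so there is never a choice of moves.

def enabled(teeth, removed):
    """Only the first tooth not yet removed can fire."""
    for i in teeth:
        if i not in removed:
            return [i]
    return []

def unzip(n):
    teeth = list(range(1, n + 1))
    removed, trace = set(), []
    while len(removed) < n:
        moves = enabled(teeth, removed)
        assert len(moves) == 1     # never more than one enabled move: no parallelism here
        removed.add(moves[0])
        trace.append(moves[0])
    return trace

print(unzip(4))   # -> [1, 2, 3, 4], always in this order
```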

[image: actor_curry_N]

At the end of the computation, the two actors a and b, which represent the two halves of the zipper, cancel one another. The graph D is now connected with its output to the address :out and with its inputs to the (addresses of the) extras b1, ..., bN.

The graph D corresponds to the graph A from the post Currying by using zippers and an allusion to the cartesian theater, and the computation just described is the reverse of the one described in that post.

__________________________

As a side remark, graphic lambda calculus suggests that currying and un-currying should be eliminated at the preparation stage of a distributed computation. However, I took this example of computation with actors and GLC as a simple illustration, and also to stress that zippers can impose, if needed, an order on the reduction moves in GLC.

__________________________

Actors for the Ackermann machine (II) Pure communication.

Continues from Actors for the Ackermann machine. In this post we see the first interaction between the actors.

Notations: each actor a has an address :a and a stack a^{*}.
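In code, this notation could be sketched like this (representation mine; only the address and the stack are modelled, nothing else is claimed about the actors’ internals):

```python
# Minimal sketch of the notation: each actor a has an address :a and a stack a^{*}.

from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class Actor:
    address: str                                     # the address :a
    stack: List[Any] = field(default_factory=list)   # the stack a^{*}, internal to the actor

# the cast of the play: actors a, b, c, d, e, f
cast = {name: Actor(address=":" + name) for name in "abcdef"}
print(cast["a"].address, cast["b"].stack)            # ':a' []
```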

Let the play begin. The cast is the following:

[image: ack_7]

At the beginning of the performance, the actors are in the following configuration:

[image: ack_8]

Only a and b can interact (i.e. the only moves in GLC which may happen between actors are those between these two). Their interaction is a form of pure communication (via the graphic beta move). Two such moves are possible, in succession. This is described in the next figure, along with the changes in the actors’ configuration.

[image: ack_9]

The performance is thrilling: the actor a is almost exhausted after forcing the actor b to drop down his mask. In the process a lost his friends d and f (with his buddy e) in favour of b. Only c is still taking the side of a. What will happen next?

At this stage, no other interaction is possible without revealing what b is really thinking (what’s in his stack b^{*}). The first act is over.
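For the record, the first act can also be written as a toy piece of bookkeeping (my own reconstruction of only what is said above; the real content is in the figures): the two graphic beta moves only exchange addresses between a and b, and the stack b^{*} is never opened.

```python
# Who is connected to whom before the two moves, as described in the text
diagram = {"a": {"b", "c", "d", "f"}, "f": {"e"}}
stacks = {"b": ["<whatever b is really thinking>"]}   # b^{*}, not revealed in this act

def first_act(diagram):
    """Pure communication: d and f pass from a to b, c stays with a.
    Only addresses move; no stack is read and nothing is evaluated."""
    after = {k: set(v) for k, v in diagram.items()}
    after["a"] -= {"d", "f"}
    after.setdefault("b", set()).update({"a", "d", "f"})
    return after

print(first_act(diagram))   # a keeps b and c; b now holds a, d and f; e stays with f
print(stacks)               # untouched
```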

_________________________

Actors for the Ackermann machine

Continues A machine for computing the Ackermann function in graphic lambda calculus, with a preliminary analysis of the proposed Ackermann machine in GLC from the Actor Model point of view.

[image: ack_6]

Very brief notes:

  • see Hewitt Actor Model, lambda calculus and graphic lambda calculus for the actor notation (or decoration)
  • “binary fission” is an apt name for the replacement of the GLOBAL FAN-OUT move from GLC by the DIST moves from chemlambda (because eventually I am interested in using the purely local formalism of chemlambda for the Ackermann machine); a rough sketch of one such DIST move follows after this list.
  • “mask” is a new name instead of the “pacman” name used in the first post on the Ackermann machine.
  • “stack” might correspond to the “mailbox” in the actor model, but it’s internal to the actor and interacts only with the mask.
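Here is the promised rough sketch of one DIST move, as I recall it (the one that pushes a fan-out through an application node); the authoritative versions are in the chemical concrete machine posts, and the node and port names below are my own ad-hoc notation.

```python
# A graph is a list of nodes; each node is (kind, input_ports, output_ports),
# and two nodes are wired together when they share a port name.
g = [
    ("A",  ["a", "b"], ["c"]),       # application node with inputs a, b and output c
    ("FO", ["c"],      ["d", "e"]),  # fan-out fed by that output
]

def dist_apply(g):
    """One local 'binary fission' step, in the shape I recall for the DIST move
    on the application node:
        A(a,b)->c ; FO(c)->(d,e)
    becomes
        FO(a)->(a1,a2) ; FO(b)->(b1,b2) ; A(a1,b1)->d ; A(a2,b2)->e."""
    for ap in g:
        if ap[0] != "A":
            continue
        for fo in g:
            if fo[0] == "FO" and fo[1][0] == ap[2][0]:
                a, b = ap[1]
                d, e = fo[2]
                rest = [n for n in g if n is not ap and n is not fo]
                return rest + [
                    ("FO", [a], [a + "1", a + "2"]),
                    ("FO", [b], [b + "1", b + "2"]),
                    ("A",  [a + "1", b + "1"], [d]),
                    ("A",  [a + "2", b + "2"], [e]),
                ]
    return g

print(dist_apply(g))
```

Note that only the two matched nodes and their wires are touched, which is the whole point of replacing the GLOBAL FAN-OUT by such moves.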

All this has the status of a working hypothesis, I’m thinking out loud – btw, if you look then you can also talk here 🙂. Everything is subject to change; remember this is an open notebook (which can also be cited, as explained at the right side of the post).

________________________

Graphic Lambda Calculus and Lambda Graphs

Recently I became aware of the research line started by Wadsworth (Christopher P. Wadsworth, Semantics and Pragmatics of the Lambda Calculus, DPhil thesis, Oxford, 1971) and continued by Lamping (John Lamping, An algorithm for optimal lambda calculus reduction, POPL ’90 Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, p. 16-30) on lambda graphs and sharing graphs (see for example this article by Stefano Guerrini). In Guerrini’s article, at page 5, I see a figure which is almost identical to the graphic beta move (and even more: once I really understand what a “mux” is, maybe some of the moves from the middle of page 5 will turn out to be related to the distributivity moves from the chemical concrete machine).

There are several differences between GLC and the Wadsworth-Lamping kind of lambda graphs (a research line denoted by “WL” from now on):

  • the graphs used in GLC are all oriented, made of trivalent or univalent nodes (the termination node), while those from WL are a mix of oriented and non-oriented nodes,
  • GLC does not use any names, neither of variables nor of terms, while in WL names are used everywhere. This looks like a minor difference but it is in fact very important, and here is why. If we live by the “no names” rule then forget about evaluations, in particular forget about evaluation strategies and about using the result of an evaluation to decide the next computational step. Moreover, forget about representing the state of a computation by a “value”, whatever a value might mean. Does it start to look strange? It gets even weirder: forget about thinking in terms of flowcharts with nodes which represent operations (like application and lambda abstraction), with inputs and outputs like wires which allow propagation of signals. After all, forget about this signal idea. Can you? GLC can (a small sketch of such a nameless graph is given after this list).
  • GLC has among its nodes a “fan-out” node which satisfies moves (graph rewrites) called CO-ASSOC and CO-COMM, as a replacement for the need to use names in WL; thus the only place in GLC which resembles “sharing” from WL is in the use of the GLOBAL FAN-OUT move (see however how this move is eliminated in the fully local version of GLC, called the chemical concrete machine),
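Here is the small sketch promised in the list above (the encoding and the port names are my own guesses, meant only to suggest the point): the combinator K = λx.λy.x as an oriented graph built from two trivalent λ nodes and one univalent termination node, with no variable names anywhere.

```python
# K = λx.λy.x as a nameless oriented graph (ad-hoc encoding, not the official one).
nodes = {
    "lam_outer": "lambda",        # abstracts the variable that K keeps
    "lam_inner": "lambda",        # abstracts the variable that K throws away
    "t":         "termination",   # univalent node: kills the unused variable wire
}
# oriented edges between ports, written (from_port, to_port); no variable names anywhere
edges = [
    (("lam_inner", "term_out"), ("lam_outer", "in")),  # the body of the outer λ is the inner λ-term
    (("lam_outer", "var_out"),  ("lam_inner", "in")),  # the outer variable is the inner body
    (("lam_inner", "var_out"),  ("t", "in")),          # the inner variable is never used
    (("lam_outer", "term_out"), ("root", "in")),       # the whole graph for K exits here
]
print(len(nodes), "nodes,", len(edges), "arrows, and not a single variable name")
```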

Let’s now pass to bigger differences:

  • there is a sector of GLC which is made of graphs resembling the lambda graphs from WL; however, there is no effort made in GLC to work only with this sector, while a great deal of effort in the WL approach goes into finding ways to select the “valid” lambda graphs. In GLC there is no need to restrict to well constructed lambda graphs.
  • this gives GLC an advantage over WL, namely that GLC also has other sectors, different from the one of lambda graphs, where the graphic beta move can be applied outside lambda calculus. One of these sectors is particularly interesting: the sector containing knot and tangle diagrams, which allows GLC to interact with Kauffman’s Knot Logic and Knot Automata. Thus GLC is a formalism which can be used for lambda calculus but also for other types of nonstandard computing models, like Kauffman’s Knot Logic.

Finally, the most important difference comes from the motivation of GLC, which is the need to have a visual representation of certain finite differential computations in spaces with dilations and emergent algebras. This is possible in another sector of GLC, called the “emergent algebra sector”, and it is completely out of reach of WL.

This leads to a rigorous definition of what “computing with space” might be. It can also have very concrete applications.

Otherwise, there are many things for a geometer like me to learn from this previous work. There are certainly many geometric things to be learned by CS experts as well, because it is more and more clear that computation needs some notion of space, besides the boringly trivial one-dimensional one.

Fraglets, bionets, and the www with metabolism

In the posts WWW with Metabolism and The chemical connectome of the internet I evoked the idea of using the chemical concrete machine not on biological, wet networks (although I still find this line of research very promising), but on the www. I am very happy to see that efforts concerning the use of chemical programming and bio-inspired models of programming already exist!

Tell me more about this, if you read this and you are an expert! (And, of course, excuse my ignorance, I am a geometer, not a CS expert, therefore I shall mention your previous work here, as I learn about it.)

Fraglets is an extremely promising direction: here is the fraglets site and here is a fascinating article by Christian F. Tschudin. The title of the article is “Fraglets – a Metabolistic Execution Model for Communication Protocols” and the abstract reads:

In this paper we introduce a molecular biology inspired execution model for computer communications. The goal is to lay the ground for automatic network adaption and optimization processes as well as the synthesis and evolution of protocol implementations. Our execution model is based on the unification of code and data, featuring a single unit called “fraglets ” that are operands as well as operators. We have built a simulator and started to program classical communication tasks with fraglets that show metabolistic pathways patterns like the ones found in biological cells. We give an example of a fraglet implementation for a non-trivial flow–control–with–reordering protocol and briefly discuss how to search the fraglet program space with genetic algorithms.

I like very much two things here: “unification of code and data” and the “metabolistic” word in the title.

Lidia Yamamoto is another researcher who worked in the (finished?) BIONETS collaboration; see Biochemically inspired emergent computation, co-authored with Thomas Meyer, and also, with Christian Tschudin, “A metabolic approach to protocol resilience” (here, from p. 191).

Of course, Banatre was the first to introduce the concept of “chemical programming”, and Berry and Boudol introduced the CHAM, the chemical abstract machine. (Recall that the “chemical concrete machine” denomination points to the CHAM, but it is “concrete” because it really works with “concrete” molecules, involved in “concrete” chemical reactions, without using any names (or name management) and without the need for evaluation in the computation.)

Computing with space like applications from the Trillion Sensor Summit

From the Trillion Sensor Summit (see this document), here is a list of interesting subjects related to the Internet of Things and to the main subject of this blog: computing with space. (See this link for an explanation of the usefulness of graphic lambda calculus for this.)

  • Internet of Things, connecting devices around us through new network architecture to enable low latency control. (Janusz Bryzek, Fairchild Semiconductor, Mancef)
  • CeNSE, Central Nervous System for the Earth, building global environment monitoring. (Janusz Bryzek, Fairchild Semiconductor, Mancef)
  • Smart cities (Susumu Kaminaga, SPP Technologies, Japan for the Growth Strategy, Ajith Amerasekera, TI, Roberto De Nuccio, ST Microelectronics)
  • Biologic nanoprocessor (Luke Lee, UC Berkeley)
  • Molecular diagnostics systems (Luke Lee, UC Berkeley)
  • Brain-machine interfaces  (Jan Rabaey, UC at Berkeley)
  • Interaction swarms (Jan Rabaey, UC at Berkeley)
  • Cyber security  (Jan Rabaey, UC at Berkeley)
  • Distributed network computing (Matthew Hopcroft, HP)
  • Logistic processes: autarkic labeling, condition monitoring, position monitoring (Stephan Guttowski, Fraunhofer)
  • Ambient sensing platform (Steve Nasiri, Nasiri Ventures)
  • National level usage: global warming, health infrastructure, river flow monitor, health epidemics, smart grid sensors, forest fire, ozone, cyber security (Sandhiprakash Bhide, Intel)
  • Context sensing (Kevin Shaw, Sensor Platforms)

_______________

Hewitt Actor Model, lambda calculus and graphic lambda calculus

In Actor Model of Computation: Scalable Robust Information Systems, arXiv:1008.1459, Hewitt writes, at p. 7, that “lambda calculus cannot implement concurrency”. At p. 13-14 he writes:

… the semantics of the lambda calculus were expressed using variable substitution in which the values of parameters were substituted into the body of an invoked lambda expression. The substitution model is unsuitable for concurrency because it does not allow the capability of sharing of changing resources.

Correct. However, graphic lambda calculus does not have this problem, because reductions don’t need evaluations. See The graphic beta move, with details, to understand this better.
All the arguments listed further on (Hewitt, loc. cit., p. 14) also apply verbatim to graphic lambda calculus!

Inspired by the lambda calculus, the interpreter for the programming language Lisp [McCarthy et. al. 1962] made use of a data structure called an environment so that the values of  parameters did not have to be substituted into the body of an invoked lambda expression.
Note that in the definition in ActorScript [Hewitt 2011] of the lambda-calculus below:

  • All operations are local.
  • The definition is modular in that each lambda calculus programming language construct is an Actor.
  • The definition is easily extensible since it is easy to add additional programming language constructs.
  • The definition is easily operationalized into efficient concurrent implementations.
  • The definition easily fits into more general concurrent computational frameworks for many-core and distributed computation.

Hewitt continues:

That Actors which behave like mathematical functions exactly correspond with those definable in the lambda calculus provides an intuitive justification for the rules of the lambda  calculus:

  • Lambda identifiers: each identifier is bound to the address of an Actor. The rules for free and bound identifiers correspond to the Actor rules for addresses.
  • Beta reduction: each beta reduction corresponds to an Actor receiving a message. Instead of performing substitution, an Actor receives addresses of its arguments.

In what follows I try to understand this as a particular case of the Chemical actors sketch.

We have Actors, we have addresses and we have lambda identifiers (it’s about the usual lambda calculus, not the better graphic lambda calculus). The idea, as far as I understand, is to “bind” lambda identifiers to the addresses of Actors. The second idea is that “each beta reduction corresponds to an Actor receiving a message”.

I want a way to identify Actors with graphs in GRAPH, or with molecules in MOLECULES if I wish to use chemlambda. Actually, after some thinking, it is better to think about Actors as a sort of decoration of the arrows of a graph, almost like a decoration with types. (I shall not use the epsilon gates further, just concentrating on the pure logic side.)

[image: actor_decor]

We shall have actors A, B, etc., with actor names :A, :B. There are other names, like :?, which is a placeholder for indetermination (that’s just a name, if you don’t like this one then change it to 😕, or whatever); there are also the names :talk and :listen. These are not names of actors.

By using these rules we can decorate all half-arrows (meaning that we may have arrows with two decorations); a small sketch of this bookkeeping is given after the next figure. For concreteness, look at the example of the combinator K.

[image: actor_decor_K]
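Before going on, here is the small bookkeeping sketch mentioned above (the data layout is mine and purely illustrative): an arrow carries two decorations, one per half-arrow, each being an actor name like :K or :A, or one of the special names :?, :talk, :listen.

```python
from dataclasses import dataclass

@dataclass
class Arrow:
    source_half: str   # decoration of the half-arrow at the source end
    target_half: str   # decoration of the half-arrow at the target end

# an arrow leaving the part of the graph decorated by K and entering A's part:
border = Arrow(source_half=":K", target_half=":A")
# an arrow on which one side talks and the other listens (ready for a graphic beta move):
ready = Arrow(source_half=":talk", target_half=":listen")
# an arrow whose far end is not determined yet:
loose = Arrow(source_half=":K", target_half=":?")
print(border, ready, loose)
```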

I shall take another Actor, named A, and make it interact with the Actor K, in order to mimic the KUV \rightarrow U pair of reductions.

[image: actor_decor_another]

Here “1” and “2” mean anything you want, from “they can be names of actors” to “I don’t care what follows next”.

Now, they are in position to communicate:

[image: actor_decor_int]

What happens next? The :listen:talk decoration indicates that a graphic beta reduction can be applied (cf. Hewitt’s 2nd suggestion). But we need a rule for decorations with names before and after the graphic beta move. Here is what I propose:

[image: actor_decor_beta]

We apply this for our example:

[image: actor_decor_int_2]

Is that nice or what? The actor K lost one of its “lambda identifiers”, the actor A also simplified a bit, but they still talk to one another:

[image: actor_decor_int_3]

They did their job and disappeared. What happens next is a matter of other actors.

___________________________

Remark that graphic lambda calculus easily did what Hewitt wants. This example does not show any concurrency, though, because, properly speaking, there is no concurrency involved here. I only wanted to implement Hewitt’s two suggestions with graphic lambda calculus. But take examples where concurrency does appear, like here, to see that everything works really nicely. This will be explained in another post.

___________________________

Video: The Actor Model (everything you wanted to know, but were afraid to ask)

This is a repost of a very informative discussion: Hewitt, Meijer and Szyperski: The Actor Model (everything you wanted to know, but were afraid to ask). Enjoy!

The following is a note to myself. There are so many interesting things which could be relevant for chemical actors, but among these I only mention the moment when Hewitt draws an arbiter, as a box with two inputs and two outputs. Look at the Ackermann machine, or look at Pair of synapses, one controlling the other. From the video I see that actors are the primitive units of computation and addresses are the only real capabilities. Moreover, there are these arbiters; I really need to think about whether chemlambda plus arbiters would be sufficient for this. Time will tell if this is reasonable or stupid.