Distributed GLC is a decentralized, asynchronous model of computation which uses chemlambda molecules. In the model, each molecule is managed by an actor, which has the molecule as its state, and the free ports of the molecule are tagged with the names of other actors. Reductions between molecules (like chemical reactions) happen only for those molecules whose actors know each other, i.e. only between molecules managed by actors, say :Alice and :Bob, such that:
- the state of :Alice (i.e. her molecule) contains one half of the pattern of a move, along with a free port tagged with :Bob's name;
- the state of :Bob contains the other half of the pattern, with a free port tagged with :Alice's name;
- there is a procedure based exclusively on communication by packets, TCP style, [UPDATE: watch this recent video of an interview of Carl Hewitt!] which allows the reduction to be performed on both sides and which later informs any other actors which are neighbors (i.e. appear as tags in the state of :Alice or :Bob) about possible new tags at their states, due to the reduction which happened (this can be done, for example, by performing the move either via the introduction of new invisible nodes in the chemlambda molecules, or via the introduction of Arrow elements, followed by combing moves).
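The handshake above can be sketched in a few lines. This is a minimal illustration only, not the GLC specification: the names `Actor`, `try_reduce`, the dictionary representation of a molecule, and the "retag" message shape are all assumptions made for this example.

```python
class Network:
    """Toy stand-in for packet communication: records outgoing messages."""
    def __init__(self):
        self.outbox = []
    def send(self, to, msg):
        self.outbox.append((to, msg))

class Actor:
    def __init__(self, name, molecule, ports):
        self.name = name          # e.g. ":Alice"
        self.molecule = molecule  # the actor IS its state (a dict here)
        self.ports = ports        # free port id -> neighbor actor name

    def half_pattern(self):
        # Placeholder for "this molecule contains one half of a move pattern".
        return self.molecule.get("pattern_half")

def try_reduce(alice, bob, network):
    """Perform a move only if each actor holds one half of the pattern
    AND has a free port tagged with the other's name."""
    if alice.half_pattern() and bob.half_pattern() \
            and bob.name in alice.ports.values() \
            and alice.name in bob.ports.values():
        # Rewrite both halves locally (real details depend on the move);
        # here we simply mark the pattern halves as consumed.
        alice.molecule["pattern_half"] = None
        bob.molecule["pattern_half"] = None
        # Inform the eventual other neighbor actors about possible new tags.
        for actor in (alice, bob):
            for neighbor in set(actor.ports.values()) - {alice.name, bob.name}:
                network.send(neighbor, {"from": actor.name, "retag": True})
        return True
    return False
```

Note that the reduction touches only the two actors and their immediate neighbors; no global state is consulted, which is what makes the model decentralized.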
Now, here is a possibly better idea, to explore. One which connects to a thread not developed for the moment (anybody interested? contact me): neural-type computation with chemlambda and GLC.
The idea is this: once the initial configuration of actors and their initial states is set, why not move the actors around and allow the possible reductions only if the actors :Alice and :Bob are in the same synapse server?
Because the actor IS the state of the actor, the rest of what a GLC actor knows how to do is so trivially easy that it is not worth dedicating one program per actor, running in some fixed place. This way, a synapse server can do thousands of reductions on different actors' datagrams (see further) at the same time.
- be bold and use connectionless communication, UDP-like, to pass the actors' states (as datagrams) between servers called “synapse servers”;
- and let a synapse server check datagrams to see if by chance there is a pair which allows a reduction, then perform the reduction in one place, then let them walk on, modified.
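A synapse server of this kind might look roughly as follows. Again a hedged sketch: the JSON encoding of an actor's state, the `COMPLEMENTS` table standing in for real chemlambda move patterns, and the `reductions` counter are all invented for the example.

```python
import json

# Stand-in for "these two molecule halves form a reducible pattern";
# real chemlambda moves would be matched on graph patterns instead.
COMPLEMENTS = {"L": "A", "A": "L"}

class SynapseServer:
    def __init__(self):
        self.pool = []  # decoded datagrams waiting for a partner

    def receive(self, datagram):
        """Take a UDP-style datagram (bytes), try to match it against the
        waiting pool; return the reduced datagrams to send onward."""
        state = json.loads(datagram)
        for i, other in enumerate(self.pool):
            if COMPLEMENTS.get(state["half"]) == other["half"]:
                self.pool.pop(i)
                # Perform the reduction in one place, then let both
                # actors walk on, modified.
                for s in (state, other):
                    s["half"] = None
                    s["reductions"] = s.get("reductions", 0) + 1
                return [json.dumps(s).encode() for s in (state, other)]
        self.pool.append(state)
        return []
```

The point of the design shows up in the return type: an unmatched datagram simply waits, a matched pair is rewritten and released, and the server itself keeps no per-actor programs, only a pool of passing states.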
There is so much room for the artificial chemistry chemlambda at the bottom of the Internet layers that one can then add learning mechanisms to the synapse servers. Here is one example: suppose that a synapse server matches two actors' datagrams and finds that more than one reduction is possible between them. Then the synapse server asks its neighbour synapse servers (which perhaps correspond to a virtual neuroglia) whether they have encountered this configuration. It then chooses (according to a simple algorithm) which reduction to make, based on the info coming from its neighbours in the same glial domain, and tags the packets which result after the reduction (i.e. adds to them, in some field) a code for the move which was made. Successful choices are those which have descendants which are still active, say after more than $n$ reductions.
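The learning rule just described can be sketched as follows. Everything concrete here is an assumption: the success counters, the majority-style vote over the glial domain, the threshold $n$, and the move codes ("BETA", "FAN-IN") used in the usage example.

```python
from collections import Counter

class LearningSynapse:
    def __init__(self, neighbours, n=3):
        self.neighbours = neighbours  # synapse servers in the same glial domain
        self.n = n                    # descendants active after n reductions count as success
        self.successes = Counter()    # move code -> recorded success count

    def choose_move(self, candidate_moves):
        """When several reductions are possible, poll the glial neighbours
        and pick the move with the best track record."""
        votes = Counter({m: self.successes[m] for m in candidate_moves})
        for nb in self.neighbours:
            for m in candidate_moves:
                votes[m] += nb.successes[m]
        # "Simple algorithm": most recorded successes wins.
        return max(candidate_moves, key=lambda m: votes[m])

    def record(self, packet):
        """Credit a move when a descendant packet tagged with its code
        is still active after more than n reductions."""
        if packet.get("reductions", 0) > self.n:
            self.successes[packet["move"]] += 1
```

For instance, if one server has recorded a success for a "BETA" move, a neighbour in the same glial domain will prefer "BETA" the next time both "BETA" and "FAN-IN" are possible, even though it has no local record of its own.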
Plenty of possibilities, plenty of room at the bottom.