Rewiring of neurons, seen as actors

This is a continuation of the thread concerning the mix of the Actor Model (AM) with the graphic lambda calculus (GLC) and/or the chemical concrete machine (chemlambda).

A GLC neuron was first defined here, in the freedom sector of GLC. I shall use the term “neuron” in a more restrictive sense than previously: a GLC actor (example here) which has only one exit half-arrow, possibly continued by a tree of fan-out nodes, seen as the axon of the neuron.
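
Here is a minimal sketch of this restricted notion, in Python, under an encoding chosen only for illustration (my own, not from the post): node kinds, an ownership map from node ids to actor names, and oriented arrows between (node, port) endpoints. The check is approximate, since it does not verify that the fan-out nodes really form a tree hanging off the single exit.

```python
def is_neuron(actor_name, kind, owner, arrows):
    """Rough check: the non-fan-out part of the actor has exactly one outgoing
    arrow, which either leaves the actor directly or feeds the actor's own
    fan-out (FO) nodes, i.e. the axon. Incoming arrows (the "dendrites") are
    not restricted.
    kind:   node_id -> 'L' | 'A' | 'FO' | 'FI'  (lambda, application, fan-out, fan-in)
    owner:  node_id -> actor name
    arrows: set of ((node_id, port), (node_id, port)) pairs, oriented tail -> head"""
    def in_core(node_id):
        return owner[node_id] == actor_name and kind[node_id] != "FO"
    exits = sum(1 for (tail, head) in arrows
                if in_core(tail[0]) and not in_core(head[0]))
    return exits == 1

# Example (invented node ids): actor "a" is a neuron; its lambda node has a
# single exit into an FO tree whose branches reach a node owned by actor "b".
kind   = {"lam": "L", "fo1": "FO", "fo2": "FO", "app": "A"}
owner  = {"lam": "a", "fo1": "a", "fo2": "a", "app": "b"}
arrows = {(("lam", 2), ("fo1", 0)),   # the single exit of the core, into the axon
          (("fo1", 1), ("app", 0)),   # axon branch towards actor b
          (("fo1", 2), ("fo2", 0)),   # the axon branches again
          (("fo2", 1), ("app", 1))}
print(is_neuron("a", kind, owner, arrows))   # True
```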

I don’t know how to constrain an AM computation with GLC actors so that they are all neurons at the preparation stage and remain neurons during the computation.

Instead, I wonder if there is any chance to model real neurons this way. That is why I try to obtain a prediction concerning a rewiring pattern.  If this model of computation is pertinent for real neural networks, then we should see this rewiring pattern in the wild.

__________________________

Let’s contemplate the following chain of reductions, as seen from the GLC actors’ point of view. In fact I shall use the chemlambda formalism, but I shall change the notation a bit, from the coloured nodes to nodes marked by letters, as in GLC. The fan-in node will be denoted by the letter \phi. Recall that the fan-in node from chemlambda replaces the dilation node from GLC (actually we reuse the dilation node of GLC and turn it into a fan-in node by replacing the emergent algebra moves with the FAN-IN and DIST moves of chemlambda).
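
To make “move” concrete without committing to the exact port conventions of chemlambda, here is a rough sketch in the same encoding as above: a move is a local rewrite that removes a small matched pattern of nodes, possibly creates new ones (DIST duplicates nodes, for example), and reconnects the free half-arrows; which nodes are created and how the endpoints are paired is exactly what each concrete move (DIST, FAN-IN, graphic beta, ...) specifies.

```python
# Node kinds in the letter notation: L (lambda), A (application),
# FO (fan-out) and FI (the fan-in node, written \phi in the text).
NODE_KINDS = {"L", "A", "FO", "FI"}

def apply_move(kind, arrows, matched, new_nodes, new_arrows):
    """Apply a local move: remove the matched nodes and every arrow touching
    them, add the nodes created by the move (if any), and add the new arrows
    that reconnect everything.
    kind:       node_id -> kind
    arrows:     set of ((node_id, port), (node_id, port)) pairs, tail -> head
    matched:    set of node ids consumed by the move
    new_nodes:  node_id -> kind, the nodes the move creates (may be empty)
    new_arrows: the (tail, head) arrows prescribed by the concrete move"""
    kind = {n: k for n, k in kind.items() if n not in matched}
    kind.update(new_nodes)
    arrows = {(t, h) for (t, h) in arrows
              if t[0] not in matched and h[0] not in matched}
    return kind, arrows | set(new_arrows)
```

The graphic beta move, which appears below, creates no new nodes: it consumes an adjacent lambda and application node and rejoins the four surviving half-arrows into two new arrows.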

The first reduction is an internal DIST move in the actor a.

neuron_actor_2

In red we see the actors diagram; it does not change during this internal interaction.
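
Why the red diagram stays put can be phrased in the same toy encoding: the actors diagram only records which pairs of distinct actors are joined by at least one arrow, and the internal DIST move reattaches every half-arrow crossing the boundary of a to another node of a, so each cross-actor arrow keeps its pair of actor names.

```python
def actors_diagram(owner, arrows):
    """The actors diagram read off the graph: the set of pairs of distinct
    actors joined by at least one arrow. A move that keeps every boundary
    half-arrow attached to a node of the same actor leaves this set unchanged.
    owner:  node_id -> actor name
    arrows: set of ((node_id, port), (node_id, port)) pairs, tail -> head"""
    return {frozenset({owner[tail[0]], owner[head[0]]})
            for (tail, head) in arrows
            if owner[tail[0]] != owner[head[0]]}
```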

The next step involves a graphic beta move:

neuron_actor_3

As you see, the actors diagram changed as an effect of the interaction between actors a and b.
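
A toy illustration of why a cross-actor beta move does change the diagram (the node names and the connectivity below are invented for the example, not read off the figure): the move deletes the lambda node of a and the application node of b and joins their surviving half-arrows, so links that used to stop at a or b now run directly between the actors that were on the far side.

```python
owner = {"n1": "a", "n2": "b", "n3": "c", "n4": "d", "n5": "e", "n6": "g"}

arrows_before = {(("n1", 2), ("n2", 0)),   # the lambda-application arrow consumed by beta
                 (("n3", 0), ("n1", 0)),   # a node of c feeds a's lambda
                 (("n1", 1), ("n4", 0)),   # a's lambda feeds a node of d
                 (("n5", 0), ("n2", 1)),   # a node of e feeds b's application
                 (("n2", 2), ("n6", 0))}   # b's application feeds a node of g

# After the move n1 and n2 are gone and the four free half-arrows are rejoined
# pairwise (one possible pairing, chosen just for the example):
arrows_after = {(("n3", 0), ("n6", 0)),
                (("n5", 0), ("n4", 0))}

def actors_diagram(owner, arrows):   # repeated so this snippet runs on its own
    return {frozenset({owner[t[0]], owner[h[0]]})
            for (t, h) in arrows if owner[t[0]] != owner[h[0]]}

print(actors_diagram(owner, arrows_before))
# contains {'a','b'}, {'a','c'}, {'a','d'}, {'b','e'}, {'b','g'}
print(actors_diagram(owner, arrows_after))
# contains {'c','g'}, {'d','e'}  (order may vary)
```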

The next step is a name change interaction between the actors a and c, namely a gives a fan-out node to c.

neuron_actor_4
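
In the same toy encoding, this third step is pure bookkeeping: handing a fan-out node from a to c means rewriting the actor name stored on that node (the node ids below are again invented), while the graph itself is untouched.

```python
def give_node(owner, node_id, new_actor):
    """The "name change" interaction, in this sketch: the node keeps its kind
    and all its arrows, only the actor name written on it changes."""
    updated = dict(owner)
    updated[node_id] = new_actor
    return updated

owner = {"fanout_1": "a", "some_node": "c"}
owner = give_node(owner, "fanout_1", "c")   # a hands its fan-out node over to c
print(owner)                                # {'fanout_1': 'c', 'some_node': 'c'}
```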

That’s it. What does this have to do with neurons?

We may see the fan-out node from the initial configuration, belonging to the actor a, as the axon of a. Further, we may identify the actors a and d, and also the actors b and g. In this way, the whole sequence of three interactions can be rewritten like this:

neuron_actor_5

Notice how I changed the convention of drawing the actors diagram:

  • I took care to indicate the orientation of the links between actors
  • the fan-out node from the initial configuration was externalized, to look more like an axon, and likewise for the final configuration.

This gives a prediction: if neurons can be modelled as GLC actors, then we should observe in reality the following change of wiring between neurons.

neuron_actor_6

It involves the rewiring of 5 neurons! Are there, somewhere in the neuroscience literature, wiring patterns of 5 neurons like the ones in this figure? Is there any evidence of a rewiring like the one in this figure?


3 thoughts on “Rewiring of neurons, seen as actors”

  1. I do not comprehend how the fan-out occurs outside of an Actor in the rewiring diagram. Could a cyclical relation between the :a, :b and :e and the :a, :b and :c replace the fan-out and thus create an actor from the :conversation ?

    1. What is “:conversation”? Actor “a” has name “:a”, so you say there should be an actor “conversation”, which has the axon (i.e. the tree of fanout nodes)?

      There is no rewiring diagram. These are actors diagrams, which we use to reason about (or design) distributed GLC computations, but the actors diagram does not contain more information than what is already contained in the actors.

      The fan-out nodes may appear outside actor nodes in a modified version of an actors diagram, why not? In the wet case this happens with what corresponds to the axons of the neurons (though each axon belongs to a neuron). Neurons have this asymmetry: only one axon, with lots of branches, and many dendrites. GLC actors, in general, have no restriction on the number or the “orientation” of the links with other actors (that is why we draw the actors diagrams with unoriented arrows). In silico, there would seem to be no problem for the actor a to have a list (:b, :e) to which it can send the same message (like CC in mail). But no, that is not correct; it is a bad way of thinking, compared with what really happens. GLC, or chemlambda, explains everything without introducing mailboxes. There is no “message” sent to :b and :e as if there were a CC mailbox. It is very easy to fall into this “messaging” habit of mind, and distributed GLC does very well without it.
