Neural Darwinism, large scale homunculus

Neural Darwinism, wiki entry:

“Neural Darwinism, a large scale theory of brain function by Gerald Edelman, was initially published in 1978, in a book called The Mindful Brain (MIT Press). It was extended and published in the 1989 book Neural Darwinism – The Theory of Neuronal Group Selection.”

“The last part of the theory attempts to explain how we experience spatiotemporal consistency in our interaction with environmental stimuli. Edelman called it “reentry” and proposes a model of reentrant signaling whereby a disjunctive, multimodal sampling of the same stimulus event correlated in time leads to self-organizing intelligence. Put another way, multiple neuronal groups can be used to sample a given stimulus set in parallel and communicate between these disjunctive groups with incurred latency.”

Here is my (probably very sketchy) understanding of this mysterious “reentry”.

Say X is the collection of neurons of the brain, a discrete set with large cardinality N. At any moment the “state” of the brain is partially described by an N \times N matrix of weights: the number w_{ij} is the weight of the connection from neuron i to neuron j (a non-negative real number).
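
To fix ideas, here is a minimal sketch of such a state (in Python; the size and the random initialization are mine, purely illustrative):

```python
import numpy as np

N = 100  # number of neurons (illustrative; the real N is huge)

# State of the brain at one moment: an N x N matrix of non-negative weights.
# w[i, j] is the weight of the connection from neuron i to neuron j.
rng = np.random.default_rng(0)
w = rng.random((N, N))

assert (w >= 0).all()  # weights take values in [0, +infinity)
```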

We may imagine such a state of the brain as the trivial (pair) groupoid X \times X, with a weight function defined on arrows, taking values in [0,+\infty).
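
Spelled out (a routine formalization of the sentence above, nothing more):

```latex
\begin{aligned}
&\text{objects: } X, \qquad \text{arrows: } X \times X, \quad (i,j)\colon i \to j,\\
&\text{composition: } (j,k)\circ (i,j) = (i,k), \qquad \text{inverse: } (i,j)^{-1} = (j,i),\\
&\text{weight: } w\colon X \times X \to [0,+\infty), \quad w(i,j) = w_{ij}.
\end{aligned}
```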

Instead of neurons and weights of connections we may easily imagine more complex situations (for example, take the trivial groupoid generated by the connections: an arrow between two connections is a neuron incident to both, and so on; moreover, the weights could be enriched, …), so let us just say that a state of the brain is a weighted groupoid, together with a dynamics of the weights.
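
One possible reading of the parenthetical construction (my interpretation, not something fixed by the theory) is this sketch, in which the objects are the connections themselves:

```python
import numpy as np

def connection_groupoid(w, threshold=0.0):
    # Objects: the connections (i, j) whose weight exceeds the threshold.
    connections = [(i, j) for i in range(w.shape[0]) for j in range(w.shape[1])
                   if i != j and w[i, j] > threshold]
    # Arrows: pairs of connections sharing an incident neuron; the arrow is
    # labelled by the shared neuron(s).
    arrows = [(c, d, set(c) & set(d)) for c in connections for d in connections
              if c != d and set(c) & set(d)]
    return connections, arrows

w = np.array([[0.0, 0.7, 0.0],
              [0.0, 0.0, 0.8],
              [0.3, 0.0, 0.0]])
objs, arrs = connection_groupoid(w)
# objs: [(0, 1), (1, 2), (2, 0)]; each pair of these shares one neuron
```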

Define a “neuronal group” as a sub-groupoid with a weight function. Now take two neuronal groups (A, w_{A}) and (B, w_{B}). How similar are they, in the brain?

For this we need a cost function which applies to any “weighted relation” (in the brain, i.e. in the big weighted groupoid) from A to B and yields a non-negative number. The similarity between two neuronal groups (with respect to a particular state of the brain) is the minimum of this cost over all possible weighted relations (connectomes) between A and B.
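
The cost function itself is left open here. As a stand-in, one can borrow the distortion of a correspondence from metric geometry (consistent with the “Maps of metric spaces” analogy below): the cost of a relation is the worst disagreement between the weights it matches, and the similarity is sim(A,B) = \min_{R} cost(R), the minimum taken over relations R which touch every neuron of both groups. A brute-force sketch, usable only for tiny groups (all names here are mine):

```python
import itertools
import numpy as np

def distortion(R, wA, wB):
    # Cost of a relation R (a list of pairs (a, b)): the worst disagreement
    # between the weights of A and of B along matched pairs. This is the
    # "distortion of a correspondence" from metric geometry, used here as a
    # stand-in; the actual cost function is left open in the text.
    return max(abs(wA[a, a2] - wB[b, b2]) for (a, b) in R for (a2, b2) in R)

def similarity(wA, wB):
    # Minimum cost over all total relations between A and B; brute force,
    # exponential in |A| * |B|, so only for toy examples.
    nA, nB = wA.shape[0], wB.shape[0]
    pairs = [(a, b) for a in range(nA) for b in range(nB)]
    best = float("inf")
    for r in range(1, len(pairs) + 1):
        for R in itertools.combinations(pairs, r):
            if ({a for a, _ in R} == set(range(nA))
                    and {b for _, b in R} == set(range(nB))):
                best = min(best, distortion(R, wA, wB))
    return best

wA = np.array([[0.0, 1.0], [1.0, 0.0]])
wB = np.array([[0.0, 0.9], [0.9, 0.0]])
print(similarity(wA, wB))  # ~0.1: the two toy groups are nearly identical
```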

My feeble understanding of this reentry is that, as time passes, the state of the brain evolves in a way which increases the similarity (that is, decreases the cost) of neuronal groups “encoding” aspects of the same “stimulus” which are correlated in time.
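
A crude caricature of that idea (entirely my assumption, not Edelman's model): fix a low-cost relation between two groups and let it nudge the weights of one group toward the other, so that iterating the step decreases the cost in time.

```python
import numpy as np

def reentry_step(wA, wB, R, rate=0.1):
    # Nudge the weights of B toward those of A along the pairs matched by R.
    # Iterating this step shrinks the distortion of R, i.e. the similarity
    # cost between the two neuronal groups decreases in time.
    out = wB.copy()
    for (a, b) in R:
        for (a2, b2) in R:
            out[b, b2] += rate * (wA[a, a2] - out[b, b2])
    return out

wA = np.array([[0.0, 1.0], [1.0, 0.0]])
wB = np.array([[0.0, 0.9], [0.9, 0.0]])
R = [(0, 0), (1, 1)]
for _ in range(20):
    wB = reentry_step(wA, wB, R)
print(max(abs(wA[a, a2] - wB[b, b2]) for (a, b) in R for (a2, b2) in R))
# ~0.012: the cost of R has dropped from 0.1 toward 0
```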

We may then imagine a “large scale homunculus” as a strict neuronal sub-group of the whole brain which is similar to the whole brain. The reentry weighted relation will then carry the structure of an emergent algebra.

Indeed, there is a striking similarity between this formalization (probably very naive with respect to the complexity of the problem, and also totally ignoring dynamical aspects) and the characterization of emergent algebras as related to the problem of exploring space and matching collections of maps, as described in “Maps of metric spaces”; see also these slides.

Just replace distances with matrices of weights, or, equivalently, think about the previous image in these terms.

I shall come back to this with all details later.
