Rant about Jeff Hawkins “the sensory-motor model of the world is a learned representation of the thing itself”

I very much enjoyed the presentation given by Jeff Hawkins, “Computing like the brain: the path to machine intelligence”.

 

Around 8:40

[slide from the talk]

This could be read in parallel with the passage I commented on in the post “The front end visual system performs like a distributed GLC computation”.

I reproduce some parts.

In the article by Kappers, A.M.L.; Koenderink, J.J.; Doorn, A.J. van, Basic Research Series (1992), pp. 1 – 23,

Local Operations: The Embodiment of Geometry

the authors introduce the notion of the “Front End Visual System”.

Let’s pass to the main part of interest: what does the front end do? Quotes from section 1, indexed by me with (a), …, (e):

  • (a) the front end is a “machine” in the sense of a syntactical transformer (or “signal processor”)
  • (b) there is no semantics (reference to the environment of the agent). The front end merely processes structure
  • (c) the front end is precategorical, thus – in a way – the front end does not compute anything
  • (d) the front end operates in a bottom up fashion. Top down commands based upon semantical interpretations are not considered to be part of the front end proper
  • (e) the front end is a deterministic machine […] all output depends causally on the (total) input from the immediate past.
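Taken together, (a)–(e) describe a deterministic, bottom-up, purely syntactical transformer. As a toy illustration only (the kernel, the input signal, and all names below are my own, not from the paper), such a front end could look like this: a fixed local operator applied everywhere, processing structure with no reference to what the signal means.

```python
# A toy "front end": a deterministic, local, bottom-up signal processor.
# It transforms structure (here, a 1D signal) with no semantics attached:
# no labels, no reference to the environment; the output depends causally
# only on the input inside each local window.

def front_end(signal, kernel=(-1.0, 2.0, -1.0)):
    """Apply a fixed local operator (a discrete Laplacian) at every
    position: the same rewrite everywhere, purely bottom-up."""
    k = len(kernel)
    return [
        sum(kernel[j] * signal[i + j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

raw = [0.0, 0.0, 1.0, 3.0, 3.0, 1.0, 0.0, 0.0]  # arbitrary input
print(front_end(raw))  # structure in, structure out; no meaning assigned
```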

Of course, today I would say “performs like a distributed chemlambda computation”, according to one of the strategies described here.

Around 17:00 (sparse distributed representation): “You have to think about a neuron being a bit (active: a one, non-active: a zero). You have to have many thousands before you have anything interesting [what about C. elegans?]

Each bit has semantic meaning. It has to be learned; this is not something that you assign to it, …

… so the representation of the thing itself is its semantic representation. It tells you what the thing is.”
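To make this picture concrete, here is a minimal sketch of a sparse distributed representation (toy sizes and random bits of my own choosing, not Numenta's HTM code): a wide binary vector with very few active bits, where the overlap between two vectors acts as a similarity measure because each bit carries (learned) meaning.

```python
import random

N, ACTIVE = 2048, 40  # a wide bit vector, ~2% of bits active (toy sizes)

def random_sdr(rng):
    """A sparse distributed representation: each neuron is a bit
    (active: a one, non-active: a zero), and only a few are active."""
    return frozenset(rng.sample(range(N), ACTIVE))

rng = random.Random(0)
a, b = random_sdr(rng), random_sdr(rng)
# If each bit has learned semantic meaning, shared active bits mean
# shared properties; two unrelated (random) SDRs overlap almost nowhere.
print(len(a & b))
```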

That is exactly what I call “no semantics”! But it is much better formulated as a positive thing.

Why is this a form of “no semantics”? Because, as you can see, the representation of the thing itself edits out the semantics: “semantics” is redundant, appearing only at the level of the explanation of how the brain works, not in the brain’s workings.

But what is the representation  of the thing itself? A chemical blizzard in the brain.

Let’s put together the two ingredients into one sentence:

  • the sensory-motor model of the world is a learned representation of the thing itself.

Remark that, as before, there is too much here: “model” and “representation” sort of cancel one another out, being superfluous additions to the discourse. Not needed.

What is left: a local, decentralized, asynchronous, chemically based, never-ending “computation”, which is as concrete as the thing (i.e. the world, the brain) itself.

I put “computation” in quotation marks because this is one of the sensitive points: there should be a rigorous definition of what that “computation” means. Of course, the first step would be a fully rigorous mathematical proof of principle that such a “computation”, satisfying the requirements listed in the previous paragraph, exists.

Then, it could be refined.
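To fix the intuition for what “local, decentralized, asynchronous, never ending” asks of a “computation”, here is a deliberately crude caricature (my own toy chemistry, not chemlambda): a soup of atoms on which purely local rules fire in random order, with no global controller and no halting condition.

```python
import random

# A toy "chemistry": local rules on pairs of atoms, fired asynchronously.
# Order-sensitive on purpose, just to keep the sketch short.
RULES = {("a", "b"): ("c",), ("c", "c"): ("a", "b", "b")}

def step(soup, rng):
    """Pick two random atoms; if a local rule applies, rewrite them.
    Nothing global is consulted: the decision is entirely local."""
    i, j = rng.sample(range(len(soup)), 2)
    if (soup[i], soup[j]) in RULES:
        products = RULES[(soup[i], soup[j])]
        for k in sorted((i, j), reverse=True):
            del soup[k]
        soup.extend(products)

rng = random.Random(1)
soup = list("aabbab")
for _ in range(100):  # "never ending": we merely stop watching at 100
    if len(soup) >= 2:
        step(soup, rng)
print(soup)
```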

I claim that chemlambda is such a proof of principle. It satisfies the requirements.

I don’t claim that brains work based on a real-chemistry instance of chemlambda.

Just a proof of principle.

But how much I would like to test this with people from the frontier mentioned by Jeff Hawkins at the beginning of the talk!

In the following, some short thoughts from my point of view.

While playing with chemlambda under the reduction strategy called “stupid” (i.e. the simplest one), I tested how it works on the very small part of chemlambda which simulates lambda calculus.

Lambda calculus, recall, is one of the two pillars of computation, along with the Turing machine.

In chemlambda, lambda calculus appears as a sector: a small class of molecules and their reactions. Contrary to the Alchemy of Fontana and Buss, abstraction and application (operations from lambda calculus) are both concrete (atoms of molecules). The chemlambda artificial chemistry defines some very general, but very concrete, local chemical interactions (local graph rewrites on the molecules), and some (but not all) of them can be interpreted as lambda calculus reductions.
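To give a flavor of what such a local graph rewrite looks like, here is a sketch of the beta move in an interaction-net style. The encoding (nodes with named ports, edges between ports) and the port names are my simplification, not the actual chemlambda scripts: an L (lambda) node meeting an A (application) node disappears, and the wires are reconnected locally.

```python
# Sketch of one chemlambda-style local rewrite, the beta move:
#   (λx.M) N  →  M with N wired to the occurrences of x,
# done as pure local rewiring, with no name passing or substitution.
# L = lambda node (ports: var, body, out); A = application (func, arg, out).

def beta_move(nodes, edges):
    """If an L's 'out' port meets an A's 'func' port, delete both nodes
    and rewire: the body goes to the result, the argument to the var."""
    for e in list(edges):
        for (n1, p1), (n2, p2) in (e, e[::-1]):
            if nodes.get(n1) == "L" and p1 == "out" \
                    and nodes.get(n2) == "A" and p2 == "func":
                lam, app = n1, n2

                def partner(port):  # the port wired to the given port
                    for a, b in edges:
                        if a == port:
                            return b
                        if b == port:
                            return a

                m = partner((lam, "body"))  # the body M
                x = partner((lam, "var"))   # where x occurs (T if unused)
                n = partner((app, "arg"))   # the argument N
                r = partner((app, "out"))   # whoever awaits the result
                edges = {f for f in edges
                         if f[0][0] not in (lam, app)
                         and f[1][0] not in (lam, app)}
                edges |= {(m, r), (x, n)}   # the local reconnection
                del nodes[lam], nodes[app]
                return nodes, edges
    return nodes, edges

# (λx.M) N where x is unused: the body M flows to the result R,
# and the argument N is sent to the termination node T.
nodes = {"L1": "L", "A1": "A", "M": "FRIN", "N": "FRIN",
         "R": "FROUT", "T": "T"}
edges = {(("L1", "out"), ("A1", "func")), (("L1", "body"), ("M", "out")),
         (("L1", "var"), ("T", "in")), (("A1", "arg"), ("N", "out")),
         (("A1", "out"), ("R", "in"))}
nodes, edges = beta_move(nodes, edges)
print(sorted(edges))  # M–R and T–N: two nodes gone, wires reconnected
```

This is exactly the “concrete atoms” point: abstraction and application are nodes in the molecule, and the reduction is a blind, local rewiring.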

Contrary to Alchemy, the part which models lambda calculus is concerned only with untyped lambda calculus without extensionality; therefore chemical molecules are not identified with their function, nor do they have definite functions.

Moreover, “no semantics” means concretely that most of the chemlambda molecules can’t be associated with a global meaning.

Finally, there are no “correct” molecules: everything resulting from the chemlambda reactions goes; there is no semantics police.

So from this point of view, this is very nature-like!

Amazingly, the chemical reductions of molecules which represent lambda terms reproduce lambda calculus computations! It is amazing because, with no semantics control, with no variable passing or evaluation strategies, even when the intermediary molecules don’t represent lambda calculus terms, the computation goes well.

For example, the famous Y combinator reduces first to a small molecule (two nodes and 6 ports), which does not have any meaning in lambda calculus, and then becomes just a gun shooting “application” and “fanout” atoms (a pair which I call a “bit”). The functioning of the Y combinator is not at all sophisticated or mysterious; it is instead fueled by the self-multiplication of the molecules (realized by unsupervised local chemical reactions), which then react with the bits and have as effect exactly what the Y combinator does.
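For reference, the textbook story that this short-circuits: Y = λf.(λx.f (x x)) (λx.f (x x)), and Y g → g (Y g) → g (g (Y g)) → …, i.e. Y is already a “gun” that emits one application of g per round. A minimal executable version (in Python one must use the eta-expanded Z combinator, Z = λf.(λx.f (λv.x x v)) (λx.f (λv.x x v)), since the plain Y diverges under eager evaluation):

```python
# The fixed-point combinator as plain lambdas. Each round hands the
# function a fresh copy of the recursion: Z g behaves like g (Z g).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120: recursion obtained by self-multiplication alone
```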

The best example I have is the illustration of the computation of the Ackermann function (recall: a recursive but not primitive recursive function!).

What is nice in this example is that it works without the Y combinator, even though it’s a game of recursion.
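For the record, here is the function itself (the recursion below is Python’s own, of course; in the chemlambda demo the same computation is carried by local graph rewrites alone):

```python
# The Ackermann(-Péter) function: total and computable, but growing too
# fast to be primitive recursive.
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 2))  # 7
print(ackermann(3, 3))  # 61
```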

But this is a choice: for many computations which try to reproduce lambda calculus reductions, the “stupid” strategy used with chemlambda is a bit too exuberant if the Y combinator is used as in lambda calculus (or functional programming).

The main reason is the lack of extensionality: there are no functions, so the usual functional programming techniques and designs are not the best idea. There are shorter ways in chemlambda, which exploit the “representation of the thing itself is its own semantic interpretation” better than FP does.

One of those techniques is to use, instead of long, linear, sequential lambda terms (designed as a composition of functions), another architecture: one of neurons.

For me, when I think about a neural net and neural computation, I tend to see the neurons and synapses as loci of chemical activity. Then I just forget about these bags of chemicals and see a chemical connectome sort of thing; actually, I see a huge molecule undergoing chemical reactions with itself, but in such a way that its spatial extension (in the neural net), physically embodied by neurons and synapses and perhaps glial cells and whatnot, is manifested in the reductions themselves.

In this sense, the neural-architecture way of using the Y combinator efficiently in chemlambda is to embed it into a neuron (a bag of chemical reactions), as sketched in the following simple experiment.

Now, instead of a sequential call of duplication and application (which is the way the Y combinator is used in lambda calculus), imagine a well-designed network of neurons which in very few steps builds a (huge, distributed) molecule (instead of taking a perhaps very big number of sequential steps), which in its turn reduces itself in very few steps as well; then this chemical connectome ends in a quine state, i.e. in a sort of dynamic equilibrium (reactions are happening all the time, but they combine in such a way that the reductions compensate each other into a static image).

Notice that the end of the short movie about the neuron is a quine.

For chemlambda quines see this post.
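Operationally, being a quine can be checked mechanically: run one full round of rewrites and test whether the molecule comes back as the same graph. A sketch of that test with networkx (the graph encoding and the placeholder “round” below are mine; plug in whichever chemistry you like):

```python
import networkx as nx

def is_quine_round(graph, rewrite_round):
    """The molecule is in a quine state for this round of rewrites if the
    result is isomorphic to the start: reactions keep firing, but they
    compensate each other into a static picture."""
    after = rewrite_round(graph.copy())
    return nx.is_isomorphic(
        graph, after,
        node_match=lambda a, b: a.get("kind") == b.get("kind"))

def toy_round(g):
    """Placeholder chemistry: rewire an edge and rewire it back,
    so the round changes nothing up to isomorphism."""
    g.remove_edge(1, 2)
    g.add_edge(1, 2)
    return g

G = nx.Graph()
G.add_nodes_from([(1, {"kind": "A"}), (2, {"kind": "FO"}), (3, {"kind": "L"})])
G.add_edges_from([(1, 2), (2, 3)])
print(is_quine_round(G, toy_round))  # True: a dynamic equilibrium
```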

In conclusion, there are chances that with this massively parallel (actually a bad name for decentralized, local) architecture of a neural net, seen as its chemical working, chemlambda really can do not only any computer science computation, but also anything a neural net can do.

_________________________________________________________________


6 thoughts on “Rant about Jeff Hawkins “the sensory-motor model of the world is a learned representation of the thing itself””

  1. Chemical & electrical activity in a biological brain. Electrical operands and chemical computing. Physical brains, of necessity, had to evolve in a ‘container of porridge’. A.I. need not be so constrained. Your chemlambda is incredibly exciting as an agent of molecular mimicry.
    I fear this may only be part of the puzzle, however? You probably need an electrical analogue as well. A non-linear, fractal, branching function which maps inputs onto themselves in information space. A non-linear hashing function where collisions are possibly desirable in terms of pattern recognition? Such a function would be an information ‘attractor’.
    I think I might have one lying around? 😉

    1. Thanks for the comment. Please do tell me about it. But re: “function which maps inputs onto themselves in information space”, I have been past this for a long time. I come from research in the fractal world and I arrived at this. Chemlambda is certainly not all of the story, but “function” is a step back for sure. It is a classical and over-researched part of CS, because (I believe) CS is an outcome of things done around WW2, when the telephone was still a new thing, hierarchical thinking was strong and secrecy was a must. All this gives the fallacious argument: (a) everything can be reproduced by a contraption of wires and gates; (b) the basic unit of interest is thus a sender-wire-receiver triple (aka IT); (c) thus everything can be explained in these terms. It is fallacious because even if any goal can be attained by IT means, this does not explain the way nature attains that goal. It does not explain the journey. Besides, now when hierarchy is becoming more and more a bad idea, now when the Net’s behaviour is definitely not reducible to sender-wire-receiver, to come back to the telephone age with a fractal twist is just a lesser idea than the one saying that everything is just local graph reduction with no global semantics.

  2. It depends upon the model you have in mind for memory architecture? Non-linear branching need not necessarily mean just ‘wires & gates’. Some form of semantic addressing is surely needed, and I see efficiencies in the application of abstract fractal networks, particularly where they have the quality of an attractor, collision resistance & ‘one-wayness’. I’ve no doubt that in nature, long term memory is probably laid down in chemical form; I just believe (semi-intuitively) that it’ll take more than ‘bottom-up’ distributed chemical-type processes to provide the structure necessary to spark a quantum leap in A.I. Perhaps a combination, methinks. Thoughts?

    1. I don’t have a clear idea about that, in the sense that I don’t understand if the sparse representation is an effect of something else (we can see what’s written on the neon signs and we deduce the internal wiring from that) or if it is in some way a necessary thing. I tend to see the sparse part as a beautiful solution, and the representation part as an artifice, like semantics, for explaining our observations of the system. TLDR: I don’t know.

  3. FWIW, my guess: a consequence of parallelism. Once again, ‘many to one’ being an evolutionary path forced upon the ‘bowl of porridge’.
    Machine modelling of neural nets in imitation invokes all sorts of inefficiencies and is probably unnecessary anyway, if the same semantic sorting can be performed serially with exponentially fewer inputs. Inputs > hashing > transcription > association > outputs.
    Thanks for discussing, & I will be studying chemlambda much more closely as time permits, in order to better understand its potential.
