In this post I want to explain why the Distributed GLC model of computation can be seen as a proof of principle that it is possible to describe rigorously some complex functioning of the brain as computation.
If you are not aware that this is a problem, then know that the question of whether what brains do is computation is very controversial. On one side there are rigorous notions of computation (expressed in terms of Turing machines, or in terms of lambda calculus, for example) which are used with full competence in CS. On the other side, in (some parts of) neuroscience the word "computation" is used in a non-rigorous sense, not because neuroscience specialists are incapable of understanding computation in the rigorous CS sense, but because in real brains matters are far more complex than in paper computers. Nevertheless, (some) CS specialists believe (without much real evidence) that brains compute in the CS sense, and (some) neuroscience specialists believe that their vague notions of computation deserve to bear this name, even if they do not look like computation in the rigorous CS sense.
OK, I shall concentrate on a particular example which I think is extremely interesting.
In the article by Kappers, A.M.L.; Koenderink, J.J.; Doorn, A.J. van, "Local Operations: The Embodiment of Geometry", Basic Research Series (1992), pp. 1–23, the authors introduce the notion of the "Front End Visual System". From section 1, quotes indexed by me with (1), (2), (3):
(1) "Vision […] is sustained by a dense, hierarchically nested and heterarchically juxtaposed tangle of cyclical processes."
(2) In this chapter we focus upon the interface between the light field and those parts of the brain nearest to the transduction stage. We call this the “visual front end”.
(3) Of course, the exact limits of the interface are essentially arbitrary, but nevertheless the notion of such an interface
- (2) is the definition of the front end
- (3) is a guard against a possible entry path of the homunculus in the brain
- (1) contains the very nice expression "dense tangle of cyclical processes"; we will come back to this!
Let’s pass to the main part of interest: what does the front end do? Quotes from section 1, indexed by me with (a), …, (e):
- (a) the front end is a “machine” in the sense of a syntactical transformer (or “signal processor”)
- (b) there is no semantics (reference to the environment of the agent). The front end merely processes structure
- (c) the front end is precategorical, thus – in a way – the front end does not compute anything
- (d) the front end operates in a bottom up fashion. Top down commands based upon semantical interpretations are not considered to be part of the front end proper
- (e) the front end is a deterministic machine […] all output depends causally on the (total) input from the immediate past.
Comments and reformulations, indexed by (I), …, (IV):
- (I) the front end is a syntactical transformer, it processes structure [from (a), (b)]
- (II) there is no semantics [from (b)]; semantical interpretations are not part of the front end [from (d)]
- (III) the front end does not compute, in the sense that there is no category-theoretic, diagram-chasing type of computing [not formulated in terms of signals processed by gates?] [from (c)]
- (IV) there is a clear mechanism, based on something like a “dense tangle of cyclical processes” which processes the total input (from the light field) from the immediate past [from (e) and (1)]
These (I)-(IV) are exactly the specifications of a distributed computation with GLC actors, namely:
- a distributed, asynchronous, rigorously defined computation
- based on local graph rewrites, which are purely syntactic transformers, corresponding both to the “dense tangle of cyclical processes” and to “processes structure”
- there is no semantics, because there are no names or values which decorate the arrows of the GLC graphs, nor do any travel through the nodes of such graphs. There is no evaluation procedure needed for the computation with GLC actors
- the computation with GLC actors starts from an initial graph (structure), and may also use external constructs (the cores are the equivalents of the light field which triggers chemical reactions in the retina, which are then processed by the front end)
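To make the “purely syntactic transformer” point concrete, here is a toy sketch of a local graph rewrite in the spirit of the GLC beta move. This is my own illustration, not the actual GLC formalism: the node kinds, the port encoding and the helper name `beta_rewrite` are assumptions. Note what is absent: the wires carry no names or values, and the rewrite touches only the two nodes involved.

```python
# Toy sketch (illustrative assumptions, not the real GLC encoding).
# A node is (kind, ports); ports hold anonymous wire ids shared by
# exactly two ports. A rewrite deletes two adjacent nodes and rewires
# their ports locally; no value is ever read or evaluated.

def beta_rewrite(nodes, app_id, lam_id):
    """If application node `app_id` has lambda node `lam_id` on its
    function port, remove both and reconnect: the lambda's body wire
    is glued to the application's output wire, and the lambda's
    bound-variable wire is glued to the application's argument wire."""
    kind_a, ports_a = nodes[app_id]   # ports: [func, arg, out]
    kind_l, ports_l = nodes[lam_id]   # ports: [var, body, out(func)]
    assert kind_a == "A" and kind_l == "L"
    assert ports_a[0] == ports_l[2]   # lambda feeds the function port
    fuse = {ports_l[1]: ports_a[2],   # body wire -> output wire
            ports_l[0]: ports_a[1]}   # var wire  -> argument wire
    del nodes[app_id], nodes[lam_id]
    for nid, (kind, ports) in nodes.items():
        nodes[nid] = (kind, [fuse.get(p, p) for p in ports])
    return nodes

# Example: (lambda x. N x) applied to M, with output collector O.
nodes = {1: ("L", ["v", "b", "f"]), 2: ("N", ["v", "b"]),
         3: ("A", ["f", "a", "o"]), 4: ("M", ["a"]), 5: ("O", ["o"])}
result = beta_rewrite(nodes, app_id=3, lam_id=1)
# After the move, N is wired directly between M's wire and O's wire.
```

Because the rewrite depends only on the two nodes at hand, many such moves can fire independently on disjoint parts of the graph, which is the asynchronous, distributed aspect of computing with GLC actors.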
This is no coincidence! One of the reasons for building GLC was exactly to make sense of the front end visual system.
- yes, there is a way to rigorously describe what the front end does as computation in the CS sense, although
- this notion of computation has some unique features: no evaluation; based on graph reduction; asynchronous, distributed, purely local. No semantics needed, no global notions or global controllers, neither in space nor in time.