Tag Archives: Jan Koenderink

No semantics principle in GEB book

A very interesting connection between the no semantics principle of Distributed GLC and the famous GEB book by Douglas R. Hofstadter has been made by Louis Kauffman, who pointed to the preface of the 20th anniversary edition.

Indeed, I shall quote from the preface (boldface by me), hoping that the meaning of the quoted passages will not change much by being taken out of context.

… I felt sure I had spelled out my aims over and over in the text itself. Clearly, however, I didn’t do it sufficiently often, or sufficiently clearly. But since now I’ve got the chance to do it once more – and in a prominent spot in the book, to boot – let me try one last time to say why I wrote this book, what it is about, and what its principal thesis is.

In a word, GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter. […]

GEB approaches these questions by slowly building up an analogy that likens inanimate molecules to meaningless symbols, and further likens selves (or “I”s or “souls”, if you prefer – whatever it is that distinguishes animate from inanimate matter) to certain special swirly, twisty, vortex-like, and meaningful patterns that arise only in particular types of systems of meaningless symbols.

I have not read the book; instead I arrived, partially, at conclusions close to these from trying to understand some articles written by Jan Koenderink, being mesmerized by the beautiful meme “the brain a geometry engine”. I last discussed this in the post The front end visual system performs like a distributed GLC computation.


The front end visual system performs like a distributed GLC computation

In this post I want to explain why the Distributed GLC model of computation can be seen as a proof of principle that it is possible to rigorously describe some complex functioning of the brain as computation.

If you are not aware that this is a problem, then please know that whether what brains do is computation is very controversial. On one side there are rigorous notions of computation (expressed in terms of Turing machines, or in terms of lambda calculus, for example), which are used with full competence in CS. On the other side, in (some parts of) neuroscience the word “computation” is used in a non-rigorous sense, not because the neuroscience specialists are incapable of understanding computation in the rigorous CS sense, but because real brains are far harder to make sense of than paper computers. Nevertheless, (some) CS specialists believe (without much real evidence) that brains compute in the CS sense, and (some) neuroscience specialists believe that their vague notions of computation deserve to bear this name, even though it does not look like computation in the rigorous CS sense.

OK, I shall concentrate on a particular example which I think is extremely interesting.

In the article by Kappers, A.M.L.; Koenderink, J.J.; Doorn, A.J. van, Basic Research Series (1992), pp. 1 – 23,

Local Operations: The Embodiment of Geometry

the authors introduce the notion of the “Front End Visual System”. From section 1, quotes indexed by me with (1), (2), (3):

(1) “Vision […] is sustained by a dense, hierarchically nested and heterarchically juxtaposed tangle of cyclical processes.”

(2) In this chapter we focus upon the interface between the light field and those parts of the brain nearest to the transduction stage. We call this the “visual front end”.

(3) Of course, the exact limits of the interface are essentially arbitrary, but nevertheless the notion of such an interface is valuable.

Comments:

  • (2) is the definition of the front end
  • (3) is a guard against a possible entry path for the homunculus into the brain
  • (1) has this very nice expression, “dense tangle of cyclical processes”; I will come back to it!

Let’s pass to the main part of interest: what does the front end do? Quotes from section 1, indexed by me with (a), …, (e); a toy sketch illustrating points (a), (b) and (e) follows the list:

  • (a) the front end is a “machine” in the sense of a syntactical transformer (or “signal processor”)
  • (b) there is no semantics (reference to the environment of the agent). The front end merely processes structure
  • (c) the front end is precategorical,  thus – in a way – the front end does not compute anything
  • (d) the front end operates in a bottom up fashion. Top down commands based upon semantical interpretations are not considered to be part of the front end proper
  • (e) the front end is a deterministic machine […]  all output depends causally on the (total) input from the immediate past.
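To make points (a), (b) and (e) concrete, here is a toy sketch (purely illustrative, not the model from the paper; all names and parameters are hypothetical) of a deterministic, bottom-up transformer of structure: each output frame is computed causally from the immediate past of the input stream, and nothing is ever labelled or interpreted.

```python
import numpy as np

def causal_front_end(frames, kernel):
    """Deterministic, bottom-up transformer of structure into structure.

    Each output frame depends only on the immediate past of the input
    stream (point (e)); nothing is labelled or interpreted (points (a), (b)).
    'frames' is a list of 2-D luminance arrays, 'kernel' a short temporal
    weighting with the most recent weight last.
    """
    out = []
    k = len(kernel)
    for t in range(len(frames)):
        past = frames[max(0, t - k + 1): t + 1]   # the immediate past only
        weights = kernel[-len(past):]             # align weights with the window
        out.append(sum(w * f for w, f in zip(weights, past)))
    return out

# usage: a stream of random 8x8 "retinal" frames, combined over the last 3 frames
frames = [np.random.rand(8, 8) for _ in range(10)]
responses = causal_front_end(frames, kernel=np.array([0.25, 0.25, 0.5]))
```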

Comments and reformulations, indexed by (I), …, (IV):

  • (I) the front end is a syntactical transformer; it processes structure [from (a), (b)]
  • (II) there is no semantics [from (b)]; semantical interpretations are not part of the front end [from (d)]
  • (III) the front end does not compute, in the sense that there is no categorical, diagram-chasing type of computing [is it perhaps not formulated in terms of signals processed by gates?] [from (c)]
  • (IV) there is a clear mechanism, based on something like a “dense tangle of cyclical processes”, which processes the total input (from the light field) from the immediate past [from (e) and (1)]

These (I)-(IV) are exactly the specifications of a distributed computation with GLC actors, namely:

  • a distributed, asynchronous, rigorously defined computation
  • based on local graph rewrites, which are purely syntactic transformers; a correspondent of both the “dense tangle of cyclical processes” and of “processes structure”
  • there is no semantics, because there are no names or values which decorate the arrows of the GLC graphs, nor do they travel through the nodes of such graphs. There is no evaluation procedure needed for the computation with GLC actors
  • the computation with GLC actors starts from an initial graph (structure), which may also use external constructs (the cores are the equivalent of the light field, which triggers chemical reactions in the retina that are then processed by the front end)

This is no coincidence! One of the reasons for building GLC was exactly to make sense of the front end visual system.
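To convey the flavour of such a computation, here is a toy sketch of one purely local, decoration-free graph rewrite, loosely modelled on the graphic beta move. It is only an illustration, not the actual GLC or GLC actors formalism: the port names and the data representation are invented for this snippet. The point is that the rewrite only matches and rewires structure; no names, no values, no evaluation.

```python
def find_wire(wires, end):
    """Return the wire containing the (node, port) endpoint; this sketch
    assumes every port is wired."""
    for w in wires:
        if end in w:
            return w
    return None

def other_end(wire, end):
    a, b = tuple(wire)
    return b if a == end else a

def beta_like_rewrite(nodes, wires):
    """One 'beta-like' local move: if an application node's 'fun' port is
    wired to an abstraction node's 'out' port, delete both nodes and rewire
    body<->out and arg<->var.  Purely syntactic: nothing is evaluated."""
    for app, kind in list(nodes.items()):
        if kind != "A":
            continue
        w = find_wire(wires, (app, "fun"))
        if w is None:
            continue
        lam, lport = other_end(w, (app, "fun"))
        if nodes.get(lam) != "L" or lport != "out":
            continue
        body = other_end(find_wire(wires, (lam, "body")), (lam, "body"))
        var  = other_end(find_wire(wires, (lam, "var")),  (lam, "var"))
        arg  = other_end(find_wire(wires, (app, "arg")),  (app, "arg"))
        out  = other_end(find_wire(wires, (app, "out")),  (app, "out"))
        nodes = {n: k for n, k in nodes.items() if n not in (app, lam)}
        wires = {x for x in wires if not any(e[0] in (app, lam) for e in x)}
        wires |= {frozenset({body, out}), frozenset({arg, var})}
        return nodes, wires, True
    return nodes, wires, False

# a redex: application A1 wired to abstraction L1; the rest of the graph is
# abstracted away as the external endpoints B, V, T, R
nodes = {"L1": "L", "A1": "A"}
wires = {frozenset({("L1", "out"),  ("A1", "fun")}),
         frozenset({("L1", "body"), ("B", "e")}),
         frozenset({("L1", "var"),  ("V", "e")}),
         frozenset({("A1", "arg"),  ("T", "e")}),
         frozenset({("A1", "out"),  ("R", "e")})}
nodes, wires, fired = beta_like_rewrite(nodes, wires)
# after the move: B-R and T-V are wired directly; no value was ever consulted
```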

In conclusion:

  • yes, there is a way to rigorously describe what the front end does as computation in the CS sense, although
  • this notion of computation has some unique features: no evaluation; graph-reduction based; asynchronous, distributed, purely local. No semantics needed, no global notions or global controllers, neither in space nor in time.

Before aiming to explain consciousness

… you need to explain awareness, in particular all these things ignored by non-geometrical minds.

If the following is in any way a result of computing, it would be “computing with space”, I think and hope.

Enjoy reading Experimental Phenomenology: Art & Science, by Jan Koenderink, published by The Clootcrans Press! Quotes from the beginning of the e-book:

The contents of this eBook are the slides of an invited talk held by me in Alghero (Sardinia) in the VSAC (Visual Science of Art Conference) 2012. The talk was scheduled for an hour and a half, thus there are many slides.

Judging from the responses (discounting polite remarks such as “nice pictures”, and so forth) most of the audience didn’t get the message. Most hinted that they were surprised that I apparently “didn’t believe in reality”, thus showing that the coin didn’t drop.

The topic of the talk are the relations between life, awareness, mind, science and art. The idea is that these are all ways of creating alternative realities. The time scales are vastly different, ranging all the way from less than a tenth of  a second (the microgenesis of visual awareness), to evolutionary time spans (the advent of a new animal species). The processes involved play on categorically different levels, basic physicochemical process (life), pre-conscious processes (awareness), reflective thought (mind), to the social level (art and science). Yet the basic processes, like taking perspective (predator versus gatherer in evolution, sense modality in awareness, language in reflective thought, style in art, geometry versus algebra in science), selection, analogy, consolidation, construction, are found on all levels, albeit (of course) in different form.

_____________________________________

“Visual awareness” by Koenderink

Below is an excerpt from the ebook Visual awareness by Jan Koenderink. The book is part of a collection published by The Clootcrans Press.

What does it mean to be “visually aware”? One thing, due to Franz Brentano (1838-1917), is that all awareness is awareness of something. One says that awareness is intentional. This does not mean that the something exists otherwise than in awareness. For instance, you are visually aware in your dreams, when you hallucinate a golden mountain, remember previous visual awareness, or have pre-visions. However, the case that you are visually aware of the scene in front of you is fairly generic.

The mainstream account of what happens in such a generic case is this: the scene in front of you really exists (as a physical object) even in the absence of awareness. Moreover, it causes your awareness. In this (currently dominant) view the awareness is a visual representation of the scene in front of you. To the degree that this representation happens to be isomorphic with the scene in front of you the awareness is veridical. The goal of visual awareness is to present you with veridical representations. Biological evolution optimizes veridicality, because veridicality implies fitness.  Human visual awareness is generally close to veridical. Animals (perhaps with exception of the higher primates) do not approach this level, as shown by ethological studies.

JUST FOR THE RECORD these silly and incoherent notions are not something I ascribe to!

But it neatly sums up the mainstream view of the matter as I read it.

The mainstream account is incoherent, and may actually be regarded as unscientific. Notice that it implies an externalist and objectivist God’s Eye view (the scene really exists and physics tells how), that it evidently misinterprets evolution (for fitness does not imply veridicality at all), and that it is embarrassing in its anthropocentricity. All this should appear to you as in the worst of taste if you call yourself a scientist.  [p. 2-3]

___________________

I hold similar views, last expressed in the post Ideology in the vision theater (but not with the same mastery as Koenderink, of course). Recall that “computing with space”, which is the main theme of this blog/open notebook, is about rigorously understanding (and maybe using) the “computation” done by the visual brain, with the purpose of understanding what space IS. This is formulated in arXiv:1011.4485 as the “Plato’s hypothesis” (a toy numerical illustration of (B) follows the statement):

(A) reality emerges from a more primitive, non-geometrical, reality in the same way as
(B) the brain constructs (understands, simulates, transforms, encodes or decodes) the image of reality, starting from intensive properties (like a bunch of spiking signals sent by receptors in the retina), without any use of extensive (i.e. spatial or geometric) properties.
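As a toy numerical illustration of (B) (mine, not taken from the paper; all numbers are illustrative): a spatial layout can be recovered from purely intensive data. Below, "receptors" sit at hidden 2-D positions and emit signals whose correlation decays with the hidden distance; classical multidimensional scaling applied to the correlation matrix alone recovers the qualitative geometry, without the positions ever being used.

```python
import numpy as np

rng = np.random.default_rng(0)

# hidden 2-D positions of n "receptors" (never used by the reconstruction)
n = 60
pos = rng.uniform(0.0, 1.0, size=(n, 2))

# intensive data: signals whose covariance decays with the hidden distance
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
cov = np.exp(-(d / 0.3) ** 2) + 1e-9 * np.eye(n)
signals = rng.multivariate_normal(np.zeros(n), cov, size=5000)

# from the correlations alone, build dissimilarities and apply classical MDS
corr = np.corrcoef(signals.T)
diss = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (diss ** 2) @ J
evals, evecs = np.linalg.eigh(B)
coords = evecs[:, -2:] * np.sqrt(np.clip(evals[-2:], 0.0, None))

# 'coords' reproduces the layout of 'pos' up to rotation, reflection,
# scale and some distortion, although only intensive data were used
```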
___________________
Never mind my motivations; the important message is that Koenderink’s critique is a hard-science point of view about a hard-science piece of research. It is not just a lexical game (although I recognize the value of such games as well; as a mathematician I am naturally inclined towards hard science).

Geometry of imaginary spaces, by Koenderink

This post is about the article “Geometry of imaginary spaces”, Journal of Physiology – Paris, 2011, in press, by Jan Koenderink.

Let me first quote from the abstract (boldface by me):

“Imaginary space” is a three-dimensional visual awareness that feels different from what you experience when you open your eyes in broad daylight. Imaginary spaces are experienced when you look “into” (as distinct from “at”) a picture for instance.

Empirical research suggests that imaginary spaces have a tight, coherent structure, that is very different from that of three-dimensional Euclidean space.

[he proposes the structure of a bundle E^{2} \times A^{1} \rightarrow E^{2}, with base the Euclidean plane, “the visual field”, and fiber the one-dimensional affine line, “the depth domain”]

I focus on the topic of how, and where, the construction of such geometrical structures, that figure prominently in one’s awareness, is implemented in the brain. My overall conclusion—with notable exceptions—is that present day science has no clue.

What is remarkable in this paper? Many, many things; here are just three quotes:

–  (p. 3) “in the mainstream account”, he writes, “… one starts from samples of … the retinal “image”. Then follows a sequence of image operations […] Finally there is a magic step: the set of derived images turns into a “representation of the scene in front of you”. “Magic” because image transformations convert structures into structures. Algorithms cannot convert mere structure into quality and meaning, except by magic. […] Input structure is not intrinsically meaningful, meaning needs to be imposed (magically) by some arbitrary format.”

– (p. 4) “Alternatives to the mainstream account have to […] replace inverse optics with ‘controlled hallucination’” [related to this, see the post “The structure of visual space”]

– (p. 5) “In the mainstream account one often refers to the optical structure as “data”, or “information”. This is thoroughly misleading because to be understood in the Shannon (1948) sense of utterly meaningless information. As the brain structures transform the optical structure into a variety of structured neural activities, mainstream often uses semantic terms to describe them. This confuses facts with evidence. In the case of an “edge detector” (Canny, 1986) the very name suggests that the edge exists before being detected. This is nonsensical, the so-called edge detector is really nothing but a “first order directional derivative operator” (Koenderink and van Doorn, 1992). The latter term is to be preferred because it describes the transformation of structure into structure, whereas the former suggests some spooky operation” [related to this, see the tag archive “Map is the territory“]
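To see why the second name is apt, here is a minimal numpy/scipy sketch (mine; the scale and direction are illustrative): the operator takes an image array and returns another array, a transformation of structure into structure, with no "edge" declared anywhere.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def directional_derivative(image, sigma, theta):
    """First-order directional derivative-of-Gaussian at scale sigma,
    in direction theta (radians).  Structure in, structure out: the
    result is just another array, not a list of 'detected edges'."""
    dy = gaussian_filter(image, sigma, order=(1, 0))  # derivative along rows (y)
    dx = gaussian_filter(image, sigma, order=(0, 1))  # derivative along columns (x)
    return np.cos(theta) * dx + np.sin(theta) * dy

# usage: a synthetic luminance step, differentiated at scale 2 along x
img = np.zeros((128, 128))
img[:, 64:] = 1.0
response = directional_derivative(img, sigma=2.0, theta=0.0)
```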

Related to my  spaces with dilations, let me finally quote from the “Final remarks”:

The psychogenetic process constrains its articulations through probing the visual front end. This part of the brain is readily available for formal descriptions that are close to the neural hardware. The implementation of the group of isotropic similarities, a geometrical object that can  easily be probed through psychophysical means, remains fully in the dark though.

The structure of visual space

Mark Changizi has an interesting post, “The Visual Nerd in You Understands Curved Space”, where he explains that spherical geometry is relevant for visual perception.

At some point he writes a paragraph which triggered my post:

Your visual field conforms to an elliptical geometry!

(The perception I am referring to is your perception of the projection, not your perception of the objective properties. That is, you will also perceive the ceiling to objectively, or distally, be a rectangle, each angle having 90 degrees. Your perception of the objective properties of the ceiling is Euclidean.)
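A quick numerical check of this claim (a toy computation of mine, with a made-up eye-and-ceiling geometry): the corners and edges of a rectangular ceiling project onto the sphere of visual directions, straight edges become great-circle arcs, and the four perceived corner angles sum to more than 360 degrees, as they should in an elliptic (spherical) geometry.

```python
import numpy as np

def spherical_angle(v, u, w):
    """Angle at vertex v of a spherical polygon, between the great-circle
    arcs v->u and v->w; v, u, w are direction vectors from the eye."""
    v, u, w = (x / np.linalg.norm(x) for x in (v, u, w))
    tu = u - np.dot(u, v) * v            # tangent of the arc v->u at v
    tw = w - np.dot(w, v) * v            # tangent of the arc v->w at v
    tu, tw = tu / np.linalg.norm(tu), tw / np.linalg.norm(tw)
    return np.degrees(np.arccos(np.clip(np.dot(tu, tw), -1.0, 1.0)))

# eye at the origin, rectangular ceiling 2.5 units up, 4 x 3 units in size
corners = [np.array([x, y, 2.5]) for x, y in
           [(-2.0, -1.5), (2.0, -1.5), (2.0, 1.5), (-2.0, 1.5)]]
angles = [spherical_angle(corners[i], corners[i - 1], corners[(i + 1) % 4])
          for i in range(4)]
print(sum(angles))   # > 360: each projected corner looks "wider" than 90 degrees
```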

Is it true that our visual perception senses Euclidean space?

Look at this very interesting project

The structure of optical space under free viewing conditions

and especially at this paper:

“The structure of visual spaces”, by J.J. Koenderink and A.J. van Doorn, Journal of Mathematical Imaging and Vision, 31(2–3) (2008), pp. 171–187.

In particular, one of the very nice things this group is doing is to experimentally verify the perception of true facts of projective geometry (like Pappus’ theorem); see the numerical check below.
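For the record, here is a quick numerical check of the Pappus configuration itself (a toy verification, not the group's experimental protocol), using homogeneous coordinates, where the line through two points and the intersection of two lines are both cross products:

```python
import numpy as np

def line(p, q):   # homogeneous line through two points
    return np.cross(p, q)

def meet(l, m):   # homogeneous intersection point of two lines
    return np.cross(l, m)

def pt(x, y):     # homogeneous coordinates of an affine point
    return np.array([x, y, 1.0])

# three points on one line and three points on another (arbitrary choices)
A, B, C = pt(0, 0), pt(2, 0), pt(5, 0)          # on the line y = 0
a, b, c = pt(0, 1), pt(3, 2), pt(9, 4)          # on the line y = 1 + x/3

X = meet(line(A, b), line(a, B))
Y = meet(line(A, c), line(a, C))
Z = meet(line(B, c), line(b, C))

# Pappus: X, Y, Z are collinear, so this determinant vanishes (up to rounding)
print(np.linalg.det(np.stack([X, Y, Z])))
```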

From the abstract of the paper (boldface by me):

The “visual space” of an optical observer situated at a single, fixed viewpoint is necessarily very ambiguous. Although the structure of the “visual field” (the lateral dimensions, i.e., the “image”) is well defined, the “depth” dimension has to be inferred from the image on the basis of “monocular depth cues” such as occlusion, shading, etc. Such cues are in no way “given”, but are guesses on the basis of prior knowledge about the generic structure of the world and the laws of optics. Thus such a guess is like a hallucination that is used to tentatively interpret image structures as depth cues. The guesses are successful if they lead to a coherent interpretation. Such “controlled hallucination” (in psychological terminology) is similar to the “analysis by synthesis” of computer vision.

So, space is perceived to be Euclidean based on prior knowledge; that is, because prior controlled hallucinations have consistently led to coherent interpretations.