A discussion about the structure of visual space

In August I discovered the group blog The structure of visual space and I was impressed by the post The Maya theory of (visual) perception. A discussion with Bill Rosar started in the comments, which I consider interesting enough to lift it from there and put it here.

This discussion may also help to better understand some of the material from my blog. Several links were added in order to facilitate this. Here is the exchange of comments.

Me: “I just discovered this blog and I look forward to reading it in detail.

Re: “What I am calling into question is the ontological status of a physical world existing beyond the senses.”

Also in relation to the mind-body dualism, I think there is a logical contradiction in the “Cartesian Theater” argument by Dennett, due to the fact that Dennett’s theater is a theater in a box, already designed for a dualist homunculus-stage space perception (in contradistinction to the older, original Greek Theater).”

Bill Rosar: “Thank you for your posting, Marius Buliga, and welcome! It is great to have a mathematician join us, for obvious reasons, especially since you are interested in problems in geometry.

Your idea of the eye as a “theatron” is interesting, though I do not believe that the brain is computing anything, for the simple reason that it is not a computer, and doesn’t behave like one, as some neuroscientists are now publicly saying. It is people who perform computations, not brains.

Raymond Tallis, who posted “The Disappearance of Appearance” here two years ago, went to some pains to articulate the fallacious reasoning behind the computational metaphor of mind and brain in his marvelous little book WHY THE MIND IS NOT A COMPUTER.

It has long been a truism in cognitive psychology that we do not see our retinal images, and the “function” or process of vision is probably very different from the creation of images, because there is no image in the brain, nor anything like one. If anything, the pattern of stimulation on the retinae is “digested” by the visual system, broken down rather like food is into nutrients (as an alternative, think of chemical communication among insects).

To my knowledge, Descartes did not invoke the analogy of a theater for vision (or perception in general), so for Dennett to construe his ideas on such an analogy is dubious at the outset and, in this instance, just seems to make for a straw man. For that matter, Dennett does not seem to understand the reasons for dualism very fully, and as nearly as I can determine, never bothered to acquaint himself with the excellent volume edited by John Smythies and John Beloff, THE CASE FOR DUALISM (1989). His ill-informed refutations just strike me as facile and unconvincing (and his computational theory of mind has been roundly rejected by Ray Tallis as being fallacious).

My own invoking of theater here as an analogy is to reality itself, not just perception, and is therefore quite different from the view Dennett imputes to Cartesian dualism, though. I propose that physics studies the stagecraft of a reality that only (fully) exists when perceived–which is closer to Berkeley than Descartes, and is a view consistent with John Wheeler’s “observer-participant” model of the universe.

Theoretical physicist Saul-Paul Sirag advanced a “many realities” alternative to the Everett-Wheeler “many worlds” hypothesis, arguing that other realities are mathematically possible. That is why I have tendered the provocative notion that the reality we know is a sort of construction, one that is maintained by the physical constants–or so it seems. Sirag argued that it is not the only possible reality for that reason, and that the constants are comparable to the “chains” that hold the cave dwellers captive to the shadow play on the wall.

I propose instead that the senses are part of the reality-making “mechanism,” and that vision has more the character of a resolving mechanism than a picture-making one (not quite like the Bohm-Pribram holographic reality/brain analogy, though). That gets rid of the homunculus problem, because it turns the perception process inside out: The person and homunculus are one and the same, and visual space is just where it appears to be, viz. in front of us, not a picture made by the visual system in the brain. The forerunner of this view was James Culbertson. The flaw is that it requires a rejection or modification of the causal theory of perception, as we have discussed here. But causality is a metaphysical principle, not a physical one, and perhaps in this context at least requires some close scrutiny, just as Culbertson gave it.”

Me: “Re: “…for Dennett to construe his ideas on such an analogy is dubious at the outset and, in this instance, just seems to make for a straw man.” This is my impression also, but what can we learn from this about vision?

As a mathematician, maybe, I am quite comfortable with vagueness. What I get from the Greek theater / theater-in-a-box argument is that the homunculus is as artificial as the scenic space, or the outer, physical space. These two notions come in pairs: either one has both, or none. The positive conclusion of the argument is that we have to go higher: there is a relation, akin to a map-territory relation, which has on one side the homunculus and on the other side the space.

Let me elaborate a bit on the map-territory relation. What is a map of a territory? It is the outcome of a collection of procedures agreed by the cartographer and the map reader. The cartographer wanders through the territory and constructs a map by some procedure, say by measuring angles and distances using some apparatus. The cartographer follows a convention of representation of the results of his experiments on a piece of paper, let us call this convention “euclidean geometry” (but it might be “quantum mechanics” as well, or “relativity theory”…). The map reader knows that such convention exists and moreover, at least concerning basic facts, he knows how to read the map by applying the convention (for example, the reader of the map of a city, say, knows that straight lines are shortest on the maps as well as across the city). We may say that the map-territory relation (correspondence between points from the territory – pixels from the map) IS the collection of agreed procedures of representation of experiments of the cartographer on the map. The relation between the particular map and the particular territory is just an outcome of this map-territory relation.

Looking at this level, instead of speaking about the perception of the exterior by the homunculus, it is maybe more reasonable to speak, like in “The structure of visual spaces” by J.J. Koenderink and A.J. van Doorn, Journal of Mathematical Imaging and Vision, Vol. 31, No. 2-3 (2008), pp. 171-187, about the structure of the visual space as being the result of a controlled hallucination, based on prior experiences which led to coherent results.”
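An aside from me, outside the quoted exchange: the map-territory relation described above, as a collection of agreed procedures, can be put into a minimal, purely illustrative sketch. The names, the numbers and the particular “Euclidean” convention below are invented for the example; the point is only that the relation between a particular map and a particular territory is an outcome of the shared pair of procedures.

```python
import math

# Agreed convention ("euclidean geometry"): how field measurements become map
# points, and how the reader turns map points back into claims about the territory.

def chart(measurements):
    """Cartographer's procedure: turn (bearing, distance) readings taken from a
    survey point into map coordinates, under the agreed Euclidean convention."""
    return {name: (d * math.cos(a), d * math.sin(a))
            for name, (a, d) in measurements.items()}

def read_distance(map_points, a, b):
    """Reader's procedure: apply the same convention to the map in order to
    answer a question about the territory (here, a distance)."""
    (xa, ya), (xb, yb) = map_points[a], map_points[b]
    return math.hypot(xb - xa, yb - ya)

# Hypothetical field notes: (bearing in radians, distance in km) from the survey point.
notes = {"church": (0.0, 2.0), "bridge": (math.pi / 2, 1.5)}
town_map = chart(notes)
print(read_distance(town_map, "church", "bridge"))  # 2.5 km, by the shared convention
```

In this toy sense the map-territory relation is the pair (chart, read_distance); the dictionary town_map is just one of its outcomes.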

Bill Rosar: “Thank you, Marius! What can we learn from Dennett’s faulty analysis of vision, you ask? The “moral of the story” IMO is that any model based on computation presupposes that we know how people perform computations–or how the human mind does–which is something presently unknown to us, because we don’t really know what the “mind” really is–it’s just a name. All a computer does is automate a procedure we humans perform. To assume that Nature makes computers strikes me as a classic example of anthropomorphism, and Ray Tallis would agree. How then to get beyond that fallacy? Or, in the case of vision, to echo John Wheeler’s style of formulating foundational problems in physics, “How do you get vision without vision?”–that is, how to understand vision without presupposing it? That’s quite a feat!

A few months ago, when Bob French and I were last debating some of these points, I suggested that we turn to the evolution of the eye and see what that tells us. Conveniently, the evolution of the eye has been one of Richard Dawkins’s favorite examples to refute the idea of “intelligent design”.

In light of all the questions Dawkins’s account raises but leaves unanswered, intelligent design seems to make more sense (I offer no opinion on that myself). So it is a question of what the simplest eyes do and how the organisms possessing them use them. There is a nice little video on YouTube that highlights all that Dawkins does not explain in his simplest account of the evolution of the eye.

As for the map-territory analogy you suggest, it is comparable to the idea of “cortical maps” but shares the same conceptual pitfall as that of the perspective projection analogy I gave above, because as I noted, unlike being able to compare the flat perspective projection (map) with the 3-D *visual space* of which it is (supposedly) a projection, we cannot do that with visual space in relation to putative physical space, which lies beyond our senses. It seems to me that we are to some extent each trapped solipsistically within our own perceptual world.

Koenderink’s idea just seems like nonsense to me, because we don’t even really know what hallucinations are any more than how a hallucinatory space is created relative to our “normal” waking visual space (BTW we invited Koenderink to join the blog a few years ago, but he never replied). The *concept* of a hallucination is only useful when one has some non-hallucinatory experience to which to compare it–thus the same problem as the projection analogy above.

Trouble is we seem to be *inside* the system we are trying to understand, and therefore cannot assume an Archimedean point outside it from which to better grasp it (one of the fundamental realizations Einstein had in developing the theory of relativity, i.e., relativity is all *within* the system = universe).

As for visual space being non-Euclidean or not, I called into question many years ago the interpretation of the data upon which all theories of the geometry of visual space are based, because the “alley experiments” never took into account changes of projection on the retinae as a function of eye movement, i.e., the angles of objects projected on the retina are constantly changing as the eyes move. This has never been modeled mathematically, but it should be. Just look at the point where a wall meets the ceiling and run your eyes along its length, back and forth. You will notice that the angle of the line changes as you move your eyes along it.

Yes, the space and homunculus are an inseparable pair IMO–just look at Wheeler’s symbolic representation of the observer-participant universe (the eye looking at the U).”
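Another aside from me, outside the quoted exchange: the effect Bill describes, the apparent angle of the wall-ceiling edge changing as the eyes move along it, can be modelled with elementary projective geometry. Here is a minimal sketch, with invented numbers for the viewing distance and the height of the edge, taking the image plane perpendicular to the gaze direction.

```python
import numpy as np

def edge_tilt(t0, depth=2.0, height=1.0):
    """Apparent tilt (in degrees) at the fixation point of a horizontal edge,
    for an observer at the origin fixating the point of the edge at lateral
    offset t0. The edge runs along x at y = depth, z = height; the image
    plane is taken perpendicular to the gaze direction."""
    g = np.array([t0, depth, height])
    g = g / np.linalg.norm(g)               # gaze direction
    r = np.cross(g, [0.0, 0.0, 1.0])
    r = r / np.linalg.norm(r)               # image "right" axis
    u = np.cross(r, g)                      # image "up" axis
    tangent = np.array([1.0, 0.0, 0.0])     # direction of the edge in space
    return np.degrees(np.arctan2(tangent @ u, tangent @ r))

for t0 in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"fixation offset {t0:+.1f} m: tilt {edge_tilt(t0):+5.1f} deg")
```

The tilt vanishes only when the fixation point is straight ahead; to either side it grows with the lateral offset, which is just the change one notices when running the eyes along the line where the wall meets the ceiling.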

Bill Rosar: “I should hasten to emend my remarks above by stating that when we speak of “eyes” and “brains” such objects are only known to us by perception. So like any physical object, we cannot presuppose their existence as such separate from our perception of them–except by an act of a kind of faith (belief), much as we believe that the sun will rise every morning. Therefore talking about their “function” etc. is still all resting upon perceptions, without which we would have no knowledge of anything, ergo, something like Aristotle’s dictum “There is nothing in the mind that was not first in the senses.” Are there eyes and brains that exist independently of perceptions of them?”

Me: “Dear Bill, thank you for the interesting comments! I have several of my own (please feel free to edit the post if it is too long, boring or otherwise repellent for the readers of this blog):

1. It looks to me that we agree more than my faulty style of exposition shows: one cannot base an explanation of how the space is “re-constructed” in the brain on the structure of the physical space, full stop. It may be that what we call the structure of physical space is formed by features selected as significant by our brain, in the same way as a wind pipe extracts a fundamental note from random noise (thank you, Neal Stephenson).

2. We both agree (as well as Koenderink, see his “Brain a geometry engine”) that, as you write, “the senses are part of the reality-making “mechanism,” and that vision has more the character of a resolving mechanism than a picture-making one”.

3. Concerning “computing”: it is just a word. If “computing” is defined as something which could be done by Turing machines, or expressed in lambda calculus, etc., then I too believe that the brain is not computing in this sense. With effort and a lot of dissipation the brain is able to compute in this sense, but it does not do so naturally. (It would be an interesting project to experimentally “measure” this dissipation, starting for example from a paper by Mark Changizi, “Harnessing vision for computation”; here is the link to a pdf.)

4. But if we enlarge the meaning of the word “computing” then it may as well turn out that the brain does compute. The interesting question for a mathematician is: find a definition of “computation in an enlarged sense” which fits with what the brain does in relation to vision. This is a project dear to me, which might eventually have real-world applications; I don’t want to bother you with it (unless you are interested). I started it with the paper “Computing with space, a tangle formalism for chora and difference” and I reached the goal of connecting this (as a matter of proof of principle, not because I believe that the brain really computes in the classical sense of the word) with lambda calculus in the paper “Local and global moves on locally planar trivalent graphs, lambda calculus and lambda-Scale”.
(By the way, I cannot solve the problem of where to submit a paper like “Computing with space…”)

5. Concerning “hallucination”: as before, it is just a word. What I think is likely to be true is that, even if the brain does not have direct access to the physical space, it may learn a language of primitives of this space, by some Bayesian or other, unknown, procedure. This is akin to saying that we may explain why we see (suppose, for the sake of the discussion) a Euclidean 3D space not by using as hypothesis the fact that the physical space has this structure, but because our brains learn a family of primitives of such a structure and then lay in front of our eyes a “hallucination” which is constructed by the consistent use of those primitives.”
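A last aside from me, outside the quoted exchange: a standard toy version of point 5 is Bayesian cue combination, where the “percept” is not any raw measurement but a posterior built from noisy cues and a prior learned from past experience. All the numbers below are invented.

```python
import math

def fuse(cues, sigmas, prior_mean, prior_sigma):
    """Gaussian cue combination: the "percept" is the posterior mean, a
    precision-weighted compromise between noisy cues and a learned prior."""
    precisions = [1.0 / s ** 2 for s in sigmas] + [1.0 / prior_sigma ** 2]
    values = list(cues) + [prior_mean]
    post_precision = sum(precisions)
    post_mean = sum(p * v for p, v in zip(precisions, values)) / post_precision
    return post_mean, math.sqrt(1.0 / post_precision)

# Invented numbers: binocular disparity suggests 2.3 m, motion parallax 1.8 m,
# while past experience (the learned prior) says such objects sit around 2.0 m.
depth, spread = fuse([2.3, 1.8], [0.4, 0.6], prior_mean=2.0, prior_sigma=0.5)
print(f"'perceived' depth ~ {depth:.2f} m (+/- {spread:.2f} m)")
```

In this reading, the learned primitives play the role of the prior, and the “controlled hallucination” is the posterior laid in front of our eyes.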

Bill Rosar: “Thanks for these stimulating thoughts and ideas, Marius. Not to worry about the length of your blog postings. Mine are often (too) long, too. My remarks will be in two parts. This is part I.

When John Smythies and I started this blog (which was really intended to be a “think tank” rather than a blog), we agreed that, following the lead of Einstein, it may be necessary to re-examine fundamental concepts of space and geometry (not to mention time), thus John’s very first posting about Jean Nicod’s work in this regard, and a number of mine which followed.

One of these fundamental concepts that calls for closer scrutiny is space itself, or, to be more precise, the nature of *spatial extension,* both of which are abstractions, especially in mathematics (in this regard see Graham Nerlich’s excellent monograph, “The Shape of Space”).

We need to better understand the basis of those two abstractions–space and extension–IMO if we are to make progress on the nature of visual space, or the other sensory modalities that occupy perceptual space as a whole (auditory, tactile, olfactory, gustatory). Abstractions reflect both what they omit and what they assume, and it is the assumptions that we especially need to examine here. While clearly visual space is extended, what about smell? Are smells extended in space?

What we find is that there is a *hierarchy* in perceptual space, one that in man is dominated by visual sensation–what has been called the “dominant visual matrix” by psychologists studying perception. Even sounds are referred to visual loci (“localized”), and I think that can be said of smells, too. But in and of themselves it is not clear that even auditory sensations are extended in the same way that visual sensations are, because it is as if when a sound is gone, that part of the “soundscape” is also gone, but that which remains is visual space. In visual space an object may disappear, but the locus it occupied does not also disappear. For example, though we can point to the *visual* source of a sound we hear, we do not point to a sound–even the phrase sounds strange, and ordinary language reveals much about the nature of the perceptual world–or what the man of the street calls the “physical world.””

Bill Rosar: “Part II.

If that is so, why should we assume that physical space has all the properties of visual space and is perhaps not more like smell? Physics is making one big assumption!

I will always remember what Caltech mathematician Richard M. Wilson told me when I consulted him many years ago on ideas I had about how the geometry of visual space reflects changing perspective projections on the retinas. He said, “Keep it simple!” By that he meant being parsimonious and not jumping into fancy mathematical formulations without necessity. I am suggesting that we need to keep the mathematical apparatus here to a minimum, lest its elegance obscure the deeper truth we are seeking–just as Einstein cautioned.

So when we talk about the brain, I think we need to be mindful of what Ray Tallis says about it in his posting “The Disappearance of Appearance,” and just *how* we know about the brain, because we cannot talk about the extended world of physical space and exclude the brain itself from that as a (presumably) physically extended biophysical object. It is not that there is the physical world and there is the brain apart from it.

This ultimately becomes question-begging, because in talking about the brain, we are presupposing physical space, rather than explaining how we have arrived at the notion of physical space and extension. Certainly physical science would deny that physical space is created by the brain. Yet David Bohm would say that physics is largely based on an optical lens-like conception of the physical world, but that physical reality may be more like a hologram (now once again a popular analogy in cosmology because of Leonard Susskind’s theory).

Of course when Karl Pribram then talks about the brain being a mechanism that resolves the holonomic reality (“implicate order”) into a hologram or holographic image (“explicate order”), he forgets that the brain itself would presumably be part of the same holonomic implicate order, and would therefore be resolving itself. By what special power can it perform that trick?

So the very “picture” we have of the brain itself is no different from any other physical entity, as Ray Tallis has been at pains to show.

For now, I’m going to rest with just these rejoinders, and return to your other points later.”

7 thoughts on “A discussion about the structure of visual space”

  1. Hi Marius, what a wonderful post! I scratched some notes on a print-out that I will tidy up and submit tomorrow. For now, a question: in what observable basis is the sense of smell? If I am not mistaken it is the momentum basis, or whatever is associated with the resonant frequency modes of molecules. If vision is in the position and duration basis…

  2. I see what you suggest. “If vision is in the position and duration basis…” but it is not. We are not seeing pictures, nor movies. Position and duration are concepts which a fly, my favorite example, cannot grasp in its brain. Explaining vision by such concepts is like explaining how a computer works by saying that a computer is something which takes algorithms and data as input and spills data as output, while in fact a computer is a machine which takes data as input and spills data as output. The human who built the computer prepares the input data (as well as the internal structure of the computer) such that the machine treats the data as if it had a semantic meaning, but the semantics is in the mind of the human. “Position” and “duration” are semantic terms. The mystery is how the brain (of a fly, for example) makes sense of input and output data.

    1. Umm, your argument sounds like arguments against a version of Searle’s Chinese Room! There is no “information processing” inside some part of the room, but the room as a whole can indeed be understood as processing information. A question arises as to how to separate the organism from its environment simply for the sake of analysis. I think that there are many non-mutually exclusive explanations and we might be arguing between a pair of these.
      I see your point that “we are not seeing pictures, nor movies” (internally), but is the interpretation that the entire physical system, be it a fly or a human or a machine, is processing differing types of information invalid? We as individuals, whatever the “self” is, are “observing” a world and it may be accurate to say that some aspects of those observations fall under a position basis and others under a different basis.
      If we were to bracket out everything except for a consideration of the mechanism in the retina of the eyes, is this equivalent to a position basis measurement? Similarly, if we bracket out everything except the mechanism within the cochlea, is that a frequency (temporal) basis measurement? If we bracket out all but the olfactory system, and consider the “The nose as a spectroscope” (http://www.cf.ac.uk/biosi/staffinfo/jacob/teaching/sensory/olfact1.html) idea of Luca Turin (1996), then we see a case of measurement in the momentum basis. A more precise treatment is here: http://arxiv.org/abs/physics/0611205
      Does this make any sense?

      To propose a possible answer to the last question above, “how the brain (of a fly, for example) makes sense of input and output data”: the entire fly is involved in “computing” the data in some capacity and its “sense” of the world is an internal map of the computation. In other words, the subjective sense is the internal aspect of the processing. My remarks are an attempt to widen David Chalmers’ concept of property dualism (http://www.youtube.com/watch?v=LRrnAXgxS2U).

      1. Thanks for the interesting comment! Let us make the game a bit more difficult, by being more specific.

        Question: Take a Braitenberg vehicle with sensors reacting to light, can you attach to it an observable basis? Or, what does a Braitenberg vehicle “see”?

        Another fact is well known: what we think we see is a highly processed product of the brain. Indeed, supposing the retina is just an array of light sensors, the raw data collected by the retina has a big hole in the middle, large parts occupied by the nose, brows, cheeks; moreover, since the light detectors are behind the vascular system, everything detected is blurred by the web of blood vessels. We have two eyes which jerk all the time and most of what we see is detected by a very small part of the retinae, called the foveas. The raw data the eyes collect are incredibly far from what a photographic camera does, there is only a superficial resemblance between the two. After that comes the brain processing this raw data: there are different paths which process movement, central vision, peripheral vision, contours, colors (which are just codes for the light frequencies detected; the world is not colored any more than countries are, even if geographical atlases could make us think that this country is yellow and that country is blue, like Huckleberry thinks in a Tom Sawyer story). The brain does its job in about 20 ms and each time edits out the vessels, the nose, fills the hole with something credible and moreover edits the time, so that you don’t “see” the jerks, …

        I do believe in an objective reality (whatever that means), so from this viewpoint I accept that physics can explain a lot in terms of “position”, “momentum”, etc., but the problem is to find models (with predictive powers) for human, or biological, vision. The difficulty is to transform syntax into semantics; nobody really knows how to do this.

        “Position” is semantic. “Color”, “contour”, “time”, as well. Neural connections are syntactic (say; even if connectomics is not something everybody agrees is the right viewpoint, let us take it as a hypothesis for our discussion). How does the brain process syntactically (through connectomics, say) the raw input data and spill out “position”?

  3. “Take a Braitenberg vehicle with sensors reacting to light, can you attach to it an observable basis? Or, what does a Braitenberg vehicle “see”?”

    AFAIK, we can only form models of the internal phenomena of the Vehicle and choose which observable basis may apply. We can see if our model makes predictions of behavior and then test our model against experiments. We can never be sure of what the vehicle “sees”, but we might be able to find some bounds on the possible contents of what it might see.
    Yes, “position”, “Color”, “contour”, “time”, etc. are semantic, but the semantic content as “just” an internal narrative of the world does not seem to be an explanation that works (contra Dennett). We need more…
    I am trying to see if it is possible to use the Stone representation theorem to define an isomorphism between the internal modeling (such as what is described in your work) and the individual observer’s perception of an external world. An isomorphism between Noumena and Phenomena, if you like.
    This conjecture requires that the internal models can be faithfully represented as Boolean Algebras and predicts that the content of observations are consistent with representations in terms of totally disconnected compact Hausdorff (Stone) spaces. It might be possible to enlarge the domain of Stone spaces with the Pontryagin duality, but that is another discussion…

  4. Thanks for the great post and discussion, and thanks Stephen for directing me to it. These issues have been occupying my mind for many years, and I have some conjectures which I think begin to make a deeper and simpler sense of all of this.

    I enjoyed Ray Tallis’s book as well, particularly the first half, where he ably states, better than I could have, the case against overextending what I call the micro-impersonal and the macro-impersonal levels of description into the mid-range or meso-impersonal range. His terms are a bit folksy, but accurate – Neuromania and Darwinitis. In my own model, I see that even this criticism only goes half-way, as an accurate framework for talking about what we experience as our ordinary reality must include not only a range of impersonal hierarchy organized roughly by orders of magnitude of spatial extension, but – and here’s the important part – a range of personal, sub-personal, and super-personal phenomenology which runs orthogonal to the impersonal spectrum. Rather than being literally *ex-tended* across public space, the personal side of the continuum of sense is figurative and *in-tentional* through private experience, which is proto-temporal and non-mereological.

    I don’t want to spew too much, and I have my website at multisenserealism.com if anyone is interested, but I think that the whole question of perception can be understood more clearly within this framework. I think that Stephen is right about the Stone duality figuring in with the impersonal side (where logical algebras are mapped with topologies as realized bodies performing smooth continuous functions) and through that we can derive sort of an anti-Stone duality which spans pre- and post- mapping experiences. This anti-Stone duality would cover the relation between what I call trans-rational algebras (eidetic phenomenology) and entopic mereologies or apocatastatic gestalts, which are the rich narrative dreams, filled with characters and stories, myth, rhythms, poetry, etc that populate the super-personal (or super-signifying), personal, and sub-personal (sensorimotive-recursive participation).

    I agree, the brain is not (just) a computer, but I would not say that what we see is a hallucination either. The limitations of our visual sense, the optical illusions, blind spots, etc. are only shortcomings when compared with an assumed perfect visual sense which gives us access to an impersonal world on the meso-level. This, I would argue, is not the case, as the impersonal realism we assume is actually a lowest-common-denominator filtering of what is ultimately a personal diffraction of a boundaryless apocatastasis – the Totality/Singularity/Absolute, etc. By this I mean that unlike the impersonal side of the sense continuum, the personal side is not built up from nothingness but rather pinched off temporarily from everythingness. The interference pattern between the two ends of the spectrum is what I call ‘sense’ and the shadow of that interference pattern reflects to our conscious sense as ‘data’ or ‘information’ (really formation: to be informed requires a sense-making experience; these squiggles are forms, and the meaning of the words we hear in our mind is what truly ‘informs’ us).

    Craig Weinberg
