
After the post on Unlimited Detail, here is another example of something which may be seen as (facilitating the) computing with space: the Leap, by Leap Motion.

I see a difference and a common point, when I look at both.

* Difference:* The Leap, by Leap Motion, is a finished, or almost finished, product, while Unlimited Detail, by Euclideon, is still in development.

* Common point:* In both cases it is stressed that the respective products are outcomes of mathematical breakthroughs!

The paper on the calculus adapted to emergent algebras has been almost completely rewritten. I submitted it to arXiv, it shall appear tomorrow.

The new name of the paper is “λ-Scale, a lambda calculus for spaces with dilations”, and it is already available from my page at this link.

Let me try to explain what I am trying to do in this work related to “computing with space“. The goal is to understand the process of emergence, in its various precise mathematical forms, like:

– how does the dynamics of a big number of particles become the dynamics of a continuous system? Apart from the physics BS of neglecting infinities, I know of very few mathematically correct approaches. From my mixed background of calculus of variations and continuous media mechanics, I can mention an example of such an approach in the work of Andrea Braides on the Γ-convergence of the energy functional of a discrete system to the energy functional of a continuous system, and on atomistic models of solids.
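
As a reminder of what such a convergence looks like, the standard toy example (not Braides’ atomistic models themselves, just the simplest textbook instance) is the chain-of-springs energy Γ-converging to the Dirichlet energy:

```latex
E_\varepsilon(u) \;=\; \sum_{i=0}^{n-1} \varepsilon
\left( \frac{u_{i+1}-u_i}{\varepsilon} \right)^{2}
\;\xrightarrow{\;\Gamma\;}\;
E(u) \;=\; \int_0^1 |u'(t)|^{2}\,dt ,
\qquad \varepsilon = 1/n .
```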

– how to endow a metric space (like a fractal, or a sub-Riemannian space) with a theory of differential calculus? Translated: how to invent “smoothness” in spaces where, apparently, there is none? Because smoothness is certainly emergent. This is part of the field of non-smooth calculus.

– how to explain the profound resemblance between geometrical results of Gromov on groups with polynomial growth and combinatorial results of Breuillard, Green, Tao on approximate groups? In both cases a nilpotent structure emerges from considering larger and larger scales. The word “explain” means here: identify a general machine at work in both results.

– how to explain the way our brain deals with visual input? This is a clear case of emergence, because the input is the excitation of some receptors of the retina, and the output is almost completely not understood, except that we all know that we see objects which are moving, and complex geometrical relations among them. A fly sees as well; read From insect vision to robot vision by N. Franceschini, J.M. Pichon, C. Blanes. Related to this paper, I cite from the abstract (boldfaced by me):

We designed, simulated, and built a complete terrestrial creature which moves about and avoids obstacles solely by evaluating the relative motion between itself and the environment. The compound eye uses an array of elementary motion detectors (EMDs) as smart, passive ranging sensors.

Like its physiological counterpart, the visuomotor system is based on analogue, continuous-time processing and does not make use of conventional computers. It uses hardly any memory to adjust the robot’s heading in real time via a local and intermittent visuomotor feedback loop.

More generally, there seems to be a “computation” involved in vision, massively parallel and taking very few steps (up to six), but it is not understood how this is a computation in the mathematical, or computer science, sense. Conversely, the visual performance of any device based on computer-science computation is, up to now, dwarfed by that of any fly.

I identified a “machine of emergence” which is at work in some of the examples given above. Mathematically, this machine should have something to do with emergent algebras, but what about the computation part?

Probably geometers reason like flies: by definition, a geometrical statement is invariant up to the choice of maps. A sphere is not, geometrically speaking, a particular atlas of maps on the sphere. For a geometer, reproducing whatever it does by using ad-hoc enumeration by natural numbers, combinatorics and Turing machines is nonsense, because it is profoundly not geometrical.

On the other hand, the powerful use and control of abstraction is appealing to the geometer. This justifies the effort to import abstraction techniques from computer science and to replace the non-geometrical stuff by … whatever is more of a geometrical character.

For the moment, such efforts are mostly a source of frustration, a familiar feeling for any mathematician.

But at some point, in these times of profound changes in mathematics as well as in society, something beautiful, clear and streamlined will emerge from all these collective efforts.

This post is about the article “Geometry of imaginary spaces“, Journal of Physiology – Paris, 2011, in press, by Jan Koenderink.

Let me first quote from the abstract (boldfaced by me):

“Imaginary space” is a three-dimensional visual awareness that feels different from what you experience when you open your eyes in broad daylight. Imaginary spaces are experienced when you look “into” (as distinct from “at”) a picture for instance.

Empirical research suggests that imaginary spaces have a tight, coherent structure that is very different from that of three-dimensional Euclidean space.

[he proposes the structure of a bundle, with base the Euclidean plane, “the visual field”, and fiber the 1-dimensional affine line, “the depth domain”]

I focus on the topic of

how, and where, the construction of such geometrical structures, that figure prominently in one’s awareness, is implemented in the brain. My overall conclusion—with notable exceptions—is that present day science has no clue.

What is remarkable in this paper? Many many things, here are just three quotes:

– (p. 3) “in the mainstream account”, he writes, “… one starts from samples of … the retinal “image”. Then follows a sequence of image operations […] Finally there is a magic step: the set of derived images turns into a “representation of the scene in front of you”. “Magic” because image transformations convert structures into structures. Algorithms cannot convert mere structure into quality and meaning, except by magic. […] Input structure is not *intrinsically* meaningful, meaning needs to be imposed (magically) by some arbitrary format.”

– (p. 4) “Alternatives to the mainstream account have to […] replace inverse optics with “controlled hallucination” [related to this, see the post “The structure of visual space“]

– (p. 5) “In the mainstream account one often refers to the optical structure as “data”, or “information”. This is thoroughly misleading because it is to be understood in the Shannon (1948) sense of utterly meaningless information. As the brain structures transform the optical structure into a variety of structured neural activities, mainstream often uses semantic terms to describe them. This confuses facts with evidence. In the case of an “edge detector” (Canny, 1986) the very name suggests that the edge exists before being detected. This is nonsensical, the so-called edge detector is really nothing but a “first order directional derivative operator” (Koenderink and van Doorn, 1992). The latter term is to be preferred because it describes the transformation of structure into structure, whereas the former suggests some spooky operation” [related to this, see the tag archive “Map is the territory“]

Related to my spaces with dilations, let me finally quote from the “Final remarks”:

The psychogenetic process constrains its articulations through probing the visual front end. This part of the brain is readily available for formal descriptions that are close to the neural hardware. The implementation of the group of isotropic similarities, a geometrical object that can easily be probed through psychophysical means, remains fully in the dark though.

My first attempt to introduce a scaled version of lambda epsilon turned out to be wrong, but now I think I have found a way. It is a bit trickier than I thought. Let me explain.

In lambda epsilon calculus we have three operations (which are not independent), namely the lambda abstraction, the application and the emergent algebra (one parameter family of) operation(s), called dilations. If we want to obtain a scaled version then we have to “conjugate” with dilations. Looking at terms as being syntactic trees, this amounts to:

– start with a term and a scale ,

– transform a tree such that , into a tree , by conjugating with .

This can be done by recursively defining the transform . Graphically, we would like to transform the elementary syntactic trees of the three operations into this:
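
Since the figures do not survive in this version of the post, let me sketch the shape of this recursion in code. This is a hypothetical illustration only: the node names, and the concrete rewrite at the dilation node (rescaling its coefficient), are placeholders of my own, not the paper’s actual rules, which are the ones in the figure.

```python
# Hypothetical sketch: terms as syntactic trees, with a recursive
# scale transform acting on the three elementary trees.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:                 # lambda abstraction
    var: str
    body: object

@dataclass
class App:                 # application
    fun: object
    arg: object

@dataclass
class Dil:                 # dilation of a given coefficient, based at a term
    base: object
    coeff: float
    arg: object

def scale(term, eps):
    """Recursively transform the three elementary trees at scale eps."""
    if isinstance(term, Var):
        return term
    if isinstance(term, Lam):      # case (a): scaled lambda abstraction
        return Lam(term.var, scale(term.body, eps))
    if isinstance(term, App):      # case (b): scaled application
        return App(scale(term.fun, eps), scale(term.arg, eps))
    if isinstance(term, Dil):      # case (c): the familiar scaled dilation;
        # rescaling the coefficient by eps is my placeholder rule
        return Dil(scale(term.base, eps), term.coeff * eps,
                   scale(term.arg, eps))
    raise TypeError(term)
```

The point of the sketch is only the structural one: the transform is defined once per elementary tree, and conjugation by dilations happens node by node as the recursion descends.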

The problem is that, while (c) is just the familiar scaled dilation, the scaled abstraction from (a) does not make sense, because is not a variable. Also, the scaled application (b) is somehow mysterious.

The solution is to exploit the fact that it makes sense to make substitutions of the form because of the invertibility of dilations. Indeed, is equivalent with , therefore we may define to mean .
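
In the usual emergent-algebra notation (my choice of notation here; the paper’s may differ), writing $\delta^x_\varepsilon u$ for the dilation of coefficient $\varepsilon$ based at $x$, the invertibility invoked above reads:

```latex
\delta^x_{\varepsilon^{-1}} \circ \delta^x_{\varepsilon} = \mathrm{id},
\qquad\text{so}\qquad
u = \delta^x_\varepsilon v
\;\Longleftrightarrow\;
v = \delta^x_{\varepsilon^{-1}} u .
```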

If we look at the rule (ext2) here, the discussion about substitution becomes:

Therefore the correct scaled lambda, instead of (a) from the first figure, should be this:

The term (syntactic tree) from the LHS should be seen as a notation for the term from the RHS.

And you know what? The scaled application, (b) from the first figure, becomes less mysterious, because we can prove the following.

* 1.* Any defined a relative variable (remark that relative variables are terms!). The set of relative variables is denoted by .

* 2. * The function is defined for any term such that . The definition is this:

– ,

– for any

– for any such that

– is given by:

* 3. * is a scaled term, notation , if there is a term such that and such that .

* 4. *Finally, the operations on scaled terms are these:

– for any and the scaled application (of coefficient ) is

– for any scaled variable and any scaled term the scaled abstraction is

* 5.* With this, we can prove that , which is remarkable, I think!

I am updating the paper almost daily. It is the first time I am doing such a thing; it may be interesting to see what comes out of this way of writing.

The last addition is something I was thinking about for a long time, something which is probably well known in some circles, but maybe not. It is about eliminating variable (names) from such a calculus. This has been done in several ways, here is another (or the same as a previous one?).

The idea is simple. Let be any term and a variable. Look at the syntactic tree of , then glue to all leaves decorated by the leaves of a tree with nodes consisting of FAN-OUT gates.

Further on I shall identify syntactic trees with terms. I shall add to such trees a new family (of trees), constructed from the elementary tree depicted at (a) in the next figure. At (b) we see an example of such a tree. We consider also the trivial tree (c).

We have to think about such trees as devices for multiplying the occurrences of a variable. I call them FAN-OUT trees. All these trees, with the exception of the trivial one (c), are planar binary trees. We shall add the following rule of associativity:

* (ASSOC)* any two FAN-OUT trees with the same number of leaves are identified.

This rule will be applied under the following form: we are free to pass from a FAN-OUT tree to an equivalent one which has the same number of leaves. The name “associativity” comes from the fact that a particular instance of this rule (which deserves the name “elementary associativity move”) is this:
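
A minimal way to see what (ASSOC) buys us: encode a FAN-OUT tree as nested pairs, and the rule says that only the number of leaves matters. A small sketch (the encoding is mine, purely illustrative):

```python
# FAN-OUT trees as planar binary trees: a leaf is '*', a FAN-OUT gate
# is a pair (left, right).

def leaves(t):
    """Number of leaves of a planar binary tree."""
    if t == '*':
        return 1
    left, right = t
    return leaves(left) + leaves(right)

def comb(n):
    """Canonical representative: the right comb with n leaves."""
    t = '*'
    for _ in range(n - 1):
        t = ('*', t)
    return t

def normalize(t):
    """(ASSOC) in action: replace a FAN-OUT tree by the canonical
    tree with the same number of leaves."""
    return comb(leaves(t))

# two differently associated FAN-OUT trees with 3 leaves...
a = (('*', '*'), '*')     # left-leaning
b = ('*', ('*', '*'))     # right-leaning
# ...are identified by (ASSOC):
assert normalize(a) == normalize(b)
```

So, up to (ASSOC), a FAN-OUT tree is determined by its number of leaves, which is exactly the number of occurrences of the variable it multiplies.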

With the help of these FAN-OUT trees we shall replace the multiple occurrences of a variable by such trees. Let us see what the rules of the calculus become in this notation.

α-conversion is no longer needed, because variables no longer have names. Instead, we are free to graft usual trees to FAN-OUT trees. This way, instead of terms we shall use “neurons”.

* Definition:* The forgetful form (b) of a syntactic tree (a) (of a term) is the tree with the variable names deleted.

A neuron is the result of gluing the root of a forgetful form of a syntactic tree to the root of a FAN-OUT tree, like in the following figure.

The axon of the neuron is the FAN-OUT tree. The dendrites of the neuron are the undecorated edges of the forgetful form of the syntactic tree. A dendrite is bound if it is a left edge pointing to a node decorated by . For any bound dendrite, the set of dependent dendrites consists of those dendrites of the sub-tree starting from the right edge of the node (where the bound dendrite is connected via a left edge) which are not themselves bound. Otherwise a dendrite is free. The soma of the neuron is the forgetful form of the syntactic tree.

* Substitution.* We are free to connect leaves of axons of neurons with dendrites of other neurons.

In order to explain the substitution we have to add the following rule:

* (subst)* The leaves of an axon cannot be glued to more than one bound dendrite of another neuron. If a leaf of the axon of the neuron is connected to a bound dendrite of the neuron , then it has to be the leftmost leaf of the axon of . Moreover, in this case all other leaves of which are connected with have to be connected only via dendrites which are dependent on the mentioned bound dendrite of , possibly by adding leaves to the axon of by using (ASSOC).

Substitution is therefore assimilated to connecting neurons.

* New (beta*).* The rule (beta*) takes the following graphical form. In the next figure appear only the leaf of the neuron connecting to the node (the other leaves of the axon of not drawn) and only

The neuron may have other dendrites, not figured. According to the definition of the neuron, together with the node and adjacent edges it forms a bigger neuron. Also figured is another leaf of the axon of the neuron , which may point to another neuron. Finally, in the RHS, the bound dendrite loses all its dependents.

* Important remark.* Notice that there may be multiple outcomes from (subst) and (beta*)! Indeed, this is due to the fact that connections could be made in several ways, by using all or only part of the dependent dendrites. Because of that, it seems that this version of the calculus is richer than the previous one, but I am not at all sure if this is the case.

Another thing to be remarked is that the “” sign in this version of these rules is reflexive and transitive, but not symmetric. Previously the “=” sign was supposed to be symmetric.

* (R1)* That is easy, we use the emergent algebra gates:

* (R2)* Easy as well:

The rules (ext1), (ext2) take obvious forms.

Therefore, if we think about computation as reduction to a normal form (if any), then in this graphical notation with “neurons” computation amounts to re-wiring neurons, or to changing the wiring inside the soma of some neurons.

Variables disappeared, at the price of introducing FAN-OUT trees.

As concerns the remark previously made, we could obtain a calculus which is clearly equivalent with by modifying the definition of the neuron, in this way.

In order to clearly specify which are the dependent dendrites, we could glue to any bound dendrite a FAN-OUT tree, such that the leaves of this tree connect again with a set of dependent dendrites of the same neuron. In this way, substitution and (beta*) will amount to erasing such a FAN-OUT tree and then performing the moves as previously explained, but using this time all the dependent dendrites which were connected to the bound dendrite by the erased tree.

computing with space | open notebook