Tag Archives: spacebook

Computing with space: done!

The project of giving a meaning to the “computing” part of “Computing with space” is done, via the \lambda-Scale calculus and its graphic lambda calculus (still in preview mode).

_______________________

UPDATE (09.01.2013): There is now a web tutorial about graphic lambda calculus on this blog.  At some point a continuation of “Computing with space …” will follow, with explicit use of this calculus, as well as applications which were mentioned only briefly, like why the diagram explaining the “emergence” of the Reidemeister III move gives a discretized notion of scalar curvature for a metric space with dilations.

_______________________

Explanations.  In the “Computing with space…” paper I claimed that:

1. – there is a “computing” part hidden behind the idea of emergent algebras

2. – which is analogous to the hypothetical computation taking place in the front-end of the visual system.

Part 1 is done, essentially. The graphic version of \lambda-Scale is in fact very powerful, because it contains as sectors:

– lambda calculus

– (discrete, abstract) differential calculus

– the formalism of tangle diagrams.

These “sectors” appear as subsets S of graphs in GRAPH (see the preview paper for definitions), for which the condition G \in S is global, together with respective selections of local or global graphic moves (from those available on GRAPH) which transform elements of S into elements of S.

For example, for lambda calculus the relevant set is \lambda-GRAPH and the moves are (ASSOC) and  the graphic \beta move (actually, in this way we obtain a formalism a bit nicer than lambda calculus; in order to obtain exactly lambda calculus we have to add the stupid global FAN-OUT and global pruning moves).

For differential calculus we need to restrict to graphs like those in \lambda-GRAPH, but also admitting dilation gates. We may directly go to \lambda-Scale, which contains lambda calculus (made weaker by adding the (ext) rules, corresponding to \eta-conversion) and differential calculus (via emergent algebras). The moves are (ASSOC), graphic \beta move, (R1), (R2), (ext1), (ext2) and, if we want a dumber version,  some global FAN-OUT and pruning moves.

For tangle diagrams see the post Four symbols and wait for the final version of the graphic calculus paper.

So now, I declare part 1 CLOSED. What remains amounts to patiently writing all the details, which is an interesting activity by itself.

Part 2 is open, albeit now I have much more hope of giving a graph model for the front-end of the visual system, one which does not rely on assumptions about the geometric structure of the space, linear algebra, tasks and other niceties of the existing models.

UPDATE 02.07.2012. I put the graphical formalism paper on arxiv; it should appear on 03.07.2012. I left out of the paper a big chunk of very intriguing facts about various possible crossing definitions, for another paper.

A geometric viewpoint on computation?

Let me try to explain what I am trying to do in this work related to “computing with space“. The goal is to understand the process of emergence, in its various precise mathematical forms, like:

– how does the dynamics of a big number of particles become the dynamics of a continuous system? Apart from the physics BS of neglecting infinities, I know of very few mathematically correct approaches. From my mixed background of calculus of variations and continuous media mechanics, I can mention an example of such an approach in the work of Andrea Braides on the \Gamma-convergence of the energy functional of a discrete system to the energy functional of a continuous system, and atomistic models of solids.

– how to endow a metric space (like a fractal, or sub-riemannian space) with a theory of differential calculus? Translated: how to invent “smoothness” in spaces where there is none, apparently? Because smoothness is certainly emergent. This is part of the field of non-smooth calculus.

– how to explain the profound resemblance between geometrical results of Gromov on groups with polynomial growth and combinatorial results of Breuillard, Green, Tao on approximate groups? In both cases a nilpotent structure emerges from considering larger and larger scales. The word “explain” means here: identify a general machine at work in both results.

– how to explain the way our brain deals with visual input? This is a clear case of emergence, because the input is the excitation of some receptors of the retina, while the output is almost completely not understood, except that we all know that we see objects which are moving and complex geometrical relations among them. A fly sees as well; read “From insect vision to robot vision” by N. Franceschini, J.M. Pichon, C. Blanes. Related to this paper, I cite from the abstract (boldface mine):

  We designed, simulated, and built a complete terrestrial creature which moves about and avoids obstacles solely by evaluating the relative motion between itself and the environment. The compound eye uses an array of elementary motion detectors (EMDs) as smart, passive ranging sensors. Like its physiological counterpart, the visuomotor system is based on analogue, continuous-time processing and does not make use of conventional computers. It uses hardly any memory to adjust the robot’s heading in real time via a local and intermittent visuomotor feedback loop.

More generally, there seems to be a “computation” involved in vision, massively parallel and taking very few steps (up to six), but it is not understood how this is a computation in the mathematical, or computer science, sense. Conversely, the visual performances of any device based on computer-science computation up to now are dwarfed by any fly.

I identified a “machine of emergence” which is at work in some of the examples given above. Mathematically, this machine should have something to do with emergent algebras, but what about the computation part?

Probably geometers reason like flies: by definition, a geometrical statement is invariant up to the choice of maps. A sphere is not, geometrically speaking, a particular atlas of maps on the sphere. For a geometer, reproducing whatever it does by using ad-hoc enumeration by  natural numbers, combinatorics  and Turing machines is nonsense, because profoundly not geometrical.

On the other hand, the powerful use and control of abstraction is appealing to the geometer. This justifies the effort to import abstraction techniques from computer science and to replace the non-geometrical stuff by … whatever is of a more geometrical character.

For the moment, such efforts are mostly a source of frustration, a familiar feeling for any mathematician.

But at some point, in these times of profound changes in mathematics as well as in society, something beautiful, clear and streamlined will emerge from all these collective efforts.

Scaled lambda epsilon

My first attempt to introduce a scaled version of lambda epsilon turned out to be wrong, but now I think I have found a way. It is a bit trickier than I thought. Let me explain.

In lambda epsilon calculus we have three operations (which are not independent), namely the lambda abstraction, the application and the emergent algebra (one parameter family of) operation(s), called dilations. If we want to obtain a scaled version then we have to “conjugate” with dilations. Looking at terms as being syntactic trees, this amounts to:

– start with a term A and a scale \varepsilon \in \Gamma,

– transform a tree T such that FV(T) \cap FV(A) = \emptyset into a tree A_{\varepsilon}[T], by conjugating with A \circ_{\varepsilon} \cdot.
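In formulas (my paraphrase, consistent with the scaled application defined at point 4 below), conjugating a binary operation \mu with A \circ_{\varepsilon} \cdot means

u \mu^{\varepsilon}_{A} v = A \circ_{\varepsilon^{-1}} ((A \circ_{\varepsilon} u) \mu (A \circ_{\varepsilon} v))

and, as explained below, this works directly for the dilations, while the scaled lambda and the scaled application need more care.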

This can be done by recursively defining the transform T \mapsto A_{\varepsilon}[T]. Graphically, we would like to transform the elementary syntactic trees of the three operations into this:


The problem is that, while (c) is just the familiar scaled dilation, the scaled \lambda from (a) does not make sense, because A \circ_{\varepsilon} u is not a variable. Also, the scaled application (b) is somehow mysterious.

The solution is to exploit the fact that it makes sense to make substitutions of the form B[ A \circ_{\varepsilon} u : = C], because of the invertibility of dilations. Indeed, A \circ_{\varepsilon} u = C is equivalent to u = A \circ_{\varepsilon^{-1}} C, therefore we may define B[ A \circ_{\varepsilon} u : = C] to mean B[u : = A \circ_{\varepsilon^{-1}} C].
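The one-line computation behind this (my addition, using only the fact that dilations with the same base compose, A \circ_{\mu} (A \circ_{\varepsilon} u) = A \circ_{\mu \varepsilon} u, and A \circ_{1} u = u):

A \circ_{\varepsilon^{-1}} C = A \circ_{\varepsilon^{-1}} (A \circ_{\varepsilon} u) = A \circ_{1} u = u

so applying A \circ_{\varepsilon^{-1}} \cdot to both sides of A \circ_{\varepsilon} u = C recovers u, and conversely.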

If we look at the rule (ext2) here, the discussion about substitution becomes:

Therefore the correct scaled lambda, instead of (a)  from the first figure, should be this:

The term (syntactic tree) from the LHS should be seen as a notation for the term from the RHS.

And you know what? The scaled application, (b) from the first figure, becomes less mysterious, because we can prove the following.

1. Any u \in X \setminus FV(A) defines a relative variable u^{\varepsilon}_{A} := A \circ_{\varepsilon} u (remark that relative variables are terms!). The set of relative variables is denoted by X_{\varepsilon}(A).

2. The function B \mapsto A_{\varepsilon}[B] is defined for any term B \in T such that FV(A) \cap FV(B) = \emptyset. The definition is this:

–  A_{\varepsilon}[A] = A,

–  A_{\varepsilon}[u] = u for any u \in X \setminus FV(A)

–  A_{\varepsilon}[ B \mu C] = A \circ_{\varepsilon^{-1}} ((A \circ_{\varepsilon} A_{\varepsilon}[B]) \mu (A \circ_{\varepsilon} A_{\varepsilon}[C]))  for any B, C \in T such that FV(A) \cap (FV(B) \cup FV(C))= \emptyset (see the sketch after point 5 below),

–  A_{\varepsilon}[ u \lambda B] is given by:

 

 

3. B is a scaled term, notation B \in T_{\varepsilon} (A), if there is a term B' \in T such that FV(A) \cap FV(B') = \emptyset and such that B = A_{\varepsilon}[B'].

4. Finally, the operations on scaled terms are these:

– for any \mu \in \Gamma and B, C \in T_{\varepsilon}(A) the scaled application (of coefficient \mu) is

B \mu^{\varepsilon}_{A} C = A \circ_{\varepsilon^{-1}} ((A \circ_{\varepsilon} B) \mu (A \circ_{\varepsilon} C))

– for any scaled variable  u^{\varepsilon}_{A} \in X_{\varepsilon}(A)  and any scaled term B \in T_{\varepsilon}(A) the scaled abstraction is

5. With this, we can prove that (u^{\varepsilon}_{A} \lambda^{\varepsilon}_{A} B) 1^{\varepsilon}_{A} C = (u \lambda B) 1 C = B[u := C], which is remarkable, I think!
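Here is the sketch promised at point 2: a minimal, hypothetical implementation of the transform B \mapsto A_{\varepsilon}[B]. The encoding of terms as nested tuples, the tags and the helper inv are my choices, not from the paper, and the lambda-abstraction case is omitted because its definition is given by a figure.

# A toy encoding (mine, not from the paper): a term is either
#   ('var', name)        a variable u
#   ('app', mu, B, C)    the application  B mu C,  with mu in Gamma
#   ('dil', eps, A, B)   the dilation     A o_eps B
# inv computes eps -> eps^{-1} in the commutative group Gamma.

def scale(A, eps, T, inv):
    """Return A_eps[T]; assumes FV(A) and FV(T) are disjoint where required."""
    if T == A:
        return A                                  # A_eps[A] = A
    if T[0] == 'var':
        return T                                  # A_eps[u] = u
    if T[0] == 'app':
        _, mu, B, C = T
        # A_eps[B mu C] = A o_{eps^{-1}} ((A o_eps A_eps[B]) mu (A o_eps A_eps[C]))
        sB = ('dil', eps, A, scale(A, eps, B, inv))
        sC = ('dil', eps, A, scale(A, eps, C, inv))
        return ('dil', inv(eps), A, ('app', mu, sB, sC))
    # the case A_eps[u lambda B] is given by the figure above and omitted here
    raise NotImplementedError(T[0])

# toy usage, with Gamma the positive rationals under multiplication:
A = ('var', 'a')
T = ('app', 1, ('var', 'u'), ('var', 'v'))        # the term  u 1 v
print(scale(A, 2, T, lambda e: 1 / e))

With this encoding, the scaled application from point 4 is exactly what the 'app' branch returns.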

The neuron unit

I am updating the paper on \lambda \epsilon almost daily. It is the first time I am doing such a thing; it may be interesting to see what comes out of this way of writing.

The last addition is something I was thinking about for a long time, something which is probably well known in some circles, but maybe not. It is about eliminating variable names from such a calculus. This has been done in several ways; here is another (or the same as a previous one?).

The idea is simple. Let T be any term and x \in Var(T) a variable. Look at the syntactic tree of T, then glue to all leaves decorated by x the leaves of a tree whose nodes are FAN-OUT gates.

Further on I shall identify syntactic trees with terms. I shall add to such trees a new family (of trees), constructed from the elementary tree depicted at (a) in the next figure. At (b) we see an example of such a tree. We consider also the trivial tree (c).

We have to think about such trees as devices for multiplying the occurrences of a variable. I call them FAN-OUT trees. All these trees, with the exception of the trivial one (c), are planar binary trees. We shall add the following rule of associativity:

(ASSOC) any two FAN-OUT trees with the same number of leaves are identified.

This rule will be applied under the following form: we are free to pass from a FAN-OUT tree to an equivalent one which has the same number of leaves. The name “associativity” comes from the fact that a particular instance of this rule (which deserves the name  “elementary associativity move”) is this:
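In code, one can take (ASSOC) literally and identify a FAN-OUT tree with its number of leaves. A minimal sketch, under my own toy encoding (not from the post), where a FAN-OUT tree is either the symbol 'leaf' or a pair of subtrees:

def leaves(t):
    """Number of leaves of a FAN-OUT tree."""
    return 1 if t == 'leaf' else leaves(t[0]) + leaves(t[1])

def assoc_identified(t1, t2):
    """True iff t1 and t2 are identified by the (ASSOC) rule."""
    return leaves(t1) == leaves(t2)

# the elementary associativity move: ((a b) c) ~ (a (b c)), both with 3 leaves
assert assoc_identified((('leaf', 'leaf'), 'leaf'),
                        ('leaf', ('leaf', 'leaf')))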

With the help of these FAN-OUT trees we shall replace the multiple occurrences of a variable. Let us see what the rules of \lambda \varepsilon calculus become by using this notation.

\alpha conversion is no longer needed, because variables no longer have names. Instead, we are free to graft usual trees to FAN-OUT trees. This way, instead of terms we shall use “neurons”.

Definition: The forgetful form (b) of a syntactic tree (a) (of a term) is the tree with the variable names deleted.

A neuron is the result of gluing the root of a forgetful form of a syntactic tree to the root of a FAN-OUT tree, like in the following figure.

The axon of the neuron is the FAN-OUT tree. The dendrites of the neuron are the undecorated edges of the forgetful form of the syntactic tree. A dendrite is bound if it is a left edge pointing to a node decorated by \lambda. For any bound dendrite, the dependent dendrites are those of the sub-tree starting from the right edge of the \lambda node (where the bound dendrite is connected via a left edge) which are moreover not bound. Otherwise a dendrite is free. The soma of the neuron is the forgetful form of the syntactic tree.

Substitution. We are free to connect leaves of axons of neurons with dendrites of other neurons. In order to explain the substitution we have to add the following rule:

(subst) The leaves of an axon cannot be glued to more than one bound dendrite of another neuron. If a leaf of the axon of the neuron A is connected to a bound dendrite of the neuron B, then it has to be the leftmost leaf of the axon of A. Moreover, in this case all other leaves of A which are connected with B have to be connected only via dendrites which are dependent on the mentioned bound dendrite of B, possibly after adding leaves to the axon of A by using (ASSOC).

Substitution is therefore assimilated to connecting neurons.

New (beta*). The rule (beta*) takes the following graphical form. In the next figure appear only the leaf of the neuron A connecting to the \lambda node (the other leaves of the axon of A are not drawn) and only some of the dendrites depending on the bound one relative to the \lambda node which is figured.

The neuron A may have other dendrites, not figured. According to the definition of the neuron, A together with the \lambda node and adjacent edges forms a bigger neuron. Also figured is another leaf of the axon of the neuron B, which may point to another neuron. Finally, in the RHS, the bound dendrite loses all its dependents.

Important remark. Notice that there may be multiple outcomes from (subst) and (beta*)! Indeed, this is due to the fact that connections could be made in several ways, by using all or only part of the dependent dendrites. Because of that, it seems that this version of the calculus is richer than the previous one, but I am not at all sure if this is the case.

Another thing to remark is that the “=” sign in this version of these rules is reflexive and transitive, but not symmetric. Previously the “=” sign was supposed to be symmetric.

(R1)  That is easy, we use the emergent algebra gates:

(R2)    Easy as well:

The rules (ext1), (ext2) take obvious forms.

Therefore, if we think about computation as reduction to a normal form (if any), then in this graphical notation with “neurons” computation amounts to re-wiring neurons or changing the wiring inside the soma of some neurons.

Variables disappeared, at the price of introducing FAN-OUT trees.

As concerns the remark previously made, we could obtain a calculus which is clearly equivalent to \lambda \varepsilon by modifying the definition of the neuron in this way: in order to clearly specify which are the dependent dendrites, we could glue to any bound dendrite a FAN-OUT tree, such that the leaves of this tree connect again with a set of dependent dendrites of the same neuron. In this way, substitution and (beta*) will amount to erasing such a FAN-OUT tree and then performing the moves, as previously explained, but using this time all the dependent dendrites which were connected to the bound dendrite by the erased tree.

Rules of lambda epsilon calculus

I updated the paper on lambda epsilon calculus. See the link to the current version (updated daily) or check the arxiv article, which will be updated as soon as a stable version emerges.

Here are the rules of this calculus:

(beta *)   (x \lambda A) \varepsilon B = (y \lambda (A[x:=B])) \varepsilon B for any fresh variable y,

(R1) (Reidemeister one) if x \not \in FV(A) then (x \lambda A) \varepsilon A = A

(R2) (Reidemeister two) if x \not \in FV(B) then (x \lambda ( B \mu x)) \varepsilon A = B (\varepsilon \mu ) A

(ext1) (extensionality one)  if  x \not \in FV(A) then x \lambda (A 1 x) = A

(ext2) (extensionality two) if  x \not \in FV(B) then (x \lambda B) 1 A = B

These are taken together with usual substitution and \alpha-conversion.
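As a quick sanity check (my computation, not from the paper), take the identity term x \lambda x and apply it to any term B, at any scale \varepsilon \in \Gamma:

(x \lambda x) \varepsilon B = (y \lambda (x[x:=B])) \varepsilon B = (y \lambda B) \varepsilon B = B

where the first equality is (beta *) and the last is (R1), which applies because y is fresh, hence y \not \in FV(B). So the identity term acts as the identity at every scale.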

The relation between the operations from \lambda \varepsilon calculus and emergent algebras is illustrated in the next figure.

Lambda epsilon calculus, an attempt for a language of “computing with space”

Just posted on the arxiv the paper Lambda calculus combined with emergent algebras.   Also, I updated my page on emergent algebras and computing with space, taking into consideration what I try to do in the mentioned paper. Here are some excerpts (from the mentioned page) about what I think now “computing with space” may be:

Let’s look at some examples of spaces: the real world or a virtual world of a game, as seen by a fly or by a human; “abstract” mathematical spaces such as manifolds, fractals, symmetric spaces, groups, linear spaces.
To know what a space is, to define it mathematically, is less interesting than to know what one can do in such a space. I try to look at spaces from a kind of a “computational viewpoint”.
A model of such a computation could be the following process: Alice and Bob share a class of primitives of spaces (like a common language which Alice can use in order to communicate to Bob what she sees when exploring the unknown space). Alice explores an unknown territory and sends to Bob the operational content of the exploration (i.e. maps of what she sees and how she moves, expressed in the common language). Then Bob, who is placed in a familiar space, tries to make sense of the maps received from Alice. Usually, he can’t put the received information together in the familiar space (for example because there is a limit to the accuracy of maps of the unknown territory into the familiar one). Instead, Bob tries to simulate the operational content of Alice’s exploration by interpreting the messages from Alice (remember, expressed in a language of primitives of space) in his space.
A language of space primitives could be (or contain as a part) the emergent algebras. Ideally, such a language of primitives should be described by:
A – a class of gates (operations), which represent the “map-making” relation, with an abstract scale parameter attached, maybe also with supplementary operations (like lambda abstraction?),
B – a class of variables (names) which represent generic points of a space, and a small class of terms (words) expressed in terms of variables and gates from (A) (resembling combinators in lambda calculus?). These are the “generators” of the space and they have the emergent algebras property, namely that as the scale goes to zero, uniformly with respect to the variables, the output converges to a new operation.
C – a class of rewriting rules saying that some simple assemblies of generators of space have equivalent function, or saying that relations between those simple assemblies converge to relations of the space as the scale goes to zero.

The article on lambda epsilon calculus is a first for me in this field; I would be grateful for any comments and suggestions for improvements.

The gnomon in the greek theater of vision, I

In the post Theatron as an eye I proposed the Greek Theater, or Theatron (as opposed to the “theater in a box”, or Cartesian Theater, see further) as a good model for   vision.

Any model of vision should avoid the homunculus fallacy. What seems less understood is that any good model of vision should also avoid the scenic space fallacy. The Cartesian Theater argument against the existence of the homunculus is not, by construction, an argument against the scenic space. Or, in the Cartesian Theater, homunculus and scenic space come into existence as a pair. As a conclusion, it seems that there could not be a model of vision which avoids the homunculus but does not avoid the scenic space. This observation is confirmed by facts: there is no good, rigorous model of vision to date, because all proposed models rely on the a priori existence of a scenic space. There is, on the contrary, a great quantity of experimental data and of theoretical partial models which show just how complex the problem of vision is. But, essentially, from a mathematician’s viewpoint, it is not known how to even formulate the problem of vision.

In the influential paper “The brain a geometry engine”, J. Koenderink proposes that (at least a part of) the visual mechanism is doing a kind of massively parallel computation, by using an embodiment of the geometry of jet spaces (the euclidean infinitesimal geometry of a smooth manifold) of the scenic space. Jean Petitot continues along this idea, by proposing a neurogeometry of vision based essentially on the sub-riemannian geometry of those jet spaces. This is an active mathematical area of research; see for example “Anthropomorphic image reconstruction via hypoelliptic diffusion”, by Ugo Boscain et al.

Sub-riemannian geometry is one of my favorite mathematical subjects, because it is just a particular model of a metric space with dilations. Such spaces are somehow fundamental for the problem of vision, I think. Why? Because behind them there is a purely relational formalism, called “emergent algebra“, which allows one to understand “understanding space” in a purely relational way. Thus I hope emergent algebras could be used in order to formulate the problem of vision as the problem of computing with space, which in turn could be used for getting a good model of vision.

To my surprise, some time ago I found that this very complex subject has a respectable age, starting with Pythagoras and Plato! This is how I came to write this blog, as an effort to disseminate what I progressively understand.

This brings me back to the theater and, finally, to the gnomon. I cite from the previous wiki link:

Hero defined a gnomon as that which, added to an entity (number or shape), makes a new entity similar to the starting entity.

In the Greek theater, a gnomon sits in the center of the orchestra (which is the circular place where things happen in the Greek theater, later replaced by the stage in the theater in a box). Why?

Three problems and a disclaimer

In this post I want to summarize the list of problems I am currently thinking about. This is not a list of regular mathematical problems, see the disclaimer on style written at the end of the post.

Here is the list:

1. what is “computing with space“? There is something happening in the brain (of a human or of a fly) which is akin to a computation, but is not a logical computation: vision. I call this “computing with space”. In the head there are a bunch of neurons chirping one to another, that’s all. There is no euclidean geometry, there are no a priori coordinates (or other extensive properties), there are no problems to solve for them neurons, there is no homunculus and no outer space, only a dynamical network of gates (neurons and their connections). I think that a part of an answer is the idea of emergent algebras (albeit there should be something more than this). Mathematically, a closely related problem is this: Alice is exploring an unknown space and then sends to Bob enough information so that Bob could “simulate” the space in the lab. See this, or this, or this.

Application: give the smallest hint of a purely relational model of vision, without using any a priori knowledge of the (euclidean or other) geometry of outer space or any pre-defined charting of the visual system (don’t give names to neurons, don’t give them “tasks”; they are not engineers).

2. non-commutative Baker-Campbell-Hausdorff formula. From the solution of Hilbert’s fifth problem we know that any locally compact topological group without small subgroups can be endowed with the structure of an “infinitesimally commutative” normed group with dilations. This is true because one-parameter sub-groups and Gleason metrics are used to solve the problem. The BCH formula then solves another problem: from the infinitesimal structure of a (Lie) group (that is, the vector space structure of the tangent space at the identity and the manifold structure of the Lie group) and from supplementary infinitesimal data (that is, the Lie bracket), construct the group operation.
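For reference (a standard fact, not specific to this post), the first terms of the classical BCH formula are

\log (\exp X \exp Y) = X + Y + \frac{1}{2} [X,Y] + \frac{1}{12} [X,[X,Y]] + \frac{1}{12} [Y,[Y,X]] + \ldots

so the group operation is reconstructed from the linear structure and the bracket alone.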

The problem of the non-commutative BCH is the following: suppose you are in a normed group with dilations. Then construct the group operation from the infinitesimal data (the conical group structure of the tangent space at identity and the dilation structure) and supplementary data (the halfbracket).

The classical BCH formula corresponds to the choice of the dilation structure coming from the manifold structure of the Lie group.

In the case of a Carnot group (or a conical group), the non-commutative BCH formula should be trivial (i.e. x y = x \cdot y, the equivalent of xy = x+y in the case of a commutative Lie group, where by convention we neglect all “exp” and “log” in formulae).

3. give a notion of curvature which is meaningful for sub-riemannian spaces. I propose the pair curvdimension – curvature of a metric profile. There is a connection with problem 1: there is a link between the curvature of the metric profile and the “emergent Reidemeister 3 move” explained in section 6 of the computing with space paper. Indeed, on page 36 there is this figure. Yes, R^{x}_{\epsilon \mu \lambda} (u,v) w is a curvature!

Disclaimer on style. I am not a problem solver, in the sense that I don’t usually like to find the solution of an already formulated problem. Instead, what I do like is to understand some phenomenon and prove something about it in the simplest way possible. When thinking about a subject, I like to polish the partial understanding I have by refusing to use any “impure” tools, that is, any (mathematical) fact which is foreign to the subject. I know that this is not the usual way of doing the job, but sometimes less is more.

Spacebook: a facebook for space

I follow the work of Mark Changizi on vision. Previously I mentioned one of his early papers on this subject, “Harnessing vision for computation”.

One of the applications of computing with space  could be to SHARE THE SPATIAL EXPERIENCE ON THE WEB.

Background. When I was writing the paper on the problem of computing with space, I stumbled upon this article by Mark in Psychology Today

The Problem With the Web and E-Books Is That There’s No Space for Them

The title says a lot. I was intrigued by the following passage

“My personal library serves as extension of my brain. I may have read all my books, but I don’t remember most of the information. What I remember is where in my library my knowledge sits, and I can look it up when I need it. But I can only look it up because my books are geographically arranged in a fixed spatial organization, with visual landmarks. I need to take the integral of an arctangent? Then I need my Table of Integrals book, and that’s in the left bookshelf, upper middle, adjacent to the large, colorful Intro Calculus book.”

So I posted the following comment:  Is your library my library?

“Good point, but you have converted a lot of time into understanding, exploring and using the space of your library. To me the brain-spatial interface of your library is largely incomprehensible. I have to spend time in order to reconstruct it in my head.

Then, your excellent suggestion may give somebody the idea to do a “facebook” for our personal libraries. How to share spatial competences, that is a question!”

In section 2.7 (“Spacebook”) of the paper on computing with space I mention this as an intriguing application of this type of computing (the name itself was suggested by Mark Changizi after I sent him a first version of the paper).

What more? Again from browsing Mark Changizi’s site, I learned that in fact this problem of the non-spatiality (say) of e-books has measurable effects. Indeed, see this article by Maia Szalavitz:

Do E-Books Make It Harder to Remember What You Just Read?

Nice! But in order to do a spacebook we need first to understand the primitives of space (as represented in the human brain) and then how to “port” them by using the web.