
Computing with space: done!

The project of giving a meaning to the “computing” part of “Computing with space” is done, via the \lambda-Scale calculus and its graphic lambda calculus (still in preview mode).

_______________________

UPDATE (09.01.2013): There is now a web tutorial about graphic lambda calculus on this blog.  At some point a continuation of “Computing with space …” will follow, with explicit use of this calculus, as well as applications which were mentioned only briefly, like why the diagram explaining the “emergence” of the Reidemeister III move gives a discretized notion of scalar curvature for a metric space with dilations.

_______________________

Explanations.  In the “Computing with space…” paper I claimed that:

1. – there is a “computing” part hidden behind the idea of emergent algebras

2. – which is analogous to the hypothetical computation taking place in the front end of the visual system.

Part 1. is done, essentially. The graphic version of \lambda-Scale is in fact very powerful, because it contains as sectors:

– lambda calculus

– (discrete, abstract) differential calculus

– the formalism of tangle diagrams.

These “sectors” appear as subsets S of graphs in GRAPH (see the preview paper for definitions), for which the condition G \in S is global, together with respective selections of local or global graphic moves (from those available on GRAPH) which transform elements of S into elements of S.

For example, for lambda calculus the relevant set is \lambda-GRAPH and the moves are (ASSOC) and  the graphic \beta move (actually, in this way we obtain a formalism a bit nicer than lambda calculus; in order to obtain exactly lambda calculus we have to add the stupid global FAN-OUT and global pruning moves).

For differential calculus we need to restrict to graphs like those in \lambda-GRAPH, but also admitting dilation gates. We may directly go to \lambda-Scale, which contains lambda calculus (made weaker by adding the (ext) rules, corresponding to \eta-conversion) and differential calculus (via emergent algebras). The moves are (ASSOC), graphic \beta move, (R1), (R2), (ext1), (ext2) and, if we want a dumber version,  some global FAN-OUT and pruning moves.

For tangle diagrams see the post Four symbols and wait for the final version of the graphic calculus paper.

So now, I declare part 1. CLOSED. What remains amounts to patiently writing all the details, which is an interesting activity in itself.

Part 2. is open, although now I have much more hope of giving a graph model for the front end of the visual system, one which does not rely on assumptions about the geometric structure of the space, linear algebra, tasks and other niceties of the existing models.

UPDATE 02.07.2012. I put the graphical formalism paper on arXiv; it should appear on 03.07.2012. I left out of the paper a big chunk of very intriguing facts about various possible crossing definitions, for another paper.


A geometric viewpoint on computation?

Let me try to explain what I am trying to do in this work related to “computing with space”. The goal is to understand the process of emergence, in its various precise mathematical forms, like:

– how does the dynamics of a large number of particles become the dynamics of a continuous system? Apart from the physics BS of neglecting infinities, I know of very few mathematically correct approaches. From my mixed background of calculus of variations and continuum mechanics, I can mention an example of such an approach in the work of Andrea Braides on the \Gamma-convergence of the energy functional of a discrete system to the energy functional of a continuous system, and on atomistic models of solids.

– how to endow a metric space (like a fractal, or a sub-Riemannian space) with a theory of differential calculus? Translated: how to invent “smoothness” in spaces where, apparently, there is none? Because smoothness is certainly emergent. This is part of the field of non-smooth calculus.

– how to explain the profound resemblance between the geometrical results of Gromov on groups with polynomial growth and the combinatorial results of Breuillard, Green, Tao on approximate groups? In both cases a nilpotent structure emerges from considering larger and larger scales. The word “explain” means here: identify a general machine at work in both results.

– how to explain the way our brain deals with visual input? This is a clear case of emergence, because the input is the excitation of some receptors of the retina, while the output is almost completely not understood, except that we all know that we see objects which move and complex geometrical relations among them. A fly sees as well; read From insect vision to robot vision by N. Franceschini, J.M. Pichon and C. Blanes. Related to this paper, I cite from the abstract (boldfaced by me):

  We designed, simulated, and built a complete terrestrial creature which moves about and avoids obstacles solely by evaluating the relative motion between itself and the environment. The compound eye uses an array of elementary motion detectors (EMDs) as smart, passive ranging sensors. Like its physiological counterpart, the visuomotor system is based on analogue, continuous-time processing and does not make use of conventional computers. It uses hardly any memory to adjust the robot’s heading in real time via a local and intermittent visuomotor feedback loop.

More generally, there seems to be a “computation” involved in vision, massively parallel and taking very few steps (up to six), but it is not understood how this is a computation in the mathematical, or computer science, sense. Conversely, the visual performance of any device based on computer science computation built up to now is dwarfed by that of any fly.

I identified a “machine of emergence” which is at work in some of the examples given above. Mathematically, this machine should have something to do with emergent algebras, but what about the computation part?

Probably geometers reason like flies: by definition, a geometrical statement is invariant under the choice of maps. A sphere is not, geometrically speaking, a particular atlas of maps on the sphere. For a geometer, reproducing whatever he does by using ad-hoc enumerations by natural numbers, combinatorics and Turing machines is nonsense, because it is profoundly not geometrical.

On the other hand, the powerful use and control of abstraction is appealing to the geometer. This justifies the effort to import abstraction techniques from computer science and to replace the non-geometrical stuff by … whatever has more of a geometrical character.

For the moment, such efforts are mostly a source of frustration, a familiar feeling for any mathematician.

But at some point, in these times of profound changes in mathematics as well as in society, something beautiful, clear and streamlined will emerge from all these collective efforts.

Scaled lambda epsilon

My first attempt to introduce a scaled version of lambda epsilon turned out to be wrong, but now I think I have found a way. It is a bit trickier than I thought. Let me explain.

In lambda epsilon calculus we have three operations (which are not independent), namely the lambda abstraction, the application and the emergent algebra (one-parameter family of) operation(s), called dilations. If we want to obtain a scaled version then we have to “conjugate” with dilations. Looking at terms as syntactic trees, this amounts to:

– start with a term A and a scale \varepsilon \in \Gamma,

– transform a tree T such that FV(T) \cap FV(A) = \emptyset into a tree A_{\varepsilon}[T], by conjugating with A \circ_{\varepsilon} \cdot.

This can be done by recursively defining the transform T \mapsto A_{\varepsilon}[T]. Graphically, we would like to transform the elementary syntactic trees of the three operations into this:


The problem is that, while (c) is just the familiar scaled dilation, the scaled \lambda from (a) does not make sense, because A \circ_{\varepsilon} u is not a variable. Also, the scaled application (b) is somehow mysterious.

The solution is to exploit the fact that it makes sense to make substitutions of the form B[A \circ_{\varepsilon} u := C], because of the invertibility of dilations. Indeed, A \circ_{\varepsilon} u = C is equivalent to u = A \circ_{\varepsilon^{-1}} C, therefore we may define B[A \circ_{\varepsilon} u := C] to mean B[u := A \circ_{\varepsilon^{-1}} C].
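As a one-line check (a sketch, using the usual emergent algebra identities A \circ_{\varepsilon} (A \circ_{\mu} x) = A \circ_{\varepsilon \mu} x and A \circ_{1} x = x, which are not spelled out in this post), the term substituted for u really does dilate back to C:

A \circ_{\varepsilon} (A \circ_{\varepsilon^{-1}} C) = A \circ_{\varepsilon \varepsilon^{-1}} C = A \circ_{1} C = C.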

If we look at the rule (ext2) here, the discussion about substitution becomes:

Therefore the correct scaled lambda, instead of (a)  from the first figure, should be this:

The term (syntactic tree) from the LHS should be seen as a notation for the term from the RHS.

And you know what? The scaled application, (b) from the first figure, becomes less mysterious, because we can prove the following.

1. Any u \in X \setminus FV(A) defines a relative variable u^{\varepsilon}_{A} := A \circ_{\varepsilon} u (remark that relative variables are terms!). The set of relative variables is denoted by X_{\varepsilon}(A).

2. The function B \mapsto A_{\varepsilon}[B] is defined for any term B \in T such that FV(A) \cap FV(B) = \emptyset. The definition is this:

–  A_{\varepsilon}[A] = A,

–  A_{\varepsilon}[u] = u for any u \in X \setminus FV(A)

– A_{\varepsilon}[B \mu C] = A \circ_{\varepsilon^{-1}} ((A \circ_{\varepsilon} A_{\varepsilon}[B]) \mu (A \circ_{\varepsilon} A_{\varepsilon}[C])) for any B, C \in T such that FV(A) \cap (FV(B) \cup FV(C)) = \emptyset,

–  A_{\varepsilon}[ u \lambda B] is given by:


3. B is a scaled term, notation B \in T_{\varepsilon} (A), if there is a term B' \in T such that FV(A) \cap FV(B') = \emptyset and such that B = A_{\varepsilon}[B'].

4. Finally, the operations on scaled terms are these:

– for any \mu \in \Gamma and B, C \in T_{\varepsilon}(A) the scaled application (of coefficient \mu) is

B \mu^{\varepsilon}_{A} C = A \circ_{\varepsilon^{-1}} ((A \circ_{\varepsilon} B) \mu (A \circ_{\varepsilon} C))

– for any scaled variable  u^{\varepsilon}_{A} \in X_{\varepsilon}(A)  and any scaled term B \in T_{\varepsilon}(A) the scaled abstraction is

5. With this, we can prove that (u^{\varepsilon}_{A} \lambda^{\varepsilon}_{A} B) 1^{\varepsilon}_{A} C = (u \lambda B) 1 C = B[u := C], which is remarkable, I think!
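For the second equality, here is the short derivation, using the rules (beta*) and (ext2) of \lambda \varepsilon calculus listed further down this page. By (beta*), with y a fresh variable,

(u \lambda B) 1 C = (y \lambda (B[u := C])) 1 C

and, since y is fresh, y \not\in FV(B[u := C]), so (ext2) gives

(y \lambda (B[u := C])) 1 C = B[u := C].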

The neuron unit

I am updating the paper on \lambda \epsilon almost daily. It is the first time I am doing such a thing; it may be interesting to see what comes out of this way of writing.

The last addition is something I was thinking about for a long time, something which is probably well known in some circles, but maybe not. It is about eliminating variable names from such a calculus. This has been done in several ways; here is another (or the same as a previous one?).

The idea is simple. Let T be any term and x \in Var(T) a variable. Look at the syntactic tree of T, then glue to all leaves decorated by x the leaves of a tree with nodes consisting of FAN-OUT gates.

Further on I shall identify syntactic trees with terms. I shall add to such trees a new family (of trees), constructed from the elementary tree depicted at (a) in the next figure. At (b) we see an example of such a tree. We consider also the trivial tree (c).

We have to think about such trees as devices for multiplying the occurrences of a variable. I call them FAN-OUT trees. All these trees, with the exception of the trivial one (c), are planar binary trees. We shall add the following rule of associativity:

(ASSOC) any two FAN-OUT trees with the same number of leaves are identified.

This rule will be applied under the following form: we are free to pass from a FAN-OUT tree to an equivalent one which has the same number of leaves. The name “associativity” comes from the fact that a particular instance of this rule (which deserves the name  “elementary associativity move”) is this:

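As a minimal sketch of what (ASSOC) identifies (in Haskell, with names of my choosing, not from the paper): a FAN-OUT tree is a planar binary tree, and (ASSOC) makes the number of leaves a complete invariant, so the right comb can serve as a canonical representative.

```haskell
-- A FAN-OUT tree: a planar binary tree of FAN-OUT gates.
data Fan = Leaf | Node Fan Fan deriving (Eq, Show)

leaves :: Fan -> Int
leaves Leaf       = 1
leaves (Node l r) = leaves l + leaves r

-- (ASSOC): two FAN-OUT trees are identified exactly when they have
-- the same number of leaves ...
assocEq :: Fan -> Fan -> Bool
assocEq s t = leaves s == leaves t

-- ... so the right comb with n leaves is a canonical representative.
comb :: Int -> Fan
comb n | n <= 1    = Leaf
       | otherwise = Node Leaf (comb (n - 1))
```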
With the help of these FAN-OUT trees we shall replace the multiple occurrences of a variable by such trees. Let us see what the rules of \lambda \varepsilon calculus become in this notation.

\alpha conversion is no longer needed, because variables no longer have names. Instead, we are free to graft usual trees onto FAN-OUT trees. This way, instead of terms we shall use “neurons”.

Definition: The forgetful form (b) of a syntactic tree (a) (of a term) is the tree with the variable names deleted.

A neuron is the result of gluing the root of a forgetful form of a syntactic tree to the root of a FAN-OUT tree, like in the following figure.

The axon of the neuron is the FAN-OUT tree. The dendrites of the neuron are the undecorated edges of the forgetful form of the syntactic tree. A dendrite is bound if it is a left edge pointing to a node decorated by \lambda. For any bound dendrite, the set of dependent dendrites consists of those of the sub-tree starting from the right edge of the \lambda node (where the bound dendrite is connected via a left edge) which are moreover not bound. Otherwise a dendrite is free. The soma of the neuron is the forgetful form of the syntactic tree.
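A hypothetical Haskell encoding of this anatomy (type and field names are mine, not from the paper; the scale group \Gamma is taken to be the nonzero rationals):

```haskell
-- A neuron: a soma (the forgetful form of a syntactic tree, variable
-- names dropped) glued at its root to a FAN-OUT tree, the axon.
type Scale = Rational

data Fan  = FLeaf | FNode Fan Fan    -- a FAN-OUT tree

data Soma = Dendrite                 -- an undecorated edge
          | LamNode Soma Soma        -- a node decorated by lambda
          | OpNode Scale Soma Soma   -- a node decorated by a scale

data Neuron = Neuron { soma :: Soma, axon :: Fan }

-- the dendrites are the undecorated edges of the soma
dendrites :: Soma -> Int
dendrites Dendrite       = 1
dendrites (LamNode l r)  = dendrites l + dendrites r
dendrites (OpNode _ l r) = dendrites l + dendrites r

-- a dendrite is bound when it is the left edge of a lambda node
boundDendrites :: Soma -> Int
boundDendrites Dendrite       = 0
boundDendrites (LamNode l r)  =
  (case l of Dendrite -> 1; _ -> 0) + boundDendrites l + boundDendrites r
boundDendrites (OpNode _ l r) = boundDendrites l + boundDendrites r
```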

Substitution. We are free to connect leaves of axons of neurons with dendrites of other neurons. In order to explain the substitution we have to add the following rule:

(subst) The leaves of an axon cannot be glued to more than one bound dendrite of another neuron. If a leaf of the axon of the neuron A is connected to a bound dendrite of the neuron B, then it has to be the leftmost leaf of the axon of A. Moreover, in this case all other leaves of A which are connected with B have to be connected only via dendrites which are dependent on the mentioned bound dendrite of B, possibly by adding leaves to the axon of A by using (ASSOC).

Substitution is therefore assimilated to connecting neurons.

New (beta*). The rule (beta*) takes the following graphical form. In the next figure appear only the leaf of the neuron A connecting to the \lambda node (the other leaves of the axon of A are not drawn) and only some of the dendrites depending on the bound one relative to the \lambda node which is shown.

The neuron A may have other dendrites, not shown. According to the definition of the neuron, A together with the \lambda node and adjacent edges form a bigger neuron. Also shown is another leaf of the axon of the neuron B, which may point to another neuron. Finally, in the RHS, the bound dendrite loses all its dependents.

Important remark. Notice that there may be multiple outcomes from (subst) and (beta*)! Indeed, this is due to the fact that connections could be made in several ways, by using all or only part of the dependent dendrites. Because of that, it seems that this version of the calculus is richer than the previous one, but I am not at all sure if this is the case.

Another thing to remark is that the “=” sign in this version of the rules is reflexive and transitive, but not symmetric. Previously, the “=” sign was supposed to be symmetric.

(R1)  That is easy, we use the emergent algebra gates:

(R2)    Easy as well:

The rules (ext1), (ext2) take obvious forms.

Therefore, if we think about computation as reduction to a normal form (if any), in this graphical notation with “neurons” computation amounts to rewiring neurons or changing the wiring inside the soma of some neurons.

Variables disappeared, at the price of introducing FAN-OUT trees.

As concerns the remark previously made, we could obtain a calculus which is clearly equivalent to \lambda \varepsilon by modifying the definition of the neuron, in this way. In order to specify clearly which are the dependent dendrites, we could glue to any bound dendrite a FAN-OUT tree, such that the leaves of this tree connect again with a set of dependent dendrites of the same neuron. In this way, substitution and (beta*) will amount to erasing such a FAN-OUT tree and then performing the moves, as previously explained, but using this time all the dependent dendrites which were connected to the bound dendrite by the erased tree.

Rules of lambda epsilon calculus

I updated the paper on lambda epsilon calculus. See the link to the current version (updated daily) or check the arXiv article, which will be updated as soon as a stable version emerges.

Here are the rules of this calculus:

(beta*) (x \lambda A) \varepsilon B = (y \lambda (A[x:=B])) \varepsilon B for any fresh variable y,

(R1) (Reidemeister one) if x \not \in FV(A) then (x \lambda A) \varepsilon A = A

(R2) (Reidemeister two) if x \not \in FV(B) then (x \lambda ( B \mu x)) \varepsilon A = B (\varepsilon \mu ) A

(ext1) (extensionality one)  if  x \not \in FV(A) then x \lambda (A 1 x) = A

(ext2) (extensionality two) if  x \not \in FV(B) then (x \lambda B) 1 A = B

These are taken together with usual substitution and \alpha-conversion.
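To make the rules concrete, here is a minimal Haskell sketch of my own (an encoding chosen for illustration, not taken from the paper): terms with a single \Gamma-indexed operation, naive substitution (adequate when no variable capture occurs), and (beta*) and (ext2) as one-step rewrites, with \Gamma taken to be the nonzero rationals and 1 playing the role of plain application.

```haskell
import Data.List (delete, union)

type Scale = Rational            -- an element of Gamma; 1 is plain application

data Term = Var String           -- a variable
          | Lam String Term      -- x lambda A
          | Op Scale Term Term   -- A epsilon B, the Gamma-indexed operation
          deriving (Eq, Show)

-- free variables of a term
fv :: Term -> [String]
fv (Var x)    = [x]
fv (Lam x a)  = delete x (fv a)
fv (Op _ a b) = fv a `union` fv b

-- naive substitution A[x := B]; adequate when no variable capture occurs
subst :: String -> Term -> Term -> Term
subst x b v@(Var y)   = if x == y then b else v
subst x b t@(Lam y a) = if x == y then t else Lam y (subst x b a)
subst x b (Op e a c)  = Op e (subst x b a) (subst x b c)

-- a variable name fresh for the given terms
fresh :: [Term] -> String
fresh ts = head [v | n <- [(0 :: Int) ..], let v = 'y' : show n,
                     all (\t -> v `notElem` fv t) ts]

-- (beta*): (x lambda A) epsilon B  ->  (y lambda A[x:=B]) epsilon B, y fresh
betaStar :: Term -> Maybe Term
betaStar (Op e (Lam x a) b) = Just (Op e (Lam (fresh [a, b]) (subst x b a)) b)
betaStar _ = Nothing

-- (ext2): (x lambda B) 1 A  ->  B, when x is not free in B
ext2 :: Term -> Maybe Term
ext2 (Op 1 (Lam x b) _) | x `notElem` fv b = Just b
ext2 _ = Nothing
```

For instance, betaStar applied to (x \lambda x) 1 z gives (y0 \lambda z) 1 z, and ext2 then gives z, recovering A[x := B] as in the derivation in the “Scaled lambda epsilon” post above.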

The relation between the operations from \lambda \varepsilon calculus and emergent algebras is illustrated in the next figure.

Lambda epsilon calculus, an attempt for a language of “computing with space”

Just posted on arXiv the paper Lambda calculus combined with emergent algebras. Also, I updated my page on emergent algebras and computing with space, taking into consideration what I am trying to do in the mentioned paper. Here are some excerpts (from the mentioned page) about what I now think “computing with space” may be:

Let’s look at some examples of spaces, like: the real world or a virtual world of a game, as seen by a fly or by a human, “abstract” mathematical spaces as manifolds, fractals, symmetric spaces, groups, linear spaces.
To know what a space is, to define it mathematically, is less interesting than to know what one can do in such a space. I try to look at spaces from a kind of “computational viewpoint”.
A model of such a computation could be the following process: Alice and Bob share a class of primitives of spaces (like a common language which Alice can use in order to communicate to Bob what she sees when exploring the unknown space). Alice explores an unknown territory and sends to Bob the operational content of the exploration (i.e. maps of what she sees and how she moves, expressed in the common language). Then Bob, who is placed in a familiar space, tries to make sense of the maps received from Alice. Usually, he can’t fit the received information together in the familiar space (for example because there is a limit to the accuracy of maps of the unknown territory into the familiar one). Instead, Bob tries to simulate the operational content of Alice’s exploration by interpreting the messages from Alice (remember, expressed in a language of primitives of space) in his space.
A language of space primitives could be (or contain as a part) the emergent algebras. Ideally, such a language of primitives should be described by:
A – a class of gates (operations), which represent the “map-making” relation, with an abstract scale parameter attached, maybe also with supplementary operations (like lambda abstraction?),
B – a class of variables (names) which represent generic points of a space, and a small class of terms (words) expressed in terms of variables and gates from (A) (resembling combinators in lambda calculus?). These are the “generators” of the space and they have the emergent algebras property, namely that as the scale goes to zero, uniformly with respect to the variables, the output converges to a new operation.
C – a class of rewriting rules saying that some simple assemblies of generators of space have equivalent function, or saying that relations between those simple assemblies converge to relations of the space as the scale goes to zero.
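Purely as an illustration of the shape such a language of primitives might take, here is a hypothetical Haskell skeleton mirroring (A)–(C) above (all names are mine, and this is a guess at a structure, not a definition from the page):

```haskell
-- (A): gates, with an abstract scale parameter attached
type Scale = Rational

data Gate = Dilation Scale       -- the "map-making" operation at a given scale
          | Abstraction          -- a possible supplementary operation

-- (B): variables naming generic points, and small terms built from gates
data Word' = Point String
           | Apply Gate [Word']

-- (C): rewriting rules: some assemblies of generators have equivalent
-- function, or their relations emerge only as the scale goes to zero
data Rule = Exact Word' Word'           -- equivalent function at every scale
          | AsScaleToZero Word' Word'   -- relation emerging in the limit
```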

The article on lambda epsilon calculus is a first for me in this field; I would be grateful for any comments and suggestions for improvement.