Tag Archives: infinitesimal

Intrinsic characterizations of riemannian and sub-riemannian spaces (I)

In this post I explain what the problem of intrinsic characterization of riemannian manifolds is, and in what sense it has been solved in full generality by Nikolaev; then I shall comment on the proof of Hilbert’s fifth problem by Tao.

In the next post there will then be some comments about Gromov’s problem of giving an intrinsic characterization of sub-riemannian manifolds, and the sense in which I solved this problem by adding a bit of algebra to it. Finally, I shall return to the characterization of riemannian manifolds, seen as particular sub-riemannian manifolds, and comment on the differences between this characterization and Nikolaev’s.

1. History of the problem for riemannian manifolds. The problem of giving an intrinsic characterization of riemannian manifolds is a classic and fertile one.

Problem: give a metric description of a Riemannian manifold.

Background: A complete riemannian manifold is a length metric space (or geodesic, or intrinsic metric space) by Hopf-Rinow theorem. The problem asks for the recovery of the manifold structure from the distance function (associated to the length functional).

For 2-dim riemannian manifolds the problem has been solved by A. Wald [Begründung einer koordinatenlosen Differentialgeometrie der Flächen, Erg. Math. Colloq. 7 (1936), 24-46].

In 1948 A.D. Alexandrov [Intrinsic geometry of convex surfaces, various editions] introduces his famous curvature (which uses comparison triangles) and proves that, under mild smoothness conditions on this curvature, one can recover the differential structure and the metric of the 2-dim riemannian manifold. In 1982 Alexandrov proposes as a conjecture that a characterization of a riemannian manifold (of any dimension) is possible in terms of metric (sectional) curvatures (of the type introduced by Alexandrov) and weak smoothness assumptions formulated in a metric way (as for example Hölder smoothness). Many other results deserve to be mentioned (by Reshetnyak, for example).

2. Solution of the problem by Nikolaev. In 1998 I.G. Nikolaev [A metric characterization of riemannian spaces, Siberian Adv. Math. 9 (1999), 1-58] solves the general problem of intrinsic characterization of C^{m,\alpha} riemannian spaces:

every locally compact length metric space M, not linear at one of its points, with \alpha-Hölder continuous metric sectional curvature of the “generalized tangent bundle” T^{m}(M) (for some m = 1, 2, \dots), which admits local geodesic extendability, is isometric to a C^{m+2} smooth riemannian manifold.

Therefore:

  • he defines a generalized tangent bundle in metric sense
  • he defines a notion of sectional curvature
  • he requires some metric smoothness of this curvature

and he gets the result.

3. Gleason metrics and Hilbert’s fifth problem. Let us compare this with the formulation of the solution of Hilbert’s fifth problem by Terence Tao. The problem is somehow similar, namely to recover the differential structure of a Lie group from its algebraic structure. This time the “intrinsic” object is the group operation, not the distance, as previously.

Tao shows that the proof of the solution may be formulated in metric terms. Namely, he introduces a Gleason metric (definition 4 in the linked post), which turns out to be a left invariant riemannian metric on the (topological) group. I shall not insist on this; instead, read the post of Tao and also, for the riemannian metric description, read this previous post by me.

A geometric viewpoint on computation?

Let me try to explain what I am trying to do in this work related to “computing with space”. The goal is to understand the process of emergence, in its various precise mathematical forms, like:

– how does the dynamics of a large number of particles become the dynamics of a continuous system? Apart from the physics BS of neglecting infinities, I know of very few mathematically correct approaches. From my mixed background of calculus of variations and mechanics of continuous media, I can mention an example of such an approach in the work of Andrea Braides on the \Gamma-convergence of the energy functional of a discrete system to the energy functional of a continuous system, and on atomistic models of solids.

– how to endow a metric space (like a fractal, or a sub-riemannian space) with a theory of differential calculus? Translated: how to invent “smoothness” in spaces where, apparently, there is none? Because smoothness is certainly emergent. This is part of the field of non-smooth calculus.

– how to explain the profound resemblance between geometrical results of Gromov on groups with polynomial growth and combinatorial results of Breuillard, Green, Tao on approximate groups? In both cases a nilpotent structure emerges from considering larger and larger scales. The word “explain” means here: identify a general machine at work in both results.

– how to explain the way our brain deals with visual input? This is a clear case of emergence, because the input is the excitation of some receptors of the retina and the output is almost completely not understood, except that we all know that we see moving objects and complex geometrical relations among them. A fly sees as well; read From insect vision to robot vision by N. Franceschini, J.M. Pichon, C. Blanes. Related to this paper, I cite from the abstract (boldface mine):

  We designed, simulated, and built a complete terrestrial creature which moves about and avoids obstacles solely by evaluating the relative motion between itself and the environment. The compound eye uses an array of elementary motion detectors (EMDs) as smart, passive ranging sensors. Like its physiological counterpart, the visuomotor system is based on analogue, continuous-time processing and does not make use of conventional computers. It uses hardly any memory to adjust the robot’s heading in real time via a local and intermittent visuomotor feedback loop.

More generally, there seems to be a “computation” involved in vision, massively parallel and taking very few steps (up to six), but it is not understood how this is a computation in the mathematical, or computer science, sense. Conversely, the visual performance of any device based on computer science computation is, up to now, dwarfed by that of any fly.

I identified a “machine of emergence” which is at work in some of the examples given above. Mathematically, this machine should have something to do with emergent algebras, but what about the computation part?

Probably geometers reason like flies: by definition, a geometrical statement is invariant under the choice of maps. A sphere is not, geometrically speaking, a particular atlas of maps on the sphere. For a geometer, reproducing whatever it does by using ad-hoc enumeration by natural numbers, combinatorics and Turing machines is nonsense, because it is profoundly not geometrical.

On the other hand, the powerful use and control of abstraction is appealing to the geometer. This justifies the effort to import abstraction techniques from computer science and to replace the non-geometrical stuff by … whatever has a more geometrical character.

For the moment, such efforts are mostly a source of frustration, a familiar feeling for any mathematician.

But at some point, in these times of profound changes in mathematics as well as in society, from all these collective efforts something beautiful, clear and streamlined will emerge.

Scaled lambda epsilon

My first attempt to introduce a scaled version of lambda epsilon turned out to be wrong, but now I think I have found a way. It is a bit trickier than I thought. Let me explain.

In lambda epsilon calculus we have three operations (which are not independent), namely the lambda abstraction, the application and the emergent algebra (one-parameter family of) operation(s), called dilations. If we want to obtain a scaled version then we have to “conjugate” with dilations. Looking at terms as syntactic trees, this amounts to:

– start with a term A and a scale \varepsilon \in \Gamma,

– transform a tree T such that FV(T) \cap FV(A) = \emptyset,  into a tree A_{\varepsilon}[T], by conjugating with A \circ_{\varepsilon} \cdot.

This can be done by recursively defining the transform T \mapsto A_{\varepsilon}[T]. Graphically, we would like to transform the elementary syntactic trees of the three operations into this:


The problem is that, while (c) is just the familiar scaled dilation, the scaled \lambda from (a) does not make sense, because A \circ_{\varepsilon} u is not a variable. Also, the scaled application (b) is somewhat mysterious.

The solution is to exploit the fact that it makes sense to make substitutions of the form B[ A \circ_{\varepsilon} u := C], because of the invertibility of dilations. Indeed, A \circ_{\varepsilon} u = C is equivalent to u = A \circ_{\varepsilon^{-1}} C, therefore we may define B[ A \circ_{\varepsilon} u := C] to mean B[ u := A \circ_{\varepsilon^{-1}} C].
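To make the invertibility concrete, here is a minimal numerical sketch in Python, using the commutative vector-space model of dilations, A \circ_{\varepsilon} u = A + \varepsilon(u - A) (this model is an assumption of the sketch; the calculus itself is aimed at the noncommutative case). It checks that A \circ_{\varepsilon^{-1}} inverts A \circ_{\varepsilon}, which is what makes the substitution B[ A \circ_{\varepsilon} u := C] well defined.

```python
import numpy as np

def dil(A, eps, u):
    # the dilation A o_eps u in the vector-space model: A + eps*(u - A)
    return A + eps * (u - A)

A = np.array([1.0, 2.0])
u = np.array([3.0, -1.0])
eps = 0.25

C = dil(A, eps, u)            # C = A o_eps u
u_back = dil(A, 1 / eps, C)   # solve for u: u = A o_{eps^{-1}} C
assert np.allclose(u_back, u)
```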

If we look at the rule (ext2) here, the discussion about substitution becomes:

Therefore the correct scaled lambda, instead of (a)  from the first figure, should be this:

The term (syntactic tree) from the LHS should be seen as a notation for the term from the RHS.

And you know what? The scaled application, (b) from the first figure, becomes less mysterious, because we can prove the following.

1. Any u \in X \setminus FV(A) defines a relative variable u^{\varepsilon}_{A} := A \circ_{\varepsilon} u (remark that relative variables are terms!). The set of relative variables is denoted by X_{\varepsilon}(A).

2. The function B \mapsto A_{\varepsilon}[B] is defined for any term B \in T such that FV(A) \cap FV(B) = \emptyset. The definition is this:

–  A_{\varepsilon}[A] = A,

–  A_{\varepsilon}[u] = u for any u \in X \setminus FV(A)

–  A_{\varepsilon}[ B \mu C] = A \circ_{\varepsilon^{-1}} ((A \circ_{\varepsilon} A_{\varepsilon}[B]) \mu (A \circ_{\varepsilon} A_{\varepsilon}[C])) for any B, C \in T such that FV(A) \cap (FV(B) \cup FV(C)) = \emptyset,

–  A_{\varepsilon}[ u \lambda B] is given by:

 

 

3. B is a scaled term, notation B \in T_{\varepsilon} (A), if there is a term B' \in T such that FV(A) \cap FV(B') = \emptyset and such that B = A_{\varepsilon}[B'].

4. Finally, the operations on scaled terms are these:

– for any \mu \in \Gamma and B, C \in T_{\varepsilon}(A) the scaled application (of coefficient \mu) is

B \mu^{\varepsilon}_{A} C = A \circ_{\varepsilon^{-1}} ((A \circ_{\varepsilon} B) \mu (A \circ_{\varepsilon} C))

– for any scaled variable  u^{\varepsilon}_{A} \in X_{\varepsilon}(A)  and any scaled term B \in T_{\varepsilon}(A) the scaled abstraction is

5. With this, we can prove that (u^{\varepsilon}_{A} \lambda^{\varepsilon}_{A} B) 1^{\varepsilon}_{A} C = (u \lambda B) 1 C = B[u := C], which is remarkable, I think!
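A hedged sketch of the recursion defining B \mapsto A_{\varepsilon}[B], restricted to the dilation operations \mu (lambda abstraction and application are left out; the commutative vector-space model used here, with B \mu C = B + \mu(C - B), is only an assumption for illustration). In this commutative model the transform changes nothing, which is exactly the point: the construction only becomes visible in noncommutative spaces.

```python
import numpy as np

def dil(A, eps, u):
    # A o_eps u in the commutative model: A + eps*(u - A)
    return A + eps * (u - A)

def transform(A, eps, term):
    # T -> A_eps[T]: variables (points) are left fixed; a node (mu, B, C)
    # is conjugated by A o_eps, following the recursive definition
    # A_eps[B mu C] = A o_{eps^{-1}} ((A o_eps A_eps[B]) mu (A o_eps A_eps[C]))
    if isinstance(term, np.ndarray):
        return term
    mu, B, C = term
    Bt = transform(A, eps, B)
    Ct = transform(A, eps, C)
    return dil(A, 1 / eps, dil(dil(A, eps, Bt), mu, dil(A, eps, Ct)))

rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(3, 2))
plain = dil(B, 0.7, C)                   # B mu C evaluated directly
scaled = transform(A, 0.3, (0.7, B, C))  # A_eps[B mu C] evaluated
assert np.allclose(plain, scaled)        # collapse in the commutative model
```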

Baker-Campbell-Hausdorff polynomials and Menelaus theorem

This is a continuation of the previous post on the noncommutative BCH formula. For the “Menelaus theorem” part see this post.

Everything is related to “noncommutative techniques” for approximate groups, which hopefully will apply sometime in the future to real combinatorial problems, like Tao’s project presented here, and also to the problem of understanding curvature (in non-riemannian settings), see a hint here, and finally to the problem of higher order differential calculus in sub-riemannian geometry, see this comment on this blog.

Remark: as everything these days can be retrieved on the net, if you find on this blog something worth including in a published paper, then don’t be shy and mention this. I believe strongly in fair practices in this new age of scientific collaboration opened by the www, even if in the past too often ideas which I communicated freely were taken into published papers without attribution. Hey, I am happy to help! But unfortunately I have an ego too (not only an ergobrain, as any living creature).

For the moment we stay in a Lie group, with the convention of taking the exponential equal to the identity, i.e. of considering that the group operation can be written in terms of Lie brackets according to the BCH formula:

x y = x + y + \frac{1}{2} [x,y] + \frac{1}{12}[x,[x,y]] - \frac{1}{12}[y,[y,x]]+...

For any \varepsilon \in (0,1] we define

x \cdot_{\varepsilon} y = \varepsilon^{-1} ((\varepsilon x) (\varepsilon y))

and we remark that x \cdot_{\varepsilon} y \rightarrow x+y uniformly with respect to x,y in a compact neighbourhood of the neutral element e=0. The BCH formula for the operation labeled with \varepsilon is the following

x \cdot_{\varepsilon} y = x + y + \frac{\varepsilon}{2} [x,y] + \frac{\varepsilon^{2}}{12}[x,[x,y]] - \frac{\varepsilon^{2}}{12}[y,[y,x]]+...
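As a sanity check of the expansion above, here is a short Python sketch (the choice of the 3×3 Heisenberg group is an assumption of the sketch: there exp and log are exact polynomials and the BCH series terminates after the first bracket, so the identity x \cdot_{\varepsilon} y = x + y + \frac{\varepsilon}{2}[x,y] holds exactly).

```python
import numpy as np

I = np.eye(3)

def expm_nil(X):
    # exact exponential for strictly upper triangular 3x3 (X^3 = 0)
    return I + X + X @ X / 2

def logm_uni(G):
    # exact logarithm for unipotent 3x3: N = G - I satisfies N^3 = 0
    N = G - I
    return N - N @ N / 2

def dot_eps(eps, X, Y):
    # x ._eps y = eps^{-1} ((eps x)(eps y)), the group product read through exp
    return logm_uni(expm_nil(eps * X) @ expm_nil(eps * Y)) / eps

X = np.array([[0., 1, 0], [0, 0, 0], [0, 0, 0]])
Y = np.array([[0., 0, 0], [0, 0, 1], [0, 0, 0]])
bracket = X @ Y - Y @ X

eps = 0.1
# in the Heisenberg group the BCH series stops here, so equality is exact
assert np.allclose(dot_eps(eps, X, Y), X + Y + (eps / 2) * bracket)
```

As eps shrinks, dot_eps(eps, X, Y) converges to X + Y, in agreement with the uniform-convergence remark above.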

Let us define the BCH functions. We start with

BCH^{0}_{\varepsilon} (x,y) = x \cdot_{\varepsilon} y

and BCH^{0}_{0}(x,y) = \lim_{\varepsilon \rightarrow 0} BCH^{0}_{\varepsilon}(x,y) = x + y.

Define the “linearized dilation” \delta^{x}_{\varepsilon} y = x + \varepsilon (-x+y) (written like this on purpose, without using the commutativity of the “+” operation; due to the limitations of my latex knowledge in this environment, I shy away from putting a bar over this dilation, to emphasize that it is different from the “group dilation”, equal to x (\varepsilon(x^{-1}y))).

Consider the family of \beta > 0 such that there is a uniform limit, as \varepsilon \rightarrow 0, w.r.t. x, y in a compact set, of the expression

\delta_{\varepsilon^{-\beta}}^{BCH^{0}_{0}(x,y)}  BCH^{0}_{\varepsilon}(x,y)

and remark that this family has a maximum \beta = 1. Call this maximum \alpha_{0} and define

BCH^{1}_{\varepsilon}(x,y) = \delta_{\varepsilon^{-\alpha_{0}}}^{BCH^{0}_{0}(x,y)}  BCH^{0}_{\varepsilon}(x,y)

and BCH^{1}_{0}(x,y) = \lim_{\varepsilon \rightarrow 0} BCH^{1}_{\varepsilon}(x,y).

Let us compute BCH^{1}_{0}(x,y):

BCH^{1}_{0}(x,y) = x + y + \frac{1}{2}[x,y]

and also remark that

BCH^{1}_{\varepsilon}(x,y) = x+y + \varepsilon^{-1} ( -(x+y) + (x \cdot_{\varepsilon} y)).

We recognize in the right hand side an expression which is a relative of what I have called in the previous post an “approximate bracket”, relations (2) and (3). A better name for it is a halfbracket.
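The halfbracket identity can be checked numerically. Below is a Python sketch (assuming, for exactness, the 3×3 Heisenberg group, where the BCH series stops at the first bracket): applying the linearized dilation with base x + y and coefficient \varepsilon^{-1} to x \cdot_{\varepsilon} y, as in the relation above, recovers x + y + \frac{1}{2}[x,y] for every \varepsilon.

```python
import numpy as np

I = np.eye(3)

def expm_nil(X):          # exact exp for strictly upper triangular 3x3
    return I + X + X @ X / 2

def logm_uni(G):          # exact log for unipotent 3x3
    N = G - I
    return N - N @ N / 2

def dot_eps(eps, X, Y):   # x ._eps y = eps^{-1}((eps x)(eps y))
    return logm_uni(expm_nil(eps * X) @ expm_nil(eps * Y)) / eps

def lin_dil(x, eps, y):   # linearized dilation delta^x_eps y = x + eps*(-x + y)
    return x + eps * (y - x)

X = np.array([[0., 1, 0], [0, 0, 0], [0, 0, 0]])
Y = np.array([[0., 0, 0], [0, 0, 1], [0, 0, 0]])
half = 0.5 * (X @ Y - Y @ X)

for eps in (0.5, 0.1, 0.01):
    # BCH^1_eps = x + y + eps^{-1}(-(x+y) + x ._eps y)
    bch1 = lin_dil(X + Y, 1 / eps, dot_eps(eps, X, Y))
    # in this step-2 nilpotent group BCH^1_eps is eps-independent
    assert np.allclose(bch1, X + Y + half)
```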

We may continue this recipe indefinitely. Namely, for any natural number i \geq 1 we first define the maximal number \alpha_{i} among all \beta > 0 with the property that the (uniform) limit exists:

\lim_{\varepsilon \rightarrow 0} \delta_{\varepsilon^{-\beta}}^{BCH^{i}_{0}(x,y)}  BCH^{i}_{\varepsilon}(x,y)

Generically we shall find \alpha_{i} = 1. We define then

BCH^{i+1}_{\varepsilon}(x,y) = \delta_{\varepsilon^{-\alpha_{i}}}^{BCH^{i}_{0}(x,y)}  BCH^{i}_{\varepsilon}(x,y)

and BCH^{i+1}_{0}(x,y) = \lim_{\varepsilon \rightarrow 0} BCH^{i+1}_{\varepsilon}(x,y).

It is time to use the Menelaus theorem. Take a natural number N > 0. We may write (pretending we don’t know that all \alpha_{i} = 1, for i = 0, \dots, N):

x \cdot_{\varepsilon} y = BCH^{0}_{\varepsilon}(x,y) = \delta^{BCH^{0}_{0}(x,y)}_{\varepsilon^{\alpha_{0}}} \delta^{BCH^{1}_{0}(x,y)}_{\varepsilon^{\alpha_{1}}} ... \delta^{BCH^{N}_{0}(x,y)}_{\varepsilon^{\alpha_{N}}} BCH^{N+1}_{\varepsilon}(x,y)

Let us denote \alpha_{0} + ... + \alpha_{N} = \gamma_{N} and introduce the BCH polynomial PBCH^{N}(x,y)(\mu) (the variable of the polynomial is \mu), defined by: PBCH^{N}(x,y)(\mu) is the unique element of the group with the property that for any other element z (close enough to the neutral element) we have

\delta^{BCH^{0}_{0}(x,y)}_{\mu^{\alpha_{0}}} \delta^{BCH^{1}_{0}(x,y)}_{\mu^{\alpha_{1}}} ... \delta^{BCH^{N}_{0}(x,y)}_{\mu^{\alpha_{N}}} z = \delta^{PBCH^{N}(x,y)(\mu)}_{\mu^{\gamma_{N}}} z

Such an element exists and is unique due to (Artin’s version of the) Menelaus theorem.

Remark that PBCH^{N}(x,y)(\mu) is not a true polynomial in \mu, but a rational function of \mu which is a polynomial up to terms of order \mu^{\gamma_{N}}. A straightforward computation shows that, up to terms of the mentioned order, the BCH polynomial at \mu = 1 is a truncation of the BCH formula up to terms containing N-1 brackets.
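For N = 0 the telescoping formula above reads x \cdot_{\varepsilon} y = \delta^{x+y}_{\varepsilon} BCH^{1}_{\varepsilon}(x,y), which can be checked directly (again a sketch, assuming the 3×3 Heisenberg group; there BCH^{1}_{\varepsilon}(x,y) = BCH^{1}_{0}(x,y) = x + y + \frac{1}{2}[x,y] exactly).

```python
import numpy as np

I = np.eye(3)

def expm_nil(X):          # exact exp for strictly upper triangular 3x3
    return I + X + X @ X / 2

def logm_uni(G):          # exact log for unipotent 3x3
    N = G - I
    return N - N @ N / 2

def dot_eps(eps, X, Y):   # x ._eps y = eps^{-1}((eps x)(eps y))
    return logm_uni(expm_nil(eps * X) @ expm_nil(eps * Y)) / eps

def lin_dil(x, eps, y):   # delta^x_eps y = x + eps*(-x + y)
    return x + eps * (y - x)

X = np.array([[0., 1, 0], [0, 0, 0], [0, 0, 0]])
Y = np.array([[0., 0, 0], [0, 0, 1], [0, 0, 0]])
bch1_0 = X + Y + 0.5 * (X @ Y - Y @ X)   # BCH^1_0(x,y) in this group

eps = 0.2
# N = 0 case of the telescoping: x ._eps y = delta^{x+y}_eps BCH^1_eps(x,y)
assert np.allclose(dot_eps(eps, X, Y), lin_dil(X + Y, eps, bch1_0))
```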

It looks contorted, but written this way it works verbatim for normed groups with dilations! There are several things which differ in detail. These are:

1. the coefficients \alpha_{i} are not equal to 1 in general. Moreover, I can prove that the \alpha_{i} exist (as a maximum of numbers \beta such that …) for a sub-riemannian Lie group, that is, for a Lie group endowed with a left-invariant dilation structure, by using the classical BCH formula, but I don’t think that one can prove the existence of these numbers for a general group with dilations! Remark that the numbers \alpha_{i} are defined in a way similar to the Hausdorff dimension!

2. one has to define noncommutative polynomials, i.e. polynomials in the frame of Carnot groups (at least). This can be done; it has been sketched in a previous paper of mine, Tangent bundles to sub-riemannian groups, section 6.

UPDATE: (30.10.2011) See the post of Tao

Associativity of the Baker-Campbell-Hausdorff formula

where a (trained) eye may see the appearance of several ingredients, in the particular commutative case, of the mechanism of definition of the BCH formula.

The associativity is rephrased, in a well known way, in proposition 2 as a commutativity of, say, the left and right actions. From there, signs of commutativity (unconsciously assumed) appear: the obvious first ones are the “radial homogeneity identities”, but already at this stage a lot of familiar machinery is put in place, and what follows is more and more of the same. I can only wonder: is all this necessary? My guess is: no. Because, for starters, as explained here and in previous posts, Lie algebras are of a commutative blend, like the BCH formula. And (local, well known from the beginning) groups are not.

Entering “chora”, the infinitesimal place

There is a whole discussion around the key phrases “The map is not the territory” and “The map is the territory”. From the wiki entry on the map-territory relation, we learn that Korzybski‘s dictum “the map is not the territory” means that:

A) A map may have a structure similar or dissimilar to the structure of the territory,

B) A map is not the territory.

Bateson, in “Form, Substance and Difference” has a different take on this: he starts by explaining the pattern-substance dichotomy

Let us go back to the original statement for which Korzybski is most famous—the statement that the map is not the territory. This statement came out of a very wide range of philosophic thinking, going back to Greece, and wriggling through the history of European thought over the last 2000 years. In this history, there has been a sort of rough dichotomy and often deep controversy. There has been violent enmity and bloodshed. It all starts, I suppose, with the Pythagoreans versus their predecessors, and the argument took the shape of “Do you ask what it’s made of—earth, fire, water, etc.?” Or do you ask, “What is its pattern?” Pythagoras stood for inquiry into pattern rather than inquiry into substance. That controversy has gone through the ages, and the Pythagorean half of it has, until recently, been on the whole the submerged half.

Then he states his point of view:

We say the map is different from the territory. But what is the territory? […] What is on the paper map is a representation of what was in the retinal representation of the man who made the map–and as you push the question back, what you find is an infinite regress, an infinite series of maps. The territory never gets in at all.

Always the process of representation will filter it out so that the mental world is only maps of maps of maps, ad infinitum.

At this point Bateson puts a very interesting footnote:

Or we may spell the matter out and say that at every step, as a difference is transformed and propagated along its pathways, the embodiment of the difference before the step is a “territory” of which the embodiment after the step is a “map.” The map-territory relation obtains at every step.

Inspired by Bateson, I want to explore from the mathematical side the point of view that there is no difference between the map and the territory, but instead the transformation of one into another can be understood by using tangle diagrams.

Let us imagine that the exploration of the territory provides us with an atlas, a collection of maps, mathematically understood as a family of two operations (an “emergent algebra”). We want to organize this spatial information in a graphical form which complies with Bateson’s footnote: map and territory have only local meaning in the graphical representation, being only the left-hand side (and right-hand side, respectively) of the “making map” relation.

Look at the following figure:

In the figure on the left, the “v” which decorates an arc represents a point in the “territory”, that is, the l-h-s of the relation; the “u” represents a “pixel in the map”, that is, the r-h-s of the relation. The relation itself is represented by a crossing decorated by an epsilon, the “scale” of the map.

The opposite crossing, shown in the figure on the right, is the inverse relation.
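In the simplest model (a vector space, which is only an assumption for illustration, where the decorated crossing is the dilation u = \delta^{x}_{\varepsilon} v = x + \varepsilon(v - x)), the statement that the opposite crossing is the inverse relation is just the computation below; graphically it is a Reidemeister II move.

```python
import numpy as np

def crossing(x, eps, v):
    # the decorated crossing: territory point v -> map pixel u = x + eps*(v - x)
    return x + eps * (v - x)

x = np.array([0.5, -1.0])
v = np.array([2.0, 3.0])
eps = 0.1

u = crossing(x, eps, v)            # make the map
v_back = crossing(x, 1 / eps, u)   # the opposite crossing: back to territory
assert np.allclose(v_back, v)
```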

Imagine now a complex diagram, with lots of crossings decorated by various scale parameters, and segments decorated with points from a space X which is seen both as territory (to explore) and map (of it).

In such a diagram the convention map-territory can be only local, around each crossing.

There is though a diagram which could unambiguously serve as a symbol for “the place (near) the point x, at scale epsilon”:

In this diagram, all crossings which are not decorated have “epsilon” as a decoration, but this decoration can be unambiguously placed near the decoration “x” of the closed arc. Such a diagram will bear the name “infinitesimal place (or chora) x at scale epsilon”.