Tag Archives: sub-riemannian geometry

How non-commutative geometry does not work well when applied to non-commutative analysis

I have expressed several times the belief that sub-riemannian geometry is an example of a mathematically new phenomenon, which I call “non-commutative analysis”. It has repeatedly happened that apparently general results simply don’t work well when applied to sub-riemannian geometry. This “strange” (not for me) phenomenon leads to negative statements, like rigidity results (Mostow, Margulis) and non-rectifiability results (for example the failure of the theory of metric currents for Carnot groups). And now, to this list adds the following, from arXiv:1404.5494 [math.OA]:

“the unexpected result that the theory of spectral triples does not apply to the Carnot manifolds in the way one would expect. [p. 11] ”

i.e.

“We will prove in this thesis that any horizontal Dirac operator on an arbitrary Carnot manifold cannot be hypoelliptic. This is a big difference to the classical case, where any Dirac operator is elliptic. [p. 12]”

It appears that the author reduces the problem to the case of Heisenberg groups. There is then a way out, namely to use

R. Beals, P.C. Greiner, Calculus on Heisenberg manifolds, Princeton University Press, 1988

which gives something resembling spectral triples, but still not everything works:

“and show how hypoelliptic Heisenberg pseudodifferential operators furnishing a spectral triple and detecting in addition the Hausdorff dimension of the Heisenberg manifold can be constructed. We will suggest a few concrete operators, but it remains unclear whether one can detect or at least estimate the Carnot-Caratheodory metric from them. [p. 12]”

______________________________

This seems to be an excellent article; more than that, because it is a PhD dissertation, many things are explained clearly.

I am not surprised at all by this; it just means that, as in the case of metric currents, there is an ingredient in the theory of spectral triples which introduces some commutativity through the backdoor, and this then interferes with the non-commutative analysis (or calculus).

Instead, I am more convinced than ever that the minimal (!) description of sub-riemannian manifolds, as models of non-commutative analysis, is given by dilation structures, explained most recently in arXiv:1206.3093 [math.MG].

A corollary of this: sub-riemannian geometry (i.e. the non-commutative analysis of dilation structures) is more non-commutative than non-commutative geometry.

I’m waiting for a negative result concerning the application of quantum groups to sub-riemannian geometry.

__________________________________________

Sometimes an anonymous review is “a tale told by an idiot …”

… “full of sound and fury, signifying nothing.” And the editor believes it, even if it is self-contradictory, after sitting on the article for half a year.

There are two problems:

  • the problem of time: you write a long and dense article, which may be hard to review, and the referee, instead of declining to review it, keeps it until the editor presses him to write a review; then he writes a fast, crappy report, much below the quality required by the work.
  • the problem of communication: there is no two-way communication with the author. After waiting a considerable amount of time, the author has nothing left to do but re-submit the article to another journal.

Both problems could be easily solved by open peer-review. See Open peer-review as a service.

The referee can well be anonymous, if he wishes, but a dialogue with the author and, more importantly, with other participants could only improve the quality of the review (and, as a consequence, the quality of the article).

I reproduce below such a review, with comments. It is about the article “Sub-riemannian geometry from intrinsic viewpoint” arXiv:1206.3093. You don’t need to read it, except maybe the title, abstract and contents pages, which I reproduce here:

Sub-riemannian geometry from intrinsic viewpoint
Marius Buliga
Institute of Mathematics, Romanian Academy
P.O. BOX 1-764, RO 014700
Bucuresti, Romania
Marius.Buliga@imar.ro
This version: 14.06.2012

Abstract

Gromov proposed to extract the (differential) geometric content of a sub-riemannian space exclusively from its Carnot-Caratheodory distance. One of the most striking features of a regular sub-riemannian space is that it has at any point a metric tangent space with the algebraic structure of a Carnot group, hence a homogeneous Lie group. Siebert characterizes homogeneous Lie groups as locally compact groups admitting a contracting and continuous one-parameter group of automorphisms. Siebert result has not a metric character.
In these notes I show that sub-riemannian geometry may be described by about 12 axioms, without using any a priori given differential structure, but using dilation structures instead.
Dilation structures bring forth the other intrinsic ingredient, namely the dilations, thus blending Gromov metric point of view with Siebert algebraic one.
MSC2000: 51K10, 53C17, 53C23

1 Introduction       2
2 Metric spaces, groupoids, norms    4
2.1 Normed groups and normed groupoids      5
2.2 Gromov-Hausdorff distance     7
2.3 Length in metric spaces       8
2.4 Metric profiles. Metric tangent space      10
2.5 Curvdimension and curvature     12

3 Groups with dilations      13
3.1 Conical groups     14
3.2 Carnot groups     14
3.3 Contractible groups   15

4 Dilation structures  16
4.1 Normed groupoids with dilations     16
4.2 Dilation structures, definition    18

5 Examples of dilation structures 20
5.1 Snowflakes, nonstandard dilations in the plane    20
5.2 Normed groups with dilations    21
5.3 Riemannian manifolds    22

6 Length dilation structures 22
7 Properties of dilation structures    24
7.1 Metric profiles associated with dilation structures    24
7.2 The tangent bundle of a dilation structure    26
7.3 Differentiability with respect to a pair of dilation structures    29
7.4 Equivalent dilation structures     30
7.5 Distribution of a dilation structure     31

8 Supplementary properties of dilation structures 32
8.1 The Radon-Nikodym property    32
8.2 Radon-Nikodym property, representation of length, distributions     33
8.3 Tempered dilation structures    34
9 Dilation structures on sub-riemannian manifolds   37
9.1 Sub-riemannian manifolds    37
9.2 Sub-riemannian dilation structures associated to normal frames     38

 

10 Coherent projections: a dilation structure looks down on another   41
10.1 Coherent projections     42
10.2 Length functionals associated to coherent projections    44
10.3 Conditions (A) and (B)     45

11 Distributions in sub-riemannian spaces as coherent projections    45
12 An intrinsic description of sub-riemannian geometry    47
12.1 The generalized Chow condition     47
12.2 The candidate tangent space    50
12.3 Coherent projections induce length dilation structures  53

Now the report:

 

Referee report for the paper


 Sub-riemannian geometry from intrinsic viewpoint

Marius Buliga
for

New York Journal of Mathematics (NYJM).

One of the important theorems in sub-riemannian geometry is a result
credited to Mitchell that says that Gromov-Hausdorff metric tangents
to sub-riemannian manifolds are Carnot groups.
For riemannian manifolds, this result is an exercise, while for
sub-riemannian manifolds it is quite complicate. The only known
strategy is to define special coordinates and using them define some
approximate dilations. With this dilations, the rest of the argument
becomes very easy.
Initially, Buliga isolates the properties required for such dilations
and considers
more general settings (groupoids instead of metric spaces).
However, all the theory is discussed for metric spaces, and the
groupoids leave only confusion to the reader.
His claims are that
1) when this dilations are present, then the tangents are Carnot groups,
[Rmk. The dilations are assumed to satisfy 5 very strong conditions,
e.g., A3 says that the tangent exists – A4 says that the tangent has a
multiplication law.]
2) the only such dilation structures (with other extra assumptios) are
the riemannian manifolds.
He misses to discuss the most important part of the theory:
sub-riemannian manifolds admit such dilations (or, equivalently,
normal frames).
His exposition is not educational and is not a simplification of the
paper by Mitchell (nor of the one by Bellaiche).




The paper is a cut-and-past process from previous papers of the
author. The paper does not seem reorganised at all. It is not
consistent, full of typos, English mistakes and incomplete sentences.
The referee (who is not a spellchecker nor a proofread) thinks that
the author himself could spot plenty of things to fix, just by reading
the paper (below there are some important things that needs to be
fixed).


The paper contains 53 definitions – fifty-three!.
There are 15 Theorems (6 of which are already present in other papers
by the author of by other people. In particular 3 of the theorems are
already present in [4].)
The 27 proofs are not clear, incomplete, or totally obvious.

The author consider thm 8.10 as the main result. However, after
unwrapping the definitions, the statement is: a length space that is
locally bi-lipschitz to a commutative Lie group is locally
bi-lipschitz to a Riemannian manifold. (The proof refers to Cor 8.9,
which I was unable to judge, since it seems that the definition of
“tempered” obviously implies “length” and “locally bi-lipschitz to the
tangent”)


The author confuses the reader with long definitions, which seems very
general, but are only satisfied by sub-riemannian manifolds.
The definitions are so complex that the results are tautologies, after
having understood the assumptions. Indeed, the definitions are as long
as the proofs. Just two examples: thm 7.1 is a consequence of def 4.4,
thm 9.9 is a consequence of def 9.7.

Some objects/notions are not defined or are defined many pages after
they are used.



Small remarks for the author:

def 2.21 is a little o or big O?


page 13 line 2. Which your convention, the curvdim of a come in infinite.
page 13 line -2. an N is missing in the norm


page 16 line 2, what is \nu?

prop 4.2 What do you mean with separable norm?

page 18 there are a couple of “dif” which should be fixed.
in the formula before (15), A should be [0,A]

pag 19 A4. there are uncompleted sentences.

Regarding the line before thm 7.1, I don’t agree that the next theorem
is a generalisation of Mitchell’s, since the core of his thm is the
existence of dilation structures.

Prop 7.2 What is a \Gamma -irq

Prop 8.2 what is a geodesic spray?

Beginning of sec 8.3 This is a which -> This is a

Beginning of sec 9 contains a lot of English mistakes.

Beginning of sec 9.1 “we shall suppose that the dimension of the
distribution is globally constant..” is not needed since the manifold
is connected

thm 9.2 rank -> step

In the second sentence of def 9.4, the existence of the orthonormal
frame is automatic.

 

Now, besides some of the typos, the report is simply crap:

  • the referee complains that I’m doing it for groupoids, then says that what I am doing applies only to sub-riemannian spaces.
  • before that, he says that in fact I’m doing it only for riemannian spaces.
  • I never claim that there is a main result in this long article, but somehow the referee singles out one of the theorems as the main result, while I use it only as an example showing what the theory says in the trivial case, that of riemannian manifolds.
  • the referee says that I don’t treat the sub-riemannian case. He should decide which of his various claims is true; just take a look at the contents page to form an opinion.
  • I never claim what the referee thinks are my two claims, both of which are of course wrong.
  • in his claim 1), he does not understand that the problem is not the definition of an operation, but the proof that the operation is a Carnot group operation (I pass over the whole story that in fact the operation is a conical group operation; for regular sub-riemannian manifolds this translates into a Carnot group operation by using Siebert’s result, too subtle for the referee).
  • his claim 2) is self-contradictory just by reading the report alone.
  • 53 definitions (it is a very dense course), 15 theorems and 27 proofs, which are, with no argument given, “not clear, incomplete, or totally obvious”.
  • but he goes on hunting typos; thanks, that’s essential to show that he did read the article.

There is a part of the text which is especially perverse: “The paper is a cut-and-past process from previous papers of the author.”

Mind you, this is a course based on several papers, most of them unpublished! Moreover, every contribution from previous papers is mentioned.

Tell me what to do with these papers: being unpublished, can I use them for a paper submitted for publication? Or can they be safely ignored because they are not published? Hmm.

This shows me that the referee knows what I am doing, but he does not like it.

Fortunately, all the papers, published or not, are available on the arXiv with the submission dates and versions.

 

______________________________________

A less understood problem in sub-riemannian geometry (I)

A complete, locally compact riemannian manifold is a length metric space by the Hopf-Rinow theorem. The problem of intrinsic characterization of riemannian spaces asks for the recovery of the manifold structure and of the riemannian metric from the distance function associated to the length functional.

For 2-dim riemannian manifolds the problem has been solved by A. Wald in 1935. In 1948 A.D. Alexandrov introduces his famous curvature (which uses comparison triangles) and proves that, under mild smoothness conditions on this curvature, one can recover the differential structure and the metric of the 2-dim riemannian manifold. In 1982 Alexandrov proposes as a conjecture that a characterization of a riemannian manifold (of any dimension) is possible in terms of metric (sectional) curvatures (of the type introduced by Alexandrov) and weak smoothness assumptions formulated in a metric way (as for example Hölder smoothness).

The problem has been solved by Nikolaev in 1998, in the paper A metric characterization of Riemannian spaces, Siberian Adv. Math. 9 (1999), 1-58. The solution of Nikolaev can be summarized like this: he starts with a locally compact length metric space (and some technical details), then

  • he constructs a (family of) intrinsically defined tangent bundle(s) of the metric space, by using a generalization of the cosine formula for estimating a kind of distance between two curves emanating from different points. This leads him to a generalization of the tangent bundle of a riemannian manifold endowed with the canonical Sasaki metric.
  • He defines a notion of sectional curvature at a point of the metric space, as a limit of a function of nondegenerate geodesic triangles, the limit being taken as these triangles converge (in a precise sense) to the point.
  • The sectional curvature function thus constructed is supposed to satisfy a Hölder continuity condition (thus a regularity formulated in metric terms).
  • He then proves that the metric space is isometric to (the metric space associated to) a riemannian manifold of a precise (weak) regularity (related to the regularity of the sectional curvature function).

Sub-riemannian spaces are length metric spaces as well. Any riemannian space is a sub-riemannian one. It is not clear at first sight why the characterization of riemannian spaces does not extend to sub-riemannian ones. In fact, there are two problematic steps in such a program of extending Nikolaev's result to sub-riemannian spaces:

  • the cosine formula, as well as the Sasaki metric on the tangent bundle, have no correspondent in sub-riemannian geometry (because there is, basically, no statement canonically corresponding to the Pythagorean theorem);
  • the sectional curvature at a point cannot be introduced by means of comparison triangles, because sub-riemannian spaces do not behave well with respect to this triangle comparison idea, as proved by Scott Pauls.

In 1996 M. Gromov formulates the problem of intrinsic characterization of sub-riemannian spaces.  He takes the Carnot-Caratheodory (or CC) distance (this is the name of the distance constructed on a sub-riemannian manifold from the differential geometric data we have, which generalizes the construction of the riemannian distance from the riemannian metric) as the only intrinsic object of a sub-riemannian space. Indeed, in the linked article, section 0.2.B. he writes:

If we live inside a Carnot-Caratheodory metric space V we may know nothing whatsoever about the (external) infinitesimal structures (i.e. the smooth structure on V, the subbundle H \subset T(V) and the metric g on H) which were involved in the construction of the CC metric.
He then formulates the goal:
Develop a sufficiently rich and robust internal CC language which would enable us to capture the essential external characteristics of our CC spaces.
He proposes as an example to recognize the rank of the horizontal distribution, but in my opinion this is, say, something much less essential than to “recognize” the “differential structure”, in the sense proposed here as the equivalence class under local equivalence of dilation structures.
As in Nikolaev solution for the riemannian case, the first step towards the goal is to have a well defined, intrinsic, notion of tangent bundle. The second step would be to be able to go to higher order approximations, eventually towards a curvature.
My solution is to base everything on dilation structures. The solution is not “pure”, because it introduces another ingredient, besides the CC distance: the field of dilations. However, I believe that it is illusory to think that, for the general sub-riemannian case, we may be able to get a “sufficiently rich and robust” language without such an extra ingredient. As an example, even the best known fact, namely that the metric tangent spaces of a (regular) sub-riemannian manifold are Carnot groups, was previously not known to be an intrinsic fact. Let me explain: all proofs, excepting the one using dilation structures, use non-intrinsic ingredients, like the differential calculus on the differential manifold which enters in the construction of the CC distance. Therefore, it was not known (or even not understood as a problem) whether this result is intrinsic or an artifact of the proof method.
Well, it turns out that it is not an artifact, if we accept dilation structures as intrinsic.
There is a bigger question lingering behind, once we are ready to think about intrinsic properties of sub-riemannian spaces:  what is a sub-riemannian space? The construction of such spaces uses notions and results which are by no means intrinsic (again differential structures, horizontal bundles, and so on).
Therefore I understand Gromov’s stated goal as:
Give a minimal, axiomatic, description of sub-riemannian spaces.
[Adapted from the course notes Sub-riemannian geometry from intrinsic viewpoint.]

Dictionary from emergent algebra to graphic lambda calculus (I)

Because I am going to explore in future posts the emergent algebra sector, I think it is good to know where we stand with using graphic lambda calculus for describing proofs in emergent algebras as computations. In the big map of research paths, this corresponds to the black path linking the “Emergent algebra sector” with “Emergent algebras”.

A dictionary seems a good way to start this discussion.

Let’s see, there are three formalisms there:

  • in the first paper on spaces with dilations, Dilatation structures I. Fundamentals, arXiv:math/0608536, section 4 introduces a formalism using binary decorated trees in order to ease the manipulation of dilation structures,
  • emergent algebras are an abstraction of dilation structures, in the sense that they don’t need a metric space to live on. The first paper on the subject is Emergent algebras arXiv:0907.1520 (see also Braided spaces with dilations and sub-riemannian symmetric spaces arXiv:1005.5031 for explanations of the connection between dilation structures and emergent algebras, as well as for braided symmetric spaces, sub-riemannian symmetric spaces and conical groups, items you can see on the big map mentioned before). The theory of emergent algebras is a mixture of an algebraic theory with an important part of epsilon-delta analysis. One of the goals of graphic lambda calculus is to replace this epsilon-delta part by a computational part.
  • graphic lambda calculus, extensively described here, has an emergent algebra sector (see arXiv:1305.5786, and also check out the series Emergent algebras as combinatory logic part I, part II, part III, part IV). This is not an algebraic theory, but a formalism which contains lambda calculus.

The first figure describes a dictionary of objects which appear in these three formalisms. In the first column you find objects as they appear in dilation structures – emergent algebra formalism. In the second column you find the corresponding object in the binary trees formalism. In the third column there are the respective objects as they appear in the emergent algebra sector of the graphic lambda calculus.

[Figure: sum_basic – the dictionary of objects in the three formalisms]

Some comments: the first two rows  are about an object called

  • “dilation (of coefficient \varepsilon, with \varepsilon \in \Gamma, a commutative group)” in dilation structures, which is an operation in emergent algebras, indexed by \varepsilon (the second row is about dilations of coefficient \varepsilon^{-1}),
  • it is an elementary binary tree with the node decorated by white (for \varepsilon) or black (for \varepsilon^{-1})
  • it is one of the elementary gates in graphic lambda calculus.

The third row is about the “approximate sum” in dilation structures, which is a composite operation in emergent algebras, which is a certain graph in graphic lambda calculus.

The fourth row is about the “approximate difference” and the fifth about the “approximate inverse”.

For the geometric meaning of these objects see the series on  The origin of emergent algebras part I, part II, part III,   or go directly and read arXiv:1304.3694 .

What is different between these rows?

  • In the first row we have an algebra structure based on identities between composites of operations defined on a set.
  • In the second row we have trees with leaves decorated by labels from an alphabet (of formal variables) or terms constructed recursively from those.
  • In the third row we have graphs with no variable names. (Recall that in graphic lambda calculus there are no variable names. Everything written in red can be safely erased, it is put there only for the convenience of the reader.)

Let’s see now the dictionary of identities/moves.

[Figure: moves_basic – the dictionary of identities/moves]

The most important comment is that identities in emergent algebras become moves in the other two formalisms. A succession of moves is in fact a proof for an identity.

The names of the identities or moves have been discussed in many places; you see there names like “Reidemeister move”, which show relations to knot diagrams, etc. See this post for the names of the moves and relations to knot diagrams, as well as section 6 from arXiv:1305.5786.

Let's read the first column: it says that from an algebraic viewpoint an emergent algebra is a one-parameter family (indexed by \varepsilon \in \Gamma) of idempotent right quasigroups. From the geometric point of view of dilation structures, it is a formalisation of the properties expected from an object called “dilation”, namely that it preserves the base-point (“x” in the figure), that a composition of dilations of coefficients \varepsilon, \mu, with the same base-point, is again a dilation, of coefficient \varepsilon \mu, etc.
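As a sanity check (a small sketch of mine, not part of the formalisms above), these properties can be tested in the simplest model of a dilation, \delta^{x}_{\varepsilon} y = x + \varepsilon(y - x) in \mathbb{R}^{n}:

```python
# The simplest model of an emergent algebra / dilation structure: R^n with
# delta^x_eps y = x + eps*(y - x).  Checked below: base-point preservation,
# the composition (one-parameter group) property of dilations with the same
# base-point, and the fact that the coefficient 1 acts trivially.
import numpy as np

def delta(x, eps, y):
    return x + eps * (y - x)

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)
eps, mu = 0.3, 0.7

assert np.allclose(delta(x, eps, x), x)                                    # delta^x_eps x = x
assert np.allclose(delta(x, eps, delta(x, mu, y)), delta(x, eps * mu, y))  # delta^x_eps delta^x_mu = delta^x_{eps*mu}
assert np.allclose(delta(x, 1.0, y), y)                                    # delta^x_1 = id
```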

In the second column we see two moves, R1 and R2, which can be applied anywhere in a decorated binary tree, as indicated.

In the third column we see that these moves are among the moves from graphic lambda calculus, namely that R1 is in fact related to the oriented Reidemeister move R1a, so it has the same name.

The fact that the idempotent right quasigroup indexed by the neutral element of \Gamma, denoted by 1, is trivial, has no correspondent for binary trees, but it appears as the move (ext 2) in graphic lambda calculus. Through the intermediary of this move appears the univalent termination gate.

These are the common moves. To these we add, for the emergent algebra sector, the R1b move, the local fan-out moves and some pruning moves. There is also the global fan-out move, which is needed, but we are going to replace it by a local move which has the funny name of “linearity of fan-out”; but that's for later.

The local fan-out moves and the pruning moves are needed for the emergent algebra sector but not for the binary trees or for emergent algebras. They are the price we have to pay for eliminating variable names. See the algorithm for producing graphs from lambda calculus terms for the way they are used to solve the same problem for untyped lambda calculus. (However, the emergent algebra sector is to be compared not with the lambda calculus sector, but with the combinatory logic sector; more about this in a further post.)

We don't need all the pruning moves, but only one which, together with the local fan-out moves, forms a family which could be aptly called:

[Figure: co_mono_y]

(notice I consider a reversible local pruning)

Grouping moves like this makes a nice symmetry with the fact that \Gamma is a commutative group, as remarked here.

As concerns the R1b move, which is the one from the next figure, I shall use it only if really needed (for the moment I don't need it). It is needed for the sector of knot diagrams made of emergent algebra gates, but it is not yet clear to me whether we need it for the emergent algebra sector.

[Figure: r1bmove – the R1b move]

However, there is a correspondent of this move for emergent algebras. Indeed, recall that a right quasigroup is a quasigroup if the equation x \circ a = b has a unique solution. If our emergent algebra is in fact a (family of) quasigroup(s), as happens for conical groups or for symmetric spaces in the sense of Loos (explained in arXiv:1005.5031), then in particular it follows that the equation x \circ_{\varepsilon} a = a has only the solution x = a (for \varepsilon \not = 1). This last statement has the R1b move as a correspondent in the realm of the emergent algebra sector.

Until now we have only local moves in the emergent algebra sector. We shall see that we need a global move (the global fan-out) in order to prove that the dictionary works, i.e. for proving the fundamental identities of emergent algebras within the graphic lambda calculus formalism. The goal will be to replace the global fan-out move by a new local move (i.e. one which is not a consequence of the existing moves of graphic lambda calculus). This new move will turn out to be a familiar sight, because it is related to the way we see linearity in emergent algebras.

Curvature and halfbrackets, part III

I continue from “Curvature and halfbrackets, part II“.  This post is dedicated to the application of the previously introduced notions to the case of a sub-riemannian Lie group.

_______________

1. I start with the definition of a sub-riemannian Lie group. If you look in the literature, the first reference to “sub-riemannian Lie groups” which I am aware of is the series Sub-riemannian geometry and Lie groups arXiv:math/0210189, part II arXiv:math/0307342, part III arXiv:math/0407099. However, that work predates the introduction of dilation structures, therefore there is a need to properly define this object within the current state of the theory.

Definition 1. A sub-riemannian Lie group is a locally compact topological group G with the following supplementary structure:

  • together with the dilation structure coming from its one-parameter groups (by the Montgomery-Zippin construction), it has a group norm which induces a tempered dilation structure,
  • it has a left-invariant dilation structure (with dilations \delta^{x}_{\varepsilon} y = x \delta_{\varepsilon}(x^{-1}y) and group norm denoted by \| x \|) which, paired with the tempered dilation structure mentioned previously, satisfies the hypotheses of Theorem 12.9 from “Sub-riemannian geometry from intrinsic viewpoint”, arXiv:1206.3093.

Remarks:

  1. the tempered group norm is not assumed to come from a left-invariant riemannian distance on the group. For this reason, some people use the name sub-finsler arXiv:1204.1613 instead of sub-riemannian, but I believe this is not a serious distinction, because the structure of a scalar product which induces the distance is simply not needed for understanding sub-riemannian Lie groups.
  2. by Theorem 12.9, it follows that the left-invariant field of dilations induces a length dilation structure. I shall use this further. Length dilation structures are maybe a more useful object than plain dilation structures, because they explain how the length functional behaves at different scales, which is much more detailed information about the microscopic structure of a length metric space than just the information about how the distance behaves at different scales.

This definition looks a bit mysterious, unless you read the course notes cited inside the definition. Probably, when I shall find the interest to pursue it, it would be really useful to just apply, step by step, the constructions from arXiv:1206.3093 to sub-riemannian Lie groups.

__________________

2. With the notations from the last post, I want to compute the quantities A, B, C. We already know that B is related to the curvature of G with respect to its sub-riemannian (sub-finsler, if you like it more) distance, as introduced previously via metric profiles. We also know that B is controlled by A and C. But let's see the expressions of these three quantities for sub-riemannian Lie groups.

I denote by d(u,v) the left invariant sub-riemannian distance, therefore we have d(u,v) = \| u^{-1}v\|.

Now, \rho_{\varepsilon}(x,u) = \| x^{-1} u \|_{\varepsilon} , where \varepsilon \| u \|_{\varepsilon} = \| \delta_{\varepsilon} u \|  by definition.  Notice also that \Delta^{x}_{\varepsilon}(u,v) = (\delta^{x}_{\varepsilon} u ) ((u^{-1} x) *_{\varepsilon} (x^{-1} v)), where  u *_{\varepsilon} v is the deformed group operation at scale \varepsilon, i.e. it is defined by the relation:

\delta_{\varepsilon} (u *_{\varepsilon} v) = (\delta_{\varepsilon} u) (\delta_{\varepsilon} v)

With all this, it follows that:

A_{\varepsilon}(x,u) = \rho_{\varepsilon}(x,u) - d^{x}(x,u) = \|x^{-1} u \|_{\varepsilon} - \| x^{-1} u \|_{0}

A_{\varepsilon}(\delta^{x}_{\varepsilon} u, \Delta^{x}_{\varepsilon}(u,v)) = \| (u^{-1} x) *_{\varepsilon} (x^{-1} v) \|_{\varepsilon} - \| (u^{-1} x) *_{\varepsilon} (x^{-1} v)\|_{0}.

A similar computation leads us to the expression for the curvature related quantity

B_{\varepsilon}(x,u,v) = d^{x}_{\varepsilon}(u,v) - d^{x}(u,v) = \| (u^{-1}x) *_{\varepsilon} (x^{-1} v)\|_{\varepsilon} - \| (u^{-1}x) *_{0} (x^{-1}v)\|_{0}.

Finally,

C_{\varepsilon}(x,u,v) = \|(u^{-1} x) *_{\varepsilon} (x^{-1} v)\|_{0} - \|(u^{-1}x) *_{0} (x^{-1}v)\|_{0}.

This last quantity is controlled by a halfbracket, via a norm inequality.

The expressions of A, B, C make it transparent that the curvature-related B is the sum of A and C. In the next post I shall use the length dilation structure of the sub-riemannian Lie group in order to show that A is controlled by C, which in turn is controlled by a norm of a halfbracket. Then I shall apply all this to SO(3), as an example.
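To see these formulas at work, here is a minimal numerical sketch (mine, for illustration only, not a faithful model of the sub-riemannian data): the group is the Heisenberg group in exponential coordinates, the dilations are the intrinsic graded ones, and, since the genuine sub-riemannian group norm has no simple closed form, the Euclidean norm of the coordinates is used as a stand-in. With these toy choices C vanishes (the graded dilations are group automorphisms, so *_{\varepsilon} does not actually deform the operation), but the decomposition B_{\varepsilon} = A_{\varepsilon}(dif_{\varepsilon}) + C_{\varepsilon} from part II can be checked numerically.

```python
# Sketch of the quantities A, B, C on the Heisenberg group (exponential
# coordinates (a, b, c)), with the intrinsic graded dilations and, as a
# stand-in group norm, the Euclidean norm of the coordinates.
import numpy as np

def mul(u, v):
    # Heisenberg group operation in exponential coordinates
    a, b, c = u; p, q, r = v
    return np.array([a + p, b + q, c + r + 0.5 * (a * q - b * p)])

def inv(u):
    # group inverse; in exponential coordinates it is -u
    return -np.asarray(u, dtype=float)

def dil(eps, u):
    # intrinsic (graded) dilation delta_eps
    a, b, c = u
    return np.array([eps * a, eps * b, eps ** 2 * c])

def star(eps, u, v):
    # deformed operation, defined by delta_eps(u *_eps v) = (delta_eps u)(delta_eps v)
    return dil(1.0 / eps, mul(dil(eps, u), dil(eps, v)))

def norm_eps(eps, u):
    # ||u||_eps, defined by eps * ||u||_eps = ||delta_eps u||
    return float(np.linalg.norm(dil(eps, u))) / eps

def norm_0(u):
    # ||u||_0, approximated by taking a very small eps
    return norm_eps(1e-9, u)

def w_eps(eps, x, u, v):
    # (u^{-1} x) *_eps (x^{-1} v)
    return star(eps, mul(inv(u), x), mul(inv(x), v))

def A(eps, x, u):
    d = mul(inv(x), u)
    return norm_eps(eps, d) - norm_0(d)

def A_on_dif(eps, x, u, v):
    # A_eps evaluated on dif_eps(x, u, v), as in the displayed formula
    w = w_eps(eps, x, u, v)
    return norm_eps(eps, w) - norm_0(w)

def B(eps, x, u, v):
    return norm_eps(eps, w_eps(eps, x, u, v)) - norm_0(w_eps(1e-9, x, u, v))

def C(eps, x, u, v):
    return norm_0(w_eps(eps, x, u, v)) - norm_0(w_eps(1e-9, x, u, v))

x = np.array([0., 0., 0.]); u = np.array([1., 0., 0.]); v = np.array([0., 1., 0.])
for eps in (0.5, 0.1, 0.01):
    # B = A(dif) + C holds exactly; here C = 0 because the graded dilations
    # are group automorphisms
    print(eps, A(eps, x, u), B(eps, x, u, v), A_on_dif(eps, x, u, v) + C(eps, x, u, v))
```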

Curvature and halfbrackets, part II

I continue from “Curvature and halfbrackets, part I“, with the same notations and background.

In a metric space with dilations (X,d,\delta), there are three quantities which will play a role further.

1. The first quantity is related to the “norm” function defined as

\rho_{\varepsilon}(x,u) = d^{x}_{\varepsilon}(x,u)

Notice that this is not a distance function; instead, it is more like a norm of u with respect to the basepoint x, at scale \varepsilon. Together with the field of dilations, this “norm” function contains all the information about the local and infinitesimal behaviour of the distance d. We can see this from the fact that we can recover the re-scaled distance d^{x}_{\varepsilon} from this “norm”, with the help of the approximate difference (for this notion see the definition of the approximate difference in terms of emergent algebras on this blog, or go to point 3 of the post The origin of emergent algebras (part III)):

\rho_{\varepsilon}(\delta^{x}_{\varepsilon} u , \Delta^{x}_{\varepsilon}(u,v)) = d^{x}_{\varepsilon}(u,v)

(proof left to the interested reader; a short verification is also given after the list below). This identity shows that the uniform convergence of (x,u,v) \mapsto d^{x}_{\varepsilon}(u,v) to (x,u,v) \mapsto d^{x}(u,v), as \varepsilon goes to 0, is a consequence of the following pair of uniform convergences:

  • that of the function (x,u) \mapsto \rho_{\varepsilon}(x,u) which converges to (x,u) \mapsto d^{x}(x,u)
  • that of the pair (dilation, approximate difference) (x,u,v) \mapsto (\delta^{x}_{\varepsilon} u , \Delta^{x}_{\varepsilon}(u,v)) to (x,u,v) \mapsto (x, \Delta^{x}(u,v)); see how this pair appears from the normed groupoid formalism, for example by reading the post The origin of emergent algebras (part III).
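As for the identity left to the interested reader, here is a short verification, granting that d^{x}_{\varepsilon}(u,v) = \frac{1}{\varepsilon} d(\delta^{x}_{\varepsilon} u, \delta^{x}_{\varepsilon} v) is the re-scaled distance, that \Delta^{x}_{\varepsilon}(u,v) = \delta^{\delta^{x}_{\varepsilon} u}_{\varepsilon^{-1}} \delta^{x}_{\varepsilon} v is the approximate difference, and that dilations with the same base-point form a one-parameter group:

\rho_{\varepsilon}(\delta^{x}_{\varepsilon} u, \Delta^{x}_{\varepsilon}(u,v)) = \frac{1}{\varepsilon} d \left( \delta^{x}_{\varepsilon} u , \delta^{\delta^{x}_{\varepsilon} u}_{\varepsilon} \Delta^{x}_{\varepsilon}(u,v) \right) = \frac{1}{\varepsilon} d \left( \delta^{x}_{\varepsilon} u , \delta^{x}_{\varepsilon} v \right) = d^{x}_{\varepsilon}(u,v)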

With this definition of the “norm” function, I can now introduce the first quantity of interest, which measures the difference between the “norm” function at scale \varepsilon and the “norm” function at scale 0:

A_{\varepsilon}(x,u) = \rho_{\varepsilon}(x,u) - d^{x}(x,u)

The interpretation of this quantity is easy in the particular case of a riemannian space with dilations defined by the geodesic exponentials. In this particular case

A_{\varepsilon}(x,u) = 0

because the “norm” function \rho_{\varepsilon}(x,u) equals the distance d(x,u) (due to the definition of the dilations in terms of the geodesic exponential).

In more general situations, for example in the case of a regular sub-riemannian space, we can't define dilations in terms of geodesic exponentials (even if we may have geodesic exponentials at our disposal). The reason is that the geodesic exponential of a regular sub-riemannian manifold is not intrinsically defined as a function of the tangent of the geodesic at its starting point. That is because geodesics in regular sub-riemannian manifolds (at least those which are classically smooth, i.e. smooth with respect to the differential manifold structure) are bound to have tangents only in the horizontal directions.

As another example, think about a sub-riemannian Lie group. Here, we may define a left-invariant dilation structure with the help of the Lie group exponential. In this case the quantity A_{\varepsilon}(x,u) is certainly not equal to 0, except in very particular cases, such as a compact riemannian Lie group with bi-invariant distance, where the geodesic and Lie group exponentials coincide.

_________________

2.   The second quantity is the one which is most interesting for defining (sectional like) curvature, let’s call it

B_{\varepsilon}(x,u,v) = d^{x}_{\varepsilon}(u,v) - d^{x}(u,v).

_________________

3. Finally, the third quantity of interest is a kind of measure of the convergence of (x,u,v) \mapsto (\delta^{x}_{\varepsilon} u , \Delta^{x}_{\varepsilon}(u,v)) to (x,u,v) \mapsto (x, \Delta^{x}(u,v)), but measured with the norms from the tangent spaces. Now, a bit of notation:

dif_{\varepsilon}(x,u,v) = (\delta^{x}_{\varepsilon} u , \Delta^{x}_{\varepsilon}(u,v)) for any three points x, u, v,

dif_{0}(x,u,v) = (x, \Delta^{x}(u,v))  for any three points x, u, v  and

g(v,w) = d^{v}(v,w) for any two points v, w.

With these notations I introduce the third quantity:

C_{\varepsilon}(x,u,v) = g( dif_{\varepsilon}(x,u,v) ) - g( dif_{0}(x,u,v) ).

_________________

The relation between these three quantities is the following:

Proposition.  B_{\varepsilon}(x,u,v) = A_{\varepsilon}(dif_{\varepsilon}(x,u,v)) + C_{\varepsilon}(x,u,v).
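A one-line verification (granting the identity \rho_{\varepsilon}(dif_{\varepsilon}(x,u,v)) = d^{x}_{\varepsilon}(u,v) from point 1, and that, by passing to the limit in it, g(dif_{0}(x,u,v)) = d^{x}(x, \Delta^{x}(u,v)) = d^{x}(u,v)):

B_{\varepsilon}(x,u,v) = \rho_{\varepsilon}(dif_{\varepsilon}(x,u,v)) - g(dif_{0}(x,u,v)) = \left[ \rho_{\varepsilon}(dif_{\varepsilon}(x,u,v)) - g(dif_{\varepsilon}(x,u,v)) \right] + \left[ g(dif_{\varepsilon}(x,u,v)) - g(dif_{0}(x,u,v)) \right] = A_{\varepsilon}(dif_{\varepsilon}(x,u,v)) + C_{\varepsilon}(x,u,v)

because g(v,w) = d^{v}(v,w) and A_{\varepsilon}(x,u) = \rho_{\varepsilon}(x,u) - d^{x}(x,u).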

_________________

Suppose that we know the following estimates:

A_{\varepsilon}(x,u) = \varepsilon^{\alpha} A(x,u) + higher order terms, with A(x, u) \not = 0 and \alpha > 0,

B_{\varepsilon}(x,u,v) = \varepsilon^{\beta} B(x,u,v) + higher order terms, with B(x,u,v) \not = 0 and \beta > 0,

C_{\varepsilon}(x,u,v) = \varepsilon^{\gamma} C(x,u,v) + higher order terms, with C(x,u,v) \not = 0 and \gamma > 0.

Lemma. Let  us sort in increasing order the list of the values \alpha, \beta, \gamma and denote the sorted list by a, b, c. Then a = b.

The proof is easy. The equality from the Proposition tells us that the absolute values of A_{\varepsilon}, B_{\varepsilon} and C_{\varepsilon} can be taken as the edges of a triangle. Suppose then that a < b, use the estimates from the hypothesis, divide by \varepsilon^{a} in one of the three triangle inequalities, then let \varepsilon go to 0 in order to arrive at a contradiction.
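To spell out one case: suppose, for instance, that \beta is strictly smaller than both \alpha and \gamma (and grant that the estimate for A may be used along the points dif_{\varepsilon}(x,u,v)). Dividing the triangle inequality |B_{\varepsilon}(x,u,v)| \leq |A_{\varepsilon}(dif_{\varepsilon}(x,u,v))| + |C_{\varepsilon}(x,u,v)| by \varepsilon^{\beta} and letting \varepsilon go to 0 gives

|B(x,u,v)| \leq \lim_{\varepsilon \rightarrow 0} \left( \varepsilon^{\alpha - \beta} O(1) + \varepsilon^{\gamma - \beta} O(1) \right) = 0

which contradicts B(x,u,v) \not = 0. The other cases are treated in the same way, so the smallest exponent cannot be strictly smaller than the other two, i.e. a = b.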

_________________

The moral of the lemma is that there are at most two different coefficients in the list \alpha, \beta, \gamma. The coefficient \beta is called “curvdimension”. In the next post I shall explain why, in the case of a sub-riemannian Lie group, the coefficient \gamma is related to the halfbracket. Moreover, we shall see that in the case of sub-riemannian Lie groups all three coefficients are equal, therefore the infinitesimal behaviour of the halfbracket determines the curvdimension.

Emergent algebras as combinatory logic (Part II)

This post continues Emergent algebras as combinatory logic (Part I).  My purpose is to introduce the calculus standing behind Theorem 1 from the mentioned post.

We have seen (Definition 2) that there are approximate sum and difference operations associated to an emergent algebra. Let me add to them a third operation, namely the approximate inverse. For clarity I repeat here the Definition 2, supplementing it with the definition of the approximate inverse. This gives:

Definition 2′.   For any \varepsilon \in \Gamma we give the following names to several combinations of operations of emergent algebras:

  • the approximate sum operation is \Sigma^{x}_{\varepsilon} (u,v) = x \bullet_{\varepsilon} ((x \circ_{\varepsilon} u) \circ_{\varepsilon} v),
  • the approximate difference operation is \Delta^{x}_{\varepsilon} (u,v) = (x \circ_{\varepsilon} u) \bullet_{\varepsilon} (x \circ_{\varepsilon} v)
  • the approximate inverse operation is inv^{x}_{\varepsilon} u = (x \circ_{\varepsilon} u) \bullet_{\varepsilon} x.

The justification for these names comes from the explanations given in the post “The origin of emergent algebras (part II)“, where I discussed the sketch of a solution to the question “What makes the (metric)  tangent space (to a sub-riemannian regular manifold) a group?”, given by Bellaiche in the last two sections of his article  The tangent space in sub-riemannian geometry, in the book Sub-riemannian geometry, eds. A. Bellaiche, J.-J. Risler, Progress in Mathematics 144, Birkhauser 1996. We have seen there that the group operation (the noncommutative,  in principle, addition of vectors) can be seen as the limit of compositions of intrinsic dilations, as \varepsilon goes to 0. It is important that this limit exists and that it is uniform, according to Gromov’s hint.

Well,  with the notation \delta^{x}_{\varepsilon} y = x \circ_{\varepsilon} y, \delta^{x}_{\varepsilon^{-1}} y = x \bullet_{\varepsilon} y, it becomes clear, for example, that the composition of intrinsic dilations described in the figure from the post “The origin of emergent algebras (part II)” is nothing but the approximate sum from Definition 2′. (This is to say that formally, if we replace the emergent algebra operations with the respective intrinsic dilations, then the approximate sum operation \Sigma^{x}_{\varepsilon}(y,z)  appears as the red point E from the mentioned  figure. It is still left to prove that intrinsic dilations from regular sub-riemannian spaces give rise to emergent algebras, this was done in arXiv:0708.4298.)
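As a concrete sanity check (a small sketch of mine), in the simplest model, \mathbb{R}^{n} with x \circ_{\varepsilon} y = x + \varepsilon(y - x) and x \bullet_{\varepsilon} y = x + (y - x)/\varepsilon, the approximate operations of Definition 2′ have closed forms and, as \varepsilon goes to 0, they converge to the affine operations based at x:

```python
# Approximate sum / difference / inverse of Definition 2', in the model case
# R^n with x o_eps y = x + eps*(y - x) and x bullet_eps y = x + (y - x)/eps.
# As eps -> 0 they converge to  u + v - x,  x + (v - u)  and  2x - u.
import numpy as np

def o(x, eps, y):              # x o_eps y   (dilation of coefficient eps)
    return x + eps * (y - x)

def bullet(x, eps, y):         # x bullet_eps y   (dilation of coefficient 1/eps)
    return x + (y - x) / eps

def approx_sum(eps, x, u, v):  # Sigma^x_eps(u, v) = x bullet_eps ((x o_eps u) o_eps v)
    return bullet(x, eps, o(o(x, eps, u), eps, v))

def approx_diff(eps, x, u, v): # Delta^x_eps(u, v) = (x o_eps u) bullet_eps (x o_eps v)
    return bullet(o(x, eps, u), eps, o(x, eps, v))

def approx_inv(eps, x, u):     # inv^x_eps(u) = (x o_eps u) bullet_eps x
    return bullet(o(x, eps, u), eps, x)

x, u, v = np.array([1., 0.]), np.array([0., 2.]), np.array([3., 1.])
for eps in (0.5, 0.1, 0.001):
    print(eps, approx_sum(eps, x, u, v), approx_diff(eps, x, u, v), approx_inv(eps, x, u))
# limits:  u + v - x = [2. 3.],   x + (v - u) = [ 4. -1.],   2x - u = [ 2. -2.]
```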

We recognize therefore the two ingredients of Bellaiche’s solution into the definition of an emergent algebra:

  • approximate operations, which are just clever compositions of intrinsic dilations in the realm of sub-riemannian spaces, which
  • converge in a uniform way to the exact operations which give the algebraic structure of the tangent space.

Therefore, a rigorous formulation of Bellaiche’s solution is Theorem 1 from the previous post, provided that we extract,  from the long differential geometric work done by Bellaiche, only the part which is necessary for proving that intrinsic dilations produce an emergent algebra structure.

Nevertheless, Theorem 1 shows that the “emergence of operations” phenomenon is not at all specific to sub-riemannian geometry. In fact, once we get the idea of the right definition of approximate operations (from sub-riemannian geometry), we can simply try to prove the theorem by “abstract nonsense”, i.e. algebraically, with a dash of uniform convergence at the end.

For this we have to identify the algebraic relations which are satisfied by these approximate operations.  For example, is the approximate sum associative? is the approximate difference the inverse of the approximate sum? is the approximate inverse of an element the inverse with respect to the approximate sum? and so on. The answer to these questions is “approximately yes”.

It is clear that in order to find the right relations (approximate associativity and so on) between these approximate operations we need to reason in a clearer way. Just by looking at the expressions of the operations from Definition 2′, it is obvious that if we start with a brute force “shut up and compute” approach then we will end rather quickly with a mess of parentheses and coefficients. There has to be an easier way to deal with these approximate operations than brute force.

The way I have found has to do with a graphical representation of these operations, a way which eventually led me to graphic lambda calculus. This is for next time.

The origin of emergent algebras (part II)

I continue from the post “The origin of emergent algebras“, which revolves around the last sections of Bellaiche paper The tangent space in sub-riemannian geometry, in the book Sub-riemannian geometry, eds. A. Bellaiche, J.-J. Risler, Progress in Mathematics 144, Birkhauser 1996.

In this post we shall see how Bellaiche proposes to extract the algebraic structure of the metric  tangent space T_{p}M at a point p \in M, where M is a regular sub-riemannian manifold. Remember that the metric tangent space is defined up to arbitrary isometries fixing one point, as the limit in the Gromov-Hausdorff topology over isometry classes of compact pointed metric spaces

[T_{p} M, d^{p}, p] = \lim_{\varepsilon \rightarrow 0} [\bar{B}(p, \varepsilon), \frac{1}{\varepsilon} d, p]

where [X, d, p] is the isometry class of the compact metric space (X,d) with a marked point p \in X. (Bellaiche's notation is less precise, but his previous explanations clarify that his relations (83), (84) mean exactly what I have written above.)

A very important point is that moreover, this convergence is uniform with respect to the point p \in M. According to Gromov’s hint mentioned  by  Bellaiche, this is the central point of the matter. By using this and the structure of the trivial pair groupoid M \times M, Bellaiche proposes to recover the Carnot group algebraic structure of T_{p}M.

From this point on I shall pass to a personal interpretation of the  section 8.2 “A purely metric derivation of the group structure in T_{p}M for regular p” of Bellaiche article. [We don’t have to worry about “regular” points because I already supposed that the manifold is “regular”, although Bellaiche’s results are more general, in the sense that they apply also to sub-riemannian manifolds which are not regular, like the Grushin plane.]

In order to exploit the limit in the sense of Gromov-Hausdorff, he first needs an embodiment of the abstract isometry classes of pointed metric spaces. More precisely, for any \varepsilon > 0 (but sufficiently small), he uses a function denoted by \phi_{x}, which he states is defined on T_{x} M with values in M. But doing so would contradict the goal of constructing the tangent space from the structure of the trivial pair groupoid and dilations. For the moment there is no intrinsic meaning of T_{x} M, although there is one from differential geometry, which we are not allowed to use, because it is not intrinsic to the problem. Nevertheless, Bellaiche already has the functions \phi_{x}, by way of his lengthy proof (still the best proof to date) of the existence of adapted coordinates. For a detailed discussion see my article “Dilatation structures in sub-riemannian geometry” arXiv:0708.4298.

Moreover, later he mentions “dilations”, but which ones? The natural dilations he has from the vector space structure of the tangent space in the usual differential geometric sense? This would have no meaning, when compared to his assertion that the structure of a Carnot group of the metric tangent space is concealed in dilations.  The correct choice is again to use his adapted coordinate systems and use intrinsic dilations.  In fewer words, what Bellaiche probably means is that his functions \phi_{x} are also decorated with the scale  parameter \varepsilon >0, so they should deserve the better notation \phi_{\varepsilon}^{x},  and that these functions behave like dilations.

A natural alternative to Bellaiche’s proposal would be to use an embodiment of the isometry class [\bar{B}(x, \varepsilon), \frac{1}{\varepsilon} d, x] on the space M, instead of the differential geometric tangent space T_{x}M.  With this choice, what Bellaiche is saying is that we should consider dilation like functions \delta^{x}_{\varepsilon} defined locally from M to M such that:

  • they preserve the point x (which will become the “0” of the metric tangent space): \delta^{x}_{\varepsilon} x = x
  • they form a one-parameter group with respect to the scale: \delta^{x}_{\varepsilon} \delta^{x}_{\mu} y = \delta^{x}_{\varepsilon \mu} y and \delta^{x}_{1} y = y,
  • for any y, z at a finite distance from x (measured with the sub-riemannian distance d, more specifically such that  d(x,y), d(x,z) \leq 1) we have

d^{x}(y,z) = \frac{1}{\varepsilon} d( \delta^{x}_{\varepsilon} y, \delta^{x}_{\varepsilon}z) + O(\varepsilon)

where O (\varepsilon) is uniform w.r.t. (does not depend on) x, y , z in compact sets.

Moreover, we have to keep in mind that the “dilation” \delta^{x}_{\varepsilon} is defined only locally, so we have to avoid going far from x; for example, we have to avoid applying the dilation with \varepsilon very big to points at finite distance from x.

Again, the main thing to keep in mind is the uniformity assumption. The choice of the embodiment provided by “dilations” is not essential; we may take them otherwise as we please, with the condition that at the limit \varepsilon \rightarrow 0 certain combinations of dilations converge uniformly. This idea suggested by Bellaiche reflects the hint by Gromov. In fact this is what is left of the idea of a manifold in the realm of sub-riemannian geometry (because adapted coordinates cannot be used for building manifold structures, due to the fact that “local” and “infinitesimal” are not the same in sub-riemannian geometry, a thing rather easy to misunderstand until you get used to it).

Let me come back to Bellaiche's reasoning, in the setting I just explained. His purpose is to construct the operation in the tangent space, i.e. the addition of vectors. Only that the addition has to recover the structure of a Carnot group, as proven by Bellaiche. This means that the addition is not commutative, but a noncommutative nilpotent operation.

OK, so we have the base point x \in M and two nearby points y and z, which are fixed. The problem is how to construct an intrinsic addition of y and z with respect to x. Let us denote by y +_{x} z the result we are seeking. (The link with the trivial pair groupoid is that we want to define an operation which takes (x,y) and (x,z) as input and spills (x, y+_{x} z) as output.)

The relevant figure is the following one, which is an improved version of the Figure 5, page 76 of Bellaiche paper.

[Figure: bellaiche – improved version of Figure 5, p. 76 of Bellaiche's paper]

Bellaiche’s recipe has to do with the points in blue. He says that first we have to go far from x, by dilating the point z w.r.t. the point x, with the coefficient \varepsilon^{-1}. Here \varepsilon is considered to be small (it will go to 0), therefore \varepsilon^{-1} is big.  The result is the blue point A. Then, we dilate (or rather contract) the point A  by the coefficient \varepsilon w.r.t. the point y. The result is the blue point B.

Bellaiche claims that when \varepsilon goes to 0 the point B converges to the sum y +_{x} z. Also, from this intrinsic definition of addition, all the other properties (Carnot group structure) of the operation may be deduced from the uniformity of this convergence. He does not give a proof of this fact.
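In the model case of the euclidean space, with the dilations \delta^{x}_{\varepsilon} y = x + \varepsilon(y - x), this recipe can at least be checked by direct computation; a small numerical sketch (mine, only for illustration):

```python
# Bellaiche's recipe in the model case R^n, delta^x_eps y = x + eps*(y - x):
# A = delta^x_{1/eps} z  (go far from x),  then  B = delta^y_eps A  (contract towards y).
# As eps -> 0 the point B converges to  y + z - x, the vector sum based at x.
import numpy as np

def delta(x, eps, y):
    return x + eps * (y - x)

x, y, z = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
for eps in (0.5, 0.1, 0.001):
    A = delta(x, 1.0 / eps, z)   # the blue point A, far from x
    B = delta(y, eps, A)         # the blue point B
    print(eps, B)                # -> y + z - x = [1. 1.]
```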

The idea of Bellaiche is partially correct (in regards to the emergence of the algebraic properties of the operation from the uniformity of the convergence in its definition) and partially wrong (this is not the correct definition of the operation). Let me start with the second part. The definition of the operation has the obvious defect that it uses the point A which is far from x. This is in contradiction with the local character of the definition of the metric tangent space (and in contradiction with the local definition of dilations). But he is wrong for interesting reasons, as we shall see.

Instead, a slightly different path could be followed, figured by the red points C, D, E. Indeed, instead of going far away first (the blue point A), then coming back at finite distance from x (the blue point B), we may first come close to x (by using  the red points C, D), then inflate the point D to finite distance from x and get the point E. The recipe is a bit more complicated, it involves three dilations instead of two, but I can prove that it works (and leads to the definition of dilation structures and later to the definition of emergent algebras).

The interesting part is that if we draw, as in the figure here, the constructions in the euclidean plane, then we get E = B, so actually in this case there is no difference between the outcomes of these constructions. On further examination this looks like an affine feature, right? But in fact this is true in non-affine situations, for example in the case of intrinsic dilations in Carnot groups; see the examples from the post “Emergent algebra as combinatory logic (part I)”.

Let’s think again about these dilations, which are central to our discussion, as being operations. We may change the notations like this:

\delta^{x}_{\varepsilon} y = x \circ_{\varepsilon} y

Then, it is easy to verify that the equality between the red point E and the blue point B is a consequence of the fact that in usual vector spaces (as well as in their non-commutative versions, which are Carnot groups), the dilations, seen as operations, are self-distributive! That is why Bellaiche is actually right in his definition of the tangent space addition operation, provided that it is used only for self-distributive dilation operations. (But this choice limits the applications of his definition of the addition operation to Carnot groups.)

Closing remark: I was receptive to these last two sections of Bellaiche's paper because I was prepared by one of my previous obsessions, namely how to construct differentiability only from topological data. This was the subject of my first paper, see the story told in the post “Topological substratum of the derivative”; there is still some mystery to it, see arXiv:0911.4619.

The origin of emergent algebras

In the last section, “Why is the tangent space a group?” (section 8), of the great article by A. Bellaiche, The tangent space in sub-riemannian geometry*, the author tells a very interesting story, in which the names of Gromov and Connes appear; this is the first place, to my knowledge, where the idea of emergent algebras appears.

In a future post I shall comment more consistently on the math, but this time let me give you the relevant passages.

[p. 73] “Why is the tangent space a group at regular points? […] I have been puzzled by this question. Drawing a Lie algebra from the bracket structure of some X_{i}‘s did not seem to me the appropriate answer. I remember having, at last, asked M. Gromov about it (1982). The answer came under the form of a little apologue:

Take a map f: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}. Define its differential as

(79)              D_{x} f(u) = \lim_{\varepsilon \rightarrow 0} \varepsilon^{-1} \left[ f(x+\varepsilon u) - f(x) \right],

provided convergence holds. Then D_{x}f is certainly homogeneous:

D_{x}f(\lambda u) = \lambda D_{x}f(u),

but it need not satisfy the additivity condition

D_{x}f(u+v) = D_{x}f(u) + D_{x}f(v).

[…] However, if the convergence in (79)  is uniform on some neighbourhood of (x,0)  […]  then D_{x}f is additive, hence linear. So, uniformity was the key. The tangent space at p is a limit, in the [Gromov-]Hausdorff sense, of pointed spaces […] It certainly is a homogeneous space — in the sense of a metric space having a 1-parameter group of dilations. But when the convergence is uniform with respect to p, which is the case near regular points, in addition, it is a group.

Before giving the proof, I want to tell of another, later, hint, coming from the work of A. Connes. He has made significant use of the following observation: The tangent bundle TM to a differentiable manifold M is, like M \times M, a groupoid. […] In fact TM is simply a union of groups. In [8], II.5, it is stated that its structure may be derived from that of M \times M by blowing up the diagonal in M \times M. This suggests that, putting metrics back into the picture, one should have

(83)          TM = \lim_{\varepsilon \rightarrow 0} \varepsilon^{-1} (M \times M)

[…] in some sense to be made precise.

There is still one question. Since the differentiable structure of our manifold is the same as in Connes’ picture, why do we not get the same abelian group structure? One can answer: The differentiable structure is strongly connected to (the equivalence class of) Riemannian metrics; differentiable maps are locally Lipschitz, and Lipschitz maps are almost everywhere differentiable. There is no such connection between differentiable maps and the metric when it is sub-riemannian. Put in another way, differentiable maps have good local commutation properties with ordinary dilations, but not with sub-riemannian dilations \delta_{\lambda}.

So, one should not be abused by (83) and think that the algebraic structure of T_{p}M stems from the absolutely trivial structure of M \times M! It is concealed in dilations, as we shall now prove.

*) in the book Sub-riemannian geometry, eds. A. Bellaiche, J.-J. Risler, Progress in Mathematics 144, Birkhauser 1996
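A concrete illustration (mine, not from Bellaiche's text) of the homogeneity-without-additivity phenomenon in (79): take f: \mathbb{R}^{2} \rightarrow \mathbb{R}^{2},

f(a,b) = \left( \frac{a^{3}}{a^{2}+b^{2}} , 0 \right) for (a,b) \not = (0,0), and f(0,0) = (0,0).

At x = 0 the limit (79) exists for every u = (a,b) and equals D_{0}f(a,b) = \left( \frac{a^{3}}{a^{2}+b^{2}} , 0 \right), which is homogeneous, D_{0}f(\lambda u) = \lambda D_{0}f(u), but not additive: D_{0}f(1,0) + D_{0}f(0,1) = (1,0), while D_{0}f(1,1) = (1/2, 0). By the statement quoted above, the convergence in (79) therefore cannot be uniform on any neighbourhood of (0,0).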

Ado’s theorem for groups with dilations?

Ado’s theorem  is equivalent with the following:

Theorem. Let G be a local Lie group. Then there is a real, finite dimensional vector space V and an injective, local group morphism from (a neighbourhood of the neutral element of) G to GL(V), the linear group of V.

Any proof I am aware of (see this post for one proof and relevant links) mixes the following ingredients:

–  the Lie bracket and the BCH formula,

– either reduction to the nilpotent case or (nonexclusive) use of differential equations,

– the universal enveloping algebra.

WARNING: from now on I shall not mention the word “local”; in the realm of spaces with dilations everything is local.

We may pass to the following larger frame of spaces with dilations, dilatation structures or emergent algebras:

– locally compact groups with dilations instead of Lie groups

– locally compact conical groups instead of vector spaces

– linearity in the sense of dilation structures instead of usual linearity.

Conjecture:  For any locally compact group with dilations G there is a locally compact conical group N and an injective morphism \rho: G \rightarrow GL(N) such that for every x \in N the map g \in G \mapsto \rho(g)x is differentiable.

In this frame:

– we don’t have the corresponding Lie bracket and BCH formula, see the related problem of the noncommutative BCH formula,

– what nilpotent means is no longer clear (or needed?)

– we don’t have a clear tensor product, therefore we don’t have a correspondent of the universal enveloping algebra.

Nevertheless I think the conjecture is true and actually much easier to prove than Ado’s theorem, because of the weakening of the conclusion.

Preview of two papers, thanks for comments

Here are two papers:

Local and global moves on planar trivalent graphs, lambda calculus and lambda-Scale (update 03.07.2012, final version, appears as arXiv:1207.0332)

Sub-riemannian geometry from intrinsic viewpoint    (update 14.06.2012: final version, appears as arxiv:1206.3093)

which are still subject to change.  Nevertheless most of what I am trying to communicate is there. I would appreciate  mathematical comments.

This is an experiment, to see what happens if I make previews of papers available, like a kind of blog of papers in the making.

Intrinsic characterizations of riemannian and sub-riemannian spaces (I)

In this post I explain the problem of intrinsic characterization of riemannian manifolds, in what sense it has been solved in full generality by Nikolaev, and then I comment on the proof of Hilbert’s fifth problem by Tao.

In the next post there will then be some comments about Gromov’s problem of giving an intrinsic characterization of sub-riemannian manifolds, and in what sense I solved this problem by adding a bit of algebra to it. Finally, I shall return to the characterization of riemannian manifolds, seen as particular sub-riemannian manifolds, and comment on the differences between this characterization and Nikolaev’s.

1. History of the problem for riemannian manifolds. The problem of giving an intrinsic characterization of riemannian manifolds is a classic and fertile one.

Problem: give a metric description of a Riemannian manifold.

Background: A complete riemannian manifold is a length metric space (or geodesic, or intrinsic metric space) by Hopf-Rinow theorem. The problem asks for the recovery of the manifold structure from the distance function (associated to the length functional).

For 2-dim riemannian manifolds the problem has been solved by A. Wald [Begründung einer koordinatenlosen Differentialgeometrie der Flächen, Erg. Math. Colloq. 7 (1936), 24-46].

In 1948 A.D. Alexandrov [Intrinsic geometry of convex surfaces, various editions] introduces his famous curvature (which uses comparison triangles) and proves that, under mild smoothness conditions on this curvature, one can recover the differential structure and the metric of the 2-dim riemannian manifold. In 1982 Alexandrov proposes the conjecture that a characterization of a riemannian manifold (of any dimension) is possible in terms of metric (sectional) curvatures (of the type introduced by Alexandrov) and weak smoothness assumptions formulated in a metric way (as for example Hölder smoothness). Many other results deserve to be mentioned (by Reshetnyak, for example).

2. Solution of the problem by Nikolaev. In 1998 I.G. Nikolaev [A metric characterization of riemannian spaces, Siberian Adv. Math., 9 (1999), 1-58] solves the general problem of intrinsic characterization of C^{m,\alpha} riemannian spaces:

every locally compact length metric space M, not linear at one of its points, with \alpha-Hölder continuous metric sectional curvature of the “generalized tangent bundle” T^{m}(M) (for some m = 1, 2, …), which admits local geodesic extendability, is isometric to a C^{m+2} smooth riemannian manifold.

Therefore:

  • he defines a generalized tangent bundle in metric sense
  • he defines a notion of sectional curvature
  • he asks for some metric smoothness of this curvature

and he gets the result.

3. Gleason metrics and Hilbert’s fifth problem. Let us compare this with the formulation of the solution of Hilbert’s fifth problem by Terence Tao. The problem is somewhat similar, namely to recover the differential structure of a Lie group from its algebraic structure. This time the “intrinsic” object is the group operation, not the distance, as previously.

Tao shows that the proof of the solution may be formulated in metric terms. Namely, he introduces a Gleason metric (definition 4 in the linked post), which turns out to be a left invariant riemannian metric on the (topological) group. I shall not insist on this; instead read the post of Tao and also, for the riemannian metric description, read this previous post by me.

Three problems and a disclaimer

In this post I want to summarize the list of problems I am currently thinking about. This is not a list of regular mathematical problems; see the disclaimer on style written at the end of the post.

Here is the list:

1. what is “computing with space“? There is something happening in the brain (of a human or of a fly) which is akin to a computation, but is not a logical computation: vision. I call this “computing with space”. In the head there are a bunch of neurons chirping one to another, that’s all. There is no euclidean geometry, there are no a priori coordinates (or other extensive properties), there are no problems to solve for those neurons, there is no homunculus and no outer space, only a dynamical network of gates (neurons and their connections). I think that a part of an answer is the idea of emergent algebras (albeit there should be something more than this). Mathematically, a closely related problem is this: Alice is exploring an unknown space and then sends to Bob enough information so that Bob could “simulate” the space in the lab. See this, or this, or this.

Application: give the smallest hint of a purely relational  model of vision  without using any a priori knowledge of the (euclidean or other) geometry of outer space or any  pre-defined charting of the visual system (don’t give names to neurons, don’t give them “tasks”, they are not engineers).

2. non-commutative Baker-Campbell-Hausdorff formula. From the solution of Hilbert’s fifth problem we know that any locally compact topological group without small subgroups can be endowed with the structure of an “infinitesimally commutative” normed group with dilations. This is true because one parameter sub-groups and Gleason metrics are used to solve the problem. The BCH formula solves then another problem: from the infinitesimal structure of a (Lie) group (that is the vector space structure of the tangent space at the identity and the manifold structure of the Lie group) and from supplementary infinitesimal data (that is the Lie bracket), construct the group operation.

The problem of the non-commutative BCH is the following: suppose you are in a normed group with dilations. Then construct the group operation from the infinitesimal data (the conical group structure of the tangent space at identity and the dilation structure) and supplementary data (the halfbracket).

The classical BCH formula corresponds to the choice of the dilation structure coming from the manifold structure of the Lie group.

In the case of a Carnot group (or a conical group), the non-commutative BCH formula should be trivial (i.e. x y = x \cdot y, the equivalent of xy = x+y in the case of a commutative Lie group, where by convention we neglect all “exp” and “log” in formulae).
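
For example (a standard fact about step 2 groups): in the Heisenberg group the classical BCH series terminates after the first commutator, \exp(X) \exp(Y) = \exp(X + Y + \frac{1}{2}[X,Y]), and more generally in any Carnot group, written in exponential coordinates, the group operation is itself given by the finite BCH series.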

3. give a notion of curvature which is meaningful for sub-riemannian spaces. I propose the pair curvdimension and curvature of a metric profile. There is a connection with problem 1: there is a link between the curvature of the metric profile and the “emergent Reidemeister 3 move” explained in section 6 of the computing with space paper. Indeed, at page 36 there is this figure. Yes, R^{x}_{\epsilon \mu \lambda} (u,v) w is a curvature!

Disclaimer on style. I am not a problem solver, in the sense that I don’t usually like to find the solution of an already formulated problem. Instead, what I do like to do is to understand some phenomenon and prove something about it in the simplest way possible. When thinking about a subject, I like to polish the partial understanding I have by renouncing the use of any “impure” tools, that is any (mathematical) fact which is foreign to the subject. I know that this is not the usual way of doing the job, but sometimes less is more.

Curvdimension and curvature of a metric profile III

I continue from the previous post “Curvdimension and curvature of a metric profile II“.

Let’s see what is happening for (X,g), a sufficiently smooth (C^{4} for example),  complete, connected  riemannian manifold.  The letter “g” denotes the metric (scalar product on the tangent space) and the letter “d” will denote the riemannian distance, that is for any two points x,y \in X the distance  d(x,y) between them is the infimum of the length of absolutely continuous curves which start from x and end in y. The length of curves is computed with the help of the metric g.

Notations.   In this example X is a differential manifold, therefore it has tangent spaces at every point, in the differential geometric sense. Further on, when I write “tangent space” it will mean tangent space in this sense. Otherwise I shall write “metric tangent space” for the metric notion of tangent space.

Let u,v be vectors in the tangent space at x \in X. When the basepoint x is fixed by the context then I may omit it from the various notations. For example \|u\| means the norm of the vector u with respect to the scalar product g_{x} on the tangent space T_{x} X at the point x. Likewise, \langle u,v \rangle may be used instead of g_{x}(u,v); the riemannian curvature tensor at x may be denoted by R and not by R_{x}, and so on …

Remark 2. The smoothness of the riemannian manifold (X,g) should be just enough such that the curvature tensor is C^{1} and such that for any compact subset C \subset X, possibly after rescaling g, the geodesic exponential exp_{x} u makes sense (exists and is uniquely defined) for any x \in C and for any u \in T_{x} X with \|u\| \leq 2.

Let us fix such a compact set C and let’s take a  point x \in C.

Definition 5. For any \varepsilon \in (0,1) we define on the closed ball of radius 1 centered at x (with respect to the distance d) the following distance: for any u,v \in T_{x} X with \|u\| \leq 1, \| v\| \leq 1

d^{x}_{\varepsilon} (exp_{x} \, u, exp_{x} v) \, = \, \frac{1}{\varepsilon} d(exp_{x} \, \varepsilon u, exp_{x} \varepsilon v).

(The notation used here is in line with the one used in  dilation structures.)

Recall that the sectional curvature K_{x}(u,v) is defined for any pair of vectors   u,v \in T_{x} X which are linearly independent (i.e. non collinear).

Proposition 1. Let M > 0 be greater than or equal to \mid K_{x}(u,v)\mid, for any x \in C and any non-collinear pair of vectors u,v \in T_{x} X with \|u\| \leq 1, \| v\| \leq 1. Then for any \varepsilon \in (0,1), any x \in C and any u,v \in T_{x} X with \|u\| \leq 1, \| v\| \leq 1 we have

\mid d^{x}_{\varepsilon} (exp_{x} \, u, exp_{x} v) - \|u-v\|_{x} \mid \leq \frac{1}{3} M \varepsilon^{2} \|u-v\|_{x} \|u\|_{x} \|v\|_{x} + \varepsilon^{2} \|u-v\|_{x} O(\varepsilon).

Corollary 1. For any x \in X the metric space (X,d) has a metric tangent space at x, which is the isometry class of the unit ball in T_{x}X with the distance d^{x}_{0}(u,v) = \|u - v\|_{x}.

Corollary 2. If the sectional curvature at x \in X is non trivial then the metric profile at x has curvdimension 2 and moreover

d_{GH}(P^{m}(\varepsilon, [X,d,x]), P^{m}(0, [X,d,x])) \leq \frac{2}{3} M \varepsilon^{2} + \varepsilon^{2} O(\varepsilon).

Proofs next time, but you may figure them out by looking at section 1.3 of these notes on comparison geometry, available from the page of Vitali Kapovitch.
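
In the meantime, here is a small numerical sanity check (not a proof) of Proposition 1 and Corollary 2 on the unit sphere, where the sectional curvature is constant equal to 1: the defect between the rescaled distance d^{x}_{\varepsilon} and the euclidean norm \|u - v\| decays like \varepsilon^{2}, consistent with curvdimension 2. This is a quick Python sketch, with ad hoc helper functions.

import numpy as np

def sphere_exp(x, u):
    # geodesic exponential on the unit sphere S^2 (sectional curvature 1)
    n = np.linalg.norm(u)
    if n < 1e-15:
        return x
    return np.cos(n) * x + np.sin(n) * (u / n)

def sphere_dist(p, q):
    # great-circle (riemannian) distance on S^2
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

x = np.array([0.0, 0.0, 1.0])     # base point
u = np.array([0.3, 0.0, 0.0])     # tangent vectors at x (orthogonal to x)
v = np.array([0.0, 0.4, 0.0])

for eps in [0.1, 0.05, 0.025, 0.0125]:
    d_eps = sphere_dist(sphere_exp(x, eps * u), sphere_exp(x, eps * v)) / eps
    defect = d_eps - np.linalg.norm(u - v)
    # defect / eps^2 should stabilize to a constant, i.e. the defect is O(eps^2)
    print(eps, defect / eps**2)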

Curvdimension and curvature of a metric profile, II

This continues the previous post Curvdimension and curvature of a metric profile, I.

Definition 3. (flat space) A locally compact metric space (X,d) is locally flat around x \in X if there exists a > 0 such that for any \varepsilon, \mu \in (0,a] we have P^{m}(\varepsilon , [X,d,x]) = P^{m}(\mu , [X,d,x]). A locally compact metric space is flat if the metric profile at any point is eventually constant.

Carnot groups  and, more generally, normed conical groups are flat.

Question 1. Metric tangent spaces are locally flat around their basepoint, but are they locally flat everywhere? I don’t think so, but I don’t have an example.

Definition 4. Let (X,d) be a  locally compact metric space and x \in X a point where the metric space admits a metric tangent space. The curvdimension of (X,d) at x is curvdim \, (X,d,x) = \sup M, where  M \subset [0,+\infty) is the set of all \alpha \geq 0 such that

\lim_{\varepsilon \rightarrow 0} \frac{1}{\varepsilon^{\alpha}} d_{GH}(P^{m}(\varepsilon , [X,d,x]) , P^{m}( 0 , [X,d,x])) = 0

Remark that the set M always contains 0. Also, according to this definition, if the space is locally flat around x then the curvdimension at x is + \infty.

Question 2. Is there any metric space with infinite curvdimension at a point where the space is not locally flat? (Most likely the answer is “yes”; a possible example would be the surface of revolution obtained from the graph of an infinitely differentiable function f such that f(0) = 0 and all derivatives of f at 0 are equal to 0. This surface is taken with the distance induced from the 3-dimensional space, but maybe I am wrong.)

We are going to see next that the curvdimension of a sufficiently smooth riemannian manifold  at any of its points where the sectional curvature is not trivial is equal to 2.

Curvdimension and curvature of a metric profile, part I

In the notes Sub-riemannian geometry from intrinsic viewpoint I propose two notions related to the curvature of a metric space at one of its points: the curvdimension and the curvature of a metric profile. In this post I would like to explain in detail what this is about, as well as make a number of comments and suggestions which are not in the actual version of the notes.

These notions stem from rather vague proposals first made in the earlier papers Curvature of sub-riemannian spaces and Sub-riemannian geometry and Lie groups II.

I shall start with the definition of the metric profile associated to a point x \in X of a locally compact metric space (X,d).  We need first a short preparation.

Let CMS be the collection of isometry classes of pointed compact metric spaces. An element of CMS is denoted by [X,d,x] and is the equivalence class of a compact metric space (X,d), with a specified point x \in X, with respect to the following equivalence relation: two pointed compact metric spaces (X,d,x), (Y,D,y) are equivalent if there is a surjective isometry f: (X,d) \rightarrow (Y,D) such that f(x) = y.

The space CMS is a metric space when endowed with the Gromov-Hausdorff distance between (isometry classes of) pointed compact metric spaces.

Definition 1.  Let (X,d) be a locally compact metric space. The metric profile of (X,d) at x is the function which associates to \varepsilon > 0 the element of CMS defined by

P^{m}(\varepsilon, x) = \left[\bar{B}(x,1), \frac{1}{\varepsilon} d, x\right]

(defined for small enough \varepsilon, so that the closed metric ball \bar{B}(x,\varepsilon) with respect to the distance d, which is the closed unit ball for the rescaled distance \frac{1}{\varepsilon} d appearing above, is compact).

Remark 1. See the previous post Example: Gromov-Hausdorff distance and the Heisenberg group, part II, where the behaviour of the metric profile of the physicists’ Heisenberg group is discussed.

The metric profile of the space at a point is therefore a curve in another metric space, namely CMS with the Gromov-Hausdorff distance. It is not just any curve, but one which has certain properties that can be expressed with the help of the GH distance. Very intriguing: what about a dynamics induced along these curves in CMS? Nothing is known about this, strangely!

Indeed, to any element [X,d,x] of CMS there is associated the curve P^{m}(\varepsilon,x). This curve may be renamed P^{m}(\varepsilon , [X,d,x]). Notice that P^{m}(1 , [X,d,x]) = [X,d,x].

For a fixed \varepsilon \in (0,1], take now P^{m}(\varepsilon , [X,d,x]): what is the metric profile of this element of CMS? The answer is that for any \mu \in (0,1] we have

P^{m}(\mu , P^{m}(\varepsilon , [X,d,x])) = P^{m}(\varepsilon \mu , [X,d,x])
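
(A quick check, with the convention that the ball in the definition of the profile is the closed unit ball of the rescaled distance: P^{m}(\varepsilon , [X,d,x]) = \left[ \bar{B}_{\frac{1}{\varepsilon} d}(x,1), \frac{1}{\varepsilon} d, x \right], so rescaling once more by \frac{1}{\mu} produces the closed unit ball of the distance \frac{1}{\varepsilon \mu} d, which, since \varepsilon \mu \leq \varepsilon, is contained in the previous ball, and we recover P^{m}(\varepsilon \mu , [X,d,x]).)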

which proves that the curves in CMS which are metric profiles are not just any curves.

Definition 2. If the metric profile P^{m}(\varepsilon ,[X,d,x]) can be extended by continuity to \varepsilon = 0, then the space (X,d) admits a metric tangent space at x \in X and the isometry class of (the unit ball in) the tangent space equals P^{m}(0 , [X,d,x]).

You see, P^{m}(0 , [X,d,x]) cannot be just any point of CMS. It has to be the isometry class of a metric cone, namely a point of CMS which has constant metric profile.

The curvdimension and curvature explain how the metric profile curve behaves near \varepsilon = 0. This is for the next post.

Sub-riemannian geometry from intrinsic viewpoint, course notes

Here are the course notes prepared for a course at the CIMPA research school on sub-riemannian geometry (2012):

Sub-riemannian geometry from intrinsic viewpoint ( 27.02.2012) (14.06.2012)

I want to express my thanks for the invitation and also my excuses for not being able to attend the school (due to very bad weather conditions in this part of Europe, I had to cancel my plane travel).

On the difference of two Lipschitz functions defined on a Carnot group

Motivation for this post: the paper “Lipschitz and biLipschitz Maps on Carnot Groups” by William Meyerson. I don’t get it, even after several readings of the paper.

The proof of Fact 2.10 (page 10) starts with the statement that the difference of two Lipschitz functions is Lipschitz and the difference of two Pansu differentiable functions is differentiable.

Let us see: we have a Carnot group (which I shall assume is not commutative!) G and two functions f,g: U \subset G \rightarrow G, where U is an open set in G. (We may consider instead two Carnot groups G and H (both non commutative) and two functions f,g: U \subset G \rightarrow H.)

Denote by h the difference of these functions: for any x \in U, h(x) = f(x) (g(x))^{-1} (here the group operation and inverses are denoted multiplicatively; thus if G = \mathbb{R}^{n} then h(x) = f(x) - g(x), but I shall suppose further that we work only in groups which are NOT commutative).

1.  Suppose f and g are Lipschitz with respect to the respective  CC left invariant distances (constructed from a choice of  euclidean norms on their respective left invariant distributions).   Is the function h Lipschitz?

NO! Indeed, consider the Lipschitz functions f(x) = x, the identity function, and g(x) = u, a constant function, with u not in the center of G. Then h is a right translation (by u^{-1}), notoriously NOT Lipschitz with respect to a CC left invariant distance.
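
Here is a quick numerical illustration, using a homogeneous norm as a bi-Lipschitz stand-in for the CC distance on the Heisenberg group (a Python sketch with ad hoc helper functions): if h(x) = x u^{-1} then h(x)^{-1} h(y) = u (x^{-1} y) u^{-1}, and when x^{-1} y = \delta_{\varepsilon} w the ratio of homogeneous norms blows up like \varepsilon^{-1/2}.

def mul(p, q):
    # Heisenberg group law in polarized coordinates (a, b, c)
    a, b, c = p
    A, B, C = q
    return (a + A, b + B, c + C + 0.5 * (a * B - b * A))

def inv(p):
    a, b, c = p
    return (-a, -b, -c)

def dil(eps, p):
    # intrinsic (anisotropic) dilations of the Heisenberg group
    a, b, c = p
    return (eps * a, eps * b, eps**2 * c)

def homnorm(p):
    # a homogeneous norm, bi-Lipschitz equivalent to the CC distance to the origin
    a, b, c = p
    return ((a**2 + b**2)**2 + c**2) ** 0.25

u = (1.0, 0.0, 0.0)     # h is the right translation by u^{-1}
w = (0.0, 1.0, 0.0)     # direction of the small increment x^{-1} y

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    v = dil(eps, w)                          # the increment x^{-1} y
    hv = mul(mul(u, v), inv(u))              # the increment h(x)^{-1} h(y)
    print(eps, homnorm(hv) / homnorm(v))     # grows like eps^{-1/2}: h is not Lipschitz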

2. Suppose instead that f and g are everywhere Pansu differentiable and let us compute the Pansu “finite difference”:

(D_{\varepsilon} h )(x,u) = \delta_{\varepsilon^{-1}} ( h(x)^{-1} h(x \delta_{\varepsilon} u) )

We get that (D_{\varepsilon} h )(x,u) is the product, w.r.t. the group operation, of two terms: the first is the conjugation of the finite difference (D_{\varepsilon} f )(x,u) by \delta_{\varepsilon^{-1}} ( g(x) ) and the second term is the finite difference (D_{\varepsilon} g^{-1} )(x,u). (Here Inn(u)(v) = u v u^{-1} is the conjugation of v by u in the group G.)
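
Spelled out (a straightforward verification, using that the dilations \delta_{\varepsilon} are group automorphisms): since h(x)^{-1} h(x \delta_{\varepsilon} u) = g(x) \left[ f(x)^{-1} f(x \delta_{\varepsilon} u) \right] g(x)^{-1} \, \cdot \, g(x) g(x \delta_{\varepsilon} u)^{-1}, applying \delta_{\varepsilon^{-1}} gives

(D_{\varepsilon} h)(x,u) = Inn \left( \delta_{\varepsilon^{-1}} g(x) \right) \left( (D_{\varepsilon} f)(x,u) \right) \, (D_{\varepsilon} g^{-1})(x,u)

where g^{-1} denotes the function x \mapsto (g(x))^{-1}. Notice that the conjugating element \delta_{\varepsilon^{-1}} g(x) blows up as \varepsilon \rightarrow 0, unless g(x) is the neutral element.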

Due to the non commutativity of the group operation, there should be some miracle in order for the finite difference of h to converge, as \varepsilon goes to zero.

We may take instead the “sum” of two differentiable functions: is it differentiable (in the sense of Pansu)? No, except in very particular situations, because we cannot get rid of the conjugation, and conjugation is not a Pansu differentiable function.

Non-Euclidean analysis, a bit of recent history

Being an admirer of bold geometers who discovered that there is more to geometry than euclidean geometry, I believe that the same is true for analysis. In my first published paper “The topological substratum of the derivative” (here is a scan of this hard to find paper), back in 1993, I advanced the idea that there are as many “analyses” as there are possible fields of dilations, but I was not aware of Pierre Pansu’s huge paper from 1989, “Métriques de Carnot-Carathéodory et quasiisométries des espaces symétriques de rang un”, where he invents what is now called “Pansu calculus”, which is the analysis associated to a Carnot group.

The same idea is then explored in the papers “Sub-riemannian geometry and Lie groups, I“, “Tangent bundles to sub-riemannian groups“, “Sub-riemannian geometry and Lie groups II“. These papers have not been published (only put on arXiv), because at that moment I hoped that the www would change publishing quickly (I still believe this, but now I am just a bit wiser, or forced by bureaucracy to publish or perish), so that one could communicate not only the very myopic, technical, incremental result, but also the ideas behind it, the powerful meme.

During those years (2001-2005) I was in Lausanne, trying to propagate the meme around Europe, as I said previously. The results were mixed; people were not taking this seriously enough, for my taste. Sergey Vodopyanov had ideas which were close to mine, except that he was trying to rely on what I call “euclidean analysis”, instead of intrinsic techniques, as witnessed by his outstanding results concerning detailed proofs in low-regularity sub-riemannian geometry. (I was against such results on principle, because what is C^{1,1} but euclidean regularity? But the underlying ideas were very close indeed.)

In a very naive way I tried to propagate the meme further, by asking for a visit at IHES in 2004, when I had the pleasure of meeting Pierre Pansu and André Bellaïche; then I dared to ask for another visit immediately and submitted the project

“Non-Euclidean Analysis” start-up

which I invite you to read. (The project was rejected, for good reasons: I was already there visiting and suddenly I was asking for another, much longer visit.)

Then, from 2006, I went back to basics and proposed axioms for this; that is how dilation structures appeared (even if the name and a definition containing the most difficult axiom were already proposed in the previous series of papers on sub-riemannian geometry and Lie groups). See my homepage for further details and papers (published this time).

I see now that, at least at the level of names of grant projects, the meme is starting to spread. Here is the “Sub-riemannian geometric analysis in Lie groups” GALA project and here is the more recent “Geometric measure theory in Non Euclidean spaces” GeMeThNES project.

Two papers on arXiv

I put on arxiv two papers

The paper Computing with space contains too many ideas and is too dense, therefore much of it will not be read, as I was warned repeatedly. This is the reason to do again what I did with Introduction to metric spaces with dilations, which is a slightly edited part of the paper A characterization of sub-riemannian spaces as length dilation structures. Apparently the part (Introduction to …), the small detail, is much more read than the whole (A characterization…).

Concerning the second paper, “Normed groupoids…”, it is an improvement of the older paper. Why did I not update the older paper? Because I need help; I just don’t understand where this is going (and why such a direction of research was not explored before).

Escape property of the Gleason metric and sub-riemannian distances again

The last post of Tao from his series of posts on Hilbert’s fifth problem contains interesting results which can be used for understanding the differences between Gleason distances and sub-riemannian distances or, more generally, norms on groups with dilations.

For normed groups with dilations see my previous post (where links to articles are also provided). Check my homepage for more details (finally I am online again).

There is also another post of mine on the Gleason metric (distance) and the CC (or sub-riemannian) distance, where I explain why the commutator estimate (definition 3, relation (2) from the last post of Tao) forces “commutativity”, in the sense that a sub-riemannian left invariant distance on a Lie group which has the commutator estimate must be a riemannian distance.

What about the escape property (Definition 3, relation (1) from the post of Tao)?

From his Proposition 10 we see that the escape property implies the commutator estimate, therefore a sub-riemannian left invariant distance with the escape property must be riemannian.

An explanation of this phenomenon can be deduced by using the notion of “coherent projection”, section 9 of the paper

A characterization of sub-riemannian spaces as length dilation structures constructed via coherent projections, Commun. Math. Anal. 11 (2011), No. 2, pp. 70-111

in the very particular case of sub-riemannian Lie groups (or for that matter normed groups with dilations).

Suppose we have a normed group with dilations (G, \delta) which has another left invariant dilation structure on it (in the paper this is denoted by a “\delta bar”, here I shall use the notation \alpha for this supplementary dilation structure).

There is one such dilation structure available for any Lie group (notice that I am not trying to give a proof of the H5 problem), namely for any \varepsilon > 0 (but not too big)

\alpha_{\varepsilon} g = \exp ( \varepsilon \log (g))

(maybe interesting: which famous lemma is equivalent with the fact that (G,\alpha) is a group with dilations?)
Take \delta to be a dilation structure coming from a left-invariant distribution on the group. Then \delta commutes with \alpha and moreover

(*) \lim_{\varepsilon \rightarrow 0} \alpha_{\varepsilon}^{-1} \delta_{\varepsilon} x = Q(x)

where Q is a projection: Q(Q(x)) = Q(x) for any x \in G.

It is straightforward to check that (the left-translation of) Q (over the whole group) is a coherent projection, more precisely it is the projection on the distribution!
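
For instance, in the Heisenberg group, in exponential coordinates (a,b,c), we have \delta_{\varepsilon}(a,b,c) = (\varepsilon a, \varepsilon b, \varepsilon^{2} c), while \alpha_{\varepsilon}(a,b,c) = (\varepsilon a, \varepsilon b, \varepsilon c) (because \exp(\varepsilon \log g) simply rescales the Lie algebra coordinates). Then \alpha_{\varepsilon}^{-1} \delta_{\varepsilon}(a,b,c) = (a, b, \varepsilon c) \rightarrow (a,b,0) = Q(a,b,c), the projection on the horizontal directions, in agreement with (*).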

Exercise: take \varepsilon = 1/n and use (*) to prove that the escape property of Tao implies that Q is (locally) injective. This implies in turn that Q = id, therefore the distribution is the tangent bundle, therefore the distance is riemannian!

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for understanding in detail the role of the escape property in the proof of the Hilbert 5th problem.

Pros and cons of higher order Pansu derivatives

This interesting question from mathoverflow

Higher order Pansu derivative

is asked by nil (no website, no location). I shall try to explain the pros and cons of higher order derivatives in Carnot groups. As for a real answer to nil’s question, I could tell him but then …

For the “Pansu derivative” see the paper (mentioned in this previous post):

Métriques de Carnot-Carathéodory et quasiisométries des espaces symétriques de rang un, The Annals of Mathematics Second Series, Vol. 129, No. 1 (Jan., 1989), pp. 1-60

Such derivatives make sense in any metric space with dilations, in particular in any normed group with dilations (see the definition in this previous post).

Pros/cons: It would be interesting to have a higher order differential calculus with Pansu derivatives, for all the reasons which make higher derivatives interesting in more familiar situations. Three examples come to my mind: convexity, higher order differential operators and curvature.

1. Convexity pro: the positivity of the hessian of a function implies convexity. In the world of Carnot groups the most natural definition of convexity (at least that is what I think) is the following: a function f: N \rightarrow \mathbb{R}, defined on a Carnot group N with (homogeneous) dilations \displaystyle \delta_{\varepsilon}, is convex if for any x,y \in N and for any \varepsilon \in [0,1] we have

f( x \delta_{\varepsilon}(x^{-1} y)) \leq f(x) + \varepsilon (-f(x) + f(y)) .

There are conditions in terms of higher order horizontal derivatives (if the function is differentiable in the classical sense) which are sufficient for the function to be convex (in the mentioned sense). Note that the positivity of the horizontal hessian is not enough! It would be nice to have a more intrinsic differential condition, one which does not use classical horizontal derivatives. Con: as in classical analysis, we can do well without second order derivatives when we study convexity. In fact convex analysis is such fun because we can do it without any need of differentiability.
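
In the commutative case N = \mathbb{R}^{n} we have x \delta_{\varepsilon}(x^{-1} y) = x + \varepsilon (y - x), so the definition above recovers ordinary convexity along segments. Here is a small Python sketch (with ad hoc helper functions) for experimenting with this notion on the Heisenberg group, by Monte Carlo testing of the inequality:

import numpy as np

def mul(p, q):
    # Heisenberg group law in polarized coordinates (a, b, c)
    a, b, c = p
    A, B, C = q
    return (a + A, b + B, c + C + 0.5 * (a * B - b * A))

def inv(p):
    a, b, c = p
    return (-a, -b, -c)

def dil(eps, p):
    a, b, c = p
    return (eps * a, eps * b, eps**2 * c)

def looks_convex(f, trials=10000, seed=0):
    # Monte Carlo test of  f(x delta_eps(x^{-1} y)) <= (1 - eps) f(x) + eps f(y)
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = tuple(rng.normal(size=3))
        y = tuple(rng.normal(size=3))
        eps = rng.uniform()
        lhs = f(mul(x, dil(eps, mul(inv(x), y))))
        rhs = (1 - eps) * f(x) + eps * f(y)
        if lhs > rhs + 1e-9:
            return False
    return True

print(looks_convex(lambda p: p[0]))   # a horizontal coordinate: passes (it is affine along these twisted segments)
print(looks_convex(lambda p: p[2]))   # the vertical coordinate: the test finds violations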

2. Differential operators Pro: Speaking about higher order horizontal derivatives, notice that the horizontal laplacian is not expressed in an intrinsic manner (i.e. as a combination of higher order Pansu derivatives). It would be interesting to have such a representation for the horizontal laplacian, at least for not having to use “coordinates” (well, these are families of horizontal vector fields which span the distribution) in order to be able to define the operator. Con: nevertheless the horizontal hessian can be defined intrinsically in a weak sense, using only the sub-riemannian distance (and the energy functional associated to it, as in the classical case). Sobolev spaces and the like are a flourishing field of research, without the need to appeal to higher order Pansu derivatives. (Pro: this regards the existence of solutions in a weak sense, but to be honest, what about the regularity business?)

3. Curvature Pro: What is the curvature of a level set of a function defined on a Carnot group? Clearly higher order derivatives are needed here. Con: level sets are not even rectifiable in the Carnot world!

Besides all this, there is a general:

Con: There are not many functions, from a Carnot group to itself, which are Pansu differentiable everywhere, with continuous derivative. Indeed, for most Carnot groups (excepting the Heisenberg type and the jet type) only left translations are “smooth” in this sense. So even if we could define higher order derivatives, there is not much room to apply them.

However, I think that it is possible to define derivatives of Pansu type such that there are always lots of functions differentiable in this sense, and moreover such that it is possible to introduce higher order derivatives of Pansu type (i.e. derivatives which can be expressed with dilations).

UPDATE: This should be read in conjunction with this post. Please look at Lemma 11 from the last post of Tao and also at the notation introduced previously in that post. Now, relation (4) contains an estimate of a kind of discretization of a second order derivative. Based on Lemma 11 and on what I explained in the linked post, relation (4) cannot hold in the sub-riemannian world, that is, there is surely no bump function \phi such that d_{\phi} is equivalent to a sub-riemannian distance (unless the metric is riemannian). In conclusion, there are no “interesting” nontrivial C^{1,1} bump functions (say quadratic-like; see in the post of Tao how he constructs his bump function by using the distance).

There must be something going wrong with the “Taylor expansion” from the end of the proof of Lemma 11, if instead of a norm with respect to a bump function we put a sub-riemannian distance. Presumably instead of “n” and “n^{2}” we have to put something else, like “n^{a}” and “n^{b}” respectively, with exponents a, b/2 < 1 which are also functions of (a kind of degree, say, of) g. Well, the exponent b will be very interesting, because it is related to some notion of curvature yet to be discovered.

Gleason metric and CC distance

In the series of posts on Hilbert’s fifth problem, Terence Tao defines a Gleason metric, definition 4 here, which is a very important ingredient of the proof of the solution to the H5 problem.

Here is Remark 1. from the post:

The escape and commutator properties are meant to capture “Euclidean-like” structure of the group. Other metrics, such as Carnot-Carathéodory metrics on Carnot Lie groups such as the Heisenberg group, usually fail one or both of these properties.

I want to explain why this is true. Look at the proof of theorem 7. The problem comes from the commutator estimate (1). I shall reproduce the relevant part of the proof because I don’t yet know how to write good-looking latex posts:

From the commutator estimate (1) and the triangle inequality we also obtain a conjugation estimate

\displaystyle  \| ghg^{-1} \| \sim \|h\|

whenever {\|g\|, \|h\| \leq \epsilon}. Since left-invariance gives

\displaystyle  d(g,h) = \| g^{-1} h \|

we then conclude an approximate right invariance

\displaystyle  d(gk,hk) \sim d(g,h)

whenever {\|g\|, \|h\|, \|k\| \leq \epsilon}.

The conclusion is that the right translations in the group are Lipschitz (with respect to the Gleason metric). Because this distance (I use “distance” instead of “metric”) is also left invariant, it follows that left and right translations are Lipschitz.

Let now G be a connected Lie group with a left-invariant distribution, obtained by left translation of a vector space D included in the Lie algebra of G. The distribution is completely non-integrable if D generates the Lie algebra by using the + and Lie bracket operations. We put a euclidean norm on D and we get a CC distance on the group defined by: the CC distance between two elements of the group equals the infimum of the lengths of horizontal (a.e. differentiable, with tangent in the distribution) curves joining the said points.

The remark 1 of Tao is a consequence of the following fact: if the CC distance is right invariant then D equals the Lie algebra of the group, therefore the distance is riemannian.

Here is why: in a sub-riemannian group (that is, a group with a distribution and CC distance as explained previously) the left translations are Lipschitz (they are isometries) but not all right translations are Lipschitz, unless D equals the Lie algebra of G. Indeed, let us suppose that all right translations are Lipschitz. Then, by the Margulis-Mostow version (see also this) of the Rademacher theorem, the right translation by an element “a” is Pansu differentiable almost everywhere. It follows that the Pansu derivative of the right translation by “a” (in almost every point) preserves the distribution. A simple calculation based on invariance (truly, some explanations are needed here; a sketch is given below) shows that as a consequence the adjoint action of “a” preserves D. Because “a” is arbitrary, this implies that D is an ideal of the Lie algebra. But D generates the Lie algebra, therefore D equals the Lie algebra of G.
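
Here is one way to see that step (a sketch, hedged): in the left-invariant trivialization of the tangent bundle, the differential of the right translation R_{a}(x) = xa is Ad(a^{-1}), because R_{a} \circ L_{x} = L_{xa} \circ C_{a^{-1}}, where C_{a^{-1}}(y) = a^{-1} y a. The horizontal space at any point is the left translate of D, so if the derivative of R_{a} sends horizontal directions to horizontal directions (which is what the Lipschitz property plus Pansu differentiability gives, at points of differentiability), then Ad(a^{-1}) D \subseteq D. Since “a” is arbitrary, D is invariant under the adjoint representation, hence [\mathfrak{g}, D] \subseteq D, that is D is an ideal.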

If you know a shorter proof please let me know.

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for details of the role of the Gleason metric in the proof of the Hilbert 5th problem.

Curvature and Brunn-Minkowski inequality

A beautiful paper by Yann Ollivier and Cédric Villani

A curved Brunn–Minkowski inequality on the discrete hypercube, or: What is the Ricci curvature of the discrete hypercube?

The Brunn-Minkowski inequality says that, in euclidean spaces, the log of the volume is concave along Minkowski interpolation of sets. The concavity inequality is improved, in riemannian manifolds with Ricci curvature at least K, by a quadratic term with coefficient proportional to K.
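
For the record, the dimension-free form of the classical inequality: for nonempty compact sets A, B \subset \mathbb{R}^{n} and t \in [0,1],

vol((1-t) A + t B) \geq vol(A)^{1-t} \, vol(B)^{t},

which is exactly the concavity of \log vol along Minkowski interpolation of sets; the curved versions improve this concavity by the quadratic term mentioned above.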

The paper is remarkable in many ways. In particular, two roads towards curvature in spaces more general than riemannian are compared: the coarse curvature introduced by Ollivier and the one based on the displacement convexity of the entropy function (Felix Otto, Cédric Villani, John Lott, Karl-Theodor Sturm), studied by many researchers. Both are related to Wasserstein distances. NEITHER works for sub-riemannian spaces, which is very, very interesting.

In a few words, here is the description of the coarse Ricci curvature: take an epsilon and consider the map from the metric space (a riemannian manifold, say) to the space of probabilities which associates to a point of the metric space the restriction of the volume measure to the epsilon-ball centered at that point (normalized to give a probability). If this map is Lipschitz with constant L(epsilon) (on the space of probabilities take the L^1 Wasserstein distance), then the epsilon-coarse Ricci curvature times epsilon squared is equal to 1 minus L(epsilon) (thus we get a lower bound for the Ricci curvature function, if we are in a riemannian manifold). The same definition works in a discrete space (this time epsilon is fixed).
The second definition of Ricci curvature comes from reverse engineering of the displacement convexity inequality discovered in many particular spaces. The downside of this definition is that it is hard to “compute” it.
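
To illustrate the first (coarse) definition in the discrete setting of the paper’s title, here is a Python sketch (with ad hoc helper functions) which computes the coarse Ricci curvature between two adjacent vertices of the hypercube \{0,1\}^{n}, for the lazy random walk which stays in place with probability 1/2; the L^1 Wasserstein distance is computed by solving the transportation linear program.

import numpy as np
from itertools import product
from scipy.optimize import linprog

n = 3
verts = list(product([0, 1], repeat=n))
idx = {v: i for i, v in enumerate(verts)}
N = len(verts)

# Hamming (graph) distance matrix
D = np.array([[sum(a != b for a, b in zip(p, q)) for q in verts] for p in verts], dtype=float)

def lazy_walk(v):
    # measure of the lazy simple random walk started at v
    m = np.zeros(N)
    m[idx[v]] = 0.5
    for k in range(n):
        w = list(v)
        w[k] = 1 - w[k]
        m[idx[tuple(w)]] = 0.5 / n
    return m

def W1(mu, nu):
    # L^1 Wasserstein distance via the transportation linear program
    c = D.flatten()
    A_eq, b_eq = [], []
    for i in range(N):                       # row marginals equal mu
        row = np.zeros((N, N)); row[i, :] = 1
        A_eq.append(row.flatten()); b_eq.append(mu[i])
    for j in range(N):                       # column marginals equal nu
        col = np.zeros((N, N)); col[:, j] = 1
        A_eq.append(col.flatten()); b_eq.append(nu[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None), method="highs")
    return res.fun

x, y = verts[0], verts[1]                    # two adjacent vertices
kappa = 1 - W1(lazy_walk(x), lazy_walk(y)) / D[idx[x], idx[y]]
print(kappa)                                 # comes out as 1/n (here 1/3) for this lazy walk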

Initially, this second definition was related to the L^2 Wasserstein distance which, according to Otto calculus, gives the space of probabilities (in the L^2 frame) the structure of an infinite dimensional riemannian manifold.

Concerning sub-riemannian spaces: in the first definition the map above cannot be Lipschitz, and in the second definition there is (I think) a manifestation of the fact that we cannot put, in a metrically acceptable way, a sub-riemannian space into a riemannian-like one, even an infinite dimensional one.