# Escape property of the Gleason metric and sub-riemannian distances again

The last post of Tao from his series of posts on Hilbert's fifth problem contains interesting results which can be used to understand the differences between Gleason distances and sub-riemannian distances or, more generally, norms on groups with dilations.

For normed groups with dilations see my previous post (where links to articles are also provided). Check my homepage for more details (finally I am online again).

There is also another post of mine on the Gleason metric (distance) and the CC (or sub-riemannian) distance, where I explain why the commutator estimate (definition 3, relation (2) from the last post of Tao) forces "commutativity", in the sense that a sub-riemannian left invariant distance on a Lie group which satisfies the commutator estimate must be a riemannian distance.

What about the escape property (Definition 3, relation (1) from the post of Tao)?

From his Proposition 10 we see that the escape property implies the commutator estimate, therefore a sub-riemannian left invariant distance with the escape property must be riemannian.

An explanation of this phenomenon can be deduced by using the notion of “coherent projection”, section 9 of the paper

A characterization of sub-riemannian spaces as length dilation structures constructed via coherent projections, Commun. Math. Anal. 11 (2011), No. 2, pp. 70-111

in the very particular case of sub-riemannian Lie groups (or for that matter normed groups with dilations).

Suppose we have a normed group with dilations $(G, \delta)$ which has another left invariant dilation structure on it (in the paper this is denoted by a “$\delta$ bar”, here I shall use the notation $\alpha$ for this supplementary dilation structure).

There is one such dilation structure available for any Lie group (notice that I am not trying to give a proof of the H5 problem): namely, for any $\varepsilon > 0$ (but not too big)

$\alpha_{\varepsilon} g = \exp ( \varepsilon \log (g))$

(maybe interesting: which famous lemma is equivalent with the fact that $(G,\alpha)$ is a group with dilations?)
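For the Heisenberg group of upper unitriangular $3 \times 3$ matrices this $\alpha_{\varepsilon}$ is easy to compute, since the exp and log series terminate for nilpotent matrices. A minimal numerical sketch (the matrix helpers are mine, for illustration); it checks the one-parameter property $\alpha_{\varepsilon} \circ \alpha_{\mu} = \alpha_{\varepsilon \mu}$:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_comb(A, B, s):
    # A + s * B, entrywise
    return [[A[i][j] + s * B[i][j] for j in range(3)] for i in range(3)]

I = [[float(i == j) for j in range(3)] for i in range(3)]

def mexp(N):
    # N is strictly upper triangular, so N^3 = 0 and the series stops
    return mat_comb(mat_comb(I, N, 1.0), mat_mul(N, N), 0.5)

def mlog(g):
    # g = I + N unipotent, so log g = N - N^2 / 2
    N = mat_comb(g, I, -1.0)
    return mat_comb(N, mat_mul(N, N), -0.5)

def alpha(eps, g):
    # alpha_eps(g) = exp(eps * log g)
    return mexp([[eps * v for v in row] for row in mlog(g)])

# a point of the Heisenberg group
g = [[1.0, 2.0, 5.0],
     [0.0, 1.0, 3.0],
     [0.0, 0.0, 1.0]]

# one-parameter property: alpha_eps(alpha_mu(g)) = alpha_{eps * mu}(g)
lhs = alpha(0.5, alpha(0.5, g))
rhs = alpha(0.25, g)
print(max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3)))
# close to 0
```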
Take $\delta$ to be a dilation structure coming from a left-invariant distribution on the group. Then $\delta$ commutes with $\alpha$ and moreover

(*) $\lim_{\varepsilon \rightarrow 0} \alpha_{\varepsilon}^{-1} \delta_{\varepsilon} x = Q(x)$

where $Q$ is a projection: $Q(Q(x)) = Q(x)$ for any $x \in G$.

It is straightforward to check that (the left-translation of) $Q$ (over the whole group) is a coherent projection, more precisely it is the projection on the distribution!
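In exponential coordinates on the Heisenberg group the limit (*) can be computed directly: there $\alpha_{\varepsilon}$ is just scalar multiplication by $\varepsilon$, while $\delta_{\varepsilon}$ scales the horizontal coordinates by $\varepsilon$ and the center by $\varepsilon^{2}$. A minimal sketch (coordinates and normalizations are my choice, for illustration):

```python
def delta(eps, x):
    # dilations adapted to the horizontal distribution D = span(X, Y)
    a, b, c = x
    return (eps * a, eps * b, eps ** 2 * c)

def alpha_inv(eps, x):
    # inverse of the one-parameter dilation alpha_eps = exp(eps log(.))
    a, b, c = x
    return (a / eps, b / eps, c / eps)

def Q(x):
    # the candidate limit: projection on the distribution
    a, b, _ = x
    return (a, b, 0.0)

x = (1.0, 2.0, 3.0)
for eps in (0.1, 0.01, 0.001):
    print(alpha_inv(eps, delta(eps, x)))  # third coordinate ~ 3*eps -> 0

# Q is a projection: Q(Q(x)) = Q(x)
print(Q(Q(x)) == Q(x))  # True
```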

Exercise: set $\varepsilon = 1/n$ and use (*) to prove that the escape property of Tao implies that $Q$ is (locally) injective. This implies in turn that $Q = id$, therefore the distribution is the tangent bundle, therefore the distance is riemannian!

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for understanding in detail the role of the escape property in the proof of the Hilbert 5th problem.

# Pros and cons of higher order Pansu derivatives

This interesting question from mathoverflow

Higher order Pansu derivative

is asked by nil (no website, no location). I shall try to explain the pros and cons of higher order derivatives in Carnot groups. As for a real answer to nil’s question, I could tell him but then …

For the "Pansu derivative" see the paper (mentioned in this previous post):

Métriques de Carnot-Carathéodory et quasiisométries des espaces symétriques de rang un, The Annals of Mathematics Second Series, Vol. 129, No. 1 (Jan., 1989), pp. 1-60

Such derivatives can be defined in any metric space with dilations, or in particular in any normed group with dilations (see the definition in this previous post).

Pros/cons: It would be interesting to have a higher order differential calculus with Pansu derivatives, for all the reasons which make higher derivatives interesting in more familiar situations. Three examples come to my mind: convexity, higher order differential operators and curvature.

1. Convexity pro: the positivity of the hessian of a function implies convexity. In the world of Carnot groups the most natural definition of convexity (at least that is what I think) is the following: a function $f: N \rightarrow \mathbb{R}$, defined on a Carnot group $N$ with (homogeneous) dilations $\displaystyle \delta_{\varepsilon}$, is convex if for any $x,y \in N$ and for any $\varepsilon \in [0,1]$ we have

$f( x \delta_{\varepsilon}(x^{-1} y)) \leq f(x) + \varepsilon (-f(x) + f(y))$.
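This inequality can be tested numerically on the Heisenberg group. A minimal sketch (the group law in exponential coordinates and the horizontally convex test function $a^{2} + b^{2}$ are my choices, for illustration):

```python
import random

def mul(x, y):
    # Heisenberg group law in exponential coordinates
    a, b, c = x
    p, q, r = y
    return (a + p, b + q, c + r + (a * q - b * p) / 2)

def inv(x):
    a, b, c = x
    return (-a, -b, -c)

def dil(eps, x):
    # homogeneous dilations: horizontal part scales by eps, center by eps^2
    a, b, c = x
    return (eps * a, eps * b, eps ** 2 * c)

def f(x):
    # squared euclidean norm of the horizontal coordinates
    a, b, _ = x
    return a * a + b * b

random.seed(0)
ok = True
for _ in range(1000):
    x = tuple(random.uniform(-5, 5) for _ in range(3))
    y = tuple(random.uniform(-5, 5) for _ in range(3))
    eps = random.uniform(0, 1)
    lhs = f(mul(x, dil(eps, mul(inv(x), y))))
    rhs = f(x) + eps * (f(y) - f(x))
    ok = ok and lhs <= rhs + 1e-9
print(ok)  # True
```

The check succeeds because this $f$, restricted along the "dilation segment" $\varepsilon \mapsto x \delta_{\varepsilon}(x^{-1} y)$, is a Euclidean convex function of $\varepsilon$; functions depending on the center coordinate behave very differently.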

There are conditions in terms of higher order horizontal derivatives (if the function is derivable in the classical sense) which are sufficient for the function to be convex (in the mentioned sense). Note that the positivity of the horizontal hessian is not enough! It would be nice to have a more intrinsic differential condition, which does not use classical horizontal derivatives. Con: as in classical analysis, we can do well without second order derivatives when we study convexity. In fact convex analysis is so funny because we can do it without the need of differentiability.

2. Differential operators Pro: Speaking about higher order horizontal derivatives, notice that the horizontal laplacian is not expressed in an intrinsic manner (i.e. as a combination of higher order Pansu derivatives). It would be interesting to have such a representation for the horizontal laplacian, at least in order to avoid using "coordinates" (well, these are families of horizontal vector fields which span the distribution) just to be able to define the operator. Con: nevertheless the horizontal hessian can be defined intrinsically in a weak sense, using only the sub-riemannian distance (and the energy functional associated to it, as in the classical case). Sobolev spaces and the like are a flourishing field of research, without the need to appeal to higher order Pansu derivatives. (Pro: this concerns the existence of solutions in a weak sense, but to be honest, what about the regularity business?)

3. Curvature Pro: What is the curvature of a level set of a function defined on a Carnot group? Clearly higher order derivatives are needed here. Con: level sets are not even rectifiable in the Carnot world!

Besides all this, there is a general:

Con: There are not many functions, from a Carnot group to itself, which are Pansu derivable everywhere, with continuous derivative. Indeed, for most Carnot groups (except those of Heisenberg type and jet type) only left translations are "smooth" in this sense. So even if we could define higher order derivatives, there is not much room to apply them.

However, I think that it is possible to define derivatives of Pansu type such that there are always plenty of functions derivable in this sense, and moreover it is possible to introduce higher order derivatives of Pansu type (i.e. which can be expressed with dilations).

UPDATE: This should be read in conjunction with this post. Please look at Lemma 11 from the last post of Tao and also at the notations made previously in that post. Now, relation (4) contains an estimate of a kind of discretization of a second order derivative. Based on Lemma 11 and on what I explained in the linked post, relation (4) cannot hold in the sub-riemannian world, that is, there is surely no bump function $\phi$ such that $d_{\phi}$ is equivalent to a sub-riemannian distance (unless the metric is riemannian). In conclusion, there are no "interesting" nontrivial $C^{1,1}$ bump functions (say quadratic-like; see in the post of Tao how he constructs his bump function by using the distance).

There must be something going wrong with the "Taylor expansion" from the end of the proof of Lemma 11, if instead of a norm with respect to a bump function we put a sub-riemannian distance. Presumably instead of "$n$" and "$n^{2}$" we have to put something else, like "$n^{a}$" and "$n^{b}$" respectively, with exponents $a, b/2 < 1$ which are also functions of (a kind of degree, say, of) $g$. The exponent $b$ will be very interesting, because it is related to some notion of curvature yet to be discovered.

# Gleason metric and CC distance

In the series of posts on Hilbert's fifth problem, Terence Tao defines a Gleason metric (definition 4 here), which is a very important ingredient of the proof of the solution to the H5 problem.

Here is Remark 1. from the post:

The escape and commutator properties are meant to capture “Euclidean-like” structure of the group. Other metrics, such as Carnot-Carathéodory metrics on Carnot Lie groups such as the Heisenberg group, usually fail one or both of these properties.

I want to explain why this is true. Look at the proof of theorem 7. The problem comes from the commutator estimate (1). I shall reproduce the relevant part of the proof because I don’t yet know how to write good-looking latex posts:

From the commutator estimate (1) and the triangle inequality we also obtain a conjugation estimate

$\displaystyle \| ghg^{-1} \| \sim \|h\|$

whenever ${\|g\|, \|h\| \leq \epsilon}$. Since left-invariance gives

$\displaystyle d(g,h) = \| g^{-1} h \|$

we then conclude an approximate right invariance

$\displaystyle d(gk,hk) \sim d(g,h)$

whenever ${\|g\|, \|h\|, \|k\| \leq \epsilon}$.

The conclusion is that the right translations in the group are Lipschitz (with respect to the Gleason metric). Because this distance (I use “distance” instead of “metric”) is also left invariant, it follows that left and right translations are Lipschitz.

Let now G be a connected Lie group with a left-invariant distribution, obtained by left translates of a vector space D included in the Lie algebra of G. The distribution is completely non-integrable if D generates the Lie algebra by using the + and Lie bracket operations. We put a euclidean norm on D and we get a CC distance on the group, defined by: the CC distance between two elements of the group equals the infimum of lengths of horizontal (a.e. derivable, with tangent in the distribution) curves joining the said points.

Remark 1 of Tao is a consequence of the following fact: if the CC distance is right invariant then D equals the Lie algebra of the group, therefore the distance is riemannian.

Here is why: in a sub-riemannian group (that is, a group with a distribution and CC distance as explained previously) the left translations are Lipschitz (they are isometries), but not all right translations are Lipschitz, unless D equals the Lie algebra of G. Indeed, let us suppose that all right translations are Lipschitz. Then, by the Margulis-Mostow version (see also this) of the Rademacher theorem, the right translation by an element "a" is Pansu derivable almost everywhere. It follows that the Pansu derivative of the right translation by "a" (in almost every point) preserves the distribution. A simple calculation based on invariance (truly, some explanations are needed here) shows that, by consequence, the adjoint action of "a" preserves D. Because "a" is arbitrary, this implies that D is an ideal of the Lie algebra. But D generates the Lie algebra, therefore D equals the Lie algebra of G.
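The failure of right translations to be Lipschitz is visible by hand in the Heisenberg Lie algebra, where the adjoint action of a generic element does not preserve the horizontal subspace D = span(X, Y). A minimal sketch (exponential coordinates and my normalization of the bracket):

```python
def bracket(u, v):
    # Heisenberg Lie algebra: [X, Y] = Z, with Z spanning the center
    a, b, _ = u
    p, q, _ = v
    return (0.0, 0.0, a * q - b * p)

def Ad(g_log, v):
    # adjoint action Ad_g v = v + [log g, v]; for the Heisenberg algebra
    # the series stops here, since [log g, [log g, v]] is always 0
    w = bracket(g_log, v)
    return tuple(vi + wi for vi, wi in zip(v, w))

g_log = (1.0, 0.0, 0.0)   # log of a generic group element "a"
v = (0.0, 1.0, 0.0)       # a horizontal vector, v in D = span(X, Y)
print(Ad(g_log, v))       # (0.0, 1.0, 1.0) -- not horizontal anymore
```

Since D is not an ideal here (it generates the whole algebra instead), the argument above applies and the CC distance cannot be right invariant.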

If you know a shorter proof please let me know.

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for details of the role of the Gleason metric in the proof of the Hilbert 5th problem.

# Curvature and Brunn-Minkowski inequality

A beautiful paper by Yann Ollivier and Cedric Villani

A curved Brunn-Minkowski inequality on the discrete hypercube, or: what is the Ricci curvature of the discrete hypercube?

The Brunn-Minkowski inequality says that the log of the volume (in euclidean spaces) is concave. The concavity inequality is improved, in riemannian manifolds with Ricci curvature at least K, by a quadratic term with coefficient proportional to K.
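In its multiplicative form the statement is $vol((1-t)A + tB) \geq vol(A)^{1-t} vol(B)^{t}$. For axis-parallel boxes the Minkowski combination $(1-t)A + tB$ is again a box, so the inequality reduces to AM-GM and is easy to check numerically (a minimal sketch):

```python
import random

random.seed(1)
ok = True
for _ in range(1000):
    n = random.randint(1, 4)
    a = [random.uniform(0.1, 5) for _ in range(n)]   # side lengths of box A
    b = [random.uniform(0.1, 5) for _ in range(n)]   # side lengths of box B
    t = random.uniform(0, 1)
    vol_sum, vol_a, vol_b = 1.0, 1.0, 1.0
    for ai, bi in zip(a, b):
        # the Minkowski combination of boxes is the box of combined sides
        vol_sum *= (1 - t) * ai + t * bi
        vol_a *= ai
        vol_b *= bi
    ok = ok and vol_sum >= vol_a ** (1 - t) * vol_b ** t - 1e-9
print(ok)  # True
```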

The paper is remarkable in many ways. In particular, two roads towards curvature in spaces more general than riemannian are compared: the coarse curvature introduced by Ollivier and another based on the displacement convexity of the entropy function (Felix Otto, Cedric Villani, John Lott, Karl-Theodor Sturm), studied by many researchers. Both are related to Wasserstein distances. NONE works for sub-riemannian spaces, which is very very interesting.

In a few words, here is the description of the coarse Ricci curvature: take an epsilon and consider the application from the metric space (a riemannian manifold, say) to the space of probabilities which associates to a point of the metric space the restriction of the volume measure to the epsilon-ball centered at that point (normalized to give a probability). If this application is Lipschitz with constant L(epsilon) (on the space of probabilities take the L^1 Wasserstein distance), then the epsilon-coarse Ricci curvature times epsilon square equals 1 minus L(epsilon) (thus we get a lower bound on the Ricci curvature function, if we are in a riemannian manifold). The same definition works in a discrete space (this time epsilon is fixed).
The second definition of Ricci curvature comes from reverse-engineering the displacement convexity inequality discovered in many particular spaces. The downside of this definition is that it is hard to "compute" it.
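For the discrete hypercube the coarse definition can be computed exactly by brute force. A minimal sketch (my choices for illustration: the lazy simple random walk as the measure, and the Kantorovich dual of $W_1$ brute-forced over integer-valued potentials, which is exact here because the dual linear program over a graph metric has an integer optimum); for n = 2 it recovers the known value 1/n between adjacent vertices:

```python
from fractions import Fraction
from itertools import product

n = 2                     # dimension of the hypercube {0,1}^n
V = range(2 ** n)         # vertices encoded as bitmasks

def lazy_walk(x):
    # stay put with probability 1/2, jump to each neighbour with prob 1/(2n)
    m = {v: Fraction(0) for v in V}
    m[x] = Fraction(1, 2)
    for i in range(n):
        m[x ^ (1 << i)] += Fraction(1, 2 * n)
    return m

edges = [(u, u ^ (1 << i)) for u in V for i in range(n) if u < u ^ (1 << i)]

def w1(mu, nu):
    # L^1 Wasserstein distance via the Kantorovich dual:
    # max over 1-Lipschitz f of sum f d(mu - nu)
    diff = [mu[v] - nu[v] for v in V]
    best = Fraction(0)
    for f in product(range(-n, n + 1), repeat=len(V)):
        if all(abs(f[u] - f[v]) <= 1 for u, v in edges):
            best = max(best, sum(f[v] * diff[v] for v in V))
    return best

x, y = 0, 1    # two adjacent vertices, d(x, y) = 1
kappa = 1 - w1(lazy_walk(x), lazy_walk(y))
print(kappa)   # 1/2, i.e. 1/n for n = 2
```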

Initially, this second definition was related to the L^2 Wasserstein distance which, according to Otto calculus, gives the space of probabilities (in the L^2 frame) the structure of an infinite dimensional riemannian manifold.

Concerning sub-riemannian spaces: in the first definition the said application cannot be Lipschitz, and in the second definition there is (I think) a manifestation of the fact that we cannot put, in a metrically acceptable way, a sub-riemannian space into a riemannian-like one, even an infinite dimensional one.