# Escape property of the Gleason metric and sub-riemannian distances again

The last post of Tao from his series of posts on Hilbert's fifth problem contains interesting results which can be used to understand the differences between Gleason distances and sub-riemannian distances or, more generally, norms on groups with dilations.

For normed groups with dilations see my previous post (where links to articles are also provided). Check my homepage for more details (finally I am online again).

There is also another post of mine on the Gleason metric (distance) and the CC (or sub-riemannian) distance, where I explain why the commutator estimate (definition 3, relation (2) from the last post of Tao) forces “commutativity”, in the sense that a sub-riemannian left invariant distance on a Lie group which has the commutator estimate must be a riemannian distance.

What about the escape property (Definition 3, relation (1) from the post of Tao)?

From his Proposition 10 we see that the escape property implies the commutator estimate, therefore a sub-riemannian left invariant distance with the escape property must be riemannian.

An explanation of this phenomenon can be deduced by using the notion of “coherent projection”, section 9 of the paper

A characterization of sub-riemannian spaces as length dilation structures constructed via coherent projections, Commun. Math. Anal. 11 (2011), No. 2, pp. 70-111

in the very particular case of sub-riemannian Lie groups (or for that matter normed groups with dilations).

Suppose we have a normed group with dilations $(G, \delta)$ which has another left invariant dilation structure on it (in the paper this is denoted by a “$\delta$ bar”, here I shall use the notation $\alpha$ for this supplementary dilation structure).

There is one such dilation structure available for any Lie group (notice that I am not trying to give a proof of the H5 problem): namely, for any $\varepsilon > 0$ (but not too big)

$\alpha_{\varepsilon} g = \exp ( \varepsilon \log (g))$

(maybe interesting: which famous lemma is equivalent with the fact that $(G,\alpha)$ is a group with dilations?)
Take $\delta$ to be a dilation structure coming from a left-invariant distribution on the group. Then $\delta$ commutes with $\alpha$ and moreover

(*) $\lim_{\varepsilon \rightarrow 0} \alpha_{\varepsilon}^{-1} \delta_{\varepsilon} x = Q(x)$

where $Q$ is a projection: $Q(Q(x)) = Q(x)$ for any $x \in G$.

It is straightforward to check that (the left-translation of) $Q$ (over the whole group) is a coherent projection, more precisely it is the projection on the distribution!
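In the simplest sub-riemannian example this can be made completely explicit. Here is a sketch (mine, not from the paper) in the Heisenberg group in exponential coordinates: $\delta_{\varepsilon}$ is the intrinsic dilation coming from the horizontal distribution, $\alpha_{\varepsilon} = \exp(\varepsilon \log(\cdot))$ is linear scaling, and the limit (*) is visibly the projection on the distribution.

```python
# Sketch: the limit (*) in the Heisenberg group, exponential
# coordinates (x1, x2, x3); the distribution is span{x1, x2}.

def delta(eps, x):
    """Intrinsic (homogeneous) dilation coming from the distribution."""
    return (eps * x[0], eps * x[1], eps**2 * x[2])

def alpha(eps, x):
    """alpha_eps = exp(eps log(.)): linear scaling in exp coordinates."""
    return (eps * x[0], eps * x[1], eps * x[2])

def alpha_inv_delta(eps, x):
    """The composite alpha_eps^{-1} delta_eps appearing in (*)."""
    return alpha(1.0 / eps, delta(eps, x))

def Q(x):
    """The limit of (*): projection on the horizontal distribution."""
    return (x[0], x[1], 0.0)

x = (1.0, -2.0, 3.0)
for eps in (0.1, 0.01, 0.001):
    print(alpha_inv_delta(eps, x))   # third component shrinks like eps

print(Q(Q(x)) == Q(x))               # Q is indeed a projection
```

Here $\alpha_{\varepsilon}^{-1} \delta_{\varepsilon} x = (x_{1}, x_{2}, \varepsilon x_{3})$, so the convergence to $Q(x) = (x_{1}, x_{2}, 0)$ is immediate.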

Exercise: take $\varepsilon = 1/n$ and use (*) to prove that the escape property of Tao implies that $Q$ is (locally) injective. This implies in turn that $Q = id$; therefore the distribution is the tangent bundle, and the distance is riemannian!

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for understanding in detail the role of the escape property in the proof of the Hilbert 5th problem.

# Pros and cons of higher order Pansu derivatives

This interesting question from mathoverflow

Higher order Pansu derivative

is asked by nil (no website, no location). I shall try to explain the pros and cons of higher order derivatives in Carnot groups. As for a real answer to nil’s question, I could tell him but then …

For the “Pansu derivative” see the paper (mentioned in this previous post):

Métriques de Carnot-Carathéodory et quasiisométries des espaces symétriques de rang un, The Annals of Mathematics Second Series, Vol. 129, No. 1 (Jan., 1989), pp. 1-60

Such derivatives can be done in any metric space with dilations, or in any normed group with dilations in particular (see definition in this previous post).

Pros/cons: It would be interesting to have a higher order differential calculus with Pansu derivatives, for all the reasons which make higher derivatives interesting in more familiar situations. Three examples come to my mind: convexity, higher order differential operators and curvature.

1. Convexity pro: the positivity of the hessian of a function implies convexity. In the world of Carnot groups the most natural definition of convexity (at least that is what I think) is the following: a function $f: N \rightarrow \mathbb{R}$, defined on a Carnot group $N$ with (homogeneous) dilations $\displaystyle \delta_{\varepsilon}$, is convex if for any $x,y \in N$ and for any $\varepsilon \in [0,1]$ we have

$f( x \delta_{\varepsilon}(x^{-1} y)) \leq f(x) + \varepsilon (-f(x) + f(y))$.

There are conditions in terms of higher order horizontal derivatives (if the function is derivable in the classical sense) which are sufficient for the function to be convex (in the mentioned sense). Note that the positivity of the horizontal hessian is not enough! It would be nice to have a more intrinsic differential condition, which does not use classical horizontal derivatives. Con: as in classical analysis, we can do well without second order derivatives when we study convexity. In fact convex analysis is such fun precisely because we can do it without the need of differentiability.
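The definition above can be tested numerically. A minimal sketch (mine), using the Heisenberg group with its homogeneous dilations, and the function $f(x) = x_{1}^{2} + x_{2}^{2}$, which depends only on the horizontal part and is convex in the sense of the inequality above:

```python
# Sketch: check the dilation-convexity inequality
#   f(x delta_eps(x^{-1} y)) <= f(x) + eps (f(y) - f(x))
# for f(x1, x2, x3) = x1^2 + x2^2 on the Heisenberg group.
import random

def mul(x, y):
    """Heisenberg group law on R^3."""
    return (x[0] + y[0], x[1] + y[1],
            x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0]))

def inv(x):
    return (-x[0], -x[1], -x[2])

def delta(eps, x):
    """Homogeneous dilations of the Heisenberg group."""
    return (eps * x[0], eps * x[1], eps**2 * x[2])

def f(x):
    return x[0]**2 + x[1]**2   # depends only on the horizontal part

random.seed(0)
ok = True
for _ in range(1000):
    x = tuple(random.uniform(-2, 2) for _ in range(3))
    y = tuple(random.uniform(-2, 2) for _ in range(3))
    eps = random.random()
    lhs = f(mul(x, delta(eps, mul(inv(x), y))))
    rhs = f(x) + eps * (f(y) - f(x))
    ok = ok and lhs <= rhs + 1e-12
print(ok)
```

The inequality holds here because the horizontal part of $x \delta_{\varepsilon}(x^{-1}y)$ is the classical convex combination of the horizontal parts of $x$ and $y$, so classical convexity of the squared norm applies.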

2. Differential operators Pro: Speaking about higher order horizontal derivatives, notice that the horizontal laplacian is not expressed in an intrinsic manner (i.e. as a combination of higher order Pansu derivatives). It would be interesting to have such a representation for the horizontal laplacian, at least for not having to use “coordinates” (well, these are families of horizontal vector fields which span the distribution) in order to be able to define the operator. Con: nevertheless the horizontal hessian can be defined intrinsically in a weak sense, using only the sub-riemannian distance (and the energy functional associated to it, as in the classical case). Sobolev spaces and others are a flourishing field of research, without the need to appeal to higher order Pansu derivatives. (pro: this regards the existence of solutions in a weak sense, but to be honest, what about the regularity business?)

3. Curvature Pro: What is the curvature of a level set of a function defined on a Carnot group? Clearly higher order derivatives are needed here. Con: level sets are not even rectifiable in the Carnot world!

Besides all this, there is a general:

Con: There are not many functions, from a Carnot group to itself, which are Pansu derivable everywhere, with continuous derivative. Indeed, for most Carnot groups (excepting those of Heisenberg type and jet type) only left translations are “smooth” in this sense. So even if we could define higher order derivatives, there is not much room to apply them.

However, I think that it is possible to define derivatives of Pansu type such that there are always plenty of functions derivable in this sense, and moreover that it is possible to introduce higher order derivatives of Pansu type (i.e. which can be expressed with dilations).

UPDATE: This should be read in conjunction with this post. Please look at Lemma 11 from the last post of Tao and also at the notations made previously in that post. Now, relation (4) contains an estimate of a kind of discretization of a second order derivative. Based on Lemma 11 and on what I explained in the linked post, relation (4) cannot hold in the sub-riemannian world, that is, there is surely no bump function $\phi$ such that $d_{\phi}$ is equivalent to a sub-riemannian distance (unless the metric is riemannian). In conclusion, there are no “interesting” nontrivial $C^{1,1}$ bump functions (say quadratic-like; see in the post of Tao how he constructs his bump function by using the distance).

There must be something going wrong with the “Taylor expansion” from the end of the proof of Lemma 11 if, instead of a norm with respect to a bump function, we put a sub-riemannian distance. Presumably instead of “$n$” and “$n^{2}$” we have to put something else, like “$n^{a}$” and “$n^{b}$” respectively, with exponents $a, b/2 < 1$ which are also functions of (a kind of degree, say, of) $g$. Well, the exponent $b$ will be very interesting, because it is related to some notion of curvature yet to be discovered.

# Noncommutative Baker-Campbell-Hausdorff formula: the problem

I come back to a problem alluded to in a previous post, where the proof of the Baker-Campbell-Hausdorff formula from this post by Tao is characterized as “commutative”, because of the “radial homogeneity” condition in his Theorem 1, which forces commutativity.

Now I am going to try to explain this, as well as what the problem of a “noncommutative” BCH formula would be.

Take a Lie group $G$ and identify a neighbourhood of its neutral element with a neighbourhood of the $0$ element of its Lie algebra. This is standard for Carnot groups (connected, simply connected nilpotent groups which admit a one parameter family of contracting automorphisms), where the exponential is bijective, so the identification is global. The advantage of this identification is that we get rid of log’s and exp’s in formulae.

For every $s > 0$ define a deformation of the group operation (which is denoted multiplicatively), by the formula

(1)                $s(x *_{s} y) = (sx) (sy)$

Then we have $x *_{s} y \rightarrow x+y$ as $s \rightarrow 0$.

Denote by $[x,y]$ the Lie bracket of the (Lie algebra of the) group $G$ with initial operation and likewise denote by $[x,y]_{s}$ the Lie bracket of the operation $*_{s}$.

The relation between these brackets is: $[x,y]_{s} = s [x,y]$.

From the Baker-Campbell-Hausdorff formula we get:

$-x + (x *_{s} y) - y = \frac{s}{2} [x,y] + o(s)$,

(for reasons which will be clear later, I am not using the commutativity of addition), therefore

(2)         $\frac{1}{s} ( -x + (x *_{s} y) - y ) \rightarrow \frac{1}{2} [x,y]$       as        $s \rightarrow 0$.

Remark that (2) looks like a valid definition of the Lie bracket which is not related to the group commutator. Moreover, it is a formula where we differentiate only once, so to say. In the usual derivation of the Lie bracket from the group commutator we have to differentiate twice!
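Relation (2) can be checked by hand in the simplest noncommutative example. Here is a numerical sketch (mine; the helper names are not standard) for the Heisenberg group in exponential coordinates, with linear multiplication by scalars, where the quotient in (2) turns out to be exactly $\frac{1}{2}[x,y]$ for every $s$:

```python
# Sketch: the deformed operation x *_s y on the Heisenberg group
# (linear multiplication by scalars, exponential coordinates) and
# the quotient in (2) recovering half the Lie bracket.

def mul(x, y):
    """Heisenberg group law on R^3 (exponential coordinates)."""
    return (x[0] + y[0], x[1] + y[1],
            x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0]))

def smul(s, x):
    """Linear multiplication by scalars on the Lie algebra."""
    return (s * x[0], s * x[1], s * x[2])

def star(s, x, y):
    """x *_s y, defined by  s (x *_s y) = (sx)(sy),  formula (1)."""
    return smul(1.0 / s, mul(smul(s, x), smul(s, y)))

def bracket(x, y):
    """Lie bracket of the Heisenberg Lie algebra."""
    return (0.0, 0.0, x[0] * y[1] - x[1] * y[0])

x, y = (1.0, 2.0, 0.5), (-1.0, 1.0, 2.0)
half_bracket = tuple(0.5 * c for c in bracket(x, y))
for s in (0.1, 0.01):
    z = star(s, x, y)
    quotient = tuple((-a + c - b) / s for a, b, c in zip(x, y, z))
    print(quotient, half_bracket)
```

Here the quotient is independent of $s$ because the Heisenberg group is nilpotent of step 2; in a general Lie group the convergence in (2) is only asymptotic.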

Let us now pass to a slightly different context: suppose $G$ is a normed group with dilations (the norm is for simplicity, we can do without; in the case of “usual” Lie groups, taking a norm corresponds to taking a left invariant Riemannian distance on the group).

$G$ is a normed group with dilations if

• it is a normed group, that is there is a norm function defined on $G$ with values in $[0,+\infty)$, denoted by $\|x\|$, such that

$\|x\| = 0$ iff $x = e$ (the neutral element)

$\| x y \| \leq \|x\| + \|y\|$

$\| x^{-1} \| = \| x \|$

– “balls” $\left\{ x \mid \|x\| \leq r \right\}$ are compact in the topology induced by the distance $d(x,y) = \|x^{-1} y\|$,

• and a “multiplication by positive scalars” $(s,x) \in (0,\infty) \times G \mapsto sx \in G$ with the properties:

$s(px) = (sp)x$ , $1x = x$ and $sx \rightarrow e$ as $s \rightarrow 0$; also $s(x^{-1}) = (sx)^{-1}$,

– define $x *_{s} y$ as previously, by the formula (1) (only this time use the multiplication by positive scalars). Then

$x *_{s} y \rightarrow x \cdot y$      as      $s \rightarrow 0$

uniformly with respect to $x, y$ in an arbitrary closed ball.

$\frac{1}{s} \| sx \| \rightarrow \|x \|_{0}$, uniformly with respect to $x$ in a closed ball, and moreover $\|x\|_{0} = 0$ implies $x = e$.

Comments:

1. In truth, everything is defined in a neighbourhood of the neutral element, and $G$ only has to be a local group.

2. the operation $x \cdot y$ is a (local) group operation and the function $\|x\|_{0}$ is a norm for this operation, which is also “homogeneous”, in the sense

$\|sx\|_{0} = s \|x\|_{0}$.

Also we have the distributivity property $s(x \cdot y) = (sx) \cdot (sy)$, but generally the dot operation is not commutative.

3. A Lie group with a left invariant Riemannian distance $d$ and with the usual multiplication by scalars (after making the identification of a neighbourhood of the neutral element with a neighbourhood in the Lie algebra) is an example of a normed group with dilations, with the norm $\|x\| = d(e,x)$.

4. Any Carnot group can be endowed with a structure of a group with dilations, by defining the multiplication by positive scalars with the help of its intrinsic dilations. Indeed, take for example a Heisenberg group $G = \mathbb{R}^{3}$ with the operation

$(x_{1}, x_{2}, x_{3}) (y_{1}, y_{2}, y_{3}) = (x_{1} + y_{1}, x_{2} + y_{2}, x_{3} + y_{3} + \frac{1}{2} (x_{1}y_{2} - x_{2} y_{1}))$

multiplication by positive scalars

$s (x_{1},x_{2},x_{3}) = (sx_{1}, sx_{2}, s^{2}x_{3})$

and norm given by

$\| (x_{1}, x_{2}, x_{3}) \|^{2} = (x_{1})^{2} + (x_{2})^{2} + |x_{3}|$

Then we have $X \cdot Y = XY$, for any $X,Y \in G$ and $\| X\|_{0} = \|X\|$ for any $X \in G$.

Carnot groups are therefore just a noncommutative generalization of vector spaces, with the addition operation $+$ replaced by a noncommutative operation!
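Point 4 can be verified numerically. A sketch (mine) showing that with the intrinsic dilations the deformed operation $x *_{s} y$ equals the group operation for every $s$, so the limit operation is the group operation itself, and that $\|sx\|/s = \|x\|$:

```python
# Sketch: with the intrinsic (homogeneous) dilations of the Heisenberg
# group, x *_s y = xy exactly, and the rescaled norm is already ||.||_0.
import math

def mul(x, y):
    """Heisenberg group law on R^3."""
    return (x[0] + y[0], x[1] + y[1],
            x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0]))

def dil(s, x):
    """Intrinsic dilations: s(x1, x2, x3) = (s x1, s x2, s^2 x3)."""
    return (s * x[0], s * x[1], s**2 * x[2])

def star(s, x, y):
    """x *_s y, defined by  s(x *_s y) = (sx)(sy)."""
    return dil(1.0 / s, mul(dil(s, x), dil(s, y)))

def norm(x):
    """Homogeneous norm: ||x||^2 = x1^2 + x2^2 + |x3|."""
    return math.sqrt(x[0]**2 + x[1]**2 + abs(x[2]))

x, y = (1.0, 2.0, -0.5), (0.3, -1.0, 2.0)
for s in (0.5, 0.1, 0.01):
    z = star(s, x, y)
    print(max(abs(a - b) for a, b in zip(z, mul(x, y))))  # 0 up to rounding

print(abs(norm(dil(0.1, x)) / 0.1 - norm(x)))             # ||sx||/s = ||x||
```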

5. There are many groups with dilations which are not Carnot groups. For example, endow any Lie group with a left invariant sub-riemannian structure and presto, this gives a normed group with dilations structure.

In such a group with dilations, the “radial homogeneity” condition of Tao implies that the operation $x \cdot y$ is commutative! (see the references given in this previous post). Indeed, this radial homogeneity is equivalent to the following assertion: for any $s \in (0,1)$ and any $x, y \in G$

$x s( x^{-1} ) = (1-s)x$

which is called elsewhere the “barycentric condition”. This condition is false in any noncommutative Carnot group! What is true is the following: let, in a Carnot group, $x$ be any solution of the equation

$x s( x^{-1} ) = y$

for given $y \in G$ and $s \in (0,1)$. Then

$x = \sum_{k=0}^{\infty} (s^{k}) y$ ,

(so the solution is unique) where the sum is taken with respect to the group operation (noncommutative series).
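The series formula can be checked numerically. A sketch (mine) in the Heisenberg group, truncating the noncommutative series after enough terms and verifying the residual of the equation $x \, s(x^{-1}) = y$:

```python
# Sketch: in the Heisenberg group, the noncommutative series
# x = sum_{k >= 0} (s^k) y, taken with the group operation,
# solves the equation  x s(x^{-1}) = y.

def mul(x, y):
    """Heisenberg group law on R^3."""
    return (x[0] + y[0], x[1] + y[1],
            x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0]))

def inv(x):
    return (-x[0], -x[1], -x[2])

def dil(s, x):
    """Intrinsic dilations s(x1, x2, x3) = (s x1, s x2, s^2 x3)."""
    return (s * x[0], s * x[1], s**2 * x[2])

def series(s, y, n_terms=200):
    """Group product y . (sy) . (s^2 y) . ..., truncated at n_terms."""
    x = (0.0, 0.0, 0.0)
    for k in range(n_terms):
        x = mul(x, dil(s**k, y))
    return x

s, y = 0.5, (1.0, -1.0, 2.0)
x = series(s, y)
residual = mul(x, dil(s, inv(x)))        # should equal y
print(max(abs(a - b) for a, b in zip(residual, y)))
```

The verification rests on the identity $x = y \cdot s(x)$ satisfied by the series, from which $x \, s(x^{-1}) = y$ follows by cancellation.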

Problem of the noncommutative BCH formula: In a normed group with dilations, express the group operation $xy$ as a noncommutative series, by using instead of “$+$” the operation “$\cdot$” and by using a definition of the “noncommutative Lie bracket” in the same spirit as (2), that is something related to the asymptotic behaviour of the “approximate bracket”

(3)         $[x,y]_{s} = (s^{-1}) ( x^{-1} \cdot (x *_{s} y) \cdot y^{-1} )$.

Notice that there is NO CHANCE to have a limit like the one in (2), so the problem seems hard also from this point of view.

Also notice that if $G$ is a Carnot group then

$[x,y]_{s} = e$ (that is, the neutral element, which plays the role of $0$ here),

which is normal if we think of $G$ as being a kind of noncommutative vector space, even if $G$ is not commutative.

So this noncommutative Lie bracket is not about commutators!

# Gleason metric and CC distance

In the series of posts on Hilbert's fifth problem, Terence Tao defines a Gleason metric (definition 4 here), which is a very important ingredient of the proof of the solution to the H5 problem.

Here is Remark 1. from the post:

The escape and commutator properties are meant to capture “Euclidean-like” structure of the group. Other metrics, such as Carnot-Carathéodory metrics on Carnot Lie groups such as the Heisenberg group, usually fail one or both of these properties.

I want to explain why this is true. Look at the proof of theorem 7. The problem comes from the commutator estimate (1). I shall reproduce the relevant part of the proof because I don’t yet know how to write good-looking latex posts:

From the commutator estimate (1) and the triangle inequality we also obtain a conjugation estimate

$\displaystyle \| ghg^{-1} \| \sim \|h\|$

whenever ${\|g\|, \|h\| \leq \epsilon}$. Since left-invariance gives

$\displaystyle d(g,h) = \| g^{-1} h \|$

we then conclude an approximate right invariance

$\displaystyle d(gk,hk) \sim d(g,h)$

whenever ${\|g\|, \|h\|, \|k\| \leq \epsilon}$.

The conclusion is that the right translations in the group are Lipschitz (with respect to the Gleason metric). Because this distance (I use “distance” instead of “metric”) is also left invariant, it follows that left and right translations are Lipschitz.

Let now G be a connected Lie group with a left-invariant distribution, obtained by left translates of a vector space D included in the Lie algebra of G. The distribution is completely non-integrable if D generates the Lie algebra by using the + and Lie bracket operations. We put a euclidean norm on D and we get a CC distance on the group, defined by: the CC distance between two elements of the group equals the infimum of lengths of horizontal (a.e. derivable, with tangent in the distribution) curves joining the said points.

Remark 1 of Tao is a consequence of the following fact: if the CC distance is right invariant then D equals the Lie algebra of the group, and therefore the distance is riemannian.

Here is why: in a sub-riemannian group (that is a group with a distribution and CC distance as explained previously) the left translations are Lipschitz (they are isometries) but not all right translations are Lipschitz, unless D equals the Lie algebra of G. Indeed, let us suppose that all right translations are Lipschitz. Then, by the Margulis-Mostow version (see also this) of the Rademacher theorem, the right translation by an element “a” is Pansu derivable almost everywhere. It follows that the Pansu derivative of the right translation by “a” (in almost every point) preserves the distribution. A simple calculation based on invariance (truly, some explanations are needed here) shows that by consequence the adjoint action of “a” preserves D. Because “a” is arbitrary, this implies that D is an ideal of the Lie algebra. But D generates the Lie algebra, therefore D equals the Lie algebra of G.

If you know a shorter proof please let me know.
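The failure of the commutator estimate for a CC distance can also be seen numerically. A sketch (mine), using the homogeneous norm $\|x\|^{2} = x_{1}^{2} + x_{2}^{2} + |x_{3}|$ on the Heisenberg group (which is equivalent to the CC norm): for horizontal elements $g, h$ of size $t$, the group commutator has norm of order $t$, not $t^{2}$, so the ratio in the Gleason commutator estimate $\|[g,h]\| \leq C \|g\| \|h\|$ blows up as $t \rightarrow 0$.

```python
# Sketch: on the Heisenberg group with the homogeneous norm
# ||x||^2 = x1^2 + x2^2 + |x3|, the Gleason commutator estimate
# ||[g,h]|| <= C ||g|| ||h|| fails: the ratio is ~ 1/t, unbounded.
import math

def mul(x, y):
    """Heisenberg group law on R^3."""
    return (x[0] + y[0], x[1] + y[1],
            x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0]))

def inv(x):
    return (-x[0], -x[1], -x[2])

def norm(x):
    """Homogeneous norm, equivalent to the CC norm."""
    return math.sqrt(x[0]**2 + x[1]**2 + abs(x[2]))

def commutator(g, h):
    """Group commutator [g,h] = g^{-1} h^{-1} g h."""
    return mul(mul(inv(g), inv(h)), mul(g, h))

for t in (1.0, 0.1, 0.01, 0.001):
    g, h = (t, 0.0, 0.0), (0.0, t, 0.0)
    ratio = norm(commutator(g, h)) / (norm(g) * norm(h))
    print(t, ratio)   # ratio grows like 1/t as t -> 0
```

Indeed here $[g,h] = (0,0,t^{2})$, whose homogeneous norm is $t$, while $\|g\| \|h\| = t^{2}$.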

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for details of the role of the Gleason metric in the proof of the Hilbert 5th problem.

# Rigidity of algebraic structure: principle of common cause

I follow with a lot of interest the stream of posts by Terence Tao on Hilbert's fifth problem and I am waiting impatiently to see how it connects with the field of approximate groups.

In his latest post Tao writes that

… Hilbert’s fifth problem is a manifestation of the “rigidity” of algebraic structure (in this case, group structure), which turns weak regularity (continuity) into strong regularity (smoothness).

This is something amazing and worthy of exploration!
I propose the following “explanation” of this phenomenon, taking the form of the:

Principle of common cause: a uniformly continuous algebraic structure has a smooth structure because both structures can be constructed from an underlying emergent algebra (introduced here).

Here are more explanations (adapted from the first paper on emergent algebras):

A differentiable algebra is an algebra (a set of operations A) over a manifold X with the property that all the operations of the algebra are differentiable with respect to the manifold structure of X. Let us denote by D the differential structure of the manifold X.
From a more computational viewpoint, we may think about the calculus which can be done in a differentiable algebra as being generated by the elements of a toolbox with two compartments, “A” and “D”:

– “A” contains the algebraic information, that is the operations of the algebra, as well as the algebraic relations (like, for example, “the operation ∗ is associative”, or “the operation ∗ is commutative”, and so on),
– “D” contains the differential structure information, that is the information needed in order to formulate the statement “the function f is differentiable”.
The compartments “A” and “D” are compatible, in the sense that any operation from “A” is differentiable according to “D”.

I propose a generalization of differentiable algebras, where the underlying differential structure is replaced by a uniform idempotent right quasigroup (irq).

Algebraically, irqs are related to racks and quandles, which appear in knot theory (the axioms of an irq correspond to the first two Reidemeister moves). A uniform irq is a family of irqs indexed by the elements of a commutative group (with an absolute), such that the third Reidemeister move corresponds to a statement in terms of uniform limits of composites of operations of the family of irqs.
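For the sub-riemannian Lie groups discussed in the previous posts, the basic uniform irq is given by the dilation operations. A minimal sketch (mine; the name `circ` is my notation for $x \circ_{\varepsilon} y = x \, \delta_{\varepsilon}(x^{-1}y)$) on the Heisenberg group, checking the two axioms corresponding to the first two Reidemeister moves:

```python
# Sketch: the family of idempotent right quasigroup operations
#   x o_eps y = x delta_eps(x^{-1} y)
# on the Heisenberg group, indexed by eps > 0.

def mul(x, y):
    """Heisenberg group law on R^3."""
    return (x[0] + y[0], x[1] + y[1],
            x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0]))

def inv(x):
    return (-x[0], -x[1], -x[2])

def dil(eps, x):
    """Intrinsic dilations eps(x1, x2, x3) = (eps x1, eps x2, eps^2 x3)."""
    return (eps * x[0], eps * x[1], eps**2 * x[2])

def circ(eps, x, y):
    """The irq operation x o_eps y = x dil_eps(x^{-1} y)."""
    return mul(x, dil(eps, mul(inv(x), y)))

x, y, eps = (1.0, 2.0, -1.0), (0.5, -1.0, 3.0), 0.3

# R1 (idempotency): x o_eps x = x
print(circ(eps, x, x))

# R2 (right quasigroup): the inverse operation of o_eps is o_{1/eps}
z = circ(eps, x, circ(1.0 / eps, x, y))
print(max(abs(a - b) for a, b in zip(z, y)))   # ~ 0
```

R2 holds because the dilations form a one-parameter group: $\delta_{\varepsilon} \delta_{1/\varepsilon} = id$.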

An emergent algebra is an algebra A over the uniform irq X such that all operations and algebraic relations from A can be constructed or deduced from combinations of operations in the uniform irq, possibly by taking limits which are uniform with respect to a set of parameters. In this approach, the usual compatibility condition between algebraic information and differential information, expressed as the differentiability of algebraic operations with respect to the differential structure, is replaced by the “emergence” of algebraic operations and relations from the minimal structure of a uniform irq.

Thus, for example, algebraic operations and the differentiation operation (taking the triple (x,y,f) to Df(x)y, where “x, y” are points and “f” is a function) are expressed as uniform limits of composites of more elementary operations. The algebraic operations appear to be differentiable because of abstract algebraic nonsense (obtained by exploiting the Reidemeister moves) and because of the uniformity assumptions, which allow us to freely permute limits with respect to the parameters in the commutative group (as they tend to the absolute).