Example: Gromov-Hausdorff distances and the Heisenberg group, PART 3

This post continues the previous one “Gromov-Hausdorff distances and the Heisenberg group, PART 2”.

We have seen that small enough balls in the physicist’s Heisenberg group G are like balls in the mathematician’s Heisenberg group H(1), and that big balls in G become more and more like (asymptotically the same as) balls in the euclidean vector space \mathbb{R}^{2}.

What is causing this?

Could it be the choice of a euclidean norm on the generating set D = \mathbb{R}^{2} \times \left\{ 1 \right\}? I don’t think so, and here is why. Let us take any (vector space) norm on \mathbb{R}^{2} instead of a euclidean one. We may repeat the whole construction, and the final outcome would be: the same for small balls, while big balls become asymptotically like balls in \mathbb{R}^{2} with the chosen norm. The algebraic structure of the limits, in the infinitesimally small or in the infinitely big, is the same.

Remember that the group norm is introduced only to estimate quantitatively how the set D generates the group G, so the initial choice of the norm is a kind of gauge.

Could it be then the algebraic structure (the group operation and choice of the generating set)? Yes, but there is much flexibility here.

Instead of G = \mathbb{R}^{2} \times S^{1} with the given group operation, we may take any contact manifold structure over the set G (technically, we may take any symplectic structure over \mathbb{R}^{2} and then contactify it, with fiber S^{1}). Sounds familiar? Yes, indeed: this is a step in the recipe of geometric quantization. (If you really want to understand what is happening, then you should go and read Souriau.)

Briefly said: put a norm on the kernel of the contact form, declare all directions in this kernel horizontal, then repeat the construction of the sub-riemannian distance and of the metric profiles. What you get is this: small balls become asymptotically like balls in the mathematician’s Heisenberg group, while big balls look like balls in a normed vector space.

Therefore, it is not the algebraic structure per se which creates the phenomenon, but the “infinitesimal structure”. This will be treated in a later posting, but before this let me mention an amazing phenomenon.

We are again in the group G and we want to make a map of the small (i.e. of a small enough ball in G) into the big (that is into a ball in the vector space \mathbb{R}^{2}, which is the asymptotically big model of balls from G). Our macroscopic lab is in the asymptotically big, while the phenomenon happens in the small.

A good map is a bi-lipschitz one (it respects the “gauges”, the group norm) from a ball in the vector space \mathbb{R}^{2} to a ball in the Heisenberg group H(1). Surprise: there is no such map! The reason is subtle, basically the same reason as the one which leads to the algebraic structure of the infinitesimally small or infinitely large balls.

However, there are plenty of bi-lipschitz maps from a curve in the ball from the lab (a one dimensional submanifold of the symplectic \mathbb{R}^{2}; these are the lagrangian submanifolds in this case) to the small ball where the phenomenon happens. This is like: you can measure the position, or the momentum, but not both…

If there are no good bi-lipschitz maps, then there are surely quasi-isometric maps. Their accuracy is bounded by the Gromov-Hausdorff distance between big balls and small balls, as explained in the pedagogical paper Maps of metric spaces.

Cezanne and Perelman

Let’s say this is a play continuing the post arXiv for Cezanne, but do you notice any similarity?

Paul Cezanne (image taken from this biographical site):

Grigori Perelman (image taken from his wiki page):

 

UPDATE sept. 4 2012: Grothendieck’s work compared to Cezanne’s, taken from the Grothendieck Circle:

Example: Gromov-Hausdorff distances and the Heisenberg group, PART 2

As the title shows, this post continues the previous one

Gromov-Hausdorff distances and the Heisenberg group, PART 1

The Heisenberg group G is seen from the point of view of the generating set D. Quantitatively, the group norm “measures how” D generates G. The group norm has the following properties:

  • \| X \| = 0 if and only if X = E = (0,1), the neutral element of G. In general \| X\| \geq 0 for any X \in G.
  • \| X \cdot Y \| \leq \|X\| + \|Y\|, for any X,Y \in G (that is a consequence of the fact that if we want to go from E to X \cdot Y by using horizontal increments, then we may go first from E to X, then from X to X \cdot Y, by using horizontal strings).
  • \| X^{-1} \| = \| X \| for any X \in G (consequence of X \in D implies X^{-1} \in D).

From (group) norms we obtain distances: by definition, the distance between X and Y is

d(X,Y) = \| X^{-1} \cdot Y \|

This is the sub-riemannian distance mentioned at the end of the previous post.

The definition of this distance does not say much about its properties. We may use a reasoning similar to the one showing that, in a (finite dimensional) normed vector space, any two norms are equivalent. In our case, the result is the following:

there are strictly positive constants a, c, C such that for any
X \in G (which has the form X = (x, e^{2\pi i z})) with \| X \| \leq a we have

c ( x_{1}^{2} + x_{2}^{2} + \mid z \mid) \leq \|X\|^{2} \leq C ( x_{1}^{2} + x_{2}^{2} + \mid z \mid).

We may take a = 1/3, for example.

For “big” norms, we have another estimate, coming from the fact that the S^{1} part of the semidirect product is compact, thus bounded:

there is a strictly positive constant A such that for any X \in G (which has the form X = (x, e^{2\pi i z})) we have

\| x\| \leq \|X \| \leq \|x\| + A

Let us look now at the ball B(R) = \left\{ X \in G \mbox{ : } \|X\| \leq R \right\} endowed with the rescaled distance

d_{R} (X,Y) = \frac{1}{R} d(X,Y)

Denote by Profile(R) = [B(R), d_{R}] the isometry class (the class of metric spaces isometric with … ) of (B(R), d_{R}). This is called a “metric profile”, see Introduction to metric spaces with dilations, section 2.3, for example.

The function which associates to R > 0 the Profile(R) can be seen as a curve in the Gromov space of (isometry classes of) compact metric spaces, endowed with the Gromov-Hausdorff distance.

This curve, parameterized by R, roams in this huge abstract space.
I want to see what happens when R goes to zero or to infinity. The interpretation is the following: when R is small (or large, respectively), what do small (or large) balls look like?

Based on the previous estimates, we can answer this question.

When R goes to infinity, the profile Profile(R) becomes that of the unit ball in \mathbb{R}^{2} with the euclidean norm. Indeed, this is easy because of the second estimate, which implies that for any X = (R x, e^{2 \pi i z}) and Y = (R y, e^{2 \pi i u}) which belong to B(R) (thus \|x\|, \|y\| \leq 1) we have:

d_{euclidean}(x, y) \leq d_{R}(X,Y) \leq d_{euclidean}(x, y) + \frac{A}{R}.

Therefore, as R goes to infinity, we get the isometry result.

On the other side, if R is small enough (for example, smaller than or equal to 1/3), then Profile(R) becomes stationary!

Indeed, let me introduce a second Heisenberg group, baptized H(1) = \mathbb{R}^{2} \times \mathbb{R}, with the group operation

(x, z) \cdot (y, u) = (x+ y, z + u + \frac{1}{2}\omega(x,y))

Remark that the function (x, e^{2 \pi i z}) \mapsto (x,z) is a group morphism (in fact a local group isomorphism) for z small enough! That means that, locally, the groups G and H(1) are isomorphic. If you don’t know what a local group is, then see the post Notes on local groups by Terence Tao.
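This local isomorphism is easy to test numerically. Here is a sketch (my own encoding, not from the post: an element (x, e^{2\pi i z}) of G is stored as a triple (x_1, x_2, z) with z kept mod 1): for elements with small z parts, the map (x, e^{2 \pi i z}) \mapsto (x,z) sends products in G to products in H(1), because no wraparound mod 1 occurs.

```python
def omega(x, y):
    # the symplectic (area) form on R^2
    return x[0] * y[1] - x[1] * y[0]

def mul_G(X, Y):
    # operation of G = R^2 x S^1; the S^1 coordinate is stored mod 1
    (x1, x2, z), (y1, y2, u) = X, Y
    return (x1 + y1, x2 + y2,
            (z + u + 0.5 * omega((x1, x2), (y1, y2))) % 1.0)

def mul_H(X, Y):
    # operation of H(1) = R^2 x R: the same formula, without the mod 1
    (x1, x2, z), (y1, y2, u) = X, Y
    return (x1 + y1, x2 + y2, z + u + 0.5 * omega((x1, x2), (y1, y2)))

# for small z parts the morphism (x, e^{2 pi i z}) -> (x, z) is faithful:
X, Y = (0.1, 0.2, 0.01), (-0.05, 0.3, 0.02)
g, h = mul_G(X, Y), mul_H(X, Y)   # third coordinates agree: no wraparound
```

For larger z parts the mod 1 reduction kicks in and the two products differ, which is exactly why the isomorphism is only local.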

By exactly the same procedure, we may put a group norm on H(1).

OK, so small balls in G are isometric with small balls in H(1). What about the rescaling with \frac{1}{R}? Well, it turns out that the group H(1) is selfsimilar; moreover, it is a conical group (see for example section 6 of the paper Braided spaces with dilations…, and check also the references therein for the notion of conical group). Conical means that the group has a one parameter family of self-similarities: for any R > 0 the function

\delta_{R} (x,z) = (R x, R^{2} z)

is an automorphism of H(1) and moreover:

\| \delta_{R} (x,z) \| = R \| (x,z)\| for any (x,z) \in H(1).

As a consequence, all balls in H(1) look alike (i.e. the metric profile of the group H(1) is stationary, a hallmark of null curvature…). More precisely, for any R > 0 and any X,Y \in H(1), if we denote by d the distance in H(1) induced by the group norm, we have:

d_{R}( \delta_{R} X, \delta_{R} Y) = d(X,Y).
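Both displayed properties are easy to check numerically. A sketch in Python: since the group norm of H(1) has no simple closed form, I use as a stand-in a homogeneous gauge (|x|^{4} + z^{2})^{1/4} (an assumption of mine, a common choice), which scales under \delta_{R} exactly as the group norm does.

```python
def omega(x, y):
    # the symplectic (area) form on R^2
    return x[0] * y[1] - x[1] * y[0]

def mul_H(X, Y):
    # the group operation of H(1) = R^2 x R
    (x1, x2, z), (y1, y2, u) = X, Y
    return (x1 + y1, x2 + y2, z + u + 0.5 * omega((x1, x2), (y1, y2)))

def delta(R, X):
    # the dilation delta_R(x, z) = (R x, R^2 z)
    x1, x2, z = X
    return (R * x1, R * x2, R * R * z)

def gauge(X):
    # homogeneous gauge (|x|^4 + z^2)^(1/4): an assumed stand-in for
    # the group norm, exactly homogeneous of degree 1 under delta_R
    x1, x2, z = X
    return ((x1 * x1 + x2 * x2) ** 2 + z * z) ** 0.25

X, Y, R = (1.0, -2.0, 0.5), (0.3, 0.4, -1.2), 7.0
lhs = delta(R, mul_H(X, Y))            # delta_R is an automorphism:
rhs = mul_H(delta(R, X), delta(R, Y))  # lhs == rhs up to rounding
```

The homogeneity of the gauge is the algebraic identity ((R^{2}|x|^{2})^{2} + R^{4}z^{2})^{1/4} = R(|x|^{4} + z^{2})^{1/4}, which is the scaling behaviour claimed for the group norm.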

Conclusion for this part: small balls in G look like balls in the Heisenberg group H(1). Asymptotically, as R goes to infinity, balls of radius R in the group G look more and more like balls in the euclidean space \mathbb{R}^{2} (notice that this space is self-similar as well: all balls are isometric, after properly rescaling the distances).

Example: Gromov-Hausdorff distances and the Heisenberg group, PART 1

This post continues the previous one “Quantum physics and the Gromov-Hausdorff distance”.

Let me take an example. We are in the following Heisenberg group (this is really a physicist’s Heisenberg group): the semidirect product G = \mathbb{R}^{2} \times S^{1}. Elements of the group have the form

X = (x, e^{2\pi iz}) with x \in \mathbb{R}^{2} and z \in \mathbb{R} / \mathbb{Z}

(think of z \in [0,1] with the identification 0 = 1).
The group operation is given by:
The group operation is given by:

X \cdot Y = (x,e^{2\pi i z}) \cdot (y, e^{2 \pi i u}) = (x+y, e^{2 \pi i (u+z+ \frac{1}{2} \omega(x,y))})

where \omega: \mathbb{R}^{2} \times \mathbb{R}^{2} \rightarrow \mathbb{R} is the “symplectic form” (or area form)

\omega(x,y) = x_{1} y_{2} - x_{2} y_{1}
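For the concretely minded, the operation and the inverse can be sketched in a few lines of Python (my own encoding, not from the post: an element (x, e^{2\pi i z}) is stored as a triple (x_1, x_2, z) with z kept mod 1), and the group axioms checked numerically:

```python
def omega(x, y):
    # the symplectic (area) form on R^2
    return x[0] * y[1] - x[1] * y[0]

def mul(X, Y):
    # the group operation of G = R^2 x S^1
    (x1, x2, z), (y1, y2, u) = X, Y
    return (x1 + y1, x2 + y2,
            (z + u + 0.5 * omega((x1, x2), (y1, y2))) % 1.0)

def inv(X):
    # (x, e^{2 pi i z})^{-1} = (-x, e^{-2 pi i z}), since omega(x, -x) = 0
    x1, x2, z = X
    return (-x1, -x2, (-z) % 1.0)

# associativity and inverses, checked on sample points:
X, Y, Z = (1.0, 2.0, 0.3), (-0.5, 0.7, 0.9), (2.0, -1.0, 0.1)
a, b = mul(mul(X, Y), Z), mul(X, mul(Y, Z))   # a == b up to rounding
E = mul(X, inv(X))                            # the neutral element (0, 1)
```

Associativity holds because the z-parts of both products differ by \frac{1}{2}(\omega(x,y) + \omega(x,w) + \omega(y,w)) on each side, by bilinearity of \omega.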

Remark 1. The pair (\mathbb{R}^{2}, \omega) is a symplectic (linear) space. As we well know, the Hamiltonian mechanics lives in such spaces, therefore we may think about (\mathbb{R}^{2}, \omega) as being a “classical”, or “large scale” (see further the justification, in PART 2) phase space of a mechanical system with one degree of freedom.

The group G is generated by the subset D = \mathbb{R}^{2} \times \left\{ 1 \right\}, more precisely any element of G can be expressed as a product of four elements of D (in a non-unique way).
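One can check numerically that products of horizontal elements reach the S^{1} direction: the commutator-like product (x,1) \cdot (y,1) \cdot (-x,1) \cdot (-y,1) equals (0, e^{2 \pi i \omega(x,y)}). A sketch (elements encoded as triples (x_1, x_2, z) with z mod 1, an encoding of mine):

```python
def omega(x, y):
    # the symplectic (area) form on R^2
    return x[0] * y[1] - x[1] * y[0]

def mul(X, Y):
    # the group operation of G = R^2 x S^1
    (x1, x2, z), (y1, y2, u) = X, Y
    return (x1 + y1, x2 + y2,
            (z + u + 0.5 * omega((x1, x2), (y1, y2))) % 1.0)

def horizontal(x1, x2):
    # an element (x, 1) of the generating set D = R^2 x {1}
    return (x1, x2, 0.0)

# the product of four elements of D lands on the S^1 axis:
x, y = (1.0, 0.0), (0.0, 0.25)
P = mul(mul(horizontal(*x), horizontal(*y)),
        mul(horizontal(-x[0], -x[1]), horizontal(-y[0], -y[1])))
# P = (0, 0, omega(x, y) mod 1), here omega(x, y) = 0.25
```

This is the mechanism behind the claim that four elements of D suffice: two horizontal moves cover the \mathbb{R}^{2} part, and the commutator of horizontal moves covers the S^{1} part.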

What is the geometry of G, seen as generated by D? In order to understand this, we may put a euclidean norm on D (identified with \mathbb{R}^{2}):

\| (x, 1) \| = \| x\|, where \|x\|^{2} = x_{1}^{2} + x_{2}^{2} for example.

Then we define “horizontal strings” and their “length”: a string w = X_{1} ... X_{n} of elements of G is horizontal if for any two successive elements of the string, say X_{i}, X_{i+1} we have

X_{i}^{-1} \cdot X_{i+1} \in D, where X^{-1} denotes the inverse of X with respect to the group operation. Also, we have to ask that X_{1} \in D.

The length of the horizontal string w = X_{1} ... X_{n} is defined as:

l(w) = \|X_{1}\| + \| X_{1}^{-1} \cdot X_{2}\| + .... + \|X_{n-1}^{-1} \cdot X_{n}\|. The source of the string w is the neutral element s(w) = E = (0,1) and the target of the string is t(w) = X_{1}\cdot ... \cdot X_{n}.
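The definitions above can be turned into a small computational sketch (my own encoding, not from the post: elements as triples (x_1, x_2, z) with z mod 1; the norm on D is the euclidean one):

```python
import math

def omega(x, y):
    # the symplectic (area) form on R^2
    return x[0] * y[1] - x[1] * y[0]

def mul(X, Y):
    # the group operation of G = R^2 x S^1
    (x1, x2, z), (y1, y2, u) = X, Y
    return (x1 + y1, x2 + y2,
            (z + u + 0.5 * omega((x1, x2), (y1, y2))) % 1.0)

def inv(X):
    x1, x2, z = X
    return (-x1, -x2, (-z) % 1.0)

def is_horizontal_increment(X, Y, tol=1e-9):
    # X^{-1} . Y must lie in D = R^2 x {1}: trivial S^1 part
    z = mul(inv(X), Y)[2]
    return min(z, 1.0 - z) < tol

def string_length(w):
    # l(w) = ||X_1|| + ||X_1^{-1} X_2|| + ... + ||X_{n-1}^{-1} X_n||,
    # each increment measured with the euclidean norm on D
    total, prev = 0.0, (0.0, 0.0, 0.0)   # start at the neutral element E
    for X in w:
        assert is_horizontal_increment(prev, X), "not a horizontal string"
        inc = mul(inv(prev), X)
        total += math.hypot(inc[0], inc[1])
        prev = X
    return total

X1 = (1.0, 0.0, 0.0)            # a first step, inside D
X2 = mul(X1, (0.0, 1.0, 0.0))   # one more horizontal step
```

Here the string w = X_{1} X_{2} has two unit horizontal increments, so l(w) = 2; the infimum over all such strings ending at X_{2} is what the group norm computes.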

OK, then let us define the “group norm” of an element of G, which is an extension of the norm defined on D. A formula for this would be:

\| X\| = \, inf \left\{ l(w) \mbox{ : } t(w) = X \right\}.

Small technicality: it is not clear to me if this definition is really good as it is, but we may improve it by the following procedure coming from the definition of the Hausdorff measure. Let us introduce the “finesse” of a horizontal string, given by

fin(w) = \max \left\{ \|X_{1}\| , \| X_{1}^{-1} \cdot X_{2}\| , ... , \|X_{n-1}^{-1} \cdot X_{n}\| \right\}

and then define, for any \varepsilon > 0, the quantity:

\| X\|_{\varepsilon} = \, inf \left\{ l(w) \mbox{ : } t(w) = X \mbox{ and } fin(w) < \varepsilon \right\}.

The correct definition of the group norm is then

\| X\| = \, sup \left\{\| X\|_{\varepsilon} \mbox{ : } \varepsilon > 0 \right\}.

In words, that means: for a given “scale” \varepsilon > 0, take discrete paths from E to X, made of “small” (norm smaller than \varepsilon) horizontal increments, and take the infimum of the lengths of such paths. You get \| X\|_{\varepsilon}. Let \varepsilon go to 0 and you get the norm \| X\|.

Up to some normalization, the bigger the norm of an element of G, the bigger the infimal length of a horizontal curve which expresses it; therefore the group norm gives a quantitative estimate of how the group element is generated.

In disguise, this norm is nothing but a sub-riemannian distance!

Quantum physics and the Gromov-Hausdorff distance: more than discrete or continuous

In the face of the question “is reality digital or analog?”, my first reaction was “how can one be so unimaginative?”. Actually, I know that there is at least one other possibility, thanks to some mathematical results, like Pansu’s Rademacher theorem for Carnot groups and to the Gromov-Hausdorff distance.
I saw this question announced as the subject of an FQXI essay contest and I decided to give it a try, explaining a bit of what I have in mind. It was interesting to do. I think it was also a bit disconcerting, because it feels a lot like the social interaction in a MMORPG.

Anyway, my contribution was this (see arXiv link also)

More than discrete or continuous: a bird’s view

The idea is that any experiment is like making a map of a part of reality (say at the quantum scale, for example) into another part (the results, as we read them in the lab). Estimating distances is proposed as a good analogy for a measurement. Or, it may be possible that there are a priori limits on the precision (accuracy, etc.) of the map (any map obtained by exploiting a physical process), due to the fact that at different scales the space is different, like metric spaces are.

Up to some normalization, the Planck length may be just (proportional to) the Gromov-Hausdorff distance between the (unit ball in ) the Heisenberg group (as a Carnot group) and the euclidean phase space of the same system.

Reality (whatever that means, say the “real” phase space of a system) may be discrete, continuous, both or none, but at different scales, up to a (small) error, the Heisenberg group or the euclidean space, respectively, may be close to it (to the small part of reality explored). Then, as a consequence of Pansu’s Rademacher theorem, it follows that one cannot make a perfect (read “bi-lipschitz”) map of one part of reality into the other; instead, there is a lower limit on the accuracy of any such map, quantitatively proportional to a Gromov-Hausdorff distance.

In order to explain this in more detail, I took the view that differential calculus (which is the mathematical base of physics) is, anyway, an abstraction of the activity of map-making, or chorography.

The “bird’s view” is a play with a quote from Plato, I let you discover it, at the end of the paper.

Drawings

This week-end post is light, dedicated to painting. However, for me painting is a serious matter, not at all a hobby. When I was very young, I struggled to decide what to be when I grew up: physics or math? Painting came to me out of the blue (as most of the good things in my life) when I was 10. Later I entertained the idea of becoming a painter instead of a mathematician, but I was told that I would most likely live a life of poverty. I was impressed by this argument and became a researcher, so now I am very rich indeed.

Some day I shall be a painter. As doing this requires 24 hours in a day, it is for later.

Here are some unworthy experiments with Photoshop, done during my stay in Lausanne. The point is that one can produce anything starting from a random picture taken from the net, by deforming (patches of) it with the algorithms available in the mentioned program. In a way, there is not one line drawn, only a whirl of local transformations of the original photo.

For example, one of the first drawings made with Photoshop was this (a part of it is used as the front image of the blog): it is a snail shell

but it says a lot, especially regarding the status of space.

This drawing is a deformed picture of a dancer (so much deformed that, at large scale, the picture no longer looks like a topological transformation of an image of a simply connected dancer)

This one is originally a photo of two cats:

Finally, here you may recognize a photo from Venice, with boats floating, beyond their stability points, in the sky.

Combinatorics versus geometric…

… is like using roman numerals versus using a positional numeral system, like the hindu-arabic numerals we all know very well. And there is evidence that our brain is NOT using combinatorial techniques, but geometric ones; see further.

What is this post about? Well, it is about the problem of using knowledge about topological groups in order to study discrete approximate groups, as Tao proposes in his new course; it is about discrete finitely generated groups with polynomial growth which, as Gromov taught us, become nilpotent Lie groups when seen from far away; and so on. Only, there is a long way towards these subjects, so please bear with me a little bit more.

This is part of a larger project of trying to understand approximate groups, as well as normed groups with dilations, in a more geometric way. One point of interest is understanding the solution to Hilbert’s fifth problem from a more general perspective, and this passes by NOT using combinatorial techniques from the start, even if the solution given by Gleason-Montgomery-Zippin, which relies on them, is one of the most beautiful mathematical gems.

What is combinatorial about this solution? Well, it reduces (in a brilliant way) the problem to counting, by using objects readily at hand in any topological group, namely the one-parameter subgroups. There is nothing wrong with this, only that, from this point of view, Gromov’s theorem on groups with polynomial growth appears as magical. Where is this nilpotent structure coming from?

As written in a previous post, Hilbert’s fifth problem without one-parameter subgroups, Gromov’s theorem is saying a profound geometrical thing about a finitely generated group with polynomial growth: seen from far away, this group is self-similar, that is a conical group, or a contractible group w.r.t. any of its dilations. That is all! The rest is just a result of Siebert. This structure is deeply hidden in the proof, and one of my purposes is to understand where it comes from. A way of NOT understanding this is to use huge chunks of mathematical candy in order to make this structure appear by magic.

I cannot claim that I understand this, that I have a complete solution, but instead, for this post, I looked for an analogy and I think I have found one.

It is the one from the first lines of the post.

Indeed, what is wrong, by analogy, with the roman numeral system? Nothing, actually: we have generators, like I, V, X, L, C, D, M, and relations, like IIII = IV, and so on (yes, they are not generators and relations exactly in the group sense). The problems appear when we want to do complex operations, like the addition of large numbers. Then we have to be really clever and use very intelligent and profound combinatorial arguments in order to efficiently manage all the cancellations and whatnot coming from the massive use of relations. Relations live at very small scale; we have to bridge the way towards large scales, therefore we have to approximate the errors by counting in different ways, and be very clever about these ways.

Another solution, historically preferred, was to use a positional numeral system, which is more geometric because it exploits a large scale property of the natural numbers: their collection is (approximately) self-similar. Indeed, take (as another kind of generators, again not in the group sense) a small set, like B = {0, 1, 2, …, 8, 9}, and count in base 10, which goes roughly like this: take a (preferably big) natural number X and do the following:

– initialize i = 1,

– find the smallest natural number a_{i} such that 10^{-a_{i}} X has norm smaller than 10, then pick the element k_{i} of B which minimizes the distance to 10^{-a_{i}} X,

– subtract (from the right or the left, it does not matter here, because addition of natural numbers is commutative) 10^{a_{i}} k_{i} from X, and rename the result X,

– repeat until X \in B, then finish the algorithm by taking X as the last digit.

In the end (remember, I said “roughly”), represent X as a string which codes the sequence of pairs (a_{i}, k_{i}).
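A literal-minded Python sketch of this loop (with one adjustment: I take the leading digit, i.e. the floor of 10^{-a_{i}} X, instead of the nearest element of B, which could overshoot, for example for X = 195):

```python
def digits_base10(X):
    """Greedy digit extraction following the loop above (a sketch)."""
    assert X >= 0
    pairs = []                                # the pairs (a_i, k_i)
    while X >= 10:
        a = 0
        while X // (10 ** (a + 1)) >= 1:      # smallest a: 10^{-a} X < 10
            a += 1
        k = X // (10 ** a)                    # the digit k_i at scale 10^a
        pairs.append((a, k))
        X -= k * (10 ** a)                    # subtract 10^{a_i} k_i, rename X
    pairs.append((0, X))                      # finish: X is now in B
    return pairs
```

The returned pairs are a sparse positional representation: summing k_{i} 10^{a_{i}} recovers the original number.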

The advantage of this representation of the natural numbers is that we can do the addition of big numbers approximately, with controlled precision. Indeed, take two very big numbers X, Y and take another number, like 10. Then for any natural n define

(X+Y) approx(n) = 10^{n} ( [X]_{n} + [Y]_{n})

where [X]_{n} is the number represented by the truncation of the string which represents X to its first n letters, counting from left to right.
If n is small compared with X, Y, then (X+Y) approx(n) is close to the true X+Y, but the computational effort for calculating (X+Y) approx(n) is much smaller than the one for calculating X+Y.
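A sketch of this truncated addition (the helper names are mine; I write the scale factor as 10 to the number of dropped digits, which agrees with the 10^{n} of the formula under the convention that X and Y both have 2n digits):

```python
def truncate(X, n):
    """[X]_n: keep the first n decimal digits of X, from the left.
    Returns the truncated value and the number of dropped digits."""
    s = str(X)
    return int(s[:n]), len(s) - n

def approx_add(X, Y, n):
    """Approximate X + Y using only the truncations (a sketch; it
    assumes X and Y have the same number of decimal digits)."""
    tx, ex = truncate(X, n)
    ty, ey = truncate(Y, n)
    assert ex == ey, "sketch assumes equal digit counts"
    return (tx + ty) * 10 ** ex

X, Y = 31415926, 27182818
exact = X + Y                    # 58598744
approx = approx_add(X, Y, 3)     # (314 + 271) * 10^5 = 58500000
```

Here the approximate sum is within 0.2% of the true one, while only 3-digit arithmetic was performed.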

Once we have this large scale self-similarity, we may exploit it by using the more geometrical positional numeral system instead of the roman numeral system; that is my analogy. Notice that in this (almost correct) algorithm 10^{a} X is not understood as X+X+....+X, 10^{a} times.

Let me now explain why the positional numeral system is more geometric, by giving a neuroscience argument, besides what I wrote in this previous post: “How not to get bored, by reading Gromov and Tao” (mind the comma!).

I reproduce from the wiki page on “Subitizing and counting”:

Subitizing, coined in 1949 by E.L. Kaufman et al.[1] refers to the rapid, accurate, and confident judgments of number performed for small numbers of items. The term is derived from the Latin adjective subitus (meaning “sudden”) and captures a feeling of immediately knowing how many items lie within the visual scene, when the number of items present falls within the subitizing range.[1] Number judgments for larger set-sizes were referred to either as counting or estimating, depending on the number of elements present within the display, and the time given to observers in which to respond (i.e., estimation occurs if insufficient time is available for observers to accurately count all the items present).

The accuracy, speed, and confidence with which observers make judgments of the number of items are critically dependent on the number of elements to be enumerated. Judgments made for displays composed of around one to four items are rapid,[2] accurate[3] and confident.[4] However, as the number of items to be enumerated increases beyond this amount, judgments are made with decreasing accuracy and confidence.[1] In addition, response times rise in a dramatic fashion, with an extra 250–350 ms added for each additional item within the display beyond about four.[5]

This is a brain competence which is spatial (geometrical) in nature, as evidenced by Simultanagnosia:

Clinical evidence supporting the view that subitizing and counting may involve functionally and anatomically distinct brain areas comes from patients with simultanagnosia, one of the key components of Balint’s syndrome.[14] Patients with this disorder suffer from an inability to perceive visual scenes properly, being unable to localize objects in space, either by looking at the objects, pointing to them, or by verbally reporting their position.[14] Despite these dramatic symptoms, such patients are able to correctly recognize individual objects.[15] Crucially, people with simultanagnosia are unable to enumerate objects outside the subitizing range, either failing to count certain objects, or alternatively counting the same object several times.[16]

From the wiki description of simultanagnosia:

Simultanagnosia is a rare neurological disorder characterized by the inability of an individual to perceive more than a single object at a time. It is one of three major components of Bálint’s syndrome, an uncommon and incompletely understood variety of severe neuropsychological impairments involving space representation (visuospatial processing). The term “simultanagnosia” was first coined in 1924 by Wolpert to describe a condition where the affected individual could see individual details of a complex scene but failed to grasp the overall meaning of the image.[1]

I rest my case.