Gleason metric and CC distance

In the series of posts on Hilbert’s fifth problem, Terence Tao defines a Gleason metric (Definition 4 here), which is a very important ingredient of the solution of Hilbert’s fifth problem.

Here is Remark 1 from the post:

The escape and commutator properties are meant to capture “Euclidean-like” structure of the group. Other metrics, such as Carnot-Carathéodory metrics on Carnot Lie groups such as the Heisenberg group, usually fail one or both of these properties.

I want to explain why this is true. Look at the proof of Theorem 7. The problem comes from the commutator estimate (1). I shall reproduce the relevant part of the proof, because I don’t yet know how to write good-looking LaTeX posts:

From the commutator estimate (1) and the triangle inequality we also obtain a conjugation estimate

\displaystyle  \| ghg^{-1} \| \sim \|h\|

whenever {\|g\|, \|h\| \leq \epsilon}. Since left-invariance gives

\displaystyle  d(g,h) = \| g^{-1} h \|

we then conclude an approximate right invariance

\displaystyle  d(gk,hk) \sim d(g,h)

whenever {\|g\|, \|h\|, \|k\| \leq \epsilon}.

The conclusion is that the right translations in the group are Lipschitz (with respect to the Gleason metric). Because this distance (I use “distance” instead of “metric”) is also left-invariant, it follows that both left and right translations are Lipschitz.

Let now G be a connected Lie group with a left-invariant distribution, obtained by left translates of a vector space D included in the Lie algebra of G. The distribution is completely non-integrable if D generates the Lie algebra by using the + and Lie bracket operations. We put a Euclidean norm on D and we get a CC distance on the group, defined as follows: the CC distance between two elements of the group equals the infimum of the lengths of horizontal curves (almost everywhere differentiable, with tangent in the distribution) joining the two points.
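Spelled out (this is just a restatement of the definition above), the CC distance is

\displaystyle  d_{CC}(x,y) = \inf \left\{ \int_0^1 \| \dot{c}(t) \| \, dt \ : \ c(0)=x, \ c(1)=y, \ \dot{c}(t) \in dL_{c(t)}(D) \mbox{ for a.e. } t \right\}

where dL_g(D) is the left translate of D at the point g, and the norm is the left translate of the chosen Euclidean norm on D.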

Remark 1 of Tao is a consequence of the following fact: if the right translations are Lipschitz with respect to the CC distance (in particular, if the CC distance is right-invariant), then D equals the Lie algebra of the group, and therefore the distance is Riemannian.

Here is why. In a sub-Riemannian group (that is, a group with a distribution and a CC distance as explained previously) the left translations are Lipschitz (they are isometries), but not all right translations are Lipschitz, unless D equals the Lie algebra of G. Indeed, let us suppose that all right translations are Lipschitz. Then, by the Margulis-Mostow version (see also this) of the Rademacher theorem, the right translation by an element “a” is Pansu differentiable almost everywhere. It follows that the Pansu derivative of the right translation by “a” (at almost every point) preserves the distribution. A simple computation based on left-invariance (admittedly, more explanation is needed here) shows that, as a consequence, the adjoint action of “a” preserves D. Because “a” is arbitrary, this implies that D is an ideal of the Lie algebra. But D generates the Lie algebra, therefore D equals the Lie algebra of G.
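Here is a heuristic version of the invariance computation alluded to above, done with classical derivatives and sweeping the subtleties of the Pansu derivative under the rug; take it as a sketch, not a complete argument. Writing R_a(h) = ha and L_g(h) = gh, we have

\displaystyle  (R_a \circ L_g)(h) = gha = (ga)(a^{-1} h a)

so, differentiating at the identity,

\displaystyle  d(R_a)_g \circ d(L_g)_e = d(L_{ga})_e \circ \mathrm{Ad}(a^{-1}).

Therefore d(R_a)_g sends the distribution dL_g(D) at the point g onto dL_{ga}(\mathrm{Ad}(a^{-1})D). If the right translation by “a” preserves the distribution, then \mathrm{Ad}(a^{-1})D \subset D; since “a” is arbitrary, \mathrm{Ad}(a)D = D for every a. Differentiating along a = \exp(tv) gives [v,D] \subset D for every v in the Lie algebra, so D is an ideal; because D generates the Lie algebra, D must be the whole Lie algebra.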

If you know a shorter proof, please let me know.

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for details of the role of the Gleason metric in the solution of Hilbert’s fifth problem.


Curvature and Brunn-Minkowski inequality

A beautiful paper by Yann Ollivier and Cedric Villani:

A curved Brunn–Minkowski inequality on the discrete hypercube, or: What is the Ricci curvature of the discrete hypercube?

The Brunn-Minkowski inequality says that the log of the volume (in Euclidean spaces) is concave. The concavity inequality is improved, in Riemannian manifolds with Ricci curvature at least K, by a quadratic term with coefficient proportional to K.
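In the Euclidean case this is the multiplicative form of the Brunn-Minkowski inequality: for compact sets A, B and t in [0,1],

\displaystyle  \log \mathrm{vol}\big((1-t)A + tB\big) \geq (1-t)\log \mathrm{vol}(A) + t \log \mathrm{vol}(B).

Schematically (see the paper for the precise statement and normalization), Ricci curvature at least K upgrades this to an inequality for the set of t-midpoints of A and B, with an extra positive term of the form (K/2) t(1-t) d(A,B)^2 on the right-hand side.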

The paper is remarkable in many ways. In particular, it compares two roads towards curvature in spaces more general than Riemannian: the coarse curvature introduced by Ollivier, and the one based on the displacement convexity of the entropy function (Felix Otto, Cedric Villani, John Lott, Karl-Theodor Sturm), studied by many researchers. Both are related to Wasserstein distances. NONE of them works for sub-Riemannian spaces, which is very, very interesting.

In a few words, here is the description of the coarse Ricci curvature: take an epsilon and consider the map from the metric space (a Riemannian manifold, say) to the space of probabilities which associates to a point of the metric space the restriction of the volume measure to the epsilon-ball centered at that point (normalized to give a probability). If this map is Lipschitz with constant L(epsilon) (on the space of probabilities take the L^1 Wasserstein distance), then the epsilon-coarse Ricci curvature times epsilon squared is equal to 1 minus L(epsilon) (thus we get a lower bound of the Ricci curvature function, if we are in a Riemannian manifold). The same definition works in a discrete space (this time epsilon is fixed).
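To make the first definition concrete, here is a small Python sketch (my own illustration, not code from the paper): it computes an Ollivier-type coarse curvature kappa(x,y) = 1 - W_1(m_x, m_y)/d(x,y) for two adjacent vertices of the discrete hypercube, where m_x is the uniform probability on the closed 1-ball around x (one natural choice of local measure; the exact constants depend on the random walk one chooses).

import numpy as np
from scipy.optimize import linprog

def hamming(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

def ball_measure(x):
    # uniform probability on x and its n neighbours (one step of a lazy walk)
    pts = [x] + [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]
    return {p: 1.0 / len(pts) for p in pts}

def wasserstein1(mu, nu):
    # L^1 Wasserstein distance via the transport linear program (tiny supports only)
    xs, ys = list(mu), list(nu)
    cost = np.array([[hamming(a, b) for b in ys] for a in xs]).ravel()
    A_eq, b_eq = [], []
    for i in range(len(xs)):                    # row marginals = mu
        row = np.zeros((len(xs), len(ys))); row[i, :] = 1
        A_eq.append(row.ravel()); b_eq.append(mu[xs[i]])
    for j in range(len(ys)):                    # column marginals = nu
        col = np.zeros((len(xs), len(ys))); col[:, j] = 1
        A_eq.append(col.ravel()); b_eq.append(nu[ys[j]])
    return linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                   bounds=(0, None)).fun

n = 4
x = (0,) * n
y = (1,) + (0,) * (n - 1)                       # a neighbour of x
kappa = 1 - wasserstein1(ball_measure(x), ball_measure(y)) / hamming(x, y)
print(kappa)

For this choice of m_x the script prints 2/(n+1): positive and of order 1/n, which is the “curved hypercube” phenomenon in the title of the paper.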
The second definition of Ricci curvature comes from reverse engineering of the displacement convexity inequality discovered in many particular spaces. The downside of this definition is that it is hard to “compute” it.

Initially, this second definition was related to the L^2 Wasserstein distance which, according to Otto calculus, gives the space of probabilities (in the L^2 frame) the structure of an infinite-dimensional Riemannian manifold.

Concerning sub-Riemannian spaces: in the first definition the said map cannot be Lipschitz, and in the second definition there is (I think) a manifestation of the fact that we cannot put, in a metrically acceptable way, a sub-Riemannian space into a Riemannian-like one, even an infinite-dimensional one.

Bayesian society

Maybe a more flexible society is one guided by a variable ideology “I”, fine-tuned continuously by Bayesian techniques. The individual would be replaced by the Bayesian individual, who forms his opinions from information coming through a controlled channel. The input information is made more or less available to the individual by using, again, Bayesian analysis of interests, geographical location and digital footprint, closing the feedback loop.

A difference which makes four differences, in two ways

Gregory Bateson, speaking about the map-territory relation:

“What is in the territory that gets onto the map? […] What gets onto the map, in fact, is difference.

A difference is a very peculiar and obscure concept. It is certainly not a thing or an event. This piece of paper is different from the wood of this lectern. There are many differences between them, […] but if we start to ask about the localization of those differences, we get into trouble. Obviously the difference between the paper and the wood is not in the paper; it is obviously not in the wood; it is obviously not in the space between them.

A difference, then, is an abstract matter.

Difference travels from the wood and paper into my retina. It then gets picked up and worked on by this fancy piece of computing machinery in my head.

… what we mean by information — the elementary unit of information — is a difference which makes a difference.

(from “Form, Substance and Difference”, Nineteenth Annual Korzybski Memorial Lecture, delivered by Bateson on January 9, 1970, under the auspices of the Institute of General Semantics; reprinted from the General Semantics Bulletin, no. 37, 1970, in Steps to an Ecology of Mind (1972))

This “difference which makes a difference” statement is quite famous, although it is sometimes considered only a figure of speech.

I think it is not; let me show you why!

For me a difference can be interpreted as an operator which relates images of the same thing (from the territory) viewed in two different maps, like in the following picture:

This figure is taken from “Computing with space…”; see section 1, “The map is the territory”, for drawing conventions.

Forget now about maps and territories and concentrate on this diagram viewed as a decorated tangle. The rules of decoration are the following: arcs are decorated with “x, y, …”, points from a space, and the crossings are decorated with epsilons, elements of a commutative group (secretly we use an emergent algebra, or a uniform idempotent right quasigroup, to decorate arcs AND crossings of a tangle diagram).

What we see is a tangle which appears in the Reidemeister move 3 from knot theory. When epsilons are fixed, this diagram defines a function called (approximate) difference.
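For readers who prefer formulas to tangles, here is a small numerical sketch in the Euclidean toy model, where the operation decorating a crossing is the dilation delta^x_eps(y) = x + eps(y - x). The composite below is one natural reading of the Reidemeister 3 tangle as an “approximate difference” (my guess at the conventions; the paper fixes them precisely): as eps goes to 0 it converges to the map (u,v) -> x + (v - u), that is, to the difference of v and u seen from the base point x.

import numpy as np

# Euclidean toy model of an emergent algebra: the dilation of coefficient eps
# based at x (this is the decoration of a crossing in the tangle diagram).
def dil(x, eps, y):
    return x + eps * (y - x)

# One reading of the Reidemeister-3 tangle as an "approximate difference":
# relocate u and v near x with coefficient eps, then blow back up from the
# relocated u with coefficient 1/eps.
def approx_diff(x, eps, u, v):
    return dil(dil(x, eps, u), 1.0 / eps, dil(x, eps, v))

x = np.array([1.0, 2.0])
u = np.array([0.0, -1.0])
v = np.array([3.0, 5.0])

for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, approx_diff(x, eps, u, v))
# The values approach x + (v - u) = [4., 8.] as eps -> 0: in this model the
# "difference" emerges as a limit of composites of dilations.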

Is this a difference which makes a difference?

Yes, in two ways:

1. We could add to this diagram an elementary unknot passing under all arcs, thus obtaining the diagram

Now we see four differences in this equivalent tangle: the initial one is made of three others.
The fact that a difference is self-similar is equivalent to the associativity of the INVERSE of the approximate difference operation, called the approximate sum.

2. Let us add an elementary unknot over the arcs of the tangle diagram, like in the following figure

called “difference inside a chora” (you have to read the paper to see why). According to the rules of tangle diagrams, adding unknots does not change the tangle topologically (although this is not quite true in the realm of emergent algebras, where the Reidemeister move 3 is an acceptable move only in the limit, as the crossing decorations go to “zero”).

By using only Reidemeister moves 1 and 2, we can turn this diagram into the Celtic-looking figure

which shows again four differences: the initial one in the center and three others around.

This time we got a statement saying that a difference is preserved under “infinitesimal parallel transport”.

So, indeed, a difference makes four differences, in at least two ways, for a mathematician.

If you want to understand more from this crazy post, read the paper 🙂

Rigidity of algebraic structure: principle of common cause

I follow with a lot of interest the stream of posts by Terence Tao on the Hilbert’s fifth problem and I am waiting impatiently to see how it connects with the field of approximate groups.

In his latest post Tao writes that

… Hilbert’s fifth problem is a manifestation of the “rigidity” of algebraic structure (in this case, group structure), which turns weak regularity (continuity) into strong regularity (smoothness).

This is something amazing and worthy of exploration!
I propose the following “explanation” of this phenomenon, taking the form of the:

Principle of common cause: a uniformly continuous algebraic structure has a smooth structure because both structures can be constructed from an underlying emergent algebra (introduced here).

Here are more explanations (adapted from the first paper on emergent algebras):

A differentiable algebra is an algebra (a set of operations A) over a manifold X with the property that all the operations of the algebra are differentiable with respect to the manifold structure of X. Let us denote by D the differential structure of the manifold X.
From a more computational viewpoint, we may think about the calculus which can be done in a differentiable algebra as being generated by the elements of a toolbox with two compartments, “A” and “D”:

– “A” contains the algebraic information, that is the operations of the algebra, as well as the algebraic relations (for example “the operation ∗ is associative”, or “the operation ∗ is commutative”, and so on);
– “D” contains the differential structure information, that is the information needed in order to formulate the statement “the function f is differentiable”.

The compartments “A” and “D” are compatible, in the sense that any operation from “A” is differentiable according to “D”.

I propose a generalization of differentiable algebras, where the underlying differential structure is replaced by a uniform idempotent right quasigroup (irq).

Algebraically, irqs are related to racks and quandles, which appear in knot theory (the axioms of an irq correspond to the first two Reidemeister moves). A uniform irq is a family of irqs indexed by the elements of a commutative group (with an absolute), such that the third Reidemeister move is related to a statement in terms of uniform limits of composites of operations of the family of irqs.
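As a sanity check of the correspondence with the first two Reidemeister moves, here is a tiny sketch in the Euclidean dilation model used above (my illustration; the general axioms are in the paper): the operation x o_eps y = x + eps(y - x) is idempotent (the analogue of Reidemeister 1), and the operation with inverse coefficient undoes it (the analogue of Reidemeister 2).

import numpy as np

# Euclidean dilation model of an idempotent right quasigroup operation:
# x o_eps y = x + eps * (y - x), with eps in the commutative group (0, +infty).
def op(x, eps, y):
    return x + eps * (y - x)

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
eps = 0.2

# Reidemeister 1 analogue: idempotency, x o_eps x = x.
assert np.allclose(op(x, eps, x), x)

# Reidemeister 2 analogue: the operation with coefficient 1/eps undoes the
# operation with coefficient eps (right quasigroup property).
assert np.allclose(op(x, 1.0 / eps, op(x, eps, y)), y)

print("irq axioms hold in the dilation model")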

An emergent algebra is an algebra A over the uniform irq X such that all operations and algebraic relations from A can be constructed or deduced from combinations of operations in the uniform irq, possibly by taking limits which are uniform with respect to a set of parameters. In this approach, the usual compatibility condition between algebraic information and differential information, expressed as the differentiability of algebraic operations with respect to the differential structure, is replaced by the “emergence” of algebraic operations and relations from the minimal structure of a uniform irq.

Thus, for example, algebraic operations and the differentiation operation (taking the triple (x,y,f) to Df(x)y, where “x, y” are points and “f” is a function) are expressed as uniform limits of composites of more elementary operations. The algebraic operations appear to be differentiable because of algebraic abstract nonsense (obtained by exploiting the Reidemeister moves) and because of the uniformity assumptions, which allow us to freely permute limits with respect to the parameters in the commutative group (as they tend to the absolute).
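To see what such a limit looks like, take again the Euclidean toy model, with dilations \delta^x_\varepsilon y = x + \varepsilon (y - x). Then

\displaystyle  \delta^{f(x)}_{\varepsilon^{-1}} f\big( \delta^x_\varepsilon y \big) = f(x) + \varepsilon^{-1}\big( f(x + \varepsilon(y-x)) - f(x) \big) \longrightarrow f(x) + Df(x)(y - x) \quad (\varepsilon \rightarrow 0),

so (up to the base-point normalization) the differential Df(x) appears as a limit of composites of dilations of the space with the function f; asking that this limit be uniform in x and y is exactly the kind of uniformity assumption mentioned above.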

The structure of visual space

Mark Changizi has an interesting post, “The Visual Nerd in You Understands Curved Space”, where he explains that spherical geometry is relevant for visual perception.

At some point he writes a paragraph which triggered my post:

Your visual field conforms to an elliptical geometry!

(The perception I am referring to is your perception of the projection, not your perception of the objective properties. That is, you will also perceive the ceiling to objectively, or distally, be a rectangle, each angle having 90 degrees. Your perception of the objective properties of the ceiling is Euclidean.)

Is it true that our visual perception senses the Euclidean space?

Look at this very interesting project

The structure of optical space under free viewing conditions

and especially at this paper:

The structure of visual spaces, by J.J. Koenderink and A.J. van Doorn, Journal of Mathematical Imaging and Vision, vol. 31, issue 2-3 (2008), pp. 171-187

In particular, one of the very nice things this group is doing is to experimentally verify the perception of true facts in projective geometry (like this Pappus theorem).

From the abstract of the paper: (boldfaced by me)

The “visual space” of an optical observer situated at a single, fixed viewpoint is necessarily very ambiguous. Although the structure of the “visual field” (the lateral dimensions, i.e., the “image”) is well defined, the “depth” dimension has to be inferred from the image on the basis of “monocular depth cues” such as occlusion, shading, etc. Such cues are in no way “given”, but are guesses on the basis of prior knowledge about the generic structure of the world and the laws of optics. Thus such a guess is like a hallucination that is used to tentatively interpret image structures as depth cues. The guesses are successful if they lead to a coherent interpretation. Such “controlled hallucination” (in psychological terminology) is similar to the “analysis by synthesis” of computer vision.

So, the space is perceived to be Euclidean based on prior knowledge, that is, because prior controlled hallucinations consistently led to coherent interpretations.

Read arXiv every day? Yes!

This post from the Secret Blogging Seminar led me to this (now closed) question by Igor Pak on MathOverflow:

Downsides of using the arxiv?

When my blood pressure went back to normal after reading the “downsides”, I spent some time informing myself about the answers given to this question elsewhere, besides the very pertinent ones (in my opinion) from the MathOverflow page. I think the best page to browse is the discussion at meta.mathoverflow.net.

I have to say that really, yes!, I read arXiv every day, because it gives access to a lot of mathematical information, which I filter according to my mathematical “nose” and not on the basis of authority. One has to read papers in order to have an informed opinion.

The beautiful discussion page from meta.mathoverflow.net is an excellent example of the superiority of the new ways as compared with the older ones.

UPDATE (22.07.2011): The AMS Notices (August 2011) paper The Changing Nature of Mathematical Publication is relevant to the subject of the post from the Secret Blogging Seminar. It appears to me that more or less the same strange problems concerning the arXiv are put forward in the article. In particular, this passage

If we ultimately publish our paper in a traditional journal, then how will that journal view our paper being first put on arXiv? If someone plagiarizes your work from arXiv, then what protections do you have?

seems to me to imply that there is less protection against plagiarism from the arXiv than against plagiarism from traditionally published work. My take is that a paper on the arXiv is more protected against plagiarism than a traditionally published paper, especially if you are not part of a politically strong team or country, because it is straightforward to prove the plagiarism (in any case easier than by relying on the publishing business and the peer review process). Besides, the “rhetorical question” seems to imply that it is not clear whether there are any specific protections, like copyright, when in fact they are easy to find and clearly stated!

To finish, the purpose of the article is to announce a new publication column, “Scripta Manent”. The peer review process for this paper failed to notice that the title of this new publication column is spelled “Scripta Manet” twice!
