Baker-Campbell-Hausdorff polynomials and Menelaus theorem

This is a continuation of the previous post on the noncommutative BCH formula. For the “Menelaus theorem” part see this post.

Everything is related to “noncommutative techniques” for approximate groups, which hopefully will apply sometime in the future to real combinatorial problems, like Tao’s project presented here, and also to the problem of understanding curvature (in non-Riemannian settings), see a hint here, and finally to the problem of higher order differential calculus in sub-Riemannian geometry; see this comment on this blog.

Remark: as everything these days can be retrieved on the net, if you find on this blog something worth including in a published paper, then don’t be shy and mention this. I believe strongly in fair practices in this new age of scientific collaboration opened by the web, even if in the past too often ideas which I communicated freely were taken into published papers without attribution. Hey, I am happy to help! But unfortunately I have an ego too (not only an ergobrain, like any living creature).

For the moment we stay in a Lie group, with the convention of taking the exponential equal to the identity, i.e. of considering that the group operation can be written in terms of Lie brackets according to the BCH formula:

$x y = x + y + \frac{1}{2} [x,y] + \frac{1}{12}[x,[x,y]] - \frac{1}{12}[y,[y,x]]+...$

For any $\varepsilon \in (0,1]$ we define

$x \cdot_{\varepsilon} y = \varepsilon^{-1} ((\varepsilon x) (\varepsilon y))$

and we remark that $x \cdot_{\varepsilon} y \rightarrow x+y$ uniformly with respect to $x,y$ in a compact neighbourhood of the neutral element $e=0$. The BCH formula for the operation labeled with $\varepsilon$ is the following

$x \cdot_{\varepsilon} y = x + y + \frac{\varepsilon}{2} [x,y] + \frac{\varepsilon^{2}}{12}[x,[x,y]] - \frac{\varepsilon^{2}}{12}[y,[y,x]]+...$

Denote

$BCH^{0}_{\varepsilon} (x,y) = x \cdot_{\varepsilon} y$

and $BCH^{0}_{0}(x,y) = \lim_{\varepsilon \rightarrow 0} BCH^{0}_{\varepsilon}(x,y) = x + y$.
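As a sanity check, here is a small numerical sketch (my own illustration, with hypothetical names, not part of the post): in the Heisenberg group, realized as $3 \times 3$ unipotent upper-triangular matrices, the exponential and logarithm are finite polynomials, so $x \cdot_{\varepsilon} y$ can be computed exactly and compared with $x+y$ as $\varepsilon \rightarrow 0$.

```python
import numpy as np

def expm_nil(a):
    # exponential of a 3x3 strictly upper-triangular matrix: a^3 = 0,
    # so the exponential series stops after the quadratic term
    return np.eye(3) + a + a @ a / 2

def logm_nil(g):
    # logarithm of a unipotent 3x3 matrix: n = g - I is nilpotent, n^3 = 0
    n = g - np.eye(3)
    return n - n @ n / 2

def dot_eps(x, y, eps):
    # x ._eps y = eps^{-1} log( exp(eps x) exp(eps y) )
    return logm_nil(expm_nil(eps * x) @ expm_nil(eps * y)) / eps

x = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
y = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
bracket = x @ y - y @ x

# in the Heisenberg algebra the BCH series stops after the first bracket,
# so x ._eps y = x + y + (eps/2)[x,y] exactly, and the eps -> 0 limit is x + y
for eps in (1.0, 0.1, 0.001):
    assert np.allclose(dot_eps(x, y, eps), x + y + (eps / 2) * bracket)
```

As $\varepsilon$ shrinks, the deformed operation flattens to ordinary addition, uniformly on compact sets, exactly as claimed above.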

Define the “linearized dilation” $\delta^{x}_{\varepsilon} y = x + \varepsilon (-x+y)$ (written like this on purpose, without using the commutativity of the “+” operation; due to the limitations of my knowledge of latex in this environment, I shy away from putting a bar over this dilation to emphasize that it is different from the “group dilation”, equal to $x (\varepsilon(x^{-1}y))$).

Consider the family of $\beta > 0$ such that there is a uniform limit, w.r.t. $x,y$ in a compact set, of the expression

$\delta_{\varepsilon^{-\beta}}^{BCH^{0}_{0}(x,y)} BCH^{0}_{\varepsilon}(x,y)$

and remark that this family has a maximum $\beta = 1$. Call this maximum $\alpha_{0}$ and define

$BCH^{1}_{\varepsilon}(x,y) = \delta_{\varepsilon^{-\alpha_{0}}}^{BCH^{0}_{0}(x,y)} BCH^{0}_{\varepsilon}(x,y)$

and $BCH^{1}_{0}(x,y) = \lim_{\varepsilon \rightarrow 0} BCH^{1}_{\varepsilon}(x,y)$.

Let us compute $BCH^{1}_{0}(x,y)$:

$BCH^{1}_{0}(x,y) = x + y + \frac{1}{2}[x,y]$

and also remark that

$BCH^{1}_{\varepsilon}(x,y) = x+y + \varepsilon^{-1} ( -(x+y) + (x \cdot_{\varepsilon} y))$.

We recognize in the right hand side an expression which is a relative of what I have called, in the previous post, an “approximate bracket” (relations (2) and (3)). A better name for it is a halfbracket.
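To make the halfbracket concrete, here is a numeric sketch of my own (the helper names are hypothetical): in the Heisenberg algebra the BCH series has no terms beyond the first bracket, so the halfbracket $\varepsilon^{-1}(-(x+y) + x \cdot_{\varepsilon} y)$ equals $\frac{1}{2}[x,y]$ exactly, for every $\varepsilon$, not only in the limit.

```python
import numpy as np

def dot_eps(x, y, eps):
    # x ._eps y computed exactly for 3x3 strictly upper-triangular matrices
    # (exp and log are finite polynomials because a^3 = 0)
    ex = np.eye(3) + eps * x + (eps * x) @ (eps * x) / 2
    ey = np.eye(3) + eps * y + (eps * y) @ (eps * y) / 2
    n = ex @ ey - np.eye(3)
    return (n - n @ n / 2) / eps

def halfbracket(x, y, eps):
    # the halfbracket: eps^{-1}( -(x+y) + x ._eps y );
    # BCH^1_eps(x,y) = x + y + halfbracket(x, y, eps)
    return (-(x + y) + dot_eps(x, y, eps)) / eps

x = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
y = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])

for eps in (1.0, 0.5, 0.01):
    assert np.allclose(halfbracket(x, y, eps), (x @ y - y @ x) / 2)
```

In a group whose BCH series does not terminate, the halfbracket would equal $\frac{1}{2}[x,y]$ only up to $O(\varepsilon)$ corrections, which is what the limit $BCH^{1}_{0}$ extracts.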

We may continue this recipe indefinitely. Namely, for any natural number $i \geq 1$ we first define the maximal number $\alpha_{i}$ among all $\beta > 0$ with the property that the (uniform) limit exists:

$\lim_{\varepsilon \rightarrow 0} \delta_{\varepsilon^{-\beta}}^{BCH^{i}_{0}(x,y)} BCH^{i}_{\varepsilon}(x,y)$

Generically we shall find $\alpha_{i} = 1$. We then define

$BCH^{i+1}_{\varepsilon}(x,y) = \delta_{\varepsilon^{-\alpha_{i}}}^{BCH^{i}_{0}(x,y)} BCH^{i}_{\varepsilon}(x,y)$

and $BCH^{i+1}_{0}(x,y) = \lim_{\varepsilon \rightarrow 0} BCH^{i+1}_{\varepsilon}(x,y)$.

It is time to use the Menelaus theorem. Take a natural number $N > 0$. We may write (pretending we don’t know that $\alpha_{i} = 1$ for all $i = 0, \ldots, N$):

$x \cdot_{\varepsilon} y = BCH^{0}_{\varepsilon}(x,y) = \delta^{BCH^{0}_{0}(x,y)}_{\varepsilon^{\alpha_{0}}} \delta^{BCH^{1}_{0}(x,y)}_{\varepsilon^{\alpha_{1}}} ... \delta^{BCH^{N}_{0}(x,y)}_{\varepsilon^{\alpha_{N}}} BCH^{N+1}_{\varepsilon}(x,y)$

Let us denote $\gamma_{N} = \alpha_{0} + \ldots + \alpha_{N}$ and introduce the BCH polynomial $PBCH^{N}(x,y)(\mu)$ (the variable of the polynomial being $\mu$), defined as follows: $PBCH^{N}(x,y)(\mu)$ is the unique element of the group with the property that for any other element $z$ (close enough to the neutral element) we have

$\delta^{BCH^{0}_{0}(x,y)}_{\mu^{\alpha_{0}}} \delta^{BCH^{1}_{0}(x,y)}_{\mu^{\alpha_{1}}} ... \delta^{BCH^{N}_{0}(x,y)}_{\mu^{\alpha_{N}}} z = \delta^{PBCH^{N}(x,y)(\mu)}_{\mu^{\gamma_{N}}} z$

Such an element exists and is unique due to (Artin’s version of) the Menelaus theorem.
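In the commutative (vector space) case, existence and uniqueness can be checked by hand: composing linearized dilations gives an affine map whose coefficient is the product of the coefficients, and whose fixed point plays the role of $PBCH^{N}(x,y)(\mu)$. A sketch of my own (all names hypothetical), representing each dilation $\delta^{b}_{\mu} z = (1-\mu)b + \mu z$ as an affine pair:

```python
import numpy as np

def dil(b, mu):
    # linearized dilation z -> b + mu (z - b), stored as an affine pair (A, M): z -> A + M z
    return ((1 - mu) * b, mu)

def compose(f, g):
    # (f o g)(z) = A1 + M1 (A2 + M2 z) = (A1 + M1 A2) + (M1 M2) z
    (a1, m1), (a2, m2) = f, g
    return (a1 + m1 * a2, m1 * m2)

def center(f):
    # the unique fixed point of z -> A + M z (needs M != 1):
    # f is then the dilation of coefficient M based at this point
    a, m = f
    return a / (1 - m)

rng = np.random.default_rng(0)
bs = [rng.standard_normal(3) for _ in range(4)]  # stand-ins for the points BCH^i_0(x,y)
mus = [0.3, 0.5, 0.7, 0.9]                       # stand-ins for the coefficients mu^{alpha_i}

comp = dil(bs[0], mus[0])
for b, mu in zip(bs[1:], mus[1:]):
    comp = compose(comp, dil(b, mu))

w = center(comp)        # the analogue of PBCH^N(x,y)(mu)
gamma = np.prod(mus)    # the analogue of mu^{gamma_N}

# the chain of dilations acts on any z exactly as one dilation of coefficient gamma based at w
z = rng.standard_normal(3)
a, m = comp
assert np.allclose(a + m * z, (1 - gamma) * w + gamma * z)
```

The noncommutative case replaces the explicit fixed-point formula by Artin’s argument, but the structure of the statement is the same.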

Remark that $PBCH^{N}(x,y)(\mu)$ is not a true polynomial in $\mu$, but a rational function of $\mu$ which is a polynomial up to terms of order $\mu^{\gamma_{N}}$. A straightforward computation shows that, for $\mu = 1$, the BCH polynomial (up to terms of the mentioned order) is a truncation of the BCH formula up to terms containing $N-1$ brackets.

It looks contorted, but written this way it works verbatim for normed groups with dilations! Several things differ in the details:

1. the coefficients $\alpha_{i}$ are not equal to $1$, in general. Moreover, I can prove that the $\alpha_{i}$ exist (as a maximum of numbers $\beta$ such that …) for a sub-Riemannian Lie group, that is for a Lie group endowed with a left-invariant dilation structure, by using the classical BCH formula, but I don’t think that one can prove the existence of these numbers for a general group with dilations! Remark that the numbers $\alpha_{i}$ are defined in a way similar to the Hausdorff dimension!

2. one has to define noncommutative polynomials, i.e. polynomials in the frame of Carnot groups (at least). This can be done; it was sketched in a previous paper of mine, Tangent bundles to sub-Riemannian groups, section 6.

UPDATE: (30.10.2011) See the post of Tao

Associativity of the Baker-Campbell-Hausdorff formula

where a (trained) eye may see the appearance of several ingredients, in the particular commutative case, of the mechanism of definition of the BCH formula.

The associativity is rephrased, in a well known way, in proposition 2 as a commutativity of, say, left and right actions. From there signs of commutativity (unconsciously assumed) appear: the obvious first ones are the “radial homogeneity identities”, but already at this stage a lot of familiar machinery is put in place, and what follows relies more and more heavily on the same. I can only wonder: is all this necessary? My guess is: no. Because, for starters, as explained here and in previous posts, Lie algebras are of a commutative blend, like the BCH formula. And (local, well known from the beginning) groups are not.

Principles: randomness/structure or emergent from a common cause?

I think both.

Finally, Terence Tao presented a sketch of his project relating Hilbert’s fifth problem and approximate groups. For me the most interesting part is his Theorem 12 and its relation with Gromov’s theorem on finitely generated groups with polynomial growth.

A bit disappointing (for me) is that he seems to choose to rely on “commutative” techniques and, not surprisingly, he is bound to get results valid only in Riemannian geometry (or spaces of Alexandrov type) AND also to regard the appearance of nilpotent structures as qualitatively different from smooth structures.

For clarity, I copy-paste his two “broad principles”:

The above three families of results exemplify two broad principles (part of what I like to call “the dichotomy between structure and randomness“):

• (Rigidity) If a group-like object exhibits a weak amount of regularity, then it (or a large portion thereof) often automatically exhibits a strong amount of regularity as well;
• (Structure) This strong regularity manifests itself either as Lie type structure (in continuous settings) or nilpotent type structure (in discrete settings). (In some cases, “nilpotent” should be replaced by sister properties such as “abelian“, “solvable“, or “polycyclic“.)

Let me contrast with my

Principle of common cause: a uniformly continuous algebraic structure has a smooth structure because both structures can be constructed from an underlying emergent algebra (introduced here).

(from this previous post).

While his “dichotomy between structure and randomness“ principle is a very profound and mysterious one, the second part (Structure) is only an illusion created by psychological choices (I guess). Indeed, both (Lie) smooth structure and nilpotence are just “noncommutative (local) linearity”, as explained previously. Both smooth structure and “conical algebraic” structure (nilpotent in particular) stem from the same underlying dilation structure. What is most mysterious, in my opinion, be it in the “exact” or “approximate” group structure, is HOW a (variant of) dilation structure appears from non-randomness (presumably as a manifestation of Tao randomness/structure principle), i.e. HOW just a little bit of approximate self-similarity bootstraps to a dilation structure, or emergent algebra, which then leads to various flavors of “smoothness” and “nilpotence”.

Penrose’ combinatorial space time as chora

Roger Penrose, among other extraordinary things he did, proposed an approach to combinatorial space-time by way of spin-networks. Here is a link to his amazing paper

Roger Penrose, Angular momentum: an approach to combinatorial space-time, in Quantum Theory and Beyond, ed. T. Bastin, Cambridge University Press, Cambridge, 1971.

With the new knowledge gradually constructed lately, I returned to this classic and it just dawned on me that a strand-network (page 19 in the pdf of Penrose’s paper, linked above) can be given the structure of a collection of choroi; see the post

Entering chora, the infinitesimal place

or better go to the paper “Computing with space“.

This is food for thought for me, I just felt the need to communicate this. It is not my intention to chase for theories of everything, better is to understand, as a mathematician, what is worthy to learn and import from this field into what interests me.

Menelaus theorem by way of Reidemeister move 3

(Half of) the Menelaus theorem is equivalent to a theorem of Emil Artin, from his excellent book Geometric Algebra, Interscience Publishers (1957), saying that the inverse semigroup generated by dilations (in an affine space) consists of dilations and translations. More specifically, if $\varepsilon, \mu > 0$ are such that $\varepsilon \mu < 1$, then the composition of two dilations, one of coefficient $\varepsilon$ and the other of coefficient $\mu$, is a dilation of coefficient $\varepsilon \mu$.

Artin also contributed to braid theory, so it may be a nice idea to give a proof of Artin’s interpretation of the Menelaus theorem by using Reidemeister moves.

This post is related to previous ones, especially these three:

Noncommutative Baker-Campbell-Hausdorff formula

A difference which makes four differences, in two ways

Rigidity of algebraic structure: principle of common cause

which I shall use as references for what a normed group with dilations, conical group and associated decorations of tangle diagrams are.

Let’s start! I use a representation of dilations as decorations of an oriented tangle diagram.

For any $\varepsilon > 0$, the dilations of coefficients $\varepsilon$ and $\varepsilon^{-1}$ provide two operations which give the space (say $X$) the structure of an idempotent right quasigroup; this is equivalent to saying that decorations of tangle diagrams by these rules are stable under the Reidemeister moves of type I and II.
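In the simplest model of a space with dilations, a vector space with $\delta^{x}_{\varepsilon} y = x + \varepsilon(y - x)$, the idempotent right quasigroup axioms (i.e. the stability of decorations under Reidemeister moves I and II) can be checked in a few lines. A sketch under that simplifying assumption, names mine:

```python
import numpy as np

def delta(x, eps, y):
    # dilation of coefficient eps based at x, in a vector space
    return x + eps * (y - x)

rng = np.random.default_rng(1)
x, y = rng.standard_normal(2), rng.standard_normal(2)
eps = 0.25

# Reidemeister I <-> idempotence: delta^x_eps x = x
assert np.allclose(delta(x, eps, x), x)

# Reidemeister II <-> the eps and eps^{-1} operations invert each other
assert np.allclose(delta(x, eps, delta(x, 1 / eps, y)), y)
assert np.allclose(delta(x, 1 / eps, delta(x, eps, y)), y)
```

Stability under Reidemeister III is the extra, much stronger condition discussed next.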

A particular example of a space with dilations is a normed group with dilations, where the dilations are left-invariant.

If the decorations that we make are also stable under the Reidemeister move 3, then it can be proved that the space with dilations which I use has to be a conical group! What is a conical group? It is a non-commutative vector space; in particular it could be a real vector space, or the Heisenberg group, or a Carnot group, and so on. Read the previous posts about this.

Graphically, the Reidemeister move 3 is this sliding movement:

of $CC'$ under the crossing $AA'-BB'$ (remark also how the decorations of crossings $\varepsilon$ and $\mu$ switch places).

Further on I shall suppose that we use a conical group for decorations, with distance function denoted by $d$. Think about a real vector space with the distance given by a Euclidean norm, but don’t forget that in fact we don’t need to be so restrictive.

Take now two strings and twist them one around the other, an infinity of times, then pass a third string under the first two, then decorate everything as in the following figure

We can slide the red string (the one which is under) twice by using the Reidemeister move 3. The decorations do not change. If you want to see the expression of $z'$ as a function of $x,y$, then we easily write

$z' = \delta^{x}_{\varepsilon} \delta^{y}_{\mu} z = \delta^{x_{1}}_{\varepsilon} \delta^{y_{1}}_{\mu} z$

where $x_{1}, y_{1}$ are obtained from $x,y$ according to the rules of decorations.

We may repeat the double slide movement $n$ times and we get

$z' = \delta^{x_{n}}_{\varepsilon} \delta^{y_{n}}_{\mu} z$

If we prove that the sequences $x_{n}, y_{n}$ both converge to some point $w$, then by passing to the limit in the previous equality we get

$z' = \delta^{w}_{\varepsilon} \delta^{w}_{\mu} z = \delta^{w}_{\varepsilon \mu} z$

which is the conclusion of Artin’s result! Otherwise said, if we slide the red string under the whole twisted pair of strings, then the outcome is the same as passing under only one string, decorated with $w$, with the crossing decorated with $\varepsilon \mu$.
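In the commutative (vector space) model, Artin’s conclusion can be verified directly: the composition $\delta^{x}_{\varepsilon} \delta^{y}_{\mu}$ is the dilation of coefficient $\varepsilon\mu$ based at $w = \frac{(1-\varepsilon)x + \varepsilon(1-\mu)y}{1-\varepsilon\mu}$. A sketch; the closed-form $w$ is my own computation for this model, not from the post:

```python
import numpy as np

def delta(x, eps, y):
    # dilation of coefficient eps based at x, in a vector space
    return x + eps * (y - x)

rng = np.random.default_rng(2)
x, y, z = (rng.standard_normal(2) for _ in range(3))
eps, mu = 0.6, 0.5  # eps * mu < 1, as in Artin's statement

# fixed point of the composed map delta^x_eps o delta^y_mu:
# solving (1 - eps*mu) w = (1 - eps) x + eps (1 - mu) y
w = ((1 - eps) * x + eps * (1 - mu) * y) / (1 - eps * mu)

# the composition of the two dilations is one dilation of coefficient eps*mu based at w
assert np.allclose(delta(x, eps, delta(y, mu, z)), delta(w, eps * mu, z))
```

In a general conical group there is no such closed formula; the point $w$ is instead produced as the limit of the contracting sequence $x_{n}, y_{n}$ described above.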

The only thing left is to prove that the sequences indeed converge. But this is easy: we prove that the recurrence relating $x_{n+1}, y_{n+1}$ to $x_{n},y_{n}$ is a contraction. See the figure:

Well, that is all.

UPDATE: I was in fact motivated to draw the figures and explain all this after seeing this very nice post of Tao, where an elementary proof of a famous result is given, by using “elementary” graphical means.

Unlimited detail: news

There is news regarding the “Unlimited Detail” technology developed by Euclideon.

To be clear, I don’t think it’s a scam. It may be related to what I am describing in the paper Maps of metric spaces, which has the abstract:

This is a pedagogical introduction covering maps of metric spaces, Gromov-Hausdorff distance and its “physical” meaning, and dilation structures as a convenient simplification of an exhaustive database of maps of a metric space into another. See arXiv:1103.6007 for the context.

This is pure speculation, but it looks to me that it all has to do with manipulations of maps in the screen pixel space, along the lines of using scale-stable and viewpoint-stable zoom sequences (definitions 4.1-4.4) and (the groupoid of) transformations between these.

But how exactly? I would really much like to know!

Some time ago, after seeing the demos (check the link to the wiki page of unlimited detail), I tried to learn more about the mathematical details, but without success (which is understandable).

Now Bruce Dell released new demos and an interview!

Don’t be fooled by the fractal look of the territory! Probably it has more to do with the fact that in order to use Unlimited Detail one first needs a territory to render, so, in my opinion, the guys generated it by using some fractal tricks.

UPDATE 06.09.2012: After a bit more than a year, Euclideon has morphed into Euclideon:Geoverse. It looks less and less like a scam; what do you think, Markus Persson?

Still looking forward to learning how exactly Unlimited Detail works, though.

Towards aerography, or how space is shaped to comply with the perceptions of the homunculus

In the previous post

The Cartesian Theater: philosophy of mind versus aerography

I explained why the Cartesian Theater is not well describing the appearance of the homunculus.

A “Cartesian theater”, Dennett proposes, is any theory about what happens in one’s mind which can be reduced to the model of a “tiny theater in the brain where a homunculus … performs the task of observing all the sensory data projected on a screen at a particular instant, making the decisions and sending out commands.”

This leads to infinite regression; therefore any such theory is flawed. As a consequence, one has to avoid the appearance of the homunculus in one’s theory.

The homunculus itself may appear from apparently innocuous assumptions, such as the introduction of any limen (or threshold), like supposing that (from Consciousness Explained (1991), p. 107)

“…there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of “presentation” in experience because what happens there is what you are conscious of.”

By consequence, such assumptions are flawed. There is no limen, no boundary inside the brain (strangely, any assumption which supposes a boundary separating the individual from the environment disturbs nobody except Varela, Maturana, or second-order cybernetics).

In the previous post I argued, based on my understanding of the excellent paper of Kenneth R Olwig

that the “Cartesian theater” model is misleading because it neglects to notice that what happens on stage is as artificial as the homunculus spectator, while, at the same time, the theater itself (a theater in a box) is designed for perception.

Therefore, while everybody (?) accepts that there is no homunculus in the brain, at the same time nobody seems to be bothered that the perception data are always modeled as if they came from the stage of the Cartesian theater.

For example, few would disagree that we see a 3-dimensional, Euclidean world. But this is obviously not what we see, and the proof is that we can be easily tricked by stereoscopy. These are the visual data (together with other, more subtle auditory, posture and whatnot data) which the brain uses to reconstruct the world as seen by a homunculus, created by our illusory image that there is a boundary between us (me, you) and the environment.

You would say: nobody in their right mind denies that the world is 3d, at least our familiar everyday world, not quantum or black holes or other inventions of physicists. I don’t deny it; I just notice, as in this previous post, that space is perceived as it is based on prior knowledge, that is, because prior “controlled hallucinations” led consistently to coherent interpretations.

The idea is that in fact there are two things to avoid: one is the homunculus and the other one is the scenic space.

The “scenic space” is itself a model of real space (does this exist?) and it leads itself to infinite regression. We “learn space” by relating to it and modeling it in our brains. I suppose that everything (inside and outside the brain) complies with the same physical laws, and that the rational explanation for the success of the “3d scenic space” (which is consistent with our educated perception, but also with physical phenomena in our world, at least at human scale and range) should come from the understanding that brain processes are as physical as a falling apple and as mathematical as perspective is.