Uniform spaces, coarse spaces, dilation spaces

This post is related to the previous What’s the difference between uniform and coarse structures? (I).  However, it’s not part (II).

Psychological motivation.  Helge Glöckner wrote a clear MathSciNet review of my paper Braided spaces with dilations and sub-riemannian symmetric spaces.  One part of the review motivated me to start writing some more about dilatation structures, some of it explained in the next section of this post. Here is the motivating part of the review:

“Aiming to put sub-Riemannian geometry on a purely metric footing (without recourse to differentiable structures), the author developed the concept of a dilatation structure and related notions in earlier works [see J. Gen. Lie Theory Appl. 1 (2007), no. 2, 65–95; MR2300069 (2008d:20074); Houston J. Math. 36 (2010), no. 1, 91–136; MR2610783 (2011h:53032); Commun. Math. Anal. 11 (2011), no. 2, 70–111; MR2780883 (2012e:53051)].

[…]

In a recent preprint [see “Emergent algebras”, arXiv:0907.1520], the author showed that idempotent right quasigroups are a useful auxiliary notion for the introduction (and applications) of dilatation structures.

[…]

The author sketches how dilatation structures can be defined with the help of uniform irqs, suppressing the technical and delicate axioms concerning the domains of the locally defined maps. More precisely, because the maps are only locally defined, the structures which occur are not uniform irqs at face value, but only suitable analogues using locally defined maps (a technical complication which is not mentioned in the article).”

This careful treatment of the domains of the locally defined maps has been done before, several times, for dilatation structures, but Helge is right that it has not been done for emergent algebras.

Which brings me to the main point: I started a paper with the same (but temporary) name as this post.  The paper is based on the following ideas.

Fields of dilations which generate their own topology, uniformity, coarse structure, whatever.    In a normed real vector space (V, \| \cdot \|) we may use dilations as replacements of metric balls. Here is the simple idea.

Let us define, for any x \in V, the domain of x to be U(x), the ball of center x and radius one with respect to the distance d(x,y) = \|x - y\|. In general, let B(x, r) be the metric ball of center x and radius r with respect to the distance induced by the norm.

Also, for any x \in V and \varepsilon \in (0,+\infty), the dilation based at x, with coefficient \varepsilon, is the function \delta^{x}_{\varepsilon} y = x + \varepsilon ( -x + y).

I use these notations to write the ball of radius \varepsilon \in (0,1] as B(x,\varepsilon) = \delta^{x}_{\varepsilon} U(x), and I give it the fancy name of dilation ball of coefficient \varepsilon.
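To make the identity B(x,\varepsilon) = \delta^{x}_{\varepsilon} U(x) concrete, here is a minimal numerical sketch (mine, not taken from the cited papers), assuming NumPy and the Euclidean norm on the plane: dilating the unit ball U(x) by the coefficient \varepsilon lands exactly inside the metric ball B(x,\varepsilon).

```python
import numpy as np

# The dilation based at x with coefficient eps, as defined above.
def dilation(x, eps, y):
    return x + eps * (-x + y)

rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])
eps = 0.3

# Sample points of the domain U(x), the ball of center x and radius 1 ...
pts = x + rng.uniform(-1.0, 1.0, size=(1000, 2))
pts = pts[np.linalg.norm(pts - x, axis=1) < 1.0]

# ... and check that their dilations land in the metric ball B(x, eps):
# ||delta^x_eps(y) - x|| = eps * ||y - x|| < eps.
images = dilation(x, eps, pts)
assert np.all(np.linalg.norm(images - x, axis=1) < eps)
```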

In fact, spaces with dilations, or dilatation structures, or dilation structures, are various names for spaces endowed with fields of dilations which satisfy certain axioms. Real normed vector spaces are examples of spaces with dilations, as a subclass of the larger class of conical groups with dilations, itself just a subclass of groups with dilations. Regular sub-riemannian manifolds are examples of spaces with dilations without a predefined group structure.

For any space with dilations, then, we may ask what we can get by forgetting the distance function but keeping the dilation balls.  With their help we may define the uniformity associated with a field of dilations, then ask that the field of dilations behaves uniformly, and so on.
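For instance (this is my guess at the intended construction, the post does not spell it out), the uniformity would be generated by entourages built from dilation balls alone:

```latex
% One plausible entourage, for each coefficient \varepsilon \in (0,1]:
V_{\varepsilon} = \{ (y,z) \in V \times V \;:\; z \in \delta^{y}_{\varepsilon} U(y) \}
% In the normed case, z \in \delta^{y}_{\varepsilon} U(y) means \| z - y \| < \varepsilon,
% so the family (V_{\varepsilon})_{\varepsilon} generates the usual metric uniformity
% while mentioning only dilations and domains, not the distance.
```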

Another structure, as interesting as the uniformity, is the (bounded metric) coarse structure of the space, which again could be expressed in terms of fields of dilations. As coarse structures and uniform structures are very much alike (only that one is interesting at small scale, the other at large scale), is there a notion of dilation structure which is appropriate for coarse structures?

I shall be back on this, with details, but the overall idea is that a field of dilations (a notion even weaker than an emergent algebra) is somehow midway between a uniformity structure and a coarse structure.

UPDATE (28.10.2012): I just found bits of the work of Protasov, who introduced the notions of “ball structure” and “ballean”, which are clearly related to the idea of keeping the balls, not the distance. Unfortunately, for the moment I am unable to find a way to read his two monographs on the subject.

Ado’s theorem for groups with dilations?

Ado’s theorem is equivalent to the following:

Theorem. Let G be a local Lie group. Then there is a real, finite dimensional vector space V and an injective, local group morphism from (a neighbourhood of the neutral element of) G to GL(V), the linear group of V.

Any proof I am aware of (see this post for one proof and relevant links) mixes the following ingredients:

–  the Lie bracket and the BCH formula,

– either reduction to the nilpotent case or (nonexclusive) use of differential equations,

– the universal enveloping algebra.

WARNING: from here on I shall not mention the word “local”; in the realm of spaces with dilations everything is local.

We may pass to the following larger frame of spaces with dilations, dilatation structures or emergent algebras:

– locally compact groups with dilations instead of Lie groups

– locally compact conical groups instead of vector spaces

– linearity in the sense of dilation structures instead of usual linearity.

Conjecture:  For any locally compact group with dilations G there is a locally compact conical group N and an injective morphism \rho: G \rightarrow GL(N) (with GL(N) the group of invertible maps which are linear in the sense of dilation structures) such that for every x \in N the map g \in G \mapsto \rho(g)x is differentiable.

In this frame:

– we don’t have the corresponding Lie bracket and BCH formula, see the related problem of the noncommutative BCH formula,

– what nilpotent means is no longer clear (or needed?)

– we don’t have a clear tensor product, therefore we don’t have a correspondent of the universal enveloping algebra.

Nevertheless I think the conjecture is true and actually much easier to prove than Ado’s theorem, because of the weakening of the conclusion.

Is there a positional numeral system for Carnot groups?

I advertise here a question I posed on mathoverflow; then I shall give a motivation for this question.

What follows is a reproduction of the question (thanks for all the useful comments, which you can read at mathoverflow), with a slight modification of its title here.

What is the computational complexity of multiplication in a Carnot group? 

Background:  A Carnot group is a real nilpotent Lie group N whose Lie algebra Lie(N) admits a direct sum decomposition

Lie(N) = V_{1}+ ... + V_{m}

such that if i+j \leq m then  [V_{i}, V_{j}] \subset V_{i+j},   otherwise [V_{i}, V_{j}] = 0.

Moreover, it is required that V_{1} generates the whole Lie algebra Lie(N).
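Before the detailed question, here is a minimal computational sketch (my illustration, not part of the original mathoverflow question) of the simplest nonabelian case, the Heisenberg group with m = 2 (see example 2.2 below): in exponential coordinates the BCH formula truncates after the first bracket, so one multiplication costs only a handful of scalar operations.

```python
from itertools import product

def heisenberg_mul(p, q):
    """Multiply p = (a, b, c) and q = (a2, b2, c2) in the Heisenberg group,
    written in exponential coordinates: Lie(N) = V_1 + V_2 with
    V_1 = span{X, Y}, V_2 = span{Z} and [X, Y] = Z.  By nilpotency the
    BCH formula exp(u)exp(v) = exp(u + v + [u, v]/2 + ...) stops at [u, v]/2.
    """
    a, b, c = p
    a2, b2, c2 = q
    return (a + a2, b + b2, c + c2 + (a * b2 - a2 * b) / 2.0)

# Associativity sanity check on a few triples:
pts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, -2.0, 3.0)]
for p, q, r in product(pts, repeat=3):
    lhs = heisenberg_mul(heisenberg_mul(p, q), r)
    rhs = heisenberg_mul(p, heisenberg_mul(q, r))
    assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))
```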

Detailed question: What is the computational complexity of multiplication  in N, as a function of m, dim V_{1}, … , dim V_{m}?

More background: 

1. “Computational complexity” means giving an estimate for the running time of an algorithm for performing the multiplication operation. I am looking for nontrivial estimates, better than, for example, the one coming from matrix multiplication. Exponentiation and logarithms are not taken into account (one variant of thinking is to use the BCH formula for multiplication, with a finite number of terms due to nilpotency; another variant would be to use the remark from example 2.4 below and think about multiplication as matrix multiplication).

2. Examples of Carnot groups:

2.1. When m=1, this is a real vector space and the multiplication operation is just addition of vectors (thus the complexity of the  multiplication operation is the one of the addition of vectors, much less than the one of matrix multiplication).

2.2. When m=2, the simplest example is the Heisenberg group [see posts here about it] .

2.3. The group of n \times n upper triangular matrices with 1 on the diagonal (the unipotent group) has m = n-1 and dim V_{i} = n-i. Is there any better estimate than n^{3} in this case? (I think so, but cannot find a reference to an answer.) [In fact Henry Cohn answered this; a numerical check of his reduction is sketched after example 2.5:

“The exponent for multiplication of upper triangular matrices is the same as that for full matrix multiplication. Specifically, you can reduce n \times n matrix multiplication to the 3n \times 3n upper triangular case. (Use nine n \times n  blocks, with identity matrices on the diagonal and a zero matrix in the upper right corner.) So for that case the best exponent that’s known is 2.373, the same as for the general case.

The reduction is not a morphism. Suppose A  and B are any n \times n  matrices (not necessarily invertible), and define 3n \times 3n matrices A' and B' as follows. Both are the identity matrix, except for one n \times n block. Specifically, A' has A as the middle top n \times n block [i.e. the (1,2) block is A, the diagonal blocks are n \times n identity matrices, all other blocks are n \times n zero matrices], and B' has B as the middle right n \times n  block [i.e. the (2,3) block is B, the diagonal blocks are n \times n identity matrices, all other blocks are n \times n zero matrices]. Then A'B' has AB  as its upper right block [i.e. the (1,3) block]. These larger matrices are upper triangular, so if you can efficiently multiply upper triangular matrices, then you can efficiently multiply arbitrary matrices (of one third the size).”

2.4. Any Carnot group can be seen as a subgroup of the group from example 2.3.

2.5. Carnot groups appear as metric tangent spaces of sub-riemannian manifolds. As such, they are a kind of noncommutative vector spaces, with the “trivial” exception when they are commutative, as in the case of riemannian manifolds, where the tangent space is as in example 2.1 (another “simple” example being the metric contact manifolds, with the tangent space as in 2.2).
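Cohn’s reduction quoted above is easy to verify numerically. Here is a minimal sanity check (my sketch, assuming NumPy), which builds A' and B' as described and confirms that the (1,3) block of A'B' is AB:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# A' is the 3n x 3n identity with A in the (1,2) block;
# B' is the 3n x 3n identity with B in the (2,3) block.
Ap = np.eye(3 * n)
Ap[0:n, n:2*n] = A
Bp = np.eye(3 * n)
Bp[n:2*n, 2*n:3*n] = B

# Both are upper triangular, and the (1,3) block of A'B' equals AB, so fast
# upper triangular multiplication yields fast general matrix multiplication.
P = Ap @ Bp
assert np.allclose(P[0:n, 2*n:3*n], A @ B)
```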

What is my motivation behind this question? Actually, I wrote about this in the post “Combinatorics versus geometric …”.

Indeed, a positional numeral system can be seen from two related viewpoints:

1. as the basic ingredient for the algorithm of computing the “multiplication” operation in the abelian group of integers (or reals) (that’s like example 2.1 above),

2. as a “positional system” in the geometric sense, by exploiting the self-similarity of the group of integers (or reals), something which has not much to do with numbers. Let me explain: I am now in Europe, Romania, Bucharest. This is a positional notation for the place where I am, because Romania is a country in Europe and Bucharest is a city in Romania. Likewise, the number 123 (in base 10) is that number which is “in the continent 100”, in the “country 20” and in the “city 3”.  What makes such a positional notation useful for numbers is the self-similarity (manifested by the system of orders of magnitude) of the “world of numbers”.

Let us just put all together: is there a “positional numeral system” for Carnot groups which can give fast algorithms for computing the multiplication operation (in a Carnot group), by exploiting the self-similarity of a Carnot group?
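To make point 2 concrete in the abelian case, here is a minimal sketch (mine, and only for the integers of example 2.1): each digit extraction step is one application of the self-similarity x = (x div base) \cdot base + digit, and it is exactly this recursive structure that fast multiplication algorithms exploit.

```python
def positional(x, base=10):
    """Decompose a nonnegative integer into digits, least significant first,
    by repeatedly applying the self-similarity of the integers.
    For 123 this returns [3, 2, 1], i.e. 123 = 100 + 20 + 3."""
    digits = []
    while True:
        x, d = divmod(x, base)
        digits.append(d)
        if x == 0:
            return digits

assert positional(123) == [3, 2, 1]
```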

(Graphic) beta rule as braiding

The graphic lambda calculus formalism described in the paper Local and global moves on locally planar trivalent graphs, lambda calculus and lambda-Scale concerns moves performed on oriented trivalent graphs, which are locally planar in the sense that each node comes with a cyclic order of the three edges connected to it.

I have shown that a sector of this formalism (i.e. some of the moves, applied to a certain set of trivalent graphs) is equivalent to untyped lambda calculus.  There are two interesting facts about this, namely:

(a) the set of trivalent graphs is defined by global conditions (“global condition” as opposed to “local condition”; a local condition is one which involves at most N edges and nodes, for a fixed N),

(b) apart from some unimportant global pruning moves, all the other moves are local. In particular, the most important rule of lambda calculus, the beta reduction, is equivalent to the graphic beta move depicted here

which is a local move!

In this post I want to make some supplementary remarks about this graphic beta move.

1.  It is not written in the paper, but this move closely resembles the unzip operation from the knotted trivalent graphs formalism; see for example the paper The algebra of knotted trivalent graphs and Turaev’s shadow world by D.P. Thurston, section 3. There are two differences: the unzip operation works on unoriented trivalent graphs, and moreover it can only unzip pairs of connected nodes (so the move goes only in one direction).

2.  As explained in my paper, the graphic beta move may be used outside of the lambda calculus sector, i.e. it may be applied to any oriented trivalent graph of my formalism.  The previous remark suggests that there is a connection between the knot diagrams formalism and the graphic lambda calculus, because the knotted trivalent graphs formalism contains the knot diagrams one. In the last section of my paper I proposed some relations between knot diagrams and graphic lambda calculus; see also the post Four symbols, where I make a connection with decorated knot diagrams relevant for emergent algebras.

3. There is, though, one easy way to see yet another connection between these formalisms: the graphic beta move is a braiding operation!

Indeed, it is important to notice that the moves of the graphic lambda formalism are independent of the particular embeddings of trivalent graphs into the plane. This may not be obvious from the figures which explain the moves, but it is so after further inspection.

Let us look again at the graphic beta move, as depicted in the first figure. In fact, this is a move which transforms a pair of edges (from the right of the picture) into the graph from the left of the picture. The fact that the edges cross in the figure is irrelevant. What matters is that, for the sake of performing the move, one edge is temporarily decorated with 1-3 and the other with 4-2.

Here are two more equivalent depictions of the same rule, coming from different choices of  1-3, 4-2 decorations. We may see the graphic beta move as

but also as

Let me then introduce notations for the figures from the left hand sides of the previous diagrams.  Here they are: for the first figure we introduce the “crossing notation”

and for the second figure the “opposite crossing notation”:

With these notations the graphic beta move may be seen as this braiding operation

or as this other braiding operation

These are “notations” which encode the performing of the graphic beta move in a particular embedding (of the graph) in the plane. As such, they look dependent on the particular embedding, but the crossings satisfy the Reidemeister moves for knot diagrams (see the last section of the paper), therefore the notation system appears to be independent of the change of embedding of the graphs in 3-dimensional space!