Tag Archives: spaces with dilations

Geometric Ruzsa inequality on groupoids and deformations

This is a continuation of  Geometric Ruzsa triangle inequalities and metric spaces with dilations .  Proposition 1 from that post may be applied to groupoids. Let’s see what we get.

Definition 1. A groupoid is a set G, whose elements are called arrows, together with a partially defined composition operation

(g,h) \in G^{(2)} \subset G \times G \mapsto gh \in G

and a unary “inverse” operation:

g \in G \mapsto g^{-1} \in G

which satisfy the following:

  •  (associativity of arrow composition) if (a,b) \in G^{(2)} and (b,c) \in G^{(2)}  then (a, bc) \in G^{(2)} and  (ab, c) \in G^{(2)} and moreover  we have a(bc) = (ab)c,
  •  (inverses and objects)   (a,a^{-1}) \in G^{(2)} and (a^{-1}, a) \in G^{(2)}  ; for any a \in G we define the origin of the arrow a to be  \alpha(a) = a^{-1} a and  the target of a to be  \omega(a) = a a^{-1};  origins and targets of arrows form the set of objects of the groupoid Ob(G),
  • (inverses again) if (a,b) \in G^{(2)} then a b b^{-1} = a and a^{-1} a b = b.


The definition is a bit unnecessarily restrictive, in the sense that I take groupoids to have sets of arrows and sets of objects. Of course there exist larger groupoids, but for the purpose of this post we don’t need them.

The most familiar examples of groupoids are:

  • the trivial groupoid associated to a non-empty set X is G = X \times X, with composition (x,y) (y,z) = (x,z) and inverse (x,y)^{-1} = (y,x). It is straightforward to notice that \alpha(x,y) = (y,y) and \omega(x,y) = (x,x), which is a way to say that the set of objects can be identified with X and the origin of the arrow (x,y) is y and the target of (x,y) is x.
  • any group G is a groupoid, with the arrow composition being the group multiplication and the inverse being the group inverse. Let e be the neutral element of the group G. Then for any arrow g \in G we have \alpha(g) = \omega(g) = e, therefore this groupoid has only one object, e. The converse is true, namely groupoids with only one object are groups.
  • take a group G which acts on the left on the set X, with the action (g,x) \in G \times X \mapsto gx \in X such that g(hx) = (gh)x and ex = x. Then G \times X is a groupoid with operation (h, gx) (g,x) = (hg, x) and inverse (g,x)^{-1} = (g^{-1}, gx). We have \alpha(g,x) = (e,x), which can be identified with x \in X, and \omega(g,x) = (e,gx), which can be identified with gx \in X. This groupoid has therefore X as its set of objects.
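The first example can be sketched in a few lines of code. This is a minimal illustration of the trivial groupoid (the function names compose, inverse, alpha, omega are mine, not from the post):

```python
def compose(a, b):
    """Compose two arrows (x,y)(y,z) = (x,z) of the trivial groupoid on X."""
    (x, y), (yy, z) = a, b
    if y != yy:
        raise ValueError("arrows are not composable")
    return (x, z)

def inverse(a):
    """Inverse (x,y)^{-1} = (y,x)."""
    x, y = a
    return (y, x)

def alpha(a):
    """Origin of the arrow a, i.e. a^{-1} a."""
    return compose(inverse(a), a)

def omega(a):
    """Target of the arrow a, i.e. a a^{-1}."""
    return compose(a, inverse(a))

a = ("p", "q")                      # an arrow with origin q and target p
assert alpha(a) == ("q", "q")       # objects are the diagonal pairs (x,x)
assert omega(a) == ("p", "p")
assert compose(a, ("q", "r")) == ("p", "r")
```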

For the relations between groupoids and dilation structures see arXiv:1107.2823  . The case of the trivial groupoid, which will be relevant soon, has been discussed in the post  The origin of emergent algebras (part III).


The following operation is well defined for any pair of arrows g, h \in G with \alpha(g) = \alpha(h):

\Delta(g,h) = h g^{-1}

Let A, B, C \subset G be three subsets of a groupoid G with the property that there exists an object e \in Ob(G) such that for any arrow g \in A \cup B \cup C we have \alpha(g) = e. We can then define the sets \Delta(C,A), \Delta(B,C) and \Delta(B,A).

Let us define now the hard functions f: \Delta(C,A) \rightarrow C and g: \Delta(C,A) \rightarrow A with the property: for any z \in \Delta(C,A) we have

(1)     \Delta(f(z), g(z)) = z

(The name “hard functions” comes from the fact that \Delta should be seen as an easy operation, while the decomposition (1) of an arrow into a “product” of another two arrows should be seen as hard.)

The following is a corollary of Proposition 1 from the post  Geometric Ruzsa triangle inequalities and metric spaces with dilations:

Corollary 1.  The function i: \Delta(C,A) \times B \rightarrow \Delta(B,C) \times \Delta(B,A)  defined by

i(z,b) = (f(z) b^{-1} , g(z) b^{-1})

is injective. In particular, if the sets A, B, C are finite then

\mid \Delta(C,A) \mid \mid B \mid \leq \mid \Delta(B,C) \mid \mid \Delta(B,A) \mid .


Proof.   With the hypothesis that all arrows from the three sets have the same origin, we notice that \Delta satisfies the conditions 1, 2 from Proposition 1, that is

  1. \Delta( \Delta(b,c), \Delta(b,a)) = \Delta(c,a)
  2. the function b \mapsto \Delta(b,a) is injective.

As a consequence, the proof of Proposition 1 may be applied verbatim. For the convenience of the readers, I rewrite the proof as a recipe for recovering (z, b) from i(z,b). The following figure is useful.


We have f(z) b^{-1} and g(z) b^{-1} and we want to recover z and b. We use (1) and property 1 of \Delta in order to recover z. With z comes f(z). From f(z) and f(z) b^{-1} we recover b, via property 2 of the operation \Delta. That’s it.
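The recipe can be made concrete in the one-object groupoid Z/12Z (a group, written additively, so \Delta(g,h) = h - g and g b^{-1} becomes g - b). This is a sketch; the sets A, B, C below are my arbitrary choices:

```python
from itertools import product

n = 12
A, B, C = {0, 1, 3, 7}, {2, 5, 8}, {4, 6, 10}

def delta_set(M, N):                       # Delta(M,N) = {y - x : x in M, y in N}
    return {(y - x) % n for x, y in product(M, N)}

DCA, DBC, DBA = delta_set(C, A), delta_set(B, C), delta_set(B, A)

# hard functions: for each z pick one decomposition z = a - c with c in C, a in A
hard = {}
for c, a in product(C, A):
    hard.setdefault((a - c) % n, (c, a))   # f(z) = c, g(z) = a

def i(z, b):
    c, a = hard[z]
    return ((c - b) % n, (a - b) % n)

images = [i(z, b) for z, b in product(DCA, B)]
assert all(u in DBC and v in DBA for u, v in images)   # lands where claimed
assert len(images) == len(set(images))                 # i is injective
assert len(DCA) * len(B) <= len(DBC) * len(DBA)        # Ruzsa inequality
```

Injectivity holds exactly by the recipe: from the pair (f(z)-b, g(z)-b) one recovers z as their \Delta, then b from f(z).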


There are now some interesting things to mention.

Fact 1.  The proof of Proposition 2 from the Geometric Ruzsa post is related to this. Indeed, in order to properly understand what is happening, please read again The origin of emergent algebras (part III). There you’ll see that a metric space with dilations can be seen as a family of deformations of the trivial groupoid. In the following I took one of the figures from the “origin III” post and modified it a bit.


Under the deformation of arrows given by \delta_{\varepsilon}(y,x) = (\delta^{x}_{\varepsilon} y , x), the operation \Delta((z,e),(y,e)) becomes the red arrow

(\Delta^{e}_{\varepsilon}(z,y), \delta^{e}_{\varepsilon} z)

The operation acting on points (not on arrows of the trivial groupoid) which appears in Proposition 2 is \Delta^{e}_{\varepsilon}(z,y), but Proposition 2 does not follow straightforwardly from Corollary 1 of this post. That is because in Proposition 2 we use only targets of arrows, so the information at our disposal is less than in Corollary 1. This is supplemented by the separation hypothesis of Proposition 2, which works like this. If we deform the operation \Delta on the trivial groupoid by using dilations, then we mess up the first figure of this post, because the deformation keeps the origins of arrows but does not keep their targets. So we could apply the proof of Corollary 1 directly to the deformed groupoid, but the information available to us consists only of the targets of the relevant arrows and not their origins. That is why we use the separation hypothesis in order to “move” all unknown arrows to other arrows which have the same targets, but origins now in e. The proof then proceeds as previously.

In this way, we obtain a statement about  algebraic operations (like additions, see Fact 2.) from the trivial groupoid operation. 

Fact 2.  It is not mentioned in the “geometric Ruzsa” post, but the geometric Ruzsa inequality contains the classical inequality, as well as its extension to Carnot groups. Indeed, it is enough to apply it to particular dilation structures, like that of a real vector space, or that of a Carnot group.
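In the case of the real line, for instance, the inequality reduces to the classical Ruzsa triangle inequality for finite sets of integers, which a quick numerical check (my sketch, not from the post) confirms:

```python
from random import randint, seed

def diff(M, N):
    """Difference set N - M = {y - x : x in M, y in N}."""
    return {y - x for x in M for y in N}

seed(0)
for _ in range(200):                 # random finite subsets of Z
    A = {randint(-30, 30) for _ in range(10)}
    B = {randint(-30, 30) for _ in range(10)}
    C = {randint(-30, 30) for _ in range(10)}
    # |Delta(C,A)| |B| <= |Delta(B,C)| |Delta(B,A)| in additive notation
    assert len(diff(C, A)) * len(B) <= len(diff(B, C)) * len(diff(B, A))
```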

Fact 3.  Let’s see what Corollary 1 says in the particular case of a trivial groupoid. In this case the operation \Delta is trivial

\Delta((a,e), (c,e)) = (a,c)

and the “hard functions” are trivial as well

f(a,c) = (c,e) and g(a,c) =(a,e)

The conclusion of Corollary 1 is trivial as well, because \mid \Delta(C,A) \mid = \mid C \mid \mid A \mid (and so on …), therefore the conclusion is

\mid C \mid \mid A \mid \mid B \mid \leq \mid B \mid^{2} \mid A \mid \mid C \mid

However, by the magic of deformations provided by dilation structures, from this uninteresting “trivial groupoid Ruzsa inequality” we get the more interesting original one!

Towards geometric Plünnecke graphs

This post is related to “Geometric Ruzsa triangle inequalities and metric spaces with dilations“. This time the effort goes into understanding the article arXiv:1101.2532, Plünnecke’s Inequality by Giorgis Petridis, but this post is only a first step towards a geometric approach to Plünnecke’s inequality in spaces with dilations (eventually to be applied to Carnot groups). Here I shall define a class of decorated binary trees and a notion of closeness.

I shall use binary decorated trees and the moves R1a and R2a, like in the post “A roadmap to computing with space“:


To these moves I add the “linearity moves”


Definition 1.  These moves act on the set of binary trees T(X) with nodes decorated with two colours (black and white) and leaves decorated with elements of the infinite set of “variable names” X.  I shall denote by A, B, C … such trees and by x, y, z, u, v, w … elements of X.

The edges of the trees are oriented upward. By convention we admit X to be a subset of T(X), thinking about x \in X as an edge pointing upwards which is also a leaf decorated with x.

The moves act locally, i.e. they can be used for any portion of a tree from T(X) which looks like one of the patterns from the moves, with the understanding that the rest of the respective tree is left unchanged.


Definition 2.  The class of finite trees  FinT(X) \subset T(X) is the smallest subset of T(X) with the  properties:

  • X \subset FinT(X),
  • if A, B \in FinT(X) then A \circ B \in FinT(X)  , where A \circ B is the tree


  • if A, B, C \in FinT(X) then Sum(A,B,C) \in FinT(X), where Sum(A,B,C) is the tree


  • if A \in FinT(X) and we can pass from A to B by one of the moves then B \in FinT(X).
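The constructors of FinT(X) can be written down as data; the moves themselves live in the figures above and are not reproduced here, and the colour placement inside Sum is my guess from context, so treat this only as a sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Leaf:
    name: str                      # a variable name from X

@dataclass(frozen=True)
class Node:
    colour: str                    # "white" or "black" node decoration
    left: object
    right: object

def circ(A, B):
    """The tree A \\circ B (a single node over A and B; colour is my convention)."""
    return Node("white", A, B)

def Sum(A, B, C):
    """The tree Sum(A,B,C); the exact shape is in the missing figure,
    here guessed as A grafted over a node carrying B and C."""
    return Node("white", A, Node("black", B, C))

x, y, z = Leaf("x"), Leaf("y"), Leaf("z")
assert Sum(x, y, z) == Node("white", x, Node("black", y, z))
```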


The class of finite trees is closed also with respect to other operations.

Proposition 1.  If A, B, C \in T(X)  then


Let us denote by Dif(A,B,C) the tree from the LHS of the first row, by Q(A,B,C) the tree from the middle of the first row and by Inv(A,B) the tree from the LHS of the second row.

Corollary 1.   If A, B, C \in FinT(X) then Dif(A,B,C) \in FinT(X), Q(A,B,C) \in FinT(X) and Inv(A,B) \in FinT(X).

Proof: If A, B, C \in FinT(X) then Sum(A,B,C) \in FinT(X). By Proposition 1 the trees Dif(A,B,C) and Q(A,B,C) can be transformed into Sum(A,B,C), therefore they belong to FinT(X). Also, by the same proposition the tree Inv(A,B) can be transformed into Dif(A,B,A), which we have proved belongs to FinT(X). Therefore Inv(A,B) \in FinT(X).


I define now when two graphs are close.

Definition 3. Two graphs  A, B \in FinT(X)  are close, denoted by A \sim B, if there is C \in FinT(X) such that B can be moved into A \circ C.

Proposition 3.  The closeness relation is an equivalence.

Proof:  By the move R1a we have A \sim A for any A \in FinT(X).  Take now A \sim B, therefore there is C \in FinT(X) such that B can be moved into A \circ C. We know that D = Inv(A,C) \in FinT(X) by Corollary 1. On the other hand we can prove that A can be moved into B \circ D, therefore B \sim A.

Finally, let A \sim B, so there’s a C \in FinT(X) such that B can be moved into A \circ C, and let B \sim D, so there’s E \in FinT(X) such that D can be moved into B \circ E. Let now F \in FinT(X) be given by F = Sum(A,C,E). Check that D can be moved into A \circ F, therefore A \sim D.

Question: do you think this proof is easier to understand than an equivalent proof given by drawings?

A roadmap to computing with space

I don’t know yet what exactly “computing with space” is, but I almost know. What follows is a description of the road to this goal, along with an invitation to join.

Before starting this description, maybe it is better to write what this explanation is NOT about. I arrived at the idea of “computing with space” by branching off from a beautiful geometry subject: sub-riemannian geometry. The main interest I have in this subject consists in giving an intrinsic (i.e. using a minimal bag of tools) description of the differential structure of a sub-riemannian space. The fascinating part is that these spaces, although locally topologically trivial, have a differential calculus which is not amenable to the usual differential calculus on manifolds, in the same way as the hyperbolic space, say, is not geometrically a kind of euclidean space. I consider very important, and not yet well known, the discovery of the fact that there are spaces where we can define an intrinsic differential calculus fundamentally different from the usual one (locally, not globally, as is the case with manifolds which admit different GLOBAL differential structures, although at the local level they are just pieces of a euclidean space). But in this post I shall NOT explain this. The road to computing with space branches off from this one; however, there are paths, represented by mathematical hints, which criss-cross both these roads.

Let’s start.

1. In “Dilatation structures I. Fundamentals” I propose, in section 4, “Binary decorated trees and dilatations”, a formalism for making various calculations with dilation structures easy (or “dilatation structures”, as I called them at the time; notice that dilation vs dilatation is a battle won by dilation in math, but by dilatation in other fields, although the historically correct word is dilatation).

This formalism works with moves acting on binary decorated trees, with the leaves decorated with elements of a metric space. It was extremely puzzling that the formalism worked without needing to know which metric space I used. It was also amazing that reasoning with moves acting on binary trees gave proofs of generalizations of results which involve elaborate calculations with pseudo-differential operators and the like. On closer inspection it looked as if somewhere in the background there is an abstract nonsense machine which is just applied to this particular case of metric spaces.

Here is an example of the formalism. The moves are (I use the names from graphic lambda calculus):




Define the following tree (and think about it as being the graphical representation of an operation):


Think of it as representing u+v, with respect to the base point x.  Then we can prove that


which is a kind of associativity relation.  The proof by binary trees has nothing to do with sub-riemannian geometry, right? An indirect confirmation is that the same formalism works very well on the ultrametric space given by the boundary of the infinite dyadic tree, see  Self-similar dilatation structures and automata.
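In a normed vector space, where \delta^{x}_{\varepsilon} u = x + \varepsilon(u - x), the operation above can be instantiated directly. This sketch (my instantiation; the names dil and approx_sum are hypothetical) shows the approximate sum converging to u + v - x, i.e. “u+v with respect to the base point x”:

```python
def dil(x, eps, u):
    """Dilation of coefficient eps and base point x in R^n."""
    return tuple(xi + eps * (ui - xi) for xi, ui in zip(x, u))

def approx_sum(x, eps, u, v):
    """Sigma^x_eps(u, v) = delta^x_{1/eps}( delta^{delta^x_eps u}_eps v )."""
    return dil(x, 1.0 / eps, dil(dil(x, eps, u), eps, v))

x, u, v = (1.0, 0.0), (2.0, 3.0), (0.0, 5.0)
limit = tuple(ui + vi - xi for xi, ui, vi in zip(x, u, v))   # u + v - x
s = approx_sum(x, 1e-6, u, v)
assert max(abs(a - b) for a, b in zip(s, limit)) < 1e-4
```

A direct computation gives approx_sum(x, eps, u, v) = x + (u-x) + (v-x) - eps(u-x), so the convergence is of order eps.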

As a conclusion for this part, it seemed that  in order to unravel the abstract nonsense machine from the background, I needed to:

  • find a way to get rid of mentioning metric spaces, so in particular to get rid of decorating the leaves of binary trees with points of some space (or maybe use these decorations as a kind of names),
  • express this proof based on moves applied to binary trees as a computation (i.e. as something like a reduction procedure).

Otherwise said, there was a need for a kind of logic, but which one?

Scale is a place in the brain

… and dilation structures might exist, physically, in some parts of the brain. (See also section 2.4.  in “Computing with space …”   arXiv:1103.6007 .)   I will surely come back to this subject, after learning more, but here are some facts.

The primary source of this post is the article “From A to Z: a potential role for grid cells in spatial navigation”   Neural Syst Circuits. 2012; 2: 6. by Caswell Barry and Daniel Bush.

From wikipedia entry for grid cell:

A grid cell is a type of neuron that has been found in the brains of rats, mice, bats, and monkeys; and it is likely to exist in other animals including humans.[1][2][3][4][5] In a typical experimental study, an electrode capable of recording the activity of an individual neuron is implanted in the cerebral cortex of a rat, in a part called the dorsomedial entorhinal cortex, and recordings are made as the rat moves around freely in an open arena. For a grid cell, if a dot is placed at the location of the rat’s head every time the neuron emits an action potential, then as illustrated in the adjoining figure, these dots build up over time to form a set of small clusters, and the clusters form the vertices of a grid of equilateral triangles. This regular triangle-pattern is what distinguishes grid cells from other types of cells that show spatial firing correlates. By contrast, if a place cell from the rat hippocampus is examined in the same way (i.e., by placing a dot at the location of the rat’s head whenever the cell emits an action potential), then the dots build up to form small clusters, but frequently there is only one cluster (one “place field”) in a given environment, and even when multiple clusters are seen, there is no perceptible regularity in their arrangement.

So, there are place cells and grid cells. Here is what wikipedia says about place cells:

Place cells are neurons in the hippocampus that exhibit a high rate of firing whenever an animal is in a specific location in an environment corresponding to the cell’s “place field”. These neurons are distinct from other neurons with spatial firing properties, such as grid cells, border cells, barrier cells,[1] conjunctive cells,[2] head direction cells, and spatial view cells. In the CA1 and CA3 hippocampal subfields, place cells are believed to be pyramidal cells, while those in the dentate gyrus are believed to be granule cells.[3]

The behaviour of these cells is explained in the Figure 1 from the mentioned article by Barry and Bush:


(Copyright ©2012 Barry and Bush; licensee BioMed Central Ltd.)

The explanation of the Figure 1 reads:

Single unit recordings made from the hippocampal formation. a) CA1 place cell recorded from a rat. The left-hand figure shows the raw data: the black line being the animal’s path as it foraged for rice in a 1 m2 arena for 20 minutes; superimposed green dots indicating the animal’s location each time the place cell fired an action potential. Right, the same data processed to show firing rate (number of spike divided by dwell time) per spatial bin. Red indicates bins with high firing rate and blue indicates low firing rate, white bins are unvisited, and peak firing rate is shown above the map. b) Raw data and corresponding rate map for a single mEC grid cell showing the multiple firing fields arranged in a hexagonal lattice. c) Three co-recorded grid cells, the center of each firing field indicated by a cross with different colors corresponding to each cell. The firing pattern of each cell is effectively a translation of the other co-recorded cells as shown by superposition of the crosses (right). d) Changes made to the geometry of a familiar environment cause grid cell firing to be distorted (rescale) demonstrating that grid firing is, at least, partially controlled by environmental cues, in this case the location of the arena’s walls. Raw data are shown on the left and the corresponding rate maps on the right. The rat was familiar with the 1 m2 arena (outlined in red). Changing the shape of the familiar arena by sliding the walls past each other produced a commensurate change in the scale of grid firing. For example, shortening the x-axis to 70 cm from 100 cm (top right) caused grid firing in the x-axis to reduce to 78% of its previous scale, while grid scale in the Y-axis was relatively unaffected. Numbers next to the rate maps indicate the proportional change in grid scale measured along that axis (figure adapted from reference [28]).

And now, the surprise: scale is indeed a place in the brain. Let’s see Figure 2. from the same article:


(Copyright ©2012 Barry and Bush; licensee BioMed Central Ltd.)

The caption of this figure is:

Grid scale increases along a dorso-ventral gradient in the mEC. Two grid cells recorded from the same animal but at different times are shown, both cells were recorded in a familiar 1 m2 arena. Approximate recording locations in the mEC are indicated. The more ventral cell exhibits a considerably larger size of firing fields and distance between firing fields than the dorsal cell.

… and, from the article,  (boldfaced by me):

The scale of the grid pattern, measured as the distance between neighboring peaks, increases along the dorso-ventral mEC gradient, mirroring a similar trend in hippocampal place fields [15,25]. The smallest, most dorsal, scale is typically 20 to 25 cm in the rat, reaching in excess of several meters in the intermediate region of the gradient [15,26] (Figure 2). This may explain how this remarkable pattern was missed by early electrophysiology studies, which targeted ventral mEC and found only broadly tuned spatial firing (for example, [27]). Interestingly, grid scale increases in discontinuous increments and the increment ratio, at least between the smaller scales, is constant [28]. Grid cells recorded from the same electrode, which are, therefore, proximate in the brain, typically have a common scale and orientation but a random offset relative to each other and the environment [15]. As such, their firing patterns are effectively identical translations of one another and a small number of cells will ‘tile’ the complete environment (Figure 1c). It also appears that grids of different scale recorded ipsilaterally have a common orientation, such that the hexagonal arrangement of their firing fields share the same three axes, albeit with some localized distortions [15,28,29].

That’s just amazing!

Concerning the hypothesis (Hafting, T.; Fyhn, M.; Molden, S.; Moser, M.-B.; Moser, E. I. (2005). “Microstructure of a spatial map in the entorhinal cortex”. Nature 436 (7052): 801-806. Bibcode:2005Natur.436..801H. doi:10.1038/nature03721.) that the grid cells’ firing fields encode the abstract structure of a euclidean space, I think this does not follow from the observations. My argument is that the translation invariance (of firing patterns, in this particular case) emerges by the mechanism of dilation structures and is, at least to my current understanding, evidence for the existence of these structures in the brain. But of course, there is much to learn and think about.

Curvature and halfbrackets, part III

I continue from “Curvature and halfbrackets, part II“.  This post is dedicated to the application of the previously introduced notions to the case of a sub-riemannian Lie group.


1. I start with the definition of a sub-riemannian Lie group. If you look in the literature, the first reference to “sub-riemannian Lie groups” which I am aware of is the series Sub-riemannian geometry and Lie groups arXiv:math/0210189, part II arXiv:math/0307342, part III arXiv:math/0407099. However, that work predates the introduction of dilation structures, therefore there is a need to properly define this object within the actual state of affairs.

Definition 1. A sub-riemannian Lie group is a locally compact topological group G with the following supplementary structure:

  • together with the dilation structure coming from its one-parameter groups (by the Montgomery-Zippin construction), it has a group norm which induces a tempered dilation structure,
  • it has a left-invariant dilation structure (with dilations \delta^{x}_{\varepsilon} y = x \delta_{\varepsilon}(x^{-1}y) and group norm denoted by \| x \|) which, paired with the tempered dilation structure mentioned previously, satisfies the hypothesis of “Sub-riemannian geometry from intrinsic viewpoint”, Theorem 12.9, arXiv:1206.3093.


  1. There is no assumption that the tempered group norm comes from a Riemannian left-invariant distance on the group. For this reason, some people use the name sub-finsler arXiv:1204.1613 instead of sub-riemannian, but I believe this is not a serious distinction, because the structure of a scalar product which induces the distance is simply not needed for understanding sub-riemannian Lie groups.
  2. By Theorem 12.9, the left-invariant field of dilations induces a length dilation structure. I shall use this further. Length dilation structures are perhaps a more useful object than plain dilation structures, because they explain how the length functional behaves at different scales, which is much more detailed information about the microscopic structure of a length metric space than the information about how the distance behaves at different scales.

This definition looks a bit mysterious, unless you read the course notes cited inside the definition. Probably, when I find the interest to pursue it, it would be really useful to apply, step by step, the constructions from arXiv:1206.3093 to sub-riemannian Lie groups.


2. With the notations from the last post, I want to compute the quantities A, B, C. We already know that B is related to the curvature of G with respect to its sub-riemannian (sub-finsler if you like it more) distance, as introduced previously via metric profiles. We also know that B is controlled by A and C. But let’s see the expressions of these three quantities for sub-riemannian Lie groups.

I denote by d(u,v) the left invariant sub-riemannian distance, therefore we have d(u,v) = \| u^{-1}v\|.

Now, \rho_{\varepsilon}(x,u) = \| x^{-1} u \|_{\varepsilon} , where \varepsilon \| u \|_{\varepsilon} = \| \delta_{\varepsilon} u \|  by definition.  Notice also that \Delta^{x}_{\varepsilon}(u,v) = (\delta^{x}_{\varepsilon} u ) ((u^{-1} x) *_{\varepsilon} (x^{-1} v)), where  u *_{\varepsilon} v is the deformed group operation at scale \varepsilon, i.e. it is defined by the relation:

\delta_{\varepsilon} (u *_{\varepsilon} v) = (\delta_{\varepsilon} u) (\delta_{\varepsilon} v)
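As a concrete aside (my instantiation, not from the post): on the Heisenberg group R^3 with the isotropic, non-automorphism dilations \delta_{\varepsilon}(a,b,c) = (\varepsilon a, \varepsilon b, \varepsilon c), the deformed operation can be computed explicitly and seen to converge to the abelian addition of the tangent space as \varepsilon \rightarrow 0:

```python
def mul(u, v):
    """Heisenberg multiplication (a,b,c)(a',b',c') = (a+a', b+b', c+c'+a b')."""
    a, b, c = u
    aa, bb, cc = v
    return (a + aa, b + bb, c + cc + a * bb)

def dil(eps, u):
    """Isotropic dilation delta_eps(a,b,c) = (eps a, eps b, eps c)."""
    return tuple(eps * t for t in u)

def mul_eps(eps, u, v):
    """u *_eps v, defined by delta_eps(u *_eps v) = (delta_eps u)(delta_eps v)."""
    return dil(1.0 / eps, mul(dil(eps, u), dil(eps, v)))

u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
# direct computation: u *_eps v = (a+a', b+b', c+c' + eps a b')
w = mul_eps(0.001, u, v)
assert max(abs(p - q) for p, q in zip(w, (5.0, 7.0, 9.005))) < 1e-9
```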

With all this, it follows that:

A_{\varepsilon}(x,u) = \rho_{\varepsilon}(x,u) - d^{x}(x,u) = \|x^{-1} u \|_{\varepsilon} - \| x^{-1} u \|_{0}

A_{\varepsilon}(\delta^{x}_{\varepsilon} u, \Delta^{x}_{\varepsilon}(u,v)) = \| (u^{-1} x) *_{\varepsilon} (x^{-1} v) \|_{\varepsilon} - \| (u^{-1} x) *_{\varepsilon} (x^{-1} v)\|_{0}.

A similar computation leads us to the expression for the curvature related quantity

B_{\varepsilon}(x,u,v) = d^{x}_{\varepsilon}(u,v) - d^{x}(u,v) = \| (u^{-1}x) *_{\varepsilon} (x^{-1} v)\|_{\varepsilon} - \| (u^{-1}x) *_{0} (x^{-1}v)\|_{0}.


C_{\varepsilon}(x,u,v) = \|(u^{-1} x) *_{\varepsilon} (x^{-1} v)\|_{0} - \|(u^{-1}x) *_{0} (x^{-1}v)\|_{0}. This last quantity is controlled by a halfbracket, via a norm inequality.
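Writing w_{\varepsilon} = (u^{-1}x) *_{\varepsilon} (x^{-1}v) and w_{0} = (u^{-1}x) *_{0} (x^{-1}v), the three displayed expressions combine term by term:

```latex
B_{\varepsilon}(x,u,v) = \| w_{\varepsilon} \|_{\varepsilon} - \| w_{0} \|_{0}
= \underbrace{\big( \| w_{\varepsilon} \|_{\varepsilon} - \| w_{\varepsilon} \|_{0} \big)}_{A_{\varepsilon}(\delta^{x}_{\varepsilon} u , \Delta^{x}_{\varepsilon}(u,v))}
+ \underbrace{\big( \| w_{\varepsilon} \|_{0} - \| w_{0} \|_{0} \big)}_{C_{\varepsilon}(x,u,v)}
```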

The expressions of A, B, C make it transparent that the curvature-related quantity B is the sum of A and C. In the next post I shall use the length dilation structure of the sub-riemannian Lie group in order to show that A is controlled by C, which in turn is controlled by a norm of a halfbracket. Then I shall apply all this to SO(3), as an example.

Carry propagation-free number systems and a kind of approximate groups (I)

Avizienis introduced a signed-digit number representation which allows fast, carry propagation-free addition (A. Avizienis, Signed-digit number representation for fast parallel arithmetic, IRE Trans. Comput., vol. EC-10, pp. 389-400, 1961, see also B. Parhami, Carry-Free Addition of Recoded Binary Signed-Digit Numbers, IEEE Trans. Computers, Vol. 37, No. 11, pp. 1470-1476, November 1988.)

Here, just for fun, I shall use a kind of approximate groups to define a carry-free addition-like operation.

1. The hypothesis is: you have three finite sets A, B, K, all sitting in a group G. I shall suppose that the group G is commutative, although it seems that’s not really needed in what follows if one takes a bit of care. To reflect this, I shall use + for the group operation on G, but I shall write M N for the set of all elements of the form x + y with x \in M and y \in N.

We know the following about the three (non-empty, of course) sets A, B, K:

  1. B \subset A,
  2. 0 \in B, 0 \in K, A = -A, B = -B, K = -K,
  3. \mid B \mid = \mid K \mid   (where \mid M \mid is the cardinal of the finite set M),
  4. A A B \subset A K  (that’s what qualifies A as a kind of an approximate group).

Let us now choose a bijective function \phi: K \rightarrow B and two functions \psi: AAB \rightarrow A and \chi: AAB \rightarrow K such that

(1)   for any x, y \in A and any b \in B we have x+y+b = \psi(x+y+b) + \chi(x+y+b).


2. Let X be the family of functions defined on \mathbb{Z} with values in A, with compact support, i.e. if x: \mathbb{Z} \rightarrow A belongs to X, then only a finite number of k \in \mathbb{Z} have the property that x_{k} = x(k) \not = 0.  The element 0 \in X is defined by x_{k} = 0 for any k \in \mathbb{Z}.  If x \in X with x \not = 0 then ind(x) \in \mathbb{Z} is the smallest index k \in \mathbb{Z} with x_{k} \not = 0.


3.  The structure introduced at 1. allows the definition of an operation * on X. The definition of the operation is inspired from a carry-free addition algorithm.

Definition 1. If both x, y \in X are equal to 0 then x*y = 0. Otherwise, let j \in \mathbb{Z} be the smallest index k such that one of x_{k} or y_{k} is different from 0. The element z = x*y is defined by the following algorithm:

  1. z_{k} = 0 for any k < j,
  2. let t_{j} = 0 and k = j. Repeat:

p_{k} = x_{k} + y_{k} + t_{k}

z_{k} = \psi(p_{k})

t_{k+1} = \phi(\chi(p_{k}))

k \rightarrow k+1
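Here is a concrete instance of Definition 1 sketched in code, with parameters of my own choosing (not from the post): G = Z, radix 4, digits A = {-3,…,3}, transfers B = {-1,0,1}, K = 4B = {-4,0,4}, \phi(4b) = b, and the decomposition (1) given by \chi(p) = \pm 4 when |p| \geq 4 and 0 otherwise. A digit string (x_{k}) represents the integer \sum x_{k} 4^{k}.

```python
def chi(p):
    """Transfer part of p in K = {-4, 0, 4}."""
    return 4 if p >= 4 else (-4 if p <= -4 else 0)

def psi(p):
    """Digit part of p, so that p = psi(p) + chi(p) with psi(p) in A."""
    return p - chi(p)

def phi(k):
    """Bijection K -> B, phi(4b) = b (floor division handles negatives)."""
    return k // 4

def star(x, y):
    """The operation * of Definition 1 on digit strings (slot k has weight 4**k)."""
    z, t = [], 0
    for k in range(max(len(x), len(y)) + 2):     # extra slots for final transfers
        p = (x[k] if k < len(x) else 0) + (y[k] if k < len(y) else 0) + t
        z.append(psi(p))                         # z_k = psi(p_k)
        t = phi(chi(p))                          # t_{k+1} = phi(chi(p_k))
    return z

def val(x):
    """Evaluate a digit string as an integer."""
    return sum(d * 4 ** k for k, d in enumerate(x))

x, y = [3, -2, 1], [2, 3, -1]                    # digits in A = {-3,...,3}
assert val(star(x, y)) == val(x) + val(y)        # * realises addition in G = Z
```

With these choices the hypotheses 1-4 on A, B, K hold (AAB = {-7,…,7} = AK), and the symmetry conditions \phi(-x) = -\phi(x), \psi(-x) = -\psi(x), \chi(-x) = -\chi(x) hold as well.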


4. We may choose the functions \phi, \psi, \chi such that the operation * starts to become interesting. Before doing this, let’s remark that:

  • the operation * is commutative as a consequence of the fact that + is commutative,
  • the operation * is conical, i.e. it admits the shift \delta: X \rightarrow X, \delta(x)(k+1) = x_{k} as automorphism ( a property encountered before for dilation structures on ultrametric spaces, see  arXiv:0709.2224  )

Proposition 1.  If the functions \phi, \chi, \psi satisfy the following:

  1. \phi(0) = \chi(0) = \psi(0) = 0
  2. \chi(x) = 0 for any x \in A
  3. \psi(-x) = - \psi(x) and \chi(-x) = -\chi(x) for any x \in A A B, and \phi(-x) = -\phi(x) for any x \in K,

then (X, *) is a conical quasigroup.

Curvature and halfbrackets, part II

I continue from “Curvature and halfbrackets, part I“, with the same notations and background.

In a metric space with dilations (X,d,\delta), there are three quantities which will play a role further.

1. The first quantity is related to the “norm” function defined as

\rho_{\varepsilon}(x,u) = d^{x}_{\varepsilon}(x,u)

Notice that this is not a distance function; instead it is more like a norm of u with respect to the basepoint x, at scale \varepsilon. Together with the field of dilations, this “norm” function contains all the information about the local and infinitesimal behaviour of the distance d. We can see this from the fact that we can recover the re-scaled distance d^{x}_{\varepsilon} from this “norm”, with the help of the approximate difference (for this notion see on this blog the definition of approximate difference in terms of emergent algebras here, or go to point 3. from the post The origin of emergent algebras (part III)):

\rho_{\varepsilon}(\delta^{x}_{\varepsilon} u , \Delta^{x}_{\varepsilon}(u,v)) = d^{x}_{\varepsilon}(u,v)

(proof left to the interested reader) This identity shows that the uniform convergence of (x,u,v) \mapsto d^{x}_{\varepsilon}(u,v) to (x,u,v) \mapsto d^{x}(u,v), as \varepsilon goes to 0, is a consequence of the following pair of uniform convergences:

  • that of the function (x,u) \mapsto \rho_{\varepsilon}(x,u) which converges to (x,u) \mapsto d^{x}(x,u)
  • that of the pair (dilation, approximate difference) (x,u,v) \mapsto (\delta^{x}_{\varepsilon} u , \Delta^{x}_{\varepsilon}(u,v)) to (x,u,v) \mapsto (x, \Delta^{x}(u,v)); see how this pair appears from the normed groupoid formalism, for example in the post The origin of emergent algebras (part III).

With this definition of the “norm” function, I can now introduce the first quantity of interest, which measures the difference between the “norm” function at scale \varepsilon and the “norm” function at scale 0:

A_{\varepsilon}(x,u) = \rho_{\varepsilon}(x,u) - d^{x}(x,u)

The interpretation of this quantity is easy in the particular case of a riemannian space with dilations defined by the geodesic exponentials. In this particular case

A_{\varepsilon}(x,u) = 0

because the “norm” function \rho_{\varepsilon}(x,u) equals the distance d(x,u) (due to the definition of dilations with respect to the geodesic exponential).

In more general situations, for example in the case of a regular sub-riemannian space, we can’t define dilations in terms of geodesic exponentials (even if we may have geodesic exponentials at our disposal). The reason is that the geodesic exponential of a regular sub-riemannian manifold is not intrinsically defined as a function from the tangent of the geodesic at its starting point. That is because geodesics in regular sub-riemannian manifolds (at least those which are smooth in the classical sense, i.e. with respect to the differential manifold structure) are bound to have tangents only in the horizontal directions.

As another example, think about a sub-riemannian Lie group. Here, we may define a left-invariant dilation structure with the help of the Lie group exponential. In this case the quantity A_{\varepsilon}(x,u) is certainly not equal to 0, except in very particular cases, such as a riemannian compact Lie group with bi-invariant distance, where the geodesic and Lie group exponentials coincide.


2.   The second quantity is the one which is most interesting for defining (sectional like) curvature, let’s call it

B_{\varepsilon}(x,u,v) = d^{x}_{\varepsilon}(u,v) - d^{x}(u,v).


3. Finally, the third quantity of interest is a kind of measure of the convergence of (x,u,v) \mapsto (\delta^{x}_{\varepsilon} u , \Delta^{x}_{\varepsilon}(u,v)) to (x,u,v) \mapsto (x, \Delta^{x}(u,v)), but measured with the norms from the tangent spaces. Now, a bit of notation:

dif_{\varepsilon}(x,u,v) = (\delta^{x}_{\varepsilon} u , \Delta^{x}_{\varepsilon}(u,v)) for any three points x, u, v,

dif_{0}(x,u,v) = (x, \Delta^{x}(u,v))  for any three points x, u, v  and

g(v,w) = d^{v}(v,w) for any two points v, w.

With these notations I introduce the third quantity:

C_{\varepsilon}(x,u,v) = g( dif_{\varepsilon}(x,u,v) ) - g( dif_{0}(x,u,v) ).


The relation between these three quantities is the following:

Proposition.  B_{\varepsilon}(x,u,v) = A_{\varepsilon}(dif_{\varepsilon}(x,u,v)) + C_{\varepsilon}(x,u,v).
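Proof sketch (under the assumption, implicit in the normed groupoid formalism, that the limit distance satisfies d^{x}(u,v) = d^{x}(x, \Delta^{x}(u,v)), so that g(dif_{0}(x,u,v)) = d^{x}(u,v)): using the identity \rho_{\varepsilon}(dif_{\varepsilon}(x,u,v)) = d^{x}_{\varepsilon}(u,v) from point 1,

```latex
B_{\varepsilon}(x,u,v) = d^{x}_{\varepsilon}(u,v) - d^{x}(u,v)
= \big[ \rho_{\varepsilon}(dif_{\varepsilon}(x,u,v)) - g(dif_{\varepsilon}(x,u,v)) \big]
+ \big[ g(dif_{\varepsilon}(x,u,v)) - g(dif_{0}(x,u,v)) \big]
= A_{\varepsilon}(dif_{\varepsilon}(x,u,v)) + C_{\varepsilon}(x,u,v)
```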


Suppose that we know the following estimates:

A_{\varepsilon}(x,u) = \varepsilon^{\alpha} A(x,u) + higher order terms, with A(x, u) \not = 0 and \alpha > 0,

B_{\varepsilon}(x,u,v) = \varepsilon^{\beta} B(x,u,v) + higher order terms, with B(x,u,v) \not = 0 and \beta > 0,

C_{\varepsilon}(x,u,v) = \varepsilon^{\gamma} C(x,u,v) + higher order terms, with C(x,u,v) \not = 0 and \gamma > 0.

Lemma. Let  us sort in increasing order the list of the values \alpha, \beta, \gamma and denote the sorted list by a, b, c. Then a = b.

The proof is easy. The equality from the Proposition tells us that the absolute values of A_{\varepsilon}, B_{\varepsilon} and C_{\varepsilon} can be taken as the edges of a triangle. Suppose then that a < b \leq c, use the estimates from the hypothesis and divide by \varepsilon^{a} in one of the three triangle inequalities, then let \varepsilon go to 0 in order to arrive at a contradiction 0 < 0.


The moral of the lemma is that there are at most two different coefficients in the list \alpha, \beta, \gamma. The coefficient \beta is called the “curvdimension”. In the next post I shall explain why, in the case of a sub-riemannian Lie group, the coefficient \gamma is related to the halfbracket. Moreover, we shall see that in the case of sub-riemannian Lie groups all three coefficients are equal, therefore the infinitesimal behaviour of the halfbracket determines the curvdimension.