FoM: denied publication

After the 15-month delay I experienced with G&T, which was told in the post “Anonymous peer-review after 15 months“, I decided to submit to FoM the article arXiv:0907.1520 “Emergent algebras”, even if this decision seemed to go against the view I hold, namely that gold OA is not the right OA. So I took the risk of disappointing people whose views I respect, like Orr Shalit with his “Worse than Elsevier, worse than …“. Let me explain why:

  1. The article “Emergent algebras” deserves a “stamp of quality”. Providing such stamps is one of the roles of FoM, according to Timothy Gowers. So I went for such a stamp, because really that’s all this article needs. Moreover,
  2. I highly respect the mathematicians who initiated FoM and I would be very glad to hear their opinion on this piece of research, which looks like it does not find its place (because it’s revolutionary, I say, but hey, I’m the author, I am allowed to say this).
  3. I was expecting to get a detailed, useful, fair review from this new journal started by the people mentioned at point 2.
  4. I was curious what would happen, whether it would matter that I had publicly expressed my dislike for a new gold OA journal, in posts like these:  Quick reaction on Gowers’ “Why I’ve joined the bad guys” and Second thoughts on Gowers’ “Why I’ve joined the bad guys”. I was NOT expecting it to matter, after all math is math and opinions are opinions. But I was still a bit curious.

Today, 30 April 2013, I received an e-mail from FoM. In a sense, I got my stamp of quality, and I express my thanks for it. I reproduce the message:

30-Apr-2013

Dear Dr. Buliga:

I write you in regards to manuscript # Sigma-2013-0027 entitled
“Emergent algebras” which you submitted to the Forum of Mathematics,
Sigma.

Unfortunately, your manuscript has been denied publication in Forum of
Mathematics. Although it is an interesting line of investigation,
based on advice from experts in the area, it was felt that the results
are not compelling enough for publication in Sigma.

Thank you for considering Forum of Mathematics, Sigma for the
publication of your research. I hope the outcome of this specific
submission will not discourage you from the submission of future
manuscripts.

Sincerely,
Dr. Bruce Kleiner
Forum of Mathematics, Sigma

Editors: Dr. Simon Donaldson, Dr. Bruce Kleiner, Prof. Curtis McMullen

As I said, if people like Donaldson, Kleiner and McMullen say that it’s “an interesting line of investigation”, what more could I ask? Ah, maybe a referee report? As I was expecting, see point 3 above. (As for “experts in the area”, I would like to meet them, because it’s a new area; I invented it.) Or at least, which parsing is correct: “although it is an interesting line of investigation, based on advice from experts in the area”, or “based on advice from experts in the area, it was felt that …”?

I was more intrigued by an expression which I had never encountered before in a message from a publisher: “your manuscript has been denied publication in Forum of Mathematics”.

It’s a coincidence, it may have no meaning, but I can’t help noticing that in the morning I posted “Research banana republic“, where I take the side of Mike Taylor’s post “Predatory publishers: a real problem“. In that post Mike Taylor criticizes, among others, Cambridge University Press, which is the publisher of FoM. In the evening I got the previously written message from FoM concerning the “denied” publication of my article. But, but … it’s math, not politics! Nah, it has to be a coincidence.

Geometric Ruzsa inequality on groupoids and deformations

This is a continuation of “Geometric Ruzsa triangle inequalities and metric spaces with dilations”. Proposition 1 from that post may be applied to groupoids. Let’s see what we get.

Definition 1. A groupoid is a set G, whose elements are called arrows, together with a partially defined composition operation

(g,h) \in G^{(2)} \subset G \times G \mapsto gh \in G

and a unary “inverse” operation:

g \in G \mapsto g^{-1} \in G

which satisfy the following:

  •  (associativity of arrow composition) if (a,b) \in G^{(2)} and (b,c) \in G^{(2)}  then (a, bc) \in G^{(2)} and  (ab, c) \in G^{(2)} and moreover  we have a(bc) = (ab)c,
  • (inverses and objects) for any a \in G, (a,a^{-1}) \in G^{(2)} and (a^{-1}, a) \in G^{(2)}; we define the origin of the arrow a to be \alpha(a) = a^{-1} a and the target of a to be \omega(a) = a a^{-1}; origins and targets of arrows form the set of objects of the groupoid, Ob(G),
  • (inverses again) if (a,b) \in G^{(2)} then a b b^{-1} = a and a^{-1} a b = b.

____________________

The definition is a bit unnecessarily restrictive, in the sense that I take groupoids to have sets of arrows and sets of objects. Of course there exist larger groupoids, but for the purposes of this post we don’t need them.

The most familiar examples of groupoids are:

  • the trivial groupoid associated to a non-empty set X is G = X \times X, with composition (x,y) (y,z) = (x,z) and inverse (x,y)^{-1} = (y,x). It is straightforward to notice that \alpha(x,y) = (y,y) and \omega(x,y) = (x,x), which is a way to say that the set of objects can be identified with X, the origin of the arrow (x,y) is y and the target of (x,y) is x (see the sketch after this list).
  • any group G is a groupoid, with the arrow operation being the group multiplication and the inverse being the group inverse. Let e be the neutral element of the group G. Then for any “arrow” g \in G we have \alpha(g) = \omega(g) = e, therefore this groupoid has only one object, e. The converse is true, namely groupoids with only one object are groups.
  • take a group G which acts on the left on the set X, with the action (g,x) \in G \times X \mapsto gx \in X such that g(hx) = (gh)x and ex = x. Then G \times X is a groupoid with operation (h, gx) (g,x) = (hg, x) and inverse (g,x)^{-1} = (g^{-1}, gx). We have \alpha(g,x) = (e,x), which can be identified with x \in X, and \omega(g,x) = (e,gx), which can be identified with gx \in X. This groupoid therefore has X as its set of objects.
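To make the first example concrete, here is a minimal Python sketch (all names are my own, for illustration only) of the trivial groupoid on a three-element set, with a spot-check of the axioms of Definition 1:

```python
# The trivial groupoid on X: arrows are pairs (x, y), and the composition
# (x,y)(y,z) = (x,z) is defined exactly when the middle letters match.

X = {"p", "q", "r"}
arrows = {(x, y) for x in X for y in X}

def compose(g, h):
    # returns the arrow gh if (g, h) is a composable pair, None otherwise
    (x, y), (y2, z) = g, h
    return (x, z) if y == y2 else None

def inverse(g):
    x, y = g
    return (y, x)

def alpha(g):   # origin alpha(g) = g^{-1} g = (y, y), identified with y
    return compose(inverse(g), g)

def omega(g):   # target omega(g) = g g^{-1} = (x, x), identified with x
    return compose(g, inverse(g))

# spot-check the axioms of Definition 1
for g in arrows:
    assert alpha(g) == (g[1], g[1]) and omega(g) == (g[0], g[0])
    for h in arrows:
        gh = compose(g, h)
        if gh is not None:                       # (g, h) in G^{(2)}
            assert compose(gh, inverse(h)) == g  # g h h^{-1} = g
            assert compose(inverse(g), gh) == h  # g^{-1} g h = h
```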

For the relations between groupoids and dilation structures see arXiv:1107.2823. The case of the trivial groupoid, which will be relevant soon, has been discussed in the post The origin of emergent algebras (part III).

____________________

The following operation is well defined for any pair of arrows (g,h) \in G \times G with \alpha(g) = \alpha(h):

\Delta(g,h) = h g^{-1}

Let A, B, C \subset G be three subsets of a groupoid G with the property that there exists an object e \in Ob(G) such that for any arrow g \in A \cup B \cup C we have \alpha(g) = e. We can define the sets \Delta(C,A), \Delta(B,C) and \Delta(B,A).

Let us now define the hard functions f: \Delta(C,A) \rightarrow C and g: \Delta(C,A) \rightarrow A with the property: for any z \in \Delta(C,A) we have

(1)     \Delta(f(z), g(z)) = z

(The name “hard functions” comes from the fact that \Delta should be seen as an easy operation, while the decomposition (1) of an arrow into a “product” of another two arrows should be seen as hard.)

The following is a corollary of Proposition 1 from the post  Geometric Ruzsa triangle inequalities and metric spaces with dilations:

Corollary 1.  The function i: \Delta(C,A) \times B \rightarrow \Delta(B,C) \times \Delta(B,A)  defined by

i(z,b) = (f(z) b^{-1} , g(z) b^{-1})

is injective. In particular, if the sets A, B, C are finite then

\mid \Delta(C,A) \mid \mid B \mid \leq \mid \Delta(B,C) \mid \mid \Delta(B,A) \mid .

____________________

Proof. With the hypothesis that all arrows from the three sets have the same origin, we notice that \Delta satisfies conditions 1 and 2 from Proposition 1, that is:

  1. \Delta( \Delta(b,c), \Delta(b,a)) = \Delta(c,a)
  2. the function b \mapsto \Delta(b,a) is injective.

As a consequence, the proof of Proposition 1 may be applied verbatim. For the convenience of the readers, I rewrite the proof as a recipe for recovering (z, b) from i(z,b). The following figure is useful.

[Figure: bellaiche_5]

We have f(z) b^{-1} and g(z) b^{-1} and we want to recover z and b. We use (1) and property 1 of \Delta in order to recover z. With z comes f(z). From f(z) and f(z) b^{-1} we recover b, via property 2 of the operation \Delta. That’s it.
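Since Corollary 1 is a purely finite statement, it can be sanity-checked by brute force. Here is a minimal Python sketch (mine, not from the post) for the cyclic group Z/8 viewed as a groupoid with a single object, where \Delta(g,h) = h g^{-1} becomes subtraction mod 8:

```python
# Brute-force sanity check of Corollary 1 on Z/8 (illustration only).

import random

N = 8

def delta_set(P, Q):
    # Delta(P, Q) = { Delta(p, q) = q p^{-1} : p in P, q in Q }
    return {(q - p) % N for p in P for q in Q}

random.seed(0)
for _ in range(1000):
    A, B, C = (set(random.sample(range(N), random.randint(1, N)))
               for _ in range(3))
    lhs = len(delta_set(C, A)) * len(B)
    rhs = len(delta_set(B, C)) * len(delta_set(B, A))
    assert lhs <= rhs, (A, B, C)
print("|Delta(C,A)||B| <= |Delta(B,C)||Delta(B,A)| held on 1000 random triples")
```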

____________________

There are now some interesting things to mention.

Fact 1. The proof of Proposition 2 from the Geometric Ruzsa post is related to this. Indeed, in order to properly understand what is happening, please read again The origin of emergent algebras (part III). There you’ll see that a metric space with dilations can be seen as a family of deformations of the trivial groupoid. In the following I took one of the figures from the “origin III” post and modified it a bit.

[Figure: bellaiche_4]

Under the deformation of arrows given by \delta_{\varepsilon}(y,x) = (\delta^{x}_{\varepsilon} y , x), the operation \Delta((z,e),(y,e)) becomes the red arrow

(\Delta^{e}_{\varepsilon}(z,y), \delta^{e}_{\varepsilon} z)
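In formulas (my own unpacking, reconstructed from the dilation structures formalism of arXiv:0704.0427 rather than read off the figure), the point part of the red arrow is the approximate difference

\Delta^{e}_{\varepsilon}(z,y) = \delta^{\delta^{e}_{\varepsilon} z}_{\varepsilon^{-1}} \delta^{e}_{\varepsilon} y

which converges, as \varepsilon \rightarrow 0, to the difference operation in the tangent space at e.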

The operation acting on points (not on arrows of the trivial groupoid) which appears in Proposition 2 is \Delta^{e}_{\varepsilon}(z,y), but Proposition 2 does not follow straightforwardly from Corollary 1 of this post. That is because Proposition 2 uses only targets of arrows, so the information at our disposal is less than in Corollary 1. This is supplemented by the separation hypothesis of Proposition 2, which works like this. If we deform the operation \Delta on the trivial groupoid by using dilations, then we mess up the first image of this post, because the deformation keeps the origins of arrows but does not keep the targets. So we could apply the proof of Corollary 1 directly to the deformed groupoid, but the information available to us consists only in the targets of the relevant arrows, not the origins. That is why we use the separation hypothesis in order to “move” each unknown arrow to another one with the same target, but with origin in e. The proof then proceeds as previously.

In this way, we obtain a statement about algebraic operations (like additions, see Fact 2) from the trivial groupoid operation.

Fact 2. It is not mentioned in the “geometric Ruzsa” post, but the geometric Ruzsa inequality contains the classical inequality, as well as its extension to Carnot groups. Indeed, it is enough to apply it to particular dilation structures, like that of a real vector space, or that of a Carnot group.

Fact 3. Let’s see what Corollary 1 says in the particular case of a trivial groupoid. In this case the operation \Delta is trivial:

\Delta((c,e), (a,e)) = (a,c)

and the “hard functions” are trivial as well:

f(a,c) = (c,e) and g(a,c) =(a,e)

The conclusion of Corollary 1 is trivial as well, because \mid \Delta(C,A) \mid = \mid C \mid \mid A \mid (and so on, as unpacked below), therefore the conclusion is

\mid C \mid \mid A \mid \mid B \mid \leq \mid B \mid^{2} \mid A \mid \mid C \mid
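To spell out the count (my own unpacking of the “and so on”): arrows with origin e in the trivial groupoid have the form (x,e), so after identifying each of the three sets with a subset of X we get

\Delta(C,A) = \{ (a,e) (c,e)^{-1} : a \in A, c \in C \} = \{ (a,c) : a \in A, c \in C \}

which has cardinality \mid A \mid \mid C \mid, and similarly for \Delta(B,C) and \Delta(B,A).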

However, by the magic of deformations provided by dilation structures, from this uninteresting “trivial groupoid Ruzsa inequality” we get the more interesting original one!

Research banana republic

Think about universities as governments, ruling over researchers and their virtual children, the students. Think about research results as bananas. The “universitary” governments rule that the only good bananas are those accepted by publishers (mainly private entities, or even ones intimately associated with universities). In exchange for good bananas the researchers get vanity points, which they exchange for university positions or grant funds. They feed their virtual children, the students, some of the good bananas, namely their own published books, or published books (validated bananas) from researchers of another, more prestigious university. These books, produced by the researchers of one university, are bought from a publisher by another university’s library, by default.

It’s a banana republic:

a banana republic is a country operated as a commercial enterprise for private profit, effected by a collusion between the State and favoured monopolies, in which the profit derived from the private exploitation of public lands is private property, while the debts incurred thereby are a public responsibility.

State = universities

Favoured monopoly = publisher

This post is triggered by Mike Taylor’s post “Predatory publishers: a real problem“.

__________________

See also: Traditional publishing works because academics support it.

Towards geometric Plünnecke graphs

This post is related to “Geometric Ruzsa triangle inequalities and metric spaces with dilations“. This time the effort goes into understanding the article arXiv:1101.2532, Plünnecke’s Inequality, by Giorgis Petridis, but this post is only a first step towards a geometric approach to Plünnecke’s inequality in spaces with dilations (it will eventually be applied to Carnot groups). Here I shall define a class of decorated binary trees and a notion of closeness.

I shall use binary decorated trees and the moves R1a and R2a, like in the post “A roadmap to computing with space“:

[Figure: roadmap_4]

To these moves I add the “linearity moves”:

[Figure: roadmap_6]

Definition 1. These moves act on the set T(X) of binary trees with nodes decorated with two colours (black and white) and leaves decorated with elements of the infinite set X of “variable names”. I shall denote such trees by A, B, C … and elements of X by x, y, z, u, v, w …

The edges of the trees are oriented upward. By convention we admit X to be a subset of T(X), thinking about x \in X as an edge pointing upwards which is also a leaf decorated with x.

The moves act locally, i.e. they can be applied to any portion of a tree from T(X) which looks like one of the patterns in the moves, with the understanding that the rest of the respective tree is left unchanged.
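Since the moves themselves live in the figures, here is only a hedged Python sketch (all names mine) of the raw data structure behind T(X): binary trees with two-coloured internal nodes and X-decorated leaves. The moves R1a, R2a and the linearity moves would then act as local rewrites on such terms, with the patterns given by the figures above.

```python
# Hedged sketch of the data structure underlying T(X) (illustration only;
# the moves are the local rewrite rules drawn in the figures).

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Leaf:
    name: str            # a variable name, i.e. an element of X

@dataclass(frozen=True)
class Node:
    colour: str          # "black" or "white"
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Node]

# by convention a leaf x in X is itself a tree, as in Definition 1
x, y = Leaf("x"), Leaf("y")
example = Node("white", x, Node("black", y, x))
```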

____________________

Definition 2.  The class of finite trees  FinT(X) \subset T(X) is the smallest subset of T(X) with the  properties:

  • X \subset FinT(X),
  • if A, B \in FinT(X) then A \circ B \in FinT(X)  , where A \circ B is the tree

[Figure: roadmap_7]

  • if A, B, C \in FinT(X) then Sum(A,B,C) \in FinT(X), where Sum(A,B,C) is the tree

[Figure: roadmap_8]

  • if A \in FinT(X) and we can pass from A to B by one of the moves then B \in FinT(X).

____________________

The class of finite trees is also closed under other operations.

Proposition 1.  If A, B, C \in T(X)  then

[Figure: roadmap_9]

Let us denote by Dif(A,B,C) the tree from the LHS of the first row, by Q(A,B,C) the tree from the middle of the first row and by Inv(A,B) the tree from the LHS of the second row.

Corollary 1.   If A, B, C \in FinT(X) then Dif(A,B,C) \in FinT(X), Q(A,B,C) \in FinT(X) and Inv(A,B) \in FinT(X).

Proof: If A, B, C \in FinT(X) then Sum(A,B,C) \in FinT(X). By Proposition 1 the trees Dif(A,B,C) and Q(A,B,C) can be transformed into Sum(A,B,C), therefore they belong to FinT(X). Also, by the same proposition, the tree Inv(A,B) can be transformed into Dif(A,B,A), which we have proved belongs to FinT(X). Therefore Inv(A,B) \in FinT(X).

____________________

I now define when two graphs are close.

Definition 3. Two graphs  A, B \in FinT(X)  are close, denoted by A \sim B, if there is C \in FinT(X) such that B can be moved into A \circ C.

Proposition 3.  The closeness relation is an equivalence.

Proof: By the move R1a we have A \sim A for any A \in FinT(X). Take now A \sim B, therefore there is C \in FinT(X) such that B can be moved into A \circ C. We know that D = Inv(A,C) \in FinT(X), by Corollary 1. On the other hand, one can check that A can be moved into B \circ D, therefore B \sim A.

Finally, let A \sim B, so there’s C \in FinT(X) such that B can be moved into A \circ C, and let B \sim D, so there’s E \in FinT(X) such that D can be moved into B \circ E. Now let F \in FinT(X) be given by F = Sum(A,C,E). Check that D can be moved into A \circ F, therefore A \sim D.

Question: do you think this proof is easier to understand than an equivalent proof given by drawings?

A roadmap to computing with space

I don’t know yet what exactly “computing with space” is, but I almost know. What follows is a description of the road to this goal, along with an invitation to join.

Before starting this description, maybe it is better to write what it is NOT about. I arrived at the idea of “computing with space” by branching off from a beautiful geometry subject: sub-riemannian geometry. My main interest in this subject is to give an intrinsic description (i.e. one using a minimal bag of tools) of the differential structure of a sub-riemannian space. The fascinating part is that these spaces, although locally topologically trivial, have a differential calculus which is not amenable to the usual differential calculus on manifolds, in the same way as the hyperbolic space, say, is not geometrically a kind of euclidean space. I consider very important, and not yet well known, the discovery that there are spaces on which we can define an intrinsic differential calculus fundamentally different from the usual one (locally, not globally, as is the case with manifolds admitting different GLOBAL differential structures, although at the local level they are just pieces of an euclidean space). But in this post I shall NOT explain this. The road to computing with space branches off from this one; however, there are paths, represented by mathematical hints, criss-crossing both roads.

Let’s start.

1. In “Dilatation structures I. Fundamentals” I propose, in section 4 “Binary decorated trees and dilatations”, a formalism for making easy various calculations with dilation structures (or “dilatation structures”, as I called them at the time; notice that dilation vs dilatation is a battle won by dilations in math, but by dilatation in other fields, although the historically correct word is dilatations).

This formalism works with moves acting on binary decorated trees, with the leaves decorated with elements of a metric space. It was extremely puzzling that the formalism in fact worked without needing to know which metric space I used. It was also amazing to me that reasoning with moves acting on binary trees gave proofs of generalizations of results involving elaborate calculations with pseudo-differential operators and the like. On closer inspection it looked like somewhere in the background there is an abstract nonsense machine which is just applied to this particular case of metric spaces.

Here is an example of the formalism. The moves are (I use the names from graphic lambda calculus):

 

[Figure: roadmap_4]

 

Define the following tree (and think of it as the graphical representation of an operation):

[Figure: roadmap_2]

Think that it represents u+v, with respect to the base point x.  Then we can prove that

[Figure: roadmap_5]

which is a kind of associativity relation. The proof by binary trees has nothing to do with sub-riemannian geometry, right? An indirect confirmation is that the same formalism works very well on the ultrametric space given by the boundary of the infinite dyadic tree; see Self-similar dilatation structures and automata.
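For readers without the figures at hand, here is my reconstruction of the operation represented by the tree above; it is the approximate sum of the dilation structures formalism, written from memory of the cited paper, so treat it as a hedged pointer rather than a quote:

\Sigma^{x}_{\varepsilon}(u,v) = \delta^{x}_{\varepsilon^{-1}} \delta^{\delta^{x}_{\varepsilon} u}_{\varepsilon} v

In a vector space, with \delta^{x}_{\varepsilon} u = x + \varepsilon(u-x), this equals u + v - x - \varepsilon(u-x), which converges as \varepsilon \rightarrow 0 to u + v - x, i.e. to “u+v with respect to the base point x”.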

As a conclusion for this part, it seemed that  in order to unravel the abstract nonsense machine from the background, I needed to:

  • find a way to get rid of mentioning metric spaces, so in particular to get rid of decorations of the leaves of binary trees by points in some space (or maybe use these decorations as a kind of names),
  • express this proof based on moves applied to binary trees as a computation (i.e. as something like a reduction procedure).

Otherwise said, there was a need for a kind of logic, but which one?

Scale is a place in the brain

… and dilation structures might exist, physically, in some parts of the brain. (See also section 2.4 in “Computing with space …”, arXiv:1103.6007.) I will surely come back to this subject after learning more, but here are some facts.

The primary source of this post is the article “From A to Z: a potential role for grid cells in spatial navigation” by Caswell Barry and Daniel Bush, Neural Syst Circuits 2012; 2:6.

From the wikipedia entry for grid cell:

A grid cell is a type of neuron that has been found in the brains of rats, mice, bats, and monkeys; and it is likely to exist in other animals including humans.[1][2][3][4][5] In a typical experimental study, an electrode capable of recording the activity of an individual neuron is implanted in the cerebral cortex of a rat, in a part called the dorsomedial entorhinal cortex, and recordings are made as the rat moves around freely in an open arena. For a grid cell, if a dot is placed at the location of the rat’s head every time the neuron emits an action potential, then as illustrated in the adjoining figure, these dots build up over time to form a set of small clusters, and the clusters form the vertices of a grid of equilateral triangles. This regular triangle-pattern is what distinguishes grid cells from other types of cells that show spatial firing correlates. By contrast, if a place cell from the rat hippocampus is examined in the same way (i.e., by placing a dot at the location of the rat’s head whenever the cell emits an action potential), then the dots build up to form small clusters, but frequently there is only one cluster (one “place field”) in a given environment, and even when multiple clusters are seen, there is no perceptible regularity in their arrangement.

So, there are place cells and grid cells. Here is what wikipedia says about place cells:

Place cells are neurons in the hippocampus that exhibit a high rate of firing whenever an animal is in a specific location in an environment corresponding to the cell’s “place field”. These neurons are distinct from other neurons with spatial firing properties, such as grid cells, border cells, barrier cells,[1] conjunctive cells,[2] head direction cells, and spatial view cells. In the CA1 and CA3 hippocampal subfields, place cells are believed to be pyramidal cells, while those in the dentate gyrus are believed to be granule cells.[3]

The behaviour of these cells is explained in Figure 1 from the mentioned article by Barry and Bush:

[Figure 1 from Barry and Bush 2012: 2042-1001-2-6-1]

(Copyright ©2012 Barry and Bush; licensee BioMed Central Ltd.)

The caption of Figure 1 reads:

Single unit recordings made from the hippocampal formation. a) CA1 place cell recorded from a rat. The left-hand figure shows the raw data: the black line being the animal’s path as it foraged for rice in a 1 m2 arena for 20 minutes; superimposed green dots indicating the animal’s location each time the place cell fired an action potential. Right, the same data processed to show firing rate (number of spike divided by dwell time) per spatial bin. Red indicates bins with high firing rate and blue indicates low firing rate, white bins are unvisited, and peak firing rate is shown above the map. b) Raw data and corresponding rate map for a single mEC grid cell showing the multiple firing fields arranged in a hexagonal lattice. c) Three co-recorded grid cells, the center of each firing field indicated by a cross with different colors corresponding to each cell. The firing pattern of each cell is effectively a translation of the other co-recorded cells as shown by superposition of the crosses (right). d) Changes made to the geometry of a familiar environment cause grid cell firing to be distorted (rescale) demonstrating that grid firing is, at least, partially controlled by environmental cues, in this case the location of the arena’s walls. Raw data are shown on the left and the corresponding rate maps on the right. The rat was familiar with the 1 m2 arena (outlined in red). Changing the shape of the familiar arena by sliding the walls past each other produced a commensurate change in the scale of grid firing. For example, shortening the x-axis to 70 cm from 100 cm (top right) caused grid firing in the x-axis to reduce to 78% of its previous scale, while grid scale in the Y-axis was relatively unaffected. Numbers next to the rate maps indicate the proportional change in grid scale measured along that axis (figure adapted from reference [28]).

And now, the surprise: scale is indeed a place in the brain. Let’s see Figure 2 from the same article:

[Figure 2 from Barry and Bush 2012: 2042-1001-2-6-2]

(Copyright ©2012 Barry and Bush; licensee BioMed Central Ltd.)

The caption of this figure is:

Grid scale increases along a dorso-ventral gradient in the mEC. Two grid cells recorded from the same animal but at different times are shown, both cells were recorded in a familiar 1 m2 arena. Approximate recording locations in the mEC are indicated. The more ventral cell exhibits a considerably larger size of firing fields and distance between firing fields than the dorsal cell.

… and, from the article,  (boldfaced by me):

The scale of the grid pattern, measured as the distance between neighboring peaks, increases along the dorso-ventral mEC gradient, mirroring a similar trend in hippocampal place fields [15,25]. The smallest, most dorsal, scale is typically 20 to 25 cm in the rat, reaching in excess of several meters in the intermediate region of the gradient [15,26] (Figure 2). This may explain how this remarkable pattern was missed by early electrophysiology studies, which targeted ventral mEC and found only broadly tuned spatial firing (for example, [27]). Interestingly, grid scale increases in discontinuous increments and the increment ratio, at least between the smaller scales, is constant [28]. Grid cells recorded from the same electrode, which are, therefore, proximate in the brain, typically have a common scale and orientation but a random offset relative to each other and the environment [15]. As such, their firing patterns are effectively identical translations of one another and a small number of cells will ‘tile’ the complete environment (Figure 1c). It also appears that grids of different scale recorded ipsilaterally have a common orientation, such that the hexagonal arrangement of their firing fields share the same three axes, albeit with some localized distortions [15,28,29].

That’s just amazing!

Concerning the hypothesis (Hafting, T.; Fyhn, M.; Molden, S.; Moser, M.-B.; Moser, E. I. (2005). “Microstructure of a spatial map in the entorhinal cortex”. Nature 436 (7052): 801–806. Bibcode:2005Natur.436..801H. doi:10.1038/nature03721) that the grid cells’ firing fields encode the abstract structure of an euclidean space, I think it does not follow from the observations. My argument is that the translation-invariance (of firing patterns, in this particular case) emerges by the mechanism of dilation structures, and it is, at least up to my present understanding, evidence for the existence of these structures in the brain. But of course, there is much to learn and think about.

Curvature and halfbrackets, part III

I continue from “Curvature and halfbrackets, part II“. This post is dedicated to applying the previously introduced notions to the case of a sub-riemannian Lie group.

_______________

1. I start with the definition of a sub-riemannian Lie group. If you look in the literature, the first reference to “sub-riemannian Lie groups” which I am aware of is the series Sub-riemannian geometry and Lie groups arXiv:math/0210189, part II arXiv:math/0307342, part III arXiv:math/0407099. However, that work predates the introduction of dilation structures, therefore there is a need to properly define this object within the current state of the theory.

Definition 1. A sub-riemannian Lie group is a locally compact topological group G with the following supplementary structure:

  • together with the dilation structure coming from its one-parameter groups (by the Montgomery-Zippin construction), it has a group norm which induces a tempered dilation structure,
  • it has a left-invariant dilation structure (with dilations \delta^{x}_{\varepsilon} y = x \delta_{\varepsilon}(x^{-1}y) and group norm denoted by \| x \|) which, paired with the tempered dilation structure mentioned previously, satisfies the hypotheses of Theorem 12.9 of “Sub-riemannian geometry from intrinsic viewpoint”, arXiv:1206.3093.

Remarks:

  1. there is no assumption that the tempered group norm comes from a left-invariant Riemannian distance on the group. For this reason, some people use the name sub-finsler, arXiv:1204.1613, instead of sub-riemannian, but I believe this is not a serious distinction, because the structure of a scalar product which induces the distance is simply not needed for understanding sub-riemannian Lie groups.
  2. by Theorem 12.9, it follows that the left-invariant field of dilations induces a length dilation structure. I shall use this further. Length dilation structures are maybe a more useful object than plain dilation structures, because they explain how the length functional behaves at different scales, which is much more detailed information about the microscopic structure of a length metric space than just how the distance behaves at different scales.

This definition looks a bit mysterious, unless you read the course notes cited inside it. Probably, when I find the interest to pursue it, it would be really useful to just apply, step by step, the constructions from arXiv:1206.3093 to sub-riemannian Lie groups.

__________________

2. With the notations from the last post, I want to compute the quantities A, B, C. We already know that B is related to the curvature of G with respect to its sub-riemannian (sub-finsler, if you like that more) distance, as introduced previously via metric profiles. We also know that B is controlled by A and C. But let’s see the expressions of these three quantities for sub-riemannian Lie groups.

I denote by d(u,v) the left invariant sub-riemannian distance, therefore we have d(u,v) = \| u^{-1}v\|.

Now, \rho_{\varepsilon}(x,u) = \| x^{-1} u \|_{\varepsilon}, where \varepsilon \| u \|_{\varepsilon} = \| \delta_{\varepsilon} u \| by definition. Notice also that \Delta^{x}_{\varepsilon}(u,v) = (\delta^{x}_{\varepsilon} u ) ((u^{-1} x) *_{\varepsilon} (x^{-1} v)), where u *_{\varepsilon} v is the deformed group operation at scale \varepsilon, i.e. it is defined by the relation:

\delta_{\varepsilon} (u *_{\varepsilon} v) = (\delta_{\varepsilon} u) (\delta_{\varepsilon} v)
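To see the deformed operation in a concrete case, here is a small Python sketch (a toy example of mine, not from the post): the Heisenberg group with the isotropic dilations \delta_{\varepsilon}(x,y,z) = (\varepsilon x, \varepsilon y, \varepsilon z), which are not group automorphisms, so u *_{\varepsilon} v genuinely depends on \varepsilon. (With the graded dilations \delta_{\varepsilon}(x,y,z) = (\varepsilon x, \varepsilon y, \varepsilon^{2} z), which are automorphisms, one would get u *_{\varepsilon} v = uv for all \varepsilon.)

```python
# Toy illustration of the deformed operation
#   u *_eps v = delta_{1/eps}( delta_eps(u) . delta_eps(v) )
# in the Heisenberg group, with ISOTROPIC dilations (not automorphisms).

def mult(u, v):
    # Heisenberg multiplication on R^3
    x1, y1, z1 = u
    x2, y2, z2 = v
    return (x1 + x2, y1 + y2, z1 + z2 + (x1 * y2 - y1 * x2) / 2)

def dil(eps, u):
    return tuple(eps * c for c in u)

def star(eps, u, v):
    return dil(1 / eps, mult(dil(eps, u), dil(eps, v)))

u, v = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
for eps in (1.0, 0.1, 0.01):
    # the third component is eps/2: as eps -> 0, the limit operation *_0
    # here is the abelian addition on R^3
    print(eps, star(eps, u, v))
```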

With all this, it follows that:

A_{\varepsilon}(x,u) = \rho_{\varepsilon}(x,u) - d^{x}(x,u) = \|x^{-1} u \|_{\varepsilon} - \| x^{-1} u \|_{0}

A_{\varepsilon}(\delta^{x}_{\varepsilon} u, \Delta^{x}_{\varepsilon}(u,v)) = \| (u^{-1} x) *_{\varepsilon} (x^{-1} v) \|_{\varepsilon} - \| (u^{-1} x) *_{\varepsilon} (x^{-1} v)\|_{0}.

A similar computation leads us to the expression for the curvature-related quantity

B_{\varepsilon}(x,u,v) = d^{x}_{\varepsilon}(u,v) - d^{x}(u,v) = \| (u^{-1}x) *_{\varepsilon} (x^{-1} v)\|_{\varepsilon} - \| (u^{-1}x) *_{0} (x^{-1}v)\|_{0}.

Finally,

C_{\varepsilon}(x,u,v) = \|(u^{-1} x) *_{\varepsilon} (x^{-1} v)\|_{0} - \|(u^{-1}x) *_{0} (x^{-1}v)\|_{0}

This last quantity is controlled by a halfbracket, via a norm inequality.

The expressions of A, B, C make it transparent that the curvature-related B is the sum of A and C; the telescoping identity below spells this out. In the next post I shall use the length dilation structure of the sub-riemannian Lie group in order to show that A is controlled by C, which in turn is controlled by the norm of a halfbracket. Then I shall apply all this to SO(3), as an example.
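For completeness, here is the telescoping identity behind that statement, using only the expressions above. Write w_{\varepsilon} = (u^{-1}x) *_{\varepsilon} (x^{-1}v) and w_{0} = (u^{-1}x) *_{0} (x^{-1}v); then

B_{\varepsilon}(x,u,v) = ( \| w_{\varepsilon} \|_{\varepsilon} - \| w_{\varepsilon} \|_{0} ) + ( \| w_{\varepsilon} \|_{0} - \| w_{0} \|_{0} ) = A_{\varepsilon}(\delta^{x}_{\varepsilon} u, \Delta^{x}_{\varepsilon}(u,v)) + C_{\varepsilon}(x,u,v)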