Tag Archives: mathematics

More experiments with Open Science

I still don’t know which format is better for Open Science. I’m long past the article format, for obvious reasons. Validation is a good word and concept, because it means you don’t have to rely entirely on the opinions of others, and that is how the world works. This is not the whole story, though.

I am very fortunate to be a mathematician, not a biologist or biochemist. Still, I long for a good format for Open Science, even if, as a mathematician, I don’t have the problems biologists or chemists have, namely loads and loads of experimental data and empirical approaches. I do have a world of my own to experiment with, where I do have loads of data and empirical constructs. My mind, my brain are real, and I could understand myself by using the tools of chemists and biologists to explore the outcomes of my research. Funny, right? I can look at myself from the outside.

That is why I chose not to jump directly to making Hydrogen, but instead to treat the chemlambda world, again, as a guinea pig for Open Science.

There are 427 well-written molecules in the chemlambda library of molecules on GitHub. There are 385 posts in the chemlambda collection on Google+, most of them with animations from simulations of those molecules. It is a world; how big is it?

It is easy to make, first, a one-page direct access to the chemlambda collection. It is more fun to build a phylogenetic tree of the molecules, based on their genes. That is what I am doing now, based on a work in progress.

Each molecule can be decomposed into “genes”, say by a sequencer program. Then one can use a distance between these genes, first to estimate how they cluster and later to build a phylogenetic tree.

Here is the first heatmap (using the edit distance between single occurrences of genes in molecules) of the 427 molecules.

[Screenshot-22: heatmap of the pairwise edit distances]

It is a screenshot, proving that my custom programs work 🙂 (one understands more by writing some scripts oneself than by taking ready-made tools from others, at least at this stage of research).
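For readers who want to play with this kind of computation, here is a minimal sketch in Python; the gene strings below are hypothetical placeholders for the output of the sequencer program, which is not reproduced here.

    # Minimal sketch: pairwise edit (Levenshtein) distances between
    # "gene" strings. The list below is a hypothetical placeholder
    # for the output of the sequencer program.

    def levenshtein(a, b):
        # classic dynamic programming edit distance
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    genes = ["AFRA", "AARL", "FOEL"]  # hypothetical gene codes
    matrix = [[levenshtein(g, h) for h in genes] for g in genes]
    for row in matrix:
        print(row)

A heatmap of such a matrix is then one call away in any plotting library.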

By using the edit distance I can map the explored chemlambda molecules. In the following image the 427 molecules from the library are represented as nodes, and for each pair of molecules at an edit distance of at most 20 there is a link. The nodes are in a central gravitational field, each node has the same charge, and the links between nodes act as springs.

[Screenshot-29: force-directed map of the molecules]

This is a screenshot of the result, showing clusters and the trees connecting them. Not very sophisticated, but enough to give a sense of the explored territory. In the curated collection, such a map would be useful for navigating through the molecules, as well as for giving ideas about which parts are less well explored. I have not yet made clear which parts of the map cover lambda terms, which cover quines, etc.
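Such a map can also be sketched with off-the-shelf tools; here is a hedged example using networkx, whose spring_layout implements a force-directed (Fruchterman-Reingold) algorithm close to the springs-and-charges description above. The distance matrix and molecule names are hypothetical placeholders.

    # Minimal sketch: molecules as nodes, a link when the edit
    # distance is at most 20, positions from a force-directed layout.
    import networkx as nx

    matrix = [[0, 12, 25], [12, 0, 18], [25, 18, 0]]   # hypothetical distances
    names = ["mol%d" % i for i in range(len(matrix))]  # hypothetical names
    G = nx.Graph()
    G.add_nodes_from(names)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if matrix[i][j] <= 20:
                G.add_edge(names[i], names[j])

    # spring_layout: links act as springs, nodes repel each other
    pos = nx.spring_layout(G, seed=0)
    print(pos)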

Moreover, I see structure! The 427 molecules are made of copies of 605 different linear “genes” (i.e. sticks with colored ends) and 38 ring-shaped ones. (It is easy to prove that lambda terms have no rings, when turned into molecules.) There are some interesting curved features visible in the edit distance of the sticks.

[Screenshot-23: edit distances between the sticks]

They don’t look random enough.

It is clear that a phylogenetic tree is within reach; what better next step, then, than connecting the G+ collection posts with the molecules they use, arranged along the tree…?

Can I discover which molecules are coming from lambda terms?

Can I discover how my mind worked when building these molecules?

Which are the neglected sides, the blind places?

I hope to be able to tell by the numbers.

Which brings me to the main subject of this post: what is a good format for an Open Science piece of research?

Right now I am between two variants, which may turn out not to be as different as they seem. An OS research vehicle could be:

  • like a viable living organism, literally,
  • or like a viable world, literally.

Only the future will tell which is which. Maybe both!

Mathematics, things, objects and brains

This is about my understanding of the post Mathematics and the Real by Louis Kauffman.

I start from this quote:

One might hypothesize that any mathematical system will find natural realizations. This is not the same as saying that the mathematics itself is realized. The point of an abstraction is that it is not, as an abstraction, realized. The set { { }, { { } } } has 2 elements, but it is not the number 2. The number 2 is nowhere “in the world”.

Recall that there are things and objects. Objects are real, things are discussions. Mathematics is made of things. In Kauffman’s example the number 2 is a thing and the set { { }, { { } } } is an object of that thing.

This is because an object is a reification of a thing. It is therefore real, but less interesting than the thing, because it is obtained by forgetting (much of) the discussion about it.

Reification is not a forgetful functor, though. There are interactions in both directions, from things to objects and from objects to things.

Indeed, in the rhino thing story, a living rhinoceros was brought to Europe. The sight of it was new. There were remnants of ancient discussions about this creature.

At the beginning that rhinoceros was neither an object nor a thing. For us it is a thing, though, and what I am writing about it is part of that thing.

From the discussion about that rhinoceros, a new thing emerged. A rhinoceros is an armoured beast which has a horn on its back which is used for killing elephants.

The rhino thing induced a wave of reifications: near the place where that rhino was first seen in Portugal, the Manueline Belém Tower was under construction at that moment. “The tower was later decorated with gargoyles shaped as rhinoceros heads under its corbels.[11]” [wiki dixit]

Dürer’s rhino is another reification of that discussion, and a vector of propagation of the discussion-thing. Yet another real effect, another object created by the rhino thing, is “Rinoceronte vestido con puntillas (1956) by Salvador Dalí in Puerto Banús, Marbella, Spain” [wiki dixit].

Let’s take another example. A discussion about the regulations on the sizes of cucumbers and carrots to be sold in the EU is a thing. It will produce a lot of reifications, in particular lots of correctly sized cucumbers and carrots, and also algorithms for selecting them. And trash, and algorithms for disposing of that trash. And other discussion-things, like: is it moral to dump the unfit carrots in the trash instead of using them to feed somebody in need? Or the algorithm which states that when you go to the market, if you want to find the least poisoned vegetables then you have to pick them among those which are not the right size.

The same with the number 2: it is a thing. One of its reifications is the set { { }, { { } } }. Once you start to discuss sets, though, you are back in the world of things.

And so on.

I argue that one should understand from the outset that mathematics is distinct from the physical. Then it is possible to get on with the remarkable task of finding how mathematics fits with the physical, from the fact that we can represent numbers by rows of marks |  , ||, |||, ||||, |||||, ||||||, … (and note that whenever you do something concrete like this it only works for a while and then gets away from the abstraction living on as clear as ever, while the marks get hard to organize and count) to the intricate relationships of the representations of the symmetric groups with particle physics (bringing us back ’round to Littlewood and the Littlewood Richardson rule that appears to be the right abstraction behind elementary particle interactions).

However, note that “the marks get hard to organize and count” shows only a limitation of the mark algorithm as an object, and there are two aspects of this:

  • to stir a discussion about this algorithm, thus to create a new thing
  • to recognize that such limitations are in fact limitations of our brains in isolation.

Because, I argue, brains (and their workings) are real. Thoughts are objects, in the sense used in this post! When we think about the number 2, there is a reification of our thinking about the number 2 in the brain.

Because brains, and thoughts, are made of an immense number of chemical reactions and electromagnetic interactions, there is no ghost in these machines.

Most of our brain's working is “low level”, that is, we find it hard to account even for its existence, we have trouble finding its meaning, and we are very limited in contemplating it as a whole, like a self-reflecting mirror. We have to discuss it, to make it into a thing, and to contemplate instead the derivative objects of this discussion.

However, following the path of this discussion, it may very well be that the brain-working thing can be understood as structure processing, with no need for external, high-level, semantic, information-based meaning.

After all, chemistry is structure processing.

A proof of principle argument for this is Distributed GLC.

The best part of Kauffman’s post is, in my opinion and as it should be, the end of it:

The key is in the seeing of the pattern, not in the mechanical work of the computation. The work of the computation occurs in physicality. The seeing of the pattern, the understanding of its generality occurs in the conceptual domain.

… which says, to my mind at least, that computation (in the usual input-output-with-bits-in-between sense) is just one of the derivative objects of the discussion about how brains (and anything) work.

Closer to the brain-working thing, including the understanding of those thoughts about mathematics, is the discussion about “computation” as structure processing.

UPDATE: A discussion started in this G+ post.

_________________________________________

Uniform spaces, coarse spaces, dilation spaces (II)

Background:
(1) Uniform spaces, coarse spaces, dilation spaces

(2) What’s the difference between uniform and coarse spaces (I)

I shall use the idea explained in (1), in the groupoid frame of (2), in the classical situation of the trivial groupoid over a space X. In this case the uniform and coarse structures are just the classical notions.

This idea says that all we need is a field of dilations. With such an object we may construct a uniformity, or a coarse structure, then ask that the field of dilations has some properties with respect to the said uniformity (or coarse structure). If the field of dilations has those properties then it transforms into an emergent algebra (in case 0 below; in the other case there is a new type of emergent algebra which appears in relation to coarse structures).

Remark. A measure can be constructed from a field of dilations; that's for later.

Fields of dilations. We have a set X, call it “space”. We have the commutative group (0,\infty) with the multiplication of reals as operation (other groups work as well; let's concentrate on this one).

A field of dilations is a function which associates to any x \in X and any \varepsilon \in (0,\infty)
an invertible transformation  \delta^{x}_{\varepsilon} : U(x) \subset X \rightarrow U(x,\varepsilon) \subset X which we call “the dilation based at x, of coefficient \varepsilon“.

We ask:

1. x \in U(x) for any point x \in X

2. for any fixed x \in X the function \varepsilon \in (0,\infty) \rightarrow \delta^{x}_{\varepsilon} is a representation of the commutative group (0,\infty).

3. fields of dilations come in two flavors (are there more?), depending on the choice between (0,1] and [1,\infty), two important sub-semigroups of (0,\infty).

(case 0) – If you choose (0,1] then we ask that for any \varepsilon \in (0,1] we have U(x,\varepsilon) \subset U(x), for any x.

This case is good for generating uniformities and  for the infinitesimal point of view.

(case \infty) – If your choice is [1,\infty) then we ask that for any \varepsilon \in [1,\infty) we have U(x) \subset U(x,\varepsilon), for any x.

This case is good for generating coarse structures and for  the asymptotic point of view.
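Before going further, here is a minimal numerical sketch of these axioms, a pure illustration assuming the model case of a normed real vector space with \delta^{x}_{\varepsilon} y = x + \varepsilon (y - x) (the example treated in detail in the background post (1)):

    # Minimal sketch: the model field of dilations on R^3,
    # delta^x_eps(y) = x + eps*(y - x), checked against the axioms
    # on random samples. Pure illustration, not part of the paper.
    import random

    def delta(x, eps, y):
        return [xi + eps * (yi - xi) for xi, yi in zip(x, y)]

    def norm(u):
        return sum(t * t for t in u) ** 0.5

    def close(u, v, tol=1e-9):
        return all(abs(a - b) < tol for a, b in zip(u, v))

    random.seed(0)
    for _ in range(100):
        x = [random.uniform(-1, 1) for _ in range(3)]
        y = [random.uniform(-1, 1) for _ in range(3)]
        eps, mu = random.uniform(0.1, 2.0), random.uniform(0.1, 2.0)
        # axiom 1 (in this model, x is fixed by every dilation based at x)
        assert close(delta(x, eps, x), x)
        # axiom 2: eps -> delta^x_eps is a representation of ((0,oo), *)
        assert close(delta(x, eps, delta(x, mu, y)), delta(x, eps * mu, y))
        # case 0: for eps in (0,1] points move closer to x, hence
        # U(x, eps) = delta^x_eps U(x) is contained in U(x)
        e = random.uniform(0.01, 1.0)
        d1 = norm([a - b for a, b in zip(delta(x, e, y), x)])
        assert d1 <= norm([a - b for a, b in zip(y, x)]) + 1e-12
    print("axioms hold on the samples")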

Starting from here, I’m afraid that my latex capabilities on wordpress are below what I need to continue.

Follow this working paper to see more. Thanks for comments!

PS. At some point, at least for the case of uniformities, I shall use “uniform refinements” and what I call the “topological derivative” from arXiv:0911.4619, which can be applied to give alternate proofs of rigidity results, without using Pansu's Rademacher theorem in Carnot groups.

Heisenberg group, convex analysis, alike or not?

I posted a question on mathoverflow, with the purpose of clarifying the feelings I have concerning the formal resemblance between the Heisenberg group and cyclically monotone operators from convex analysis. The question got an answer which made me realize something trivial (but sometimes we need somebody else to point out the obvious). However, I still think there is something worthy of further consideration here; that's why I post it.

Setting: Let S be a semigroup (i.e. a set with an associative operation and a neutral element e) and let (A,+) be a commutative group (with neutral element 0).

Let’s say that a semigroup extension of S with A is any operation on S \times A, of the form

(s,a)(s',a') = (s s' , a+a'+ \lambda(s,s'))

where \lambda: S \times S \rightarrow A is a function such that S \times A with the mentioned operation is a semigroup with neutral element (e,0). Obviously, the operation is encoded by the function \lambda, which has to satisfy:

\lambda(s s', s'') + \lambda(s,s') = \lambda(s, s' s'') + \lambda(s', s'')  (from associativity)

\lambda(e,s) = \lambda(s,e) = 0  (from the neutral element condition).

Here are two examples. Both are written in the setting of bipotentials, namely X and Y are topological, locally convex, real vector spaces of dual variables x \in X and y \in Y, with the duality product \langle \cdot , \cdot \rangle : X \times Y \rightarrow \mathbb{R}.

The spaces X, Y have topologies compatible with the duality product, in the sense that for any continuous linear functional on X there is an y \in Y which puts the functional into the form x \mapsto \langle x,y\rangle (respectively any continuous linear functional on Y has the form y \mapsto \langle x,y\rangle, for an x \in X).

Example 1: (Heisenberg group) Let S = X \times Y with the operation of addition of pairs of vectors and let A = \mathbb{R} with addition. We may define a Heisenberg group over the pair (X,Y) as H(X,Y) = S \times A with the operation

(x,y,a)(x',y',a') = (x+x', y+y', a+a'+ \langle x, y'\rangle - \langle x', y \rangle)
Fact: There is no injective morphism from (X\times Y, +) to H(X,Y).
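To make Example 1 concrete, here is a minimal numerical sketch, my own illustration assuming X = Y = \mathbb{R}^{n} with the standard scalar product as duality product; it implements the operation above and checks associativity (equivalently, the cocycle condition for \lambda) on random samples.

    # Minimal sketch: Heisenberg group H(R^n, R^n) with the operation
    # (x,y,a)(x',y',a') = (x+x', y+y', a+a'+<x,y'>-<x',y>).
    import random

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def lam(s, t):
        # the function lambda((x,y),(x',y')) = <x,y'> - <x',y>
        (x, y), (xp, yp) = s, t
        return dot(x, yp) - dot(xp, y)

    def mul(p, q):
        (x, y, a), (xp, yp, ap) = p, q
        return (tuple(u + v for u, v in zip(x, xp)),
                tuple(u + v for u, v in zip(y, yp)),
                a + ap + lam((x, y), (xp, yp)))

    random.seed(1)
    rnd = lambda: tuple(random.uniform(-1, 1) for _ in range(2))
    p, q, r = [(rnd(), rnd(), random.uniform(-1, 1)) for _ in range(3)]

    lhs, rhs = mul(mul(p, q), r), mul(p, mul(q, r))
    flat = lambda t: t[0] + t[1] + (t[2],)
    assert all(abs(a - b) < 1e-9 for a, b in zip(flat(lhs), flat(rhs)))
    print("associative; p q == q p ?", flat(mul(p, q)) == flat(mul(q, p)))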

Example 2: (convex analysis) This time let S = (X \times Y)^{*}, the free semigroup generated by X \times Y, i.e. the collection of all finite words with letters from X \times Y, together with the empty word e, with concatenation of words as the operation.

Let A be the set of bi-affine real functions on X \times Y, i.e. the collection of all functions a: X \times Y \rightarrow \mathbb{R} which are affine and continuous in each argument. A is a commutative group with the addition of real valued functions operation.

We define the function \lambda: S \times S \rightarrow A by:

– \lambda(e, c)(x,y) = \lambda(c,e)(x,y) = 0 for any c \in S and any (x,y) \in X \times Y.
– if c, h \in S are words c = (x_{1},y_{1})...(x_{n}, y_{n}) and h = (u_{1},v_{1})...(u_{m}, v_{m}), with m,n \geq 1, then

\lambda(c,h)(x,y) = \langle u_{1} - x , y_{n} - y \rangle for any (x,y) \in X \times Y.

This \lambda induces a semigroup extension operation on S \times A.

Fact: there is an injective morphism F: S \rightarrow S \times A, with the expression F(c) = (c, E(c)).

Here, for any c = (x_{1},y_{1})...(x_{n}, y_{n}) the expression E(c)(x,y) is the well known circular sum associated to the “dissipation during the discrete cycle” (x_{1},y_{1})...(x_{n}, y_{n})(x,y), namely:

E(c)(x,y) = \langle x_{1},y\rangle + \langle x,y_{n}\rangle - \langle x,y \rangle + \sum_{k=1}^{n-1}\langle x_{k+1}, y_{k}\rangle - \sum_{k=1}^{n} \langle x_{k},y_{k} \rangle

which appears in convex analysis, related to cyclically monotone operators.

The main difference between these two examples is that example 2 is a direct product of a semigroup with a group. There are many resemblances, though.

Uniform spaces, coarse spaces, dilation spaces

This post is related to the previous What’s the difference between uniform and coarse structures? (I).  However, it’s not part (II).

Psychological motivation. Helge Glöckner wrote a clear MathSciNet review of my paper Braided spaces with dilations and sub-riemannian symmetric spaces. Part of that review motivated me to start writing some more about dilatation structures, as explained in the next section of this post. Here is the motivating part of the review:

“Aiming to put sub-Riemannian geometry on a purely metric footing (without recourse to differentiable structures), the author developed the concept of a dilatation structure and related notions in earlier works [see J. Gen. Lie Theory Appl. 1 (2007), no. 2, 65–95; MR2300069 (2008d:20074); Houston J. Math. 36 (2010), no. 1, 91–136; MR2610783 (2011h:53032); Commun. Math. Anal. 11 (2011), no. 2, 70–111; MR2780883 (2012e:53051)].

[…]

In a recent preprint [see “Emergent algebras”, arXiv:0907.1520], the author showed that idempotent right quasigroups are a useful auxiliary notion for the introduction (and applications) of dilatation structures.

[…]

The author sketches how dilatation structures can be defined with the help of uniform irqs, suppressing the technical and delicate axioms concerning the domains of the locally defined maps. More precisely, because the maps are only locally defined, the structures which occur are not uniform irqs at face value, but only suitable analogues using locally defined maps (a technical complication which is not mentioned in the article).”

This was done before, several times, for dilatation structures, but Helge is right that it has not been done for emergent algebras.

Which brings me to the main point: I started a paper with the same (but temporary) name as this post.  The paper is based on the following ideas.

Fields of dilations which generate their own topology, uniformity, coarse structure, whatever.    In a normed real vector space (V, \| \cdot \|) we may use dilations as replacements of metric balls. Here is the simple idea.

Let us define, for any x \in V, the domain of x as U(x), the ball with center x and radius one with respect to the distance d(x,y) = \|x - y \|. In general, let B(x, r) be the metric ball with respect to the distance induced by the norm.

Also, for any x \in V and \varepsilon \in (0,+\infty), the dilation based at x, with coefficient \varepsilon, is the function \delta^{x}_{\varepsilon} y = x + \varepsilon (-x + y).

I use these notations to write the ball of radius \varepsilon \in (0,1] as B(x,\varepsilon) = \delta^{x}_{\varepsilon} U(x) and give it the fancy name dilation ball of coefficient \varepsilon.
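Here is a minimal numerical sketch of this in \mathbb{R}^{2}, a pure illustration: sampling points of U(x) and checking that their images under \delta^{x}_{\varepsilon} fall inside the metric ball B(x,\varepsilon).

    # Minimal sketch: in R^2, the dilation ball delta^x_eps U(x)
    # sits inside the metric ball B(x, eps) (and, by invertibility
    # of dilations, the two in fact coincide).
    import random

    def delta(x, eps, y):
        return [xi + eps * (yi - xi) for xi, yi in zip(x, y)]

    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    random.seed(0)
    x, eps = [0.3, -1.2], 0.25
    for _ in range(1000):
        v = [random.uniform(-1, 1), random.uniform(-1, 1)]
        if dist(v, [0, 0]) >= 1:
            continue                       # keep only y inside U(x)
        y = [xi + vi for xi, vi in zip(x, v)]
        assert dist(delta(x, eps, y), x) < eps
    print("delta^x_eps U(x) lands inside B(x, eps) on the samples")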

In fact, spaces with dilations, or dilatation structures, or dilation structures, are various names for spaces endowed with fields of dilations which satisfy certain axioms. Real normed vector spaces are examples of spaces with dilations, as a subclass of the larger class of conical groups with dilations, itself just a subclass of groups with dilations. Regular sub-riemannian manifolds are examples of spaces with dilations without a predefined group structure.

For any space with dilations, then, we may ask what we can get by forgetting the distance function but keeping the dilation balls. With the help of those we may define the uniformity associated with a field of dilations, then ask that the field of dilations behaves uniformly, and so on.

Another structure, as interesting as the uniformity structure, is the (bounded metric) coarse structure of the space, which again could be expressed in terms of fields of dilations. As coarse structures and uniform structures are very much alike (only that one is interesting at small scale, the other at large scale), is there a notion of dilation structure which is appropriate for coarse structures?

I shall be back on this, with details, but the overall idea is that a field of dilations (a notion even weaker than an emergent algebra) is somehow midway between a uniformity structure and a coarse structure.

UPDATE (28.10.2012): I just found bits of the work of Protasov, who introduced the notions of “ball structure” and “ballean”, which are clearly related to the idea of keeping the balls, not the distance. Unfortunately, for the moment I am unable to find a way to read his two monographs on the subject.

Ado’s theorem for groups with dilations?

Ado’s theorem is equivalent to the following:

Theorem. Let G be a local Lie group. Then there is a real, finite-dimensional vector space V and an injective, local group morphism from (a neighbourhood of the neutral element of) G to GL(V), the linear group of V.

Any proof I am aware of (see this post for one proof and relevant links) mixes the following ingredients:

–  the Lie bracket and the BCH formula,

– either reduction to the nilpotent case or (nonexclusive) use of differential equations,

– the universal enveloping algebra.

WARNING: from here on I shall not mention the word “local”; in the realm of spaces with dilations everything is local.

We may pass to the following larger frame of spaces with dilations, dilatation structures or emergent algebras:

– locally compact groups with dilations instead of Lie groups

– locally compact conical groups instead of vector spaces

– linearity in the sense of dilation structures instead of usual linearity.

Conjecture:  For any locally compact group with dilations G there is a locally compact conical group N and an injective morphism \rho: G \rightarrow GL(N) such that for every x \in N the map g \in G \mapsto \rho(g)x is differentiable.

In this frame:

– we don’t have the corresponding Lie bracket and BCH formula, see the related problem of the noncommutative BCH formula,

– what nilpotent means is no longer clear (or needed?)

– we don’t have a clear tensor product, therefore we don’t have a correspondent of the universal enveloping algebra.

Nevertheless I think the conjecture is true and actually much easier to prove than Ado’s theorem, because of the weakening of the conclusion.

Is there a positional numeral system for Carnot groups?

I advertise here a question I posed on mathoverflow, then I shall give a motivation for this question.

What follows is a reproduction of the question (thanks for all the useful comments; read them at mathoverflow), with a slight modification of the title here.

What is the computational complexity of multiplication in a Carnot group? 

Background:  A Carnot group is a real nilpotent Lie group N whose Lie algebra Lie(N) admits a direct sum decomposition

Lie(N) = V_{1}+ ... + V_{m}

such that if i+j \leq m then  [V_{i}, V_{j}] \subset V_{i+j},   otherwise [V_{i}, V_{j}] = 0.

Moreover, it is required that V_{1} generates the whole Lie algebra Lie(N).

Detailed question: What is the computational complexity of multiplication  in N, as a function of m, dim V_{1}, … , dim V_{m}?

More background: 

1. “Computational complexity” means giving an estimate of the running time of an algorithm for performing the multiplication operation. I am looking for nontrivial estimates, better than, for example, the one coming from matrix multiplication. Exponentiation and logarithms are not taken into account (one variant of thinking is to use the BCH formula for multiplication, with a finite number of terms due to nilpotency; another variant would be to use the remark from example 2.4 below and think about multiplication as matrix multiplication).

2. Examples of Carnot groups:

2.1. When m=1, this is a real vector space and the multiplication operation is just addition of vectors (thus the complexity of the  multiplication operation is the one of the addition of vectors, much less than the one of matrix multiplication).

2.2. When m=2, the simplest example is the Heisenberg group [see posts here about it].

2.3. The group of invertible, n \times n upper triangular matrices has m = n-1 and dim V_{i} = n-i. Is there any better estimate than n^{3} in this case? (I think so, but cannot find a reference to an answer.) [In fact Henry Cohn answered this:

“The exponent for multiplication of upper triangular matrices is the same as that for full matrix multiplication. Specifically, you can reduce n \times n matrix multiplication to the 3n \times 3n upper triangular case. (Use nine n \times n  blocks, with identity matrices on the diagonal and a zero matrix in the upper right corner.) So for that case the best exponent that’s known is 2.373, the same as for the general case.

The reduction is not a morphism. Suppose A and B are any n \times n matrices (not necessarily invertible), and define 3n \times 3n matrices A' and B' as follows. Both are the identity matrix, except for one n \times n block. Specifically, A' has A as the middle top n \times n block [i.e. the (1,2) block is A, the diagonal blocks are n \times n identity matrices, all other blocks are n \times n zero matrices], and B' has B as the middle right n \times n block [i.e. the (2,3) block is B, the diagonal blocks are n \times n identity matrices, all other blocks are n \times n zero matrices]. Then A'B' has AB as its upper right block [i.e. the (1,3) block]. These larger matrices are upper triangular, so if you can efficiently multiply upper triangular matrices, then you can efficiently multiply arbitrary matrices (of one third the size).”]
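Here is a minimal numpy sketch of Cohn's reduction, a pure illustration of the block construction just described:

    # Minimal sketch: multiply arbitrary n x n matrices A, B via
    # 3n x 3n upper triangular matrices A', B'.
    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))

    I = np.eye(n)
    Z = np.zeros((n, n))
    # A': identity blocks on the diagonal, A in the (1,2) block
    Ap = np.block([[I, A, Z], [Z, I, Z], [Z, Z, I]])
    # B': identity blocks on the diagonal, B in the (2,3) block
    Bp = np.block([[I, Z, Z], [Z, I, B], [Z, Z, I]])

    # the product A'B' carries AB in its upper right (1,3) block
    P = Ap @ Bp
    assert np.allclose(P[:n, 2 * n:], A @ B)
    print("the (1,3) block of A'B' equals AB")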

2.4. Any Carnot group can be seen as a subgroup of the group from example 2.3.

2.5. Carnot groups appear as metric tangent spaces of sub-riemannian manifolds. As such, they are a kind of noncommutative vector spaces, with the “trivial” exception when they are commutative, as in the case of riemannian manifolds, when the tangent space is like in example 2.1 (another “simple” example being the metric contact manifolds, with the tangent space like in 2.2).”

What is my motivation behind this question? Actually, I wrote about this in the post “Combinatorics versus geometric …“.

Indeed, a positional numeral system can be seen from two related viewpoints:

1. as the basic ingredient for the algorithm of computing the “multiplication” operation in the abelian group of integers (or reals) (that’s like in the example 2.1 above),

2. as a “positional system” in the geometric sense, by exploiting the self-similarity of the group of integers (or reals), something which has not much to do with numbers. Let me explain: I am now in Europe, Romania, Bucharest. This is a positional notation for the place where I am, because Romania is a country in Europe and Bucharest is a city in Romania. Likewise, the number 123 (in decimal base) is that number which is “in the continent 100”, in the “country 20” and in the “city 3”. What makes such a positional notation useful for numbers is the self-similarity (manifested by the system of orders of magnitude) of the “world of numbers”.

Let us just put all together: is there a “positional numeral system” for Carnot groups which can give fast algorithms for computing the multiplication operation (in a Carnot group), by exploiting the self-similarity of a Carnot group?

What’s the difference between uniform and coarse structures? (I)

See this, if you need: uniform structure and coarse structure.

Given a set X endowed with a uniform structure or with a coarse structure, instead of working with subsets of X^{2} I prefer to think about the trivial groupoid X \times X.

Groupoids. Recall that a groupoid is a small category with all arrows invertible. We may define a groupoid in terms of its collection of arrows, as follows: a set G endowed with a partially defined operation m: G^{(2)} \subset G \times G \rightarrow G, m(g,h) = gh, and an inverse operation inv: G \rightarrow G, inv(g) = g^{-1}, satisfying some well known properties (associativity, inverse, identity).

The set Ob(G) is the set of objects of the groupoid, \alpha , \omega: G \rightarrow Ob(G) are the source and target functions, defined as:

– the source function is \alpha(g) = g^{-1} g,

– the target function is \omega(g) = g g^{-1},

– the set of objects Ob(G) consists of all elements of G of the form g g^{-1}.

Trivial groupoid. An example of a groupoid is G = X \times X, the trivial groupoid of the set X. The operations, source, target and objects are:

– (x,y) (y,z) = (x,z), (x,y)^{-1} = (y,x),

– \alpha(x,y) = (y,y), \omega(x,y) = (x,x),

– objects (x,x) for all x \in X.
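As a minimal sketch, a plain translation of these formulas into Python (a pure illustration, with arrows as pairs over a finite set):

    # Minimal sketch: the trivial groupoid X x X over a set X.
    # An arrow (x, y) composes with (y, z) to give (x, z).

    def compose(g, h):
        (x, y), (yy, z) = g, h
        if y != yy:
            raise ValueError("not composable")
        return (x, z)

    def inv(g):
        x, y = g
        return (y, x)

    def alpha(g):            # source: g^{-1} g
        return compose(inv(g), g)

    def omega(g):            # target: g g^{-1}
        return compose(g, inv(g))

    g = ("a", "b")
    assert alpha(g) == ("b", "b") and omega(g) == ("a", "a")
    assert compose(("a", "b"), ("b", "c")) == ("a", "c")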

In terms of groupoids, here is the definition of a uniform structure.

Definition 1. (groupoid with uniform structure) Let G be a groupoid. A uniformity \Phi on G is a set \Phi \subset 2^{G} such that:

1. for any U \in \Phi we have Ob(G) \subset U,

2. if U \in \Phi and U \subset V \subset G then V \in \Phi,

3. if U,V \in \Phi then U \cap V \in \Phi,

4. for any U \in \Phi there is V \in \Phi such that VV \subset U,  where VV is the set of all gh with g, h \in V and moreover (g,h) \in G^{(2)},

5. if U \in \Phi then U^{-1} \in \Phi.

Here is a definition for a coarse structure on a groupoid (adapted from the usual one, seen on the trivial groupoid).

Definition 2.(groupoid with coarse structure) Let G be a groupoid. A coarse structure  \chi on G is a set \chi \subset 2^{G} such that:

1′. Ob(G) \in \chi,

2′. if U \in \chi and V \subset U then V \in \chi,

3′. if U,V \in \chi then U \cup V \in \chi,

4′. for any U \in \chi there is V \in \chi such that UU \subset V,  where UU is the set of all gh with g, h \in U and moreover (g,h) \in G^{(2)},

5′. if U \in \chi then U^{-1} \in \chi.

They look pretty much the same, right? But they are not; we shall see the difference when we take into consideration how we generate such structures.

Question. Have you seen such structures, defined like this on groupoids, before? It seems trivial to do so, but I cannot find this anywhere (maybe because I am ignorant).

UPDATE (01.08.2012): Just found this interesting paper  Coarse structures on groups, by Andrew Nicas and David Rosenthal, where they already did on groups something related to what I want to explain on groupoids. I shall be back on this.

2004: Fleeced, 2012: The cost of knowledge

In 2004 Rob Kirby published in the Notices of the AMS the opinion article “Fleeced?”. Let me quote a bit from it.

“Most mathematicians feel that they own their journals. They write and submit papers to their favorite (often specialized) journals. They often referee for those same journals. And some devote time and energy as editors. Throughout this process there is no contact with nonmathematicians, except for some of the editors. It is no wonder that mathematicians have a sense of pride and ownership in their  journals.

But the truth is that, legally, mathematicians do not own the commercial journals. Elsevier and Academic Press journals are a highly profitable part of a big corporation. Bertelsmann has recently divested Springer, and now  Springer, Kluwer, and Birkhäuser are owned by an investment company (who did not buy these publishers in order to make less profit than before).  […]

We mathematicians simply give away our work (together with copyright) to commercial journals who turn around and sell it back to our institutions at a magnificent profit. Why? Apparently because we think of them as our journals and enjoy the prestige and honor of publishing,  refereeing, and editing for them. […]

What can mathematicians do? At one extreme they can refuse to submit papers, referee, and edit for the high-priced commercial journals. At the other extreme they can do nothing. […]

A possibility is this: one could post one’s papers (including the final version) at the arXiv and other websites and refuse to give away the copyright. If almost all of us did this, then no one would have to subscribe to the journals, and yet they could still exist in electronic form.
Personally, I (and numerous others) will not deal with the high-priced journals. What about you?”

In 2012 The cost of knowledge appeared, a site inspired by the blog post Elsevier – my part in its downfall by Timothy Gowers.

In 12 years the world changed a bit in this respect. It will change much more.

Let me finish this post by describing my modest experience related to this subject, during these 12 years.

In 2004 I was a kind of post-doc/visitor (on a contract which was prolonged once a year, for a maximum of 6 years) at EPFL (Lausanne, Switzerland). I had already decided, some years before, to act as if the future of mathematical publication were the arXiv and the like. One reason is the obvious fact that the www will change the world much more than the invention of the press did. Almost all research which was left in manuscript perished after the press revolution. Everybody who wants to give something to the research community has to put their research on the net, I thought, and simultaneously to help the old system die, by not publishing in paper journals. Moreover, I had had trouble in the past with publishing multidisciplinary papers. I always believed that it is fun to mix several fields in a paper, that there is one mathematics, and so on, but such papers were extremely difficult to publish, at least those written by me, with my modest competence (and Romanian origin, I have to say this). So, lulled by the relative Swiss security, I just put my papers on the arXiv, written moreover in an open form which invited others to participate in the same research, or at least so I was thinking.

The result? In 2004-2005 I was practically laughed in the face. Who cares about a paper on arXiv which is not published in journal form?

In 2006 I returned to Romania and decided to start publishing in paper journals, because what I was trying to do was either disregarded as not counting, or discreetly and “creatively” borrowed. I could not renounce my beliefs, therefore I arrived at a system of waves of papers on arXiv, some of them sent for publication (the most conservative ones, so to say).

After a time I started to recover from my strategic “fault”, but there is still work to do. Is it right, though, to be forced to hide one's own beliefs? Apparently I am right in my beliefs, which are similar to those publicly declared by great mathematicians, at least since 2004. Practically, a big chunk of my career was/is still disturbed by this immense inertia. I am surely just one example among many colleagues who are suffering similar experiences.

Approximate groupoids may be useful

UPDATE (Jan 4 2013): See also this: Approximate groupoids again.


____________________________________


Yesterday I appreciated the talk of Harald Helfgott; here is the link to the arXiv paper which was the subject of his communication: On the diameter of permutation groups (with Á. Seress).

At the end of his talk Harald stated as a conclusion that (if I understood well) the approximate groups field should be regarded as the study of

“stable configurations under the action of a group”

where “stable configuration” means a set (or a family…) with some controlled behaviour of its growth under the “action of a group”, understood maybe in a lax, approximate sense.

This applies to approximate groups, he said, where the action is by left translations, but it could apply to the conjugation action as well, and even in more general settings, where one has a space on which a group acts.

My understanding of this bold claim is that Harald suggests that approximate groups theory is a kind of geometry in the sense of Felix Klein: the (approximate, in this case) study of stable configurations under the action of a group.

Very glad about hearing this!

Today I just remembered a comment that I made last November on the blog of Tao, where I proposed to study “approximate groupoids”.

Why is this related?

Because a group acting on a set is just a particular type of groupoid, an action groupoid.
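As a minimal sketch of this (my own illustration, with the hypothetical example of \mathbb{Z} acting on \mathbb{Z} by translations):

    # Minimal sketch: the action groupoid of a group acting on a set.
    # An arrow is a pair (g, x), with source x and target g.x; here
    # the group is (Z, +) acting on Z by translations, g.x = g + x.

    def act(g, x):
        return g + x

    def source(arrow):
        g, x = arrow
        return x

    def target(arrow):
        g, x = arrow
        return act(g, x)

    def compose(b, a):
        # (h, g.x) composed with (g, x) gives (h g, x)
        (h, y), (g, x) = b, a
        if y != act(g, x):
            raise ValueError("not composable")
        return (h + g, x)        # h g is h + g in the additive group Z

    def inv(arrow):
        g, x = arrow
        return (-g, act(g, x))

    a = (3, 5)                           # an arrow from 5 to 8
    assert (source(a), target(a)) == (5, 8)
    assert compose(inv(a), a) == (0, 5)  # the identity arrow at 5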

Here is the comment (link), reproduced further.

“Question (please erase if not appropriate): as a metric space is just a particular example of a normed groupoid, could you point me to papers where “approximate groupoids” are studied? For starters, something like an extension of your paper “Product set estimates for non-commutative groups”, along the lines in the introduction “one also consider partial sum sets …”, could be relevant, but I am unable to locate any. Following your proofs could be straightforward, but lengthy. Has this been done?

For example, a k-approximate groupoid A \subset G of a groupoid G (where G denotes the set of arrows) could be a

(i) – symmetric subset of G: A = A^{-1}, with id_{x} \in A for any x \in Ob(G), and

(ii) – there is another, symmetric subset K \subset G such that for any (u,v) \in (A \times A) \cap G^{(2)} there are (w,g) \in (A \times K) \cap G^{(2)} such that uv = wg,

(iii) – for any x \in Ob(G) we have \mid K_{x} \mid \leq k.

One may replace the cardinal, as you did, by a measure (or random walk kernel, etc), or even by a norm.”

UPDATE (28.10.2012): Apparently unaware of the classical proof by Ronald Brown, by way of groupoids, of the Van Kampen theorem on the fundamental group of a union of spaces, Terence Tao has a post about this subject. I wonder if he is after some applications of his results on approximate groups in the realm of groupoids.

Computing with space: done!

The project of giving a meaning to the “computing” part of “Computing with space” is done, via the \lambda-Scale calculus and its graphic lambda calculus (still in preview mode).

_______________________

UPDATE (09.01.2013): There is now a web tutorial about graphic lambda calculus on this blog.  At some point a continuation of “Computing with space …” will follow, with explicit use of this calculus, as well as applications which were mentioned only briefly, like why the diagram explaining the “emergence” of the Reidemeister III move gives a discretized notion of scalar curvature for a metric space with dilations.

_______________________

Explanations.  In the “Computing with space…” paper I claimed that:

1. – there is a “computing” part hidden behind the idea of emergent algebras

2. – which is analogous to the hypothetical computation taking place in the front-end visual system.

Part 1 is done, essentially. The graphic version of \lambda-Scale is in fact very powerful, because it contains as sectors:

– lambda calculus

– (discrete, abstract) differential calculus

– the formalism of tangle diagrams.

These “sectors” appear as subsets S of graphs in GRAPH (see the preview paper for definitions), for which the condition G \in S is global, together with respective selections of local or global graphic moves (from those available on GRAPH) which transform elements of S into elements of S.

For example, for lambda calculus the relevant set is \lambda-GRAPH and the moves are (ASSOC) and the graphic \beta move (actually, in this way we obtain a formalism a bit nicer than lambda calculus; in order to obtain exactly lambda calculus we have to add the stupid global FAN-OUT and global pruning moves).

For differential calculus we need to restrict to graphs like those in \lambda-GRAPH, but also admitting dilation gates. We may directly go to \lambda-Scale, which contains lambda calculus (made weaker by adding the (ext) rules, corresponding to \eta-conversion) and differential calculus (via emergent algebras). The moves are (ASSOC), graphic \beta move, (R1), (R2), (ext1), (ext2) and, if we want a dumber version,  some global FAN-OUT and pruning moves.

For tangle diagrams see the post Four symbols and wait for the final version of the graphic calculus paper.

SO now, I declare part 1 CLOSED. It amounts to patiently writing all the details, which is an interesting activity by itself.

Part 2 is open, although now I have much more hope of giving a graph model for the front end of the visual system, one not relying on assumptions about the geometric structure of space, linear algebra, tasks and other niceties of the existing models.

UPDATE 02.07.2012. I put the graphical formalism paper on arXiv; it should appear on 03.07.2012. I left out of the paper a big chunk of very intriguing facts about various possible crossing definitions, for another paper.

Four symbols

Here is a diagram involving the four symbols in the graphic lambda-Scale:

– At the left are used the drawing conventions from the preview paper “Local and global moves on planar trivalent graphs, lambda calculus and lambda-Scale”.

– At the right are used drawing conventions from “Computing with space…” page 21.

We can pass from one drawing to the other by a graphic beta move.

To me, this looks like

e^{i \pi} = -1

a formula which involves four symbols too.

Gromov on entropy and Souriau

Gromov just posted on his page the paper In a Search for a Structure, Part 1: On Entropy, June 19, 2012. With much interest I started to read it, and my first impression is that I have seen something like this before (but I might be wrong; please excuse me if so) in the HUGE last paper (book)

Grammaire de la nature (version du 8 juillet 2007)

by Jean-Marie Souriau, the inventor of symplectic geometry and geometric quantization, among others.

Specifically, I refer to  the way Souriau treats probabilities in “CLE 8: Calcul des Hasards”, p. 209.

The book is a must-read! It is a summum of Souriau's mathematical view of Nature, specifically concerning symmetry, entropy, relativity and quantum mechanics.

Gromov, with his enormous geometrical knowledge (different from Souriau's, though), points to sofic groups, which I need a lot of time to understand.

UPDATE: I am starting to understand Gromov's notion of sofic groups and learning to appreciate it; it is related to constructions with approximate groups, apparently.

Two halves of beta, two halves of chora

In this post I want to emphasize a strange similarity between the beta rule in lambda calculus and the chora construction (i.e. encircling a tangle diagram).

Motivation? Even if it is now clearer, I am still not completely satisfied by the degree of interaction between lambda calculus and emergent algebras in the proposed lambda-Scale calculus. I am not sure if this is because lambda-Scale is not yet explored, or because there exists a more streamlined version of lambda calculus as a macro over the emergent algebras.

Also, I am working again on the paper put in preview (version 05.06.2012) about planar trivalent graphs and lambda calculus, after finishing the course notes on intrinsic sub-riemannian geometry.

So, I let my mind hover over…

As explained in the draft paper, the beta rule in lambda calculus  is a LOCAL rule, described in this picture (advertised here):

It is made of two halves: the left half contains the lambda abstraction, the right half contains the application operation. In between there is a wire. The rule says that these two halves annihilate somehow and the wire is replaced by a dumb crossing with no information about who's on top.

Let us contemplate an elementary chora, also made of two halves:

We can associate to this figure a move, which consists in the annihilation of the left (difference gate) and right (sum gate) halves, followed by the replacement of the “wire” by an equivalent crossing.

Preview of two papers, thanks for comments

Here are two papers:

Local and global moves on planar trivalent graphs, lambda calculus and lambda-Scale (update 03.07.2012: final version, appears as arXiv:1207.0332)

Sub-riemannian geometry from intrinsic viewpoint (update 14.06.2012: final version, appears as arXiv:1206.3093)

which are still subject to change. Nevertheless, most of what I am trying to communicate is there. I would appreciate mathematical comments.

This is an experiment, to see what happens if I make previews of papers available, like a kind of blog of papers in the making.

Intrinsic characterizations of riemannian and sub-riemannian spaces (I)

In this post I explain the problem of intrinsic characterization of riemannian manifolds and in what sense it has been solved in full generality by Nikolaev; then I shall comment on the proof of Hilbert's fifth problem by Tao.

In the next post there will then be some comments about Gromov's problem of giving an intrinsic characterization of sub-riemannian manifolds, and in what sense I solved this problem by adding a bit of algebra to it. Finally, I shall return to the characterization of riemannian manifolds, seen as particular sub-riemannian manifolds, and comment on the differences between this characterization and Nikolaev's.

1. History of the problem for riemannian manifolds. The problem of giving an intrinsic characterization of riemannian manifolds is a classic and fertile one.

Problem: give a metric description of a Riemannian manifold.

Background: A complete riemannian manifold is a length metric space (or geodesic, or intrinsic metric space) by Hopf-Rinow theorem. The problem asks for the recovery of the manifold structure from the distance function (associated to the length functional).

For 2-dim riemannian manifolds the problem has been solved by A. Wald [Begründung einer koordinatenlosen Differentialgeometrie der Flächen, Erg. Math. Colloq. 7 (1936), 24-46].

In 1948 A.D. Alexandrov [Intrinsic geometry of convex surfaces, various editions] introduces his famous curvature (which uses comparison triangles) and proves that, under mild smoothness conditions on this curvature, one is able to recover the differential structure and the metric of the 2-dim riemannian manifold. In 1982 Alexandrov proposes as a conjecture that a characterization of a riemannian manifold (of any dimension) is possible in terms of metric (sectional) curvatures (of the type introduced by Alexandrov) and weak smoothness assumptions formulated in a metric way (such as Hölder smoothness). Many other results deserve to be mentioned (by Reshetnyak, for example).

2. Solution of the problem by Nikolaev. In 1998 I.G. Nikolaev [A metric characterization of riemannian spaces, Siberian Adv. Math. , 9 (1999), 1-58] solves the general problem of intrinsic characterization of C^{m,\alpha} riemannian spaces:

every locally compact length metric space M, not linear at one of its points, with \alpha-Hölder continuous metric sectional curvature of the “generalized tangent bundle” T^{m}(M) (for some m = 1, 2, …), which admits local geodesic extendability, is isometric to a C^{m+2}-smooth riemannian manifold.

Therefore:

  • he defines a generalized tangent bundle in metric sense
  • he defines a notion of sectional curvature
  • he asks for some metric smoothness of this curvature

and he gets the result.

3. Gleason metrics and Hilbert's fifth problem. Let us compare this with the formulation of the solution of Hilbert's fifth problem by Terence Tao. The problem is somehow similar, namely to recover the differential structure of a Lie group from its algebraic structure. This time the “intrinsic” object is the group operation, not the distance, as previously.

Tao shows that the proof of the solution may be formulated in metric terms. Namely, he introduces a Gleason metric (definition 4 in the linked post), which turns out to be a left-invariant riemannian metric on the (topological) group. I shall not insist on this; instead read the post of Tao and also, for the riemannian metric description, this previous post by me.

Lambda-Scale is the new name

The paper on the calculus adapted to emergent algebras has been almost completely rewritten. I submitted it to arXiv; it shall appear tomorrow.

The new name of the paper is “\lambda-Scale, a lambda calculus for spaces with dilations”, and it is already available from my page at this link.

A geometric viewpoint on computation?

Let me try to explain what I am trying to do in this work related to “computing with space“. The goal is to understand the process of emergence, in its various precise mathematical forms, like:

– how does the dynamics of a big number of particles become the dynamics of a continuous system? Apart from the physics BS of neglecting infinities, I know of very few mathematically correct approaches. From my mixed background of calculus of variations and continuous media mechanics, I can mention an example of such an approach in the work of Andrea Braides on the \Gamma-convergence of the energy functional of a discrete system to the energy functional of a continuous system, and atomistic models of solids.

– how to endow a metric space (like a fractal, or sub-riemannian space) with a theory of differential calculus? Translated: how to invent “smoothness” in spaces where there is none, apparently? Because smoothness is certainly emergent. This is part of the field of non-smooth calculus.

– how to explain the profound resemblance between the geometrical results of Gromov on groups with polynomial growth and the combinatorial results of Breuillard, Green, Tao on approximate groups? In both cases a nilpotent structure emerges from considering larger and larger scales. The word “explain” means here: identify a general machine at work in both results.

– how to explain the way our brain deals with visual input? This is a clear case of emergence, because the input is the excitation of some receptors of the retina, while the output is almost completely not understood, except that we all know that we see objects which are moving and complex geometrical relations among them. A fly sees as well; read From insect vision to robot vision by N. Franceschini, J.M. Pichon, C. Blanes. Related to this paper, I cite from the abstract (boldfaced by me):

  We designed, simulated, and built a complete terrestrial creature which moves about and avoids obstacles solely by evaluating the relative motion between itself and the environment. The compound eye uses an array of elementary motion detectors (EMDs) as smart, passive ranging sensors. Like its physiological counterpart, the visuomotor system is based on analogue, continuous-time processing and does not make use of conventional computers. It uses hardly any memory to adjust the robot’s heading in real time via a local and intermittent visuomotor feedback loop.

More generally, there seems to be a “computation” involved in vision, massively parallel and taking very few steps (up to six), but it is not understood how this is a computation in the mathematical, or computer science, sense. Conversely, the visual performance of any device based on computer-science computation is, up to now, dwarfed by that of any fly.

I identified a “machine of emergence” which is at work in some of the examples given above. Mathematically, this machine should have something to do with emergent algebras, but what about the computation part?

Probably geometers reason like flies: by definition, a geometrical statement is invariant up to the choice of maps. A sphere is not, geometrically speaking, a particular atlas of maps on the sphere. For a geometer, reproducing whatever it does by using ad-hoc enumeration by natural numbers, combinatorics and Turing machines is nonsense, because it is profoundly not geometrical.

On the other hand, the powerful use and control of abstraction is appealing to the geometer. This justifies the effort to import abstraction techniques from computer science and to replace the non-geometrical stuff by … whatever is more of a geometrical character.

For the moment, such efforts are mostly a source of frustration, a familiar feeling for any mathematician.

But at some point, in these times of profound changes in mathematics as well as in society, from all these collective efforts something beautiful, clear and streamlined will emerge.

Geometry of imaginary spaces, by Koenderink

This post is about the article “Geometry of imaginary spaces“,   Journal of  Physiology – Paris, 2011, in press, by Jan Koenderink.

Let me first quote from the abstract (boldfaced  by me):

“Imaginary space” is a three-dimensional visual awareness that feels different from what you experience when you open your eyes in broad daylight. Imaginary spaces are experienced when you look “into” (as distinct from “at”) a picture for instance.

Empirical research suggests that imaginary spaces have a tight, coherent structure, that is very different from that of three-dimensional Euclidean space.

[he proposes the structure of a bundle E^{2} \times A^{1} \rightarrow E^{2}, with basis the euclidean plane, “the visual field” and fiber the 1-dimensional affine line, “the depth domain”,]

I focus on the topic of how, and where, the construction of such geometrical structures, that figure prominently in one’s awareness, is implemented in the brain. My overall conclusion—with notable exceptions—is that present day science has no clue.

What is remarkable in this paper? Many, many things; here are just three quotes:

–  (p. 3) “in the mainstream account”, he writes, “… one starts from samples of … the retinal “image”. Then follows a sequence of image operations […] Finally there is a magic step: the set of derived images turns into a “representation of the scene in front of you”. “Magic” because image transformations convert structures into structures. Algorithms cannot convert mere structure into quality and meaning, except by magic. […] Input structure is not intrinsically meaningful, meaning needs to be imposed (magically) by some arbitrary format.”

– (p. 4) “Alternatives to the mainstream account have to […] replace inverse optics with “controlled hallucination” [related to this, see the post “The structure of visual space“]

– (p. 5) “In the mainstream account one often refers to the optical structure as “data”, or “information”. This is thoroughly misleading because to be understood in the Shannon (1948) sense of utterly meaningless information. As the brain structures transform the optical structure into a variety of structured neural activities, mainstream often uses semantic terms to describe them. This confuses facts with evidence. In the case of an “edge detector” (Canny, 1986) the very name suggests that the edge exists before being detected. This is nonsensical, the so-called edge detector is really nothing but a “first order directional derivative operator” (Koenderink and van Doorn, 1992). The latter term is to be preferred because it describes the transformation of structure into structure, whereas the former suggests some spooky operation” [related to this, see the tag archive “Map is the territory“]

Related to my spaces with dilations, let me finally quote from the “Final remarks”:

The psychogenetic process constrains its articulations through probing the visual front end. This part of the brain is readily available for formal descriptions that are close to the neural hardware. The implementation of the group of isotropic similarities, a geometrical object that can  easily be probed through psychophysical means, remains fully in the dark though.

Scaled lambda epsilon

My first attempt to introduce a scaled version of lambda epsilon turned out to be wrong, but now I think I have found a way. It is a bit trickier than I thought. Let me explain.

In lambda epsilon calculus we have three operations (which are not independent), namely the lambda abstraction, the application and the emergent algebra (one-parameter family of) operation(s), called dilations. If we want to obtain a scaled version then we have to “conjugate” with dilations. Looking at terms as syntactic trees, this amounts to:

– start with a term A and a scale \varepsilon \in \Gamma,

– transform a tree T such that FV(T) \cap FV(A) = \emptyset into a tree A_{\varepsilon}[T], by conjugating with A \circ_{\varepsilon} \cdot.

This can be done by recursively defining the transform T \mapsto A_{\varepsilon}[T]. Graphically, we would like to transform the elementary syntactic trees of the three operations into this:


The problem is that, while (c) is just the familiar scaled dilation, the scaled \lambda from (a) does not make sense, because A \circ_{\varepsilon} u is not a variable. Also, the scaled application (b) is somehow mysterious.

The solution is to exploit the fact that it makes sense to make substitutions of the form B[ A \circ_{\varepsilon} u : = C], because of the invertibility of dilations. Indeed, A \circ_{\varepsilon} u = C is equivalent to u = A \circ_{\varepsilon^{-1}} C, therefore we may define B[ A \circ_{\varepsilon} u : = C] to mean B[u : = A \circ_{\varepsilon^{-1}} C].

If we look at the rule (ext2) here, the discussion about substitution becomes:

Therefore the correct scaled lambda, instead of (a)  from the first figure, should be this:

The term (syntactic tree) from the LHS should be seen as a notation for the term from the RHS.

And you know what? The scaled application, (b) from the first figure, becomes less mysterious, because we can prove the following.

1.  Any u \in X \setminus FV(A) defines a relative variable u^{\varepsilon}_{A} := A \circ_{\varepsilon} u (remark that relative variables are terms!). The set of relative variables is denoted by X_{\varepsilon}(A).

2. The function B \mapsto A_{\varepsilon}[B] is defined for any term B \in T such that FV(A) \cap FV(B) = \emptyset. The definition is this:

–  A_{\varepsilon}[A] = A,

–  A_{\varepsilon}[u] = u for any u \in X \setminus FV(A),

–  A_{\varepsilon}[ B \mu C] = A \circ_{\varepsilon^{-1}} ((A \circ_{\varepsilon} A_{\varepsilon}[B]) \mu (A \circ_{\varepsilon} A_{\varepsilon}[C]))  for any B, C \in T such that FV(A) \cap (FV(B) \cup FV(C))= \emptyset,

–  A_{\varepsilon}[ u \lambda B] is given by:


3. B is a scaled term, notation B \in T_{\varepsilon} (A), if there is a term B' \in T such that FV(A) \cap FV(B') = \emptyset and such that B = A_{\varepsilon}[B'].

4. Finally, the operations on scaled terms are these:

– for any \mu \in \Gamma and B, C \in T_{\varepsilon}(A) the scaled application (of coefficient \mu) is

B \mu^{\varepsilon}_{A} C = A \circ_{\varepsilon^{-1}} ((A \circ_{\varepsilon} B) \mu (A \circ_{\varepsilon} C))

– for any scaled variable  u^{\varepsilon}_{A} \in X_{\varepsilon}(A)  and any scaled term B \in T_{\varepsilon}(A) the scaled abstraction is

5.    With this, we can prove that (u^{\varepsilon}_{A} \lambda^{\varepsilon}_{A} B) 1^{\varepsilon}_{A} C = (u \lambda B) 1 C = B [ u: = C], which is remarkable, I think!
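To make the recursion in point 2 concrete, here is a minimal symbolic sketch in Python (my own illustration, not from the paper; Dil, App, scaled and scaled_app are hypothetical names). It covers the cases stated in the text; the case A_{\varepsilon}[u \lambda B] is given above only graphically, so it is left out:

from fractions import Fraction

def Dil(A, eps, B):
    # symbolic dilation A o_eps B, kept as a tagged tuple
    return ("dil", A, eps, B)

def App(B, mu, C):
    # symbolic application B mu C
    return ("app", B, mu, C)

def scaled(A, eps, T):
    # the transform T -> A_eps[T] from point 2
    if T == A:
        return A                  # A_eps[A] = A
    if isinstance(T, str):
        return T                  # A_eps[u] = u, for a variable u not in FV(A)
    if T[0] == "app":             # A_eps[B mu C]
        _, B, mu, C = T
        return Dil(A, 1 / eps,
                   App(Dil(A, eps, scaled(A, eps, B)), mu,
                       Dil(A, eps, scaled(A, eps, C))))
    raise NotImplementedError("lambda case: see the figure above")

def scaled_app(A, eps, mu, B, C):
    # the scaled application B mu^eps_A C from point 4
    return Dil(A, 1 / eps, App(Dil(A, eps, B), mu, Dil(A, eps, C)))

# example: A_eps[u 1 v] with eps = 1/2
print(scaled("A", Fraction(1, 2), App("u", 1, "v")))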

The neuron unit

I am updating the paper on \lambda \epsilon almost daily.  It is the first time I am doing such a thing; it may be interesting to see what comes out of this way of writing.

The last addition is something I was thinking about for a long time, something which is probably well known in some circles, but maybe not. It is about eliminating variable names from such a calculus. This has been done in several ways; here is another (or the same as a previous one?).

The idea is simple. Let T be any term and x \in Var(T) a variable. Look at the syntactic tree of T, then glue to all leaves decorated by x the leaves of a tree whose nodes are FAN-OUT gates.

Further on I shall identify syntactic trees with terms. I shall add to such trees a new family (of trees), constructed from the elementary tree depicted at (a) in the next figure. At (b) we see an example of such a tree. We consider also the trivial tree (c).

We have to think about such trees as devices for multiplying the occurrences of a variable. I call them FAN-OUT trees. All these trees, with the exception of the trivial one (c), are planar binary trees. We shall add the following rule of associativity:

(ASSOC) any two FAN-OUT trees with the same number of leaves are identified.

This rule will be applied under the following form: we are free to pass from a FAN-OUT tree to an equivalent one which has the same number of leaves. The name “associativity” comes from the fact that a particular instance of this rule (which deserves the name  “elementary associativity move”) is this:
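For the record, here is a tiny Python sketch of (ASSOC), with trees encoded as nested pairs (my own hypothetical encoding, not from the paper): two FAN-OUT trees are identified exactly when they have the same number of leaves, so each class has a canonical right-comb representative.

def leaves(t):
    # count the leaves of a planar binary FAN-OUT tree;
    # a tree is either the symbol "leaf" or a pair (left, right)
    if t == "leaf":
        return 1
    return leaves(t[0]) + leaves(t[1])

def comb(n):
    # canonical right-comb FAN-OUT tree with n leaves
    return "leaf" if n == 1 else ("leaf", comb(n - 1))

def assoc_equal(t1, t2):
    # (ASSOC): identify trees with the same number of leaves
    return leaves(t1) == leaves(t2)

# the elementary associativity move: (a,(b,c)) ~ ((a,b),c)
assert assoc_equal(("leaf", ("leaf", "leaf")), (("leaf", "leaf"), "leaf"))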

With the help of these FAN-OUT trees we shall replace the multiple occurrences of a variable by such trees. Let us see what the rules of \lambda \varepsilon calculus become when using this notation.

\alpha conversion is no longer needed, because variables no longer have names. Instead, we are free to graft usual trees to FAN-OUT trees. This way, instead of terms we shall use “neurons”.

Definition:   The forgetful form (b) of a syntactic tree (a) (of a term) is the tree with the variable names deleted.

A neuron is the result of gluing the root of a forgetful form of a syntactic tree to the root of a FAN-OUT tree, like in the following figure.

The axon of the neuron is the FAN-OUT tree. The dendrites of the neuron are the undecorated edges of the forgetful form of the syntactic tree. A dendrite is bound if it is a left edge pointing to a node decorated by \lambda. For any bound dendrite, the set of dependent dendrites consists of those of the sub-tree starting from the right edge of the \lambda node (where the bound dendrite is connected via a left edge), which are moreover not bound. Otherwise a dendrite is free. The soma of the neuron is the forgetful form of the syntactic tree.

Substitution. We are free to connect leaves of axons of neurons with dendrites of other neurons.
In order to explain the substitution we have to add the following rule:

(subst) The leaves of an axon cannot be glued to more than one bound dendrite of another neuron. If a leaf of the axon of the neuron A is connected to a bound dendrite of the neuron B, then it has to be the leftmost leaf of the axon of A. Moreover, in this case all other leaves of A which are connected with B have to be connected only via dendrites which are dependent on the mentioned bound dendrite of B, possibly by adding leaves to the axon of A using (ASSOC).

Substitution is therefore assimilated to connecting neurons.

New (beta*). The rule (beta*) takes the following graphical form. In the next figure appear only the leaf of the neuron A connecting to the \lambda node (the other leaves of the axon of A are not drawn) and only some of the dendrites depending on the bound one relative to the \lambda node which is figured.

The neuron A may have other dendrites, not figured. According to the definition of the neuron, A together with the \lambda node and adjacent edges form a bigger neuron. Also figured is another leaf of the axon of the neuron B, which may point to another neuron. Finally, in the RHS, the bound dendrite loses all dependents.

Important remark. Notice that there may be multiple outcomes from (subst) and (beta*)! Indeed, this is due to the fact that connections could be made in several ways, by using all or only part of the dependent dendrites. Because of that, it seems that this version of the calculus is richer than the previous one, but I am not at all sure if this is the case.

Note also that the “=” sign in this version of these rules is reflexive and transitive, but not symmetric. Previously the “=” sign was supposed to be symmetric.

(R1)  That is easy, we use the emergent algebra gates:

(R2)    Easy as well:

The rules (ext1), (ext2) take obvious forms.

Therefore, if we think about computation as reduction to a normal form (if any), in this graphical notation with “neurons”, computation amounts to re-wiring of neurons or changing the rewiring inside the soma of some neurons.

Variables disappeared, at the price of introducing FAN-OUT trees.

As concerns the remark previously made, we could obtain a calculus which is clearly equivalent to \lambda \varepsilon by modifying the definition of the neuron in this way: in order to clearly specify which are the dependent dendrites, we could glue to any bound dendrite a FAN-OUT tree, such that the leaves of this tree connect again with a set of dependent dendrites of the same neuron. In this way, substitution and (beta*) will amount to erasing such a FAN-OUT tree and then performing the moves, as previously explained, but using this time all the dependent dendrites which were connected to the bound dendrite by the erased tree.

Rules of lambda epsilon calculus

I updated the paper on lambda epsilon calculus.  See the link to the actual version (updated daily) or check the arxiv article, which will be updated as soon as a stable version emerges.

Here are the rules of this calculus:

(beta*)   (x \lambda A) \varepsilon B = (y \lambda (A[x:=B])) \varepsilon B for any fresh variable y,

(R1) (Reidemeister one) if x \not \in FV(A) then (x \lambda A) \varepsilon A = A

(R2) (Reidemeister two) if x \not \in FV(B) then (x \lambda ( B \mu x)) \varepsilon A = B (\varepsilon \mu ) A

(ext1) (extensionality one)  if  x \not \in FV(A) then x \lambda (A 1 x) = A

(ext2) (extensionality two) if  x \not \in FV(B) then (x \lambda B) 1 A = B

These are taken together with the usual substitution and \alpha-conversion.
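As a quick check of how the rules interact (a toy computation of mine, not from the paper), take the term (x \lambda x) 1 B, for any term B. By (beta*), (x \lambda x) 1 B = (y \lambda B) 1 B with y a fresh variable; since y \not \in FV(B), rule (ext2) gives (y \lambda B) 1 B = B. So the identity abstraction, applied with coefficient 1, acts as expected: (x \lambda x) 1 B = B.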

The relation between the operations from \lambda \varepsilon calculus and emergent algebras is illustrated in the next figure.

Lambda epsilon calculus, an attempt for a language of “computing with space”

Just posted on the arxiv the paper Lambda calculus combined with emergent algebras.   Also, I updated my page on emergent algebras and computing with space, taking into consideration what I try to do in the mentioned paper. Here are some excerpts (from the mentioned page) about what I now think “computing with space” may be:

Let’s look at some examples of spaces, like: the real world or a virtual world of a game, as seen by a fly or by a human, “abstract” mathematical spaces as manifolds, fractals, symmetric spaces, groups, linear spaces.
To know what a space is, to define it mathematically, is less interesting than to know what one can do in such a space. I try to look at spaces from a kind of “computational viewpoint”.
A model of such a computation could be the following process: Alice and Bob share a class of primitives of spaces (like a common language which Alice can use in order to communicate to Bob what she sees when exploring the unknown space). Alice explores an unknown territory and sends to Bob the operational content of the exploration (i.e. maps of what she sees and how she moves, expressed in the common language). Then Bob, who is placed in a familiar space, tries to make sense of the maps received from Alice. Usually, he can’t fit the received information into the familiar space (for example because there is a limit of accuracy of maps of the unknown territory into the familiar one). Instead, Bob tries to simulate the operational content of Alice’s exploration by interpreting the messages from Alice (remember, expressed in a language of primitives of space) in his space.
A language of space primitives could be (or contain as a part) the emergent algebras. Ideally, such a language of primitives should be described by:
A – a class of gates (operations), which represent the “map-making” relation, with an abstract scale parameter attached, maybe also with supplementary operations (like lambda abstraction?),
B – a class of variables (names) which represent generic points of a space, and a small class of terms (words) expressed in terms of variables and gates from (A) (resembling combinators in lambda calculus?). These are the “generators” of the space and they have the emergent algebras property, namely that as the scale goes to zero, uniformly with respect to the variables, the output converges to a new operation.
C – a class of rewriting rules saying that some simple assemblies of generators of space have equivalent function, or saying that relations between those simple assemblies converge to relations of the space as the scale goes to zero.

The article on lambda epsilon calculus is a first for me in this field; I would be grateful for any comments and suggestions for improvement.

Three problems and a disclaimer

In this post I want to summarize the list of problems I am currently thinking about. This is not a list of regular mathematical problems, see the disclaimer on style written at the end of the post.

Here is the list:

1. what is “computing with space“? There is something happening in the brain (of a human or of a fly) which is akin to a computation, but is not a logical computation: vision. I call this “computing with space”. In the head there are a bunch of neurons chirping one to another, that’s all. There is no euclidean geometry, there are no a priori coordinates (or other extensive properties), there are no problems to solve for them neurons, there is no homunculus and no outer space, only a dynamical network of gates (neurons and their connections). I think that a part of an answer is the idea of emergent algebras (albeit there should be something more than this).  Mathematically, a closely related problem is this: Alice is exploring an unknown space and then sends to Bob enough information so that Bob could “simulate” the space in the lab. See this, or this, or this.

Application: give the smallest hint of a purely relational  model of vision  without using any a priori knowledge of the (euclidean or other) geometry of outer space or any  pre-defined charting of the visual system (don’t give names to neurons, don’t give them “tasks”, they are not engineers).

2. non-commutative Baker-Campbell-Hausdorff formula. From the solution of Hilbert’s fifth problem we know that any locally compact topological group without small subgroups can be endowed with the structure of an “infinitesimally commutative” normed group with dilations. This is true because one parameter sub-groups and Gleason metrics are used to solve the problem.  The BCH formula solves then another problem: from the infinitesimal structure of a (Lie) group (that is the vector space structure of the tangent space at the identity and the manifold structure of the Lie group) and from supplementary infinitesimal data (that is the Lie bracket), construct the group operation.

The problem of the non-commutative BCH is the following: suppose you are in a normed group with dilations. Then construct the group operation from the infinitesimal data (the conical group structure of the tangent space at identity and the dilation structure) and supplementary data (the halfbracket).

The classical BCH formula corresponds to the choice of the dilation structure coming from the manifold structure of the Lie group.
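For reference (a standard fact, not specific to this post), the first terms of the classical formula are

\log ( \exp X \exp Y) = X + Y + \frac{1}{2} [X,Y] + \frac{1}{12} [X,[X,Y]] + \frac{1}{12} [Y,[Y,X]] + \ldots

and the non-commutative problem asks for an analogous reconstruction in which the linear structure is replaced by the conical one and the Lie bracket by the halfbracket.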

In the case of a Carnot group (or a conical group), the non-commutative BCH formula should be trivial (i.e. x y = x \cdot y, the equivalent of xy = x+y in the case of a commutative Lie group, where by convention we neglect all “exp” and “log” in formulae).

3. give a notion of curvature which is meaningful for sub-riemannian spaces. I propose the pair curvdimension – curvature of a metric profile. There is a connection with problem 1: there is a link between the curvature of the metric profile and the “emergent Reidemeister 3 move” explained in section 6 of the computing with space paper. Indeed, at page 36 there is this figure. Yes, R^{x}_{\epsilon \mu \lambda} (u,v) w is a curvature!

Disclaimer on style. I am not a problem solver, in the sense that I don’t usually like to find the solution of an already formulated problem. Instead, what I do like to do is to understand some phenomenon and prove something about it in the simplest way possible.  When thinking about a subject, I like to polish the partial understanding I have by renouncing the use of any “impure” tools, that is any (mathematical) fact which is foreign to the subject. I know that this is not the usual way of doing the job, but sometimes less is more.

Curvdimension and curvature of a metric profile III

I continue from the previous post “Curvdimension and curvature of a metric profile II“.

Let’s see what is happening for (X,g), a sufficiently smooth (C^{4} for example),  complete, connected  riemannian manifold.  The letter “g” denotes the metric (scalar product on the tangent space) and the letter “d” will denote the riemannian distance, that is for any two points x,y \in X the distance  d(x,y) between them is the infimum of the length of absolutely continuous curves which start from x and end in y. The length of curves is computed with the help of the metric g.

Notations.   In this example X is a differential manifold, therefore it has tangent spaces at every point, in the differential geometric sense. Further on, when I write “tangent space” it will mean tangent space in this sense. Otherwise I shall write “metric tangent space” for the metric notion of tangent space.

Let u,v be vectors in the tangent space at x \in X. When the basepoint x is fixed by the context then I may omit it from the various notations. For example \|u\| means the norm of the vector u with respect to the scalar product g_{x} on the tangent space T_{x} X at the point x. Likewise, \langle u,v \rangle may be used instead of g_{x}(u,v);  the riemannian curvature tensor at x may be denoted by R and not by R_{x}, and so on …

Remark 2. The smoothness of the riemannian manifold (X,g) should be just enough such that the curvature tensor is C^{1} and such that for any compact subset C \subset X of X, possibly by rescaling g, the geodesic exponential exp_{x} u makes sense (exists and it is uniquely defined) for any x \in C and for any  u \in T_{x} X with \|u\| \leq 2.

Let us fix such a compact set C and let’s take a  point x \in C.

Definition 5. For any \varepsilon \in (0,1) we define on the closed ball of radius 1 centered at x (with respect to the distance d) the following distance: for any u,v \in T_{x} X with \|u\| \leq 1, \| v\| \leq 1

d^{x}_{\varepsilon} (exp_{x} \, u, exp_{x} \, v) \, = \, \frac{1}{\varepsilon} d(exp_{x} \, \varepsilon u, exp_{x} \, \varepsilon v).

(The notation used here is in line with the one used in  dilation structures.)

Recall that the sectional curvature K_{x}(u,v) is defined for any pair of vectors   u,v \in T_{x} X which are linearly independent (i.e. non collinear).

Proposition 1. Let M > 0 be greater than or equal to \mid K_{x}(u,v)\mid, for any x \in C and any non-collinear pair of vectors u,v \in T_{x} X with \|u\| \leq 1, \| v\| \leq 1.  Then for any \varepsilon \in (0,1), any x \in C and any u,v \in T_{x} X with \|u\| \leq 1, \| v\| \leq 1 we have

\mid d^{x}_{\varepsilon} (exp_{x} \, u, exp_{x} v) - \|u-v\|_{x} \mid \leq \frac{1}{3} M \varepsilon^{2} \|u-v\|_{x} \|u\|_{x} \|v\|_{x} + \varepsilon^{2} \|u-v\|_{x} O(\varepsilon).

Corollary 1. For any x \in X the metric space (X,d) has a metric tangent space at x, which is the isometry class of the unit ball in T_{x}X with the distance d^{x}_{0}(u,v) = \|u - v\|_{x}.

Corollary 2. If the sectional curvature at x \in X is non trivial then the metric profile at x has curvdimension 2 and moreover

d_{GH}(P^{m}(\varepsilon, [X,d,x]), P^{m}(0, [X,d,x])) \leq \frac{2}{3} M \varepsilon^{2} + \varepsilon^{2} O(\varepsilon).

Proofs next time, but you may figure them out by looking at section 1.3 of these notes on comparison geometry, available from the page of Vitali Kapovitch.
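In the meantime, here is a quick numerical sanity check on the unit sphere (sectional curvature M = 1), as a throwaway Python sketch; exp_np and dist are my own hypothetical helpers for the geodesic exponential at the north pole and the great circle distance. The defect \|u-v\| - d^{x}_{\varepsilon}(exp_{x} \, u, exp_{x} \, v), divided by \varepsilon^{2}, should stabilize as \varepsilon goes to 0, in agreement with Proposition 1 (and with curvdimension 2):

import numpy as np

def exp_np(u):
    # geodesic exponential at the north pole x = (0,0,1) of the unit sphere
    x = np.array([0.0, 0.0, 1.0])
    n = np.linalg.norm(u)
    if n == 0.0:
        return x
    return np.cos(n) * x + np.sin(n) * np.array([u[0], u[1], 0.0]) / n

def dist(p, q):
    # riemannian (great circle) distance on the unit sphere
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

u, v = np.array([0.7, 0.1]), np.array([-0.2, 0.5])
flat = np.linalg.norm(u - v)
for eps in [0.1, 0.05, 0.025, 0.0125]:
    d_eps = dist(exp_np(eps * u), exp_np(eps * v)) / eps
    print(eps, (flat - d_eps) / eps**2)   # the ratio stabilizes as eps -> 0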

Curvdimension and curvature of a metric profile, II

This continues the previous post Curvdimension and curvature of a metric profile, I.

Definition 3. (flat space) A locally compact metric space (X,d) is locally flat around x \in X if there exists a > 0 such that for any \varepsilon, \mu \in (0,a] we have P^{m}(\varepsilon , [X,d,x]) = P^{m}(\mu , [X,d,x]). A locally compact metric space is flat if the metric profile at any point is eventually constant.

Carnot groups  and, more generally, normed conical groups are flat.

Question 1. Metric tangent spaces are locally flat around their basepoint, but are they locally flat around every point? I don’t think so, but I don’t have an example.

Definition 4. Let (X,d) be a  locally compact metric space and x \in X a point where the metric space admits a metric tangent space. The curvdimension of (X,d) at x is curvdim \, (X,d,x) = \sup M, where  M \subset [0,+\infty) is the set of all \alpha \geq 0 such that

\lim_{\varepsilon \rightarrow 0} \frac{1}{\varepsilon^{\alpha}} d_{GH}(P^{m}(\varepsilon , [X,d,x]) , P^{m}( 0 , [X,d,x])) = 0

Remark that the set M always contains 0. Also, according to this definition, if the space is locally flat around x then the curvdimension at x is + \infty.

Question 2. Is there any metric space with infinite curvdimension at a point where the space is not locally flat? (Most likely the answer is “yes”; a possible example would be the surface of revolution obtained from the graph of an infinitely differentiable function f such that f(0) = 0 and all derivatives of f at 0 are equal to 0. This surface is taken with the distance induced from the 3-dimensional euclidean space, but maybe I am wrong.)

We are going to see next that the curvdimension of a sufficiently smooth riemannian manifold  at any of its points where the sectional curvature is not trivial is equal to 2.

Curvdimension and curvature of a metric profile, part I

In the notes Sub-riemannian geometry from intrinsic viewpoint I propose two notions related to the curvature of a metric space at one of its points: the curvdimension and the curvature of a metric profile. In this post I would like to explain in detail what this is about, as well as make a number of comments and suggestions which are not in the actual version of the notes.

These notions stem from rather vague proposals first made in the earlier papers Curvature of sub-riemannian spaces and Sub-riemannian geometry and Lie groups II.

I shall start with the definition of the metric profile associated to a point x \in X of a locally compact metric space (X,d).  We need first a short preparation.

Let CMS be the collection of isometry classes of pointed compact metric spaces. An element of CMS is denoted like [X,d,x] and is the equivalence class of a compact metric space (X,d), with a specified point x \in X, with respect to the equivalence relation: two pointed compact metric spaces (X,d,x), (Y,D,y) are equivalent if there is a surjective isometry f: (X,d) \rightarrow (Y,D) such that f(x) = y.

The space CMS is a metric space when endowed with the Gromov-Hausdorff distance between (isometry classes of) pointed compact metric spaces.

Definition 1.  Let (X,d) be a locally compact metric space. The metric profile of (X,d) at x is the function which associates to \varepsilon > 0 the element of CMS defined by

P^{m}(\varepsilon, x) = \left[\bar{B}(x,1), \frac{1}{\varepsilon} d, x\right]

(defined for small enough \varepsilon, so that the closed metric ball \bar{B}(x,\varepsilon) with respect to the distance d, which is the unit ball for the rescaled distance \frac{1}{\varepsilon} d appearing in the formula, is compact).

Remark 1. See the previous post Example: Gromov-Hausdorff distance and the Heisenberg group, part II , where the behaviour of the metric profile of the physicists Heisenberg group is discussed.

The metric profile of the space at a point is therefore a curve in another metric space, namely CMS with the Gromov-Hausdorff distance. It is not just any curve, but one which has certain properties that can be expressed with the help of the GH distance. Very intriguing: what about a dynamics induced along these curves in CMS? Strangely, nothing is known about this!

Indeed, to any element [X,d,x] of CMS we may associate the curve P^{m}(\varepsilon,x). This curve could be renamed P^{m}(\varepsilon , [X,d,x]).  Notice that P^{m}(1 , [X,d,x]) = [X,d,x].

For a fixed \varepsilon \in (0,1], take now P^{m}(\varepsilon , [X,d,x]); what is the metric profile of this element of CMS? The answer is: for any \mu \in (0,1] we have

P^{m}(\mu , P^{m}(\varepsilon , [X,d,x])) = P^{m}(\varepsilon \mu , [X,d,x])

which proves that the curves in CMS which are metric profiles are not just any curves.

Definition 2. If the metric profile P^{m}(\varepsilon ,[X,d,x]) can be extended by continuity to \varepsilon = 0, then the space (X,d) admits a metric tangent space at x \in X and the isometry class of (the unit ball in) the tangent space equals  P^{m}(0 , [X,d,x]).

You see, P^{m}(0 , [X,d,x]) cannot be just any point from CMS. It has to be the isometry class of a metric cone, namely a point of CMS which has constant metric profile.

The curvdimension and curvature explain how the metric profile curve behaves near \varepsilon = 0. This is for the next post.

Sub-riemannian geometry from intrinsic viewpoint, course notes

Here are the course notes prepared   for a course at CIMPA research school on sub-riemannian geometry (2012):

Sub-riemannian geometry from intrinsic viewpoint (27.02.2012) (14.06.2012)

I want to express my thanks for the invitation and also my excuses for not being able to attend the school (due to very bad weather conditions in this part of Europe, I had to cancel my plane travel).

Theatron as an eye

I want to understand what “computing with space” might be. By making  a parallel with the usual computation, there are three ingredients which need to be identified: what are the computing with space equivalents of

1. the universal computing gate (in usual computing this is the transistor)

2. the universal machine (in usual computing this is the Turing machine)

3. what is the universal machine doing by using its arrangement of universal computing gates (in usual computing this is the algorithm).

I think that (3) is (an abstraction of) the activity of map making, or space exploration. The result of this activity is coded by a dilation structure, but I have no idea HOW such a result is achieved. Once obtained though, a mathematical model of the space is the consequence of  a priori assumptions (that we can repeat in principle indefinitely the map making operations) which lead to the emergent algebraic and differential structure of the space.

The universal gate (1), I think, is the dilation gate, or the map-territory relation.

Today I want to pave the way to the discovery of the universal machine (2). This is related to my previous posts The Cartesian Theater: philosophy of mind versus aerography and Towards aerography, or how space is shaped to comply with the perceptions of the homunculus.

My take is that the Greek Theater, or Theatron (as opposed to the “theater in a box”, or Cartesian Theater) is a good model for an universal machine.

For today, I just want to point to the similarities between the theatron and the eye.

The following picture represents the main parts of the theatron (the ancient greek meaning of “theatron” is “place of seeing”). In black are written the names of the theatron parts and in red you see the names of the corresponding parts of the eye, according to the proposed similarity.

Let me proceed with the meaning of these words:

– Analemmata means the pedestal of a sundial (related with analemma and analemmatic sundial; basically a theatron is an analemmatic sundial, with the chorus as the gnomon). I suggest to parallel this with the choroid of the eye.

– Diazomata (diazoma means “belt”), proposed to be similar with the retina.

– Prohedria (front seating) is the privilege to sit in the first few rows at the bottom of the viewing area. Similar to the fovea (small pit), responsible for sharp central vision.

– Skene (tent), the stage building, meant to HIDE the workings of the actors which are not part of the show, as well as the masks and other materials. When a character dies, it happens behind the skene. Eventually, the skene killed the chorus and became the stage. The eye equivalent of this is the iris.

– Parodos (para – besides, counter, and ode – song), the entrance of the chorus. The eye equivalent is the crystalline lens.

– Orchestra, the ancient greek stage, is the place where the chorus acts, the center of the greek theater. Here we pass to abstraction: the eye correspondent is the visual field.

Approximate algebraic structures, emergent algebras

I updated and submitted for publication the paper “Emergent algebras“.

This is the first paper where emergent algebras appear. The subject is further developed in the paper “Braided spaces with dilations and sub-riemannian symmetric spaces“.

I strongly believe this is a very important notion, because it shows how  both the differential and algebraic realms  emerge naturally, from abstract nonsense. It is a “low tech” approach, meaning that I don’t use in the construction any “high tech” mathematical object, everything is growing from the grass.

One interesting fact, apart from the strange ideas of the paper (it is already verified that, reading the paper, algebraists will not easily appreciate the strength of the axiom concerning uniform convergence, and analysts will not care much about the occurrence of an algebraic structure very much like quandles and racks), is that an emergent algebra can also be seen as an approximate algebraic structure! But in a different sense than approximate groups.  The operations themselves are approximately associative, for example.

And my next question is: is this a really different notion of approximate algebraic structure than approximate groups? Or is there a way to see, for example, an approximate group (btw, why not an approximate symmetric space in the sense of Loos, whatever this could mean?) as an emergent algebra?

My hope is that the answer is YES.

UPDATE: No, in fact there are reasons to think that there is a complementarity: there is a mathematical object standing over both, which may be called a POSITIONAL SYSTEM. More soon, but see also this previous post of mine.

Here is the abstract of the paper:

“Inspired from research subjects in sub-riemannian geometry and metric geometry, we propose uniform idempotent right quasigroups and emergent algebras as an alternative to differentiable algebras.
Idempotent right quasigroups (irqs) are related with racks and quandles, which appear in knot theory (the axioms of a irq correspond to the first two Reidemeister moves). To any uniform idempotent right quasigroup can be associated an approximate differential calculus, with Pansu differential calculus in sub-riemannian geometry as an example.
An emergent algebra A over a uniform idempotent right quasigroup X is a collection of operations such that each operation emerges from X, meaning that it can be realized as a combination of the operations of the uniform irq X, possibly by taking limits which are uniform with respect to a set of parameters.
Two applications are considered: we prove a bijection between contractible groups and distributive uniform irqs (uniform quandles) and that some symmetric spaces in the sense of Loos may be seen as uniform quasigroups with a distributivity property. “

On the difference of two Lipschitz functions defined on a Carnot group

Motivation for this post: the paper “Lipschitz and biLipschitz Maps on Carnot Groups” by William Meyerson. I don’t get it, even after several readings of the paper.

The proof of Fact 2.10 (page 10) starts with the statement that the difference of two Lipschitz functions is Lipschitz and the difference of two Pansu differentiable functions is differentiable.

Let us see: we have a Carnot group (which I shall assume is not commutative!) G and two functions f,g: U \subset G \rightarrow G, where U is an open set in G. (We may consider instead two Carnot groups G and H (both non commutative) and two functions f,g: U \subset G \rightarrow H.)

Denote by h the difference of these functions: for any x \in U, h(x) = f(x) (g(x))^{-1}  (here the group operations and inverses are denoted multiplicatively; thus if G = \mathbb{R}^{n} then h(x) = f(x) - g(x); but I shall suppose further that we work only in groups which are NOT commutative).

1.  Suppose f and g are Lipschitz with respect to the respective  CC left invariant distances (constructed from a choice of  euclidean norms on their respective left invariant distributions).   Is the function h Lipschitz?

NO! Indeed, consider the Lipschitz functions f(x) = x, the identity function,  and g(x) = u a constant function, with u not in the center of G. Then h is a right translation, notoriously NOT Lipschitz with respect to a CC left invariant distance.

2. Suppose instead that f and g are everywhere Pansu differentiable and let us compute the Pansu “finite difference”:

(D_{\varepsilon} h )(x,u) = \delta_{\varepsilon^{-1}} ( h(x)^{-1} h(x \delta_{\varepsilon} u) )

We get that (D_{\varepsilon} h )(x,u) is the product w.r.t. the group operation of two terms: the first is the conjugation of the finite difference (D_{\varepsilon} f )(x,u) by \delta_{\varepsilon^{-1}} ( g(x) ) and the second term is the finite difference (D_{\varepsilon} g^{-1} )(x,u).  (Here Inn(u)(v) = u v u^{-1} is the conjugation of v by u in the group G.)
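Written out (my own derivation, using only that \delta_{\varepsilon^{-1}} is a group automorphism, so worth double checking):

(D_{\varepsilon} h )(x,u) = Inn \left( \delta_{\varepsilon^{-1}} (g(x)) \right) \left( (D_{\varepsilon} f )(x,u) \right) \cdot (D_{\varepsilon} g^{-1} )(x,u)

which comes from inserting g(x)^{-1} g(x) in the middle of h(x)^{-1} h(x \delta_{\varepsilon} u) = g(x) f(x)^{-1} f(x \delta_{\varepsilon} u) g(x \delta_{\varepsilon} u)^{-1} and then applying \delta_{\varepsilon^{-1}}.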

Due to the non commutativity of the group operation, there should be some miracle in order for the finite difference of h to converge, as \varepsilon goes to zero.

We may take instead the sum of two differentiable functions: is it differentiable (in the sense of Pansu)? No, except in very particular situations, because we can’t get rid of the conjugation, and the conjugation is not a Pansu differentiable function.

Non-Euclidean analysis, a bit of recent history

Being an admirer of bold geometers who discovered that there is more to geometry than euclidean geometry, I believe that the same is true for analysis. In my first published paper “The topological substratum of the derivative” (here is a scan of this hard to find paper), back in 1993, I advanced the idea that there are as many “analyses” as the possible fields of dilations, but I was not aware of Pierre Pansu’s huge paper from 1989 “Metriques de Carnot-Caratheodory et quasiisometries des espaces symmetriques de rang un” (sorry for the missing accents, I am still puzzled by the keyboard of the computer I am using to write this post), where he invents what is now called “Pansu calculus”, which is the analysis associated to a Carnot group.

The same idea is then explored in the papers “Sub-riemannian geometry and Lie groups, I“, “Tangent bundles to sub-riemannian groups“, “Sub-riemannian geometry and Lie groups II“. These papers have not been published (only put on arXiv), because at that moment I hoped that the www would change publishing quickly (I still do believe this, but now I am just a bit wiser, or forced by bureaucracy to publish or perish), so that one could communicate not only the very myopic technical, incremental result, but also the ideas behind it, the powerful meme.

During those years (2001-2005) I was in Lausanne, trying to propagate the meme around, in Europe, as I said previously. There were mixed results; people were not taking this seriously enough, for my taste. Sergey Vodopyanov had ideas which were close to mine, except that he was trying to rely on what I call “euclidean analysis”, instead of intrinsic techniques, as witnessed by his outstanding results concerning detailed proofs in low-regularity sub-riemannian geometry. (I was against such results on principle, because what is C^{1,1} but euclidean regularity? but the underlying ideas were very close indeed.)

In a very naive way I tried to propagate the meme further, by asking for a visit at IHES, in 2004, when I had the pleasure to meet Pierre Pansu and Andre Bellaiche, then I dared to ask for another visit immediately and submitted the project

“Non-Euclidean Analysis” start-up

which I invite you to read. (The project was rejected, for good reasons: I was already there visiting and suddenly I was asking for another, much longer visit.)

Then, from 2006 I went back to basics and proposed axioms for this; that is how dilation structures appeared (even if the name and a definition containing the most difficult axiom were already proposed in the previous series of papers on sub-riemannian geometry and Lie groups). See my homepage for further details and papers (published this time).

I see now that, at least at the level of names of grant projects, the meme is starting to spread. Here is the “Sub-riemannian geometric analysis in Lie groups” GALA project and here is the more recent “Geometric measure theory in Non Euclidean spaces” GeMeThNES project.

Bipotentials, variational formulations

The paper on the use of bipotentials in variational formulations is finally submitted, also available on arxiv here. See also this presentation.

In case you wonder how this could be related to other subjects commented on in this blog, then wait to see “A gallery of emergent algebras”, where the connection between convex analysis and an emergent algebra related to a semidirect product between the semigroup of words over a symplectic space and \mathbb{R} shall be explained.

Proof mining and approximate groups, announcement

The last weeks have been very busy for personal reasons. I shall come back to writing on this blog in short time.

With Laurentiu Leustean we won (a month ago, but the project was submitted this Spring) the financing for our research project

“Proof mining in metric analysis, geometric group theory and ergodic theory”

(project PN-II-ID-PCE-2011-3-0383). Laurentiu is a specialist in proof mining with applications to geodesic spaces and ergodic theory; I am interested in emergent algebras, particularly in dilation structures, so one of our aims (in this project) is to understand why nilpotent-like structures appear in the famous Gromov theorem on groups of polynomial growth, as well as in approximate groups, by using proof mining techniques for “finitizing” emergent algebras, roughly.

This program is very close to one of the programs of Terence Tao, who continues his outstanding research on approximate groups. The following post

Ultraproducts as a bridge between hard analysis and soft analysis

made me happy because it looks like a confirmation that our dreams have (for the moment) a correspondent in reality and probably ideas like this are floating in the air.

UPDATE 25.10: Today a new post of Tao announces the submission on arxiv of the paper by him, Emmanuel Breuillard and Ben Green, “The structure of approximate groups“. I look forward to studying it, to see if they explain why nilpotent structures appear in the limit. My explanation, in a different context, “A characterization of sub-riemannian spaces…”, related also to work of Gromov, namely why in sub-riemannian geometry nilpotent groups appear as models of metric tangent spaces, is that this is a feature of an emergent algebra. See also previous posts, like Principles: randomness/structure or emergent from a common cause?.

Example: Gromov-Hausdorff distances and the Heisenberg group, PART 3

This post continues the previous one “Gromov-Hausdorff distances and the Heisenberg group, PART 2“.

We have seen that small enough balls in the physicists’ Heisenberg group G are like balls in the mathematician’s Heisenberg group H(1), and big balls in G become more and more alike (asymptotically the same as) balls in the euclidean vector space \mathbb{R}^{2}.

What is causing this?

Could it be the choice of a euclidean norm on the generating set D = \mathbb{R}^{2} \times \left\{ 1 \right\}? I don’t think so; here is why. Let us take any (vector space) norm on \mathbb{R}^{2}, instead of a euclidean one. We may repeat the whole construction and the final outcome would be: the same for small balls, while big balls become asymptotically like balls in \mathbb{R}^{2} with the chosen norm. The algebraic structure of the limits in the infinitesimally small or infinitely big is the same.

Remember that the group norm is introduced only to estimate quantitatively how the set D generates the group G, so the initial choice of the norm is a kind of gauge.

Could it be then the algebraic structure (the group operation and choice of the generating set)? Yes, but there is much flexibility here.

Instead of G = \mathbb{R}^{2} \times S^{1} with the given group operation, we may take any contact manifold structure over the set G (technically, we may take any symplectic structure over \mathbb{R}^{2} and then contactify it, with the fiber S^{1}). Sounds familiar? Yes, indeed, this is a step in the recipe of geometric quantization. (If you really want to understand what is happening, then you should go and read Souriau.)

Briefly said, put a norm on the kernel of the contact form and declare all directions in this kernel as horizontal, then repeat the construction of the sub-riemannian distance and the metric profiles. What you get is this: small balls become asymptotically like balls in the mathematician’s Heisenberg group, big balls are like balls in a normed vector space.

Therefore, it is not the algebraic structure per se which creates the phenomenon, but the “infinitesimal structure”. This will be treated in a later posting, but before this let me mention an amazing phenomenon.

We are again in the group G and we want to make a map of the small (i.e. of a small enough ball in G) into the big (that is into a ball in the vector space \mathbb{R}^{2}, which is the asymptotically big model of balls from G). Our macroscopic lab is in the asymptotically big, while the phenomenon happens in the small.

A good map is a bi-lipschitz one (it respects the “gauges”, the group norm) from a ball in the vector space \mathbb{R}^{2} to a ball in the Heisenberg group H(1). Surprise: there is no such map! The reason is subtle, basically the same reason as the one which leads to the algebraic structure of the infinitesimally small or infinitely large balls.

However, there are plenty of bi-lipschitz maps from a curve in the ball from the lab (a one dimensional submanifold of the symplectic \mathbb{R}^{2}; these are the lagrangian submanifolds in this case) to the small ball where the phenomenon happens. This is like: you can measure the position, or the momentum, but not both…

If there are no good bi-lipschitz maps, then there are surely quasi-isometric maps. Their accuracy is bounded by the Gromov-Hausdorff distance between big balls and small balls, as explained in the pedagogical Maps of metric spaces.

Example: Gromov-Hausdorff distances and the Heisenberg group, PART 2

As the title shows, this post continues the previous one

Gromov-Hausdorff distances and the Heisenberg group, PART 1

The Heisenberg group G is seen from the point of view of the generating set D. Quantitatively, the group norm “measures how” D generates G. The group norm has the following properties:

  • \| X \| = 0 if and only if X = E = (0,1), the neutral element of G. In general \| X\| \geq 0 for any X \in G.
  • \| X \cdot Y \| \leq \|X\| + \|Y\|, for any X,Y \in G (that is a consequence of the fact that if we want to go from E to X \cdot Y by using horizontal increments, then we may go first from E to X, then from X to X \cdot Y, by using horizontal strings).
  • \| X^{-1} \| = \| X \| for any X \in G (consequence of X \in D implies X^{-1} \in D).

From (group) norms we obtain distances: by definition, the distance between X and Y is

d(X,Y) = \| X^{-1} \cdot Y \|

This is the sub-riemannian distance mentioned at the end of the previous post.

The definition of this distance does not say much about its properties. We may use a reasoning similar to the one in (finite dimensional) normed vector spaces in order to prove that any two group norms are equivalent. In our case, the result is the following:

there are strictly positive constants a, c, C such that for any
X \in G (which has the form X = (x, e^{2\pi i z})) with \| X \| \leq a we have

c ( x_{1}^{2} + x_{2}^{2} + \mid z \mid) \leq \|X\|^{2} \leq C ( x_{1}^{2} + x_{2}^{2} + \mid z \mid).

We may take a = 1/3, for example.

For “big” norms, we have another estimate, coming from the fact that the S^{1} part of the semidirect product is compact, thus bounded:

there is a strictly positive constant A such that for any X \in G (which has the form X = (x, e^{2\pi i z})) we have

\| x\| \leq \|X \| \leq \|x\| + A

Let us look now at the ball B(R) = \left\{ X \in G \mbox{ : } \|X\| \leq R \right\} endowed with the rescaled distance

d_{R} (X,Y) = \frac{1}{R} d(X,Y)

Denote by Profile(R) = [B(R), d_{R}] the isometry class (the class of metric spaces isometric with … ) of (B(R), d_{R}). This is called a “metric profile”, see Introduction to metric spaces with dilations, section 2.3, for example.

The function which associates to R > 0 the Profile(R) can be seen as a curve in the Gromov space of (isometry classes of) compact metric spaces, endowed with the Gromov-Hausdorff distance.

This curve, parameterized by R, roams in this huge abstract space. I want to see what happens when R goes to zero or infinity. The interpretation is the following: when R is small (or large, respectively), what do the small (or large) balls look like?

Based on the previous estimates, we can answer this question.

When R goes to infinity, the profile Profile(R) becomes the one of the unit ball in \mathbb{R}^{2} with the euclidean norm. Indeed, this is easy, because of the second estimate, which implies that for any X = (R x, e^{2 \pi i z}) and Y = (R y, e^{2 \pi i u}) which belong to B(R) (thus \|x\|, \|y\| \leq 1) we have:

d_{euclidean}(x, y) \leq d_{R}(X,Y) \leq d_{euclidean}(x, y) + \frac{A}{R}.

Therefore, as R goes to infinity, we get the isometry result.

On the other side, if R is small enough (for example smaller than or equal to 1/3), then Profile(R) becomes stationary!

Indeed, let me introduce a second Heisenberg group, baptized H(1) = \mathbb{R}^{2} \times \mathbb{R}, with the group operation

(x, z) \cdot (y, u) = (x+ y, z + u + \frac{1}{2}\omega(x,y))

Remark that the function (x, e^{2 \pi i z}) \mapsto (x,z) is a group morphism (in fact a local group isomorphism), for z small enough! That means locally the groups G and H(1) are isomorphic. If you don’t know what a local group is then see the post Notes on local groups by Terence Tao.

By exactly the same procedure, we may put a group norm on H(1).

OK, so small balls in G are isometric with small balls in H(1). What about the rescaling with \frac{1}{R}? Well, it turns out that the group H(1) is self-similar; moreover, it is a conical group (see for example section 6 from the paper Braided spaces with dilations… and check also the references, for the notion of conical group). Conical means that the group has a one parameter family of self-similarities: for any R > 0 the function

\delta_{R} (x,z) = (R x, R^{2} z)

is an automorphism of H(1) and moreover:

\| \delta_{R} (x,z) \| = R \| (x,z)\| for any (x,z) \in H(1).

As a consequence, all balls in H(1) look alike (i.e. the metric profile of the group H(1) is stationary, a hallmark of null curvature…). More precisely, for any R > 0 and any X,Y \in H(1), if we denote by d the distance in H(1) induced by the group norm, we have:

d_{R}( \delta_{R} X, \delta_{R} Y) = d(X,Y).
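For the skeptical, the automorphism property can be double-checked numerically (a throwaway Python sketch of mine; omega, op and delta are hypothetical names; the norm scaling itself would need the actual sub-riemannian norm, which is harder to compute):

import numpy as np

def omega(x, y):
    # the area (symplectic) form on R^2
    return x[0] * y[1] - x[1] * y[0]

def op(X, Y):
    # group operation of H(1)
    (x, z), (y, u) = X, Y
    return (x + y, z + u + 0.5 * omega(x, y))

def delta(R, X):
    # the dilation delta_R of H(1)
    x, z = X
    return (R * x, R**2 * z)

rng = np.random.default_rng(0)
X = (rng.normal(size=2), rng.normal())
Y = (rng.normal(size=2), rng.normal())
R = 3.7
lhs = delta(R, op(X, Y))            # delta_R (X . Y)
rhs = op(delta(R, X), delta(R, Y))  # (delta_R X) . (delta_R Y)
assert np.allclose(lhs[0], rhs[0]) and np.isclose(lhs[1], rhs[1])

The reason it works is simply that \omega(R x, R y) = R^{2} \omega(x,y).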

Conclusion for this part: small balls in G look like balls in the Heisenberg group H(1). Asymptotically, as R goes to infinity, balls of radius R in the group G look more and more like balls in the euclidean space \mathbb{R}^{2} (notice that this space is self-similar as well: all balls are isometric, with distances properly rescaled).

Example: Gromov-Hausdorff distances and the Heisenberg group, PART 1

This post continues the previous one “Quantum physics and the Gromov-Hausdorff distance“.

Let me take an example. We are in the following Heisenberg group (this is really a physicists’ Heisenberg group): the semidirect product G = \mathbb{R}^{2} \times S^{1}. Elements of the group have the form

X = (x, e^{2\pi iz}) with x \in \mathbb{R}^{2} and z \in \mathbb{R} / \mathbb{Z}

(think that z \in [0,1] with the identification 0 = 1).
The group operation is given by:

X \cdot Y = (x,e^{2\pi i z}) \cdot (y, e^{2 \pi i u}) = (x+y, e^{2 \pi i (u+z+ \frac{1}{2} \omega(x,y))})

where \omega: \mathbb{R}^{2} \times \mathbb{R}^{2} \rightarrow \mathbb{R} is the “symplectic form” (or area form)

\omega(x,y) = x_{1} y_{2} - x_{2} y_{1}

Remark 1. The pair (\mathbb{R}^{2}, \omega) is a symplectic (linear) space. As we well know, the Hamiltonian mechanics lives in such spaces, therefore we may think about (\mathbb{R}^{2}, \omega) as being a “classical”, or “large scale” (see further the justification, in PART 2) phase space of a mechanical system with one degree of freedom.
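If you want to play with this group, here is a small Python sketch (my own, with hypothetical names omega and op): the operation is associative because \omega is bilinear, but it is not commutative, since the z parts of X \cdot Y and Y \cdot X differ by \omega(x,y) mod 1.

import numpy as np

def omega(x, y):
    # the symplectic / area form on R^2
    return x[0] * y[1] - x[1] * y[0]

def op(X, Y):
    # group operation of G; the S^1 coordinate is the real z taken mod 1
    (x, z), (y, u) = X, Y
    return (x + y, (z + u + 0.5 * omega(x, y)) % 1.0)

X = (np.array([1.0, 0.0]), 0.0)
Y = (np.array([0.0, 0.6]), 0.0)
Z = (np.array([0.3, -0.2]), 0.1)
a, b = op(op(X, Y), Z), op(X, op(Y, Z))
assert np.allclose(a[0], b[0]) and np.isclose(a[1], b[1])   # associativity
print(op(X, Y)[1], op(Y, X)[1])   # 0.3 versus 0.7: G is not commutative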

The group G is generated by the subset D = \mathbb{R}^{2} \times \left\{ 1 \right\}, more precisely any element of G can be expressed as a product of four elements of D (in a non-unique way).

What is the geometry of G, seen as generated by D? In order to understand this, we may put a euclidean norm on D (identified with \mathbb{R}^{2}):

\| (x, 1) \| = \| x\|, where \|x\|^{2} = x_{1}^{2} + x_{2}^{2} for example.

Then we define “horizontal strings” and their “length”: a string w = X_{1} ... X_{n} of elements of G is horizontal if for any two successive elements of the string, say X_{i}, X_{i+1} we have

X_{i}^{-1} \cdot X_{i+1} \in D, where X^{-1} denotes the inverse of X with respect to the group operation. Also, we have to ask that X_{1} \in D.

The length of the horizontal string w = X_{1} ... X_{n} is defined as:

l(w) = \|X_{1}\| + \| X_{1}^{-1} \cdot X_{2}\| + .... + \|X_{n-1}^{-1} \cdot X_{n}\|. The source of the string w is the neutral element s(w) = E = (0,1) and the target of the string is t(w) = X_{1}\cdot ... \cdot X_{n}.

OK, then let us define the “group norm” of an element of G, which is an extension of the norm defined on D. A formula for this would be:

\| X\| = \, inf \left\{ l(w) \mbox{ : } t(w) = X \right\}.

Small technicality: it is not clear to me if this definition is really good as it is, but we may improve it by the following procedure coming from the definition of the Hausdorff measure. Let us introduce the “finesse” of a horizontal string, given by

fin(w) = \max \left\{ \|X_{1}\| , \| X_{1}^{-1} \cdot X_{2}\| , ... , \|X_{n-1}^{-1} \cdot X_{n}\| \right\}

and then define, for any \varepsilon > 0, the quantity:

\| X\|_{\varepsilon} = \, inf \left\{ l(w) \mbox{ : } t(w) = X \mbox{ and } fin(w) < \varepsilon \right\}.

The correct definition of the group norm is then

\| X\| = \, sup \left\{\| X\|_{\varepsilon} \mbox{ : } \varepsilon > 0 \right\}.

In words, that means: for a given “scale” \varepsilon > 0, take discrete paths from E to X, made of “small” (norm smaller than \varepsilon) horizontal increments, and then take the infimum of the length of such paths. You get \| X\|_{\varepsilon}. Let \varepsilon go to 0 and you get the norm \| X\|.

Up to some normalization, the bigger the norm of an element of G, the bigger the infimal length of a horizontal string which expresses it; therefore the group norm gives a quantitative estimate of how the group element is generated.

In disguise, this norm is nothing but a sub-riemannian distance!
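As a toy illustration of how D generates G (a Python sketch of mine, with hypothetical helpers), take the horizontal string which goes around a square of side 1/2: its target is the purely “vertical” element (0, e^{\pi i /2}), whose \mathbb{R}^{2} part is zero, yet it is reached with length l(w) = 2 and finesse 1/2.

import numpy as np

def omega(x, y):
    return x[0] * y[1] - x[1] * y[0]

def op(X, Y):
    # group operation of G, with the S^1 coordinate as z mod 1
    (x, z), (y, u) = X, Y
    return (x + y, (z + u + 0.5 * omega(x, y)) % 1.0)

def inv(X):
    # group inverse in G
    x, z = X
    return (-x, (-z) % 1.0)

def length_and_finesse(w):
    # l(w) and fin(w) of a horizontal string w = [X_1, ..., X_n]
    incs = [w[0]] + [op(inv(w[i]), w[i + 1]) for i in range(len(w) - 1)]
    norms = [np.linalg.norm(x) for x, _ in incs]   # the increments lie in D
    return sum(norms), max(norms)

s = 0.5
e1, e2 = np.array([s, 0.0]), np.array([0.0, s])
w = [(e1, 0.0)]
for step in (e2, -e1, -e2):
    w.append(op(w[-1], (step, 0.0)))
print(length_and_finesse(w))   # (2.0, 0.5)
print(w[-1])                   # x part is (0,0), z part is 0.25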

Quantum physics and the Gromov-Hausdorff distance: more than discrete or continuous

Faced with the question “is reality digital or analog?”, my first reaction was “how can one be so unimaginative?”. Actually, I know that there is at least another possibility, thanks to some mathematical results, like Pansu’s Rademacher theorem for Carnot groups and the Gromov-Hausdorff distance.
I saw this question announced as the subject of a FQXI essay contest and I decided to give a try to explain a bit what I have in mind. It was interesting to do this. I think it was also a bit disconcerting, because it feels a lot like social interacting during the participation in a MMORPG.

Anyway, my contribution was this (see arXiv link also)

More than discrete or continuous: a bird’s view

The idea is that any experiment is like making a map of a part of reality (say at the quantum scale, for example) into another (the results, as we read them in the lab). Estimating distances is proposed as a good analogy of a measurement. Or, it may be possible that there are a priori limits on the precision (accuracy, etc) of the map (any map obtained by exploiting a physical process) due to the fact that at different scales the space is different, as metric spaces can be.

Up to some normalization, the Planck length may be just (proportional to) the Gromov-Hausdorff distance between the (unit ball in ) the Heisenberg group (as a Carnot group) and the euclidean phase space of the same system.

Reality (whatever that means, say the “real” phase space of a system) may be discrete, continuous, both or none, but at different scales, up to a (small) error, the Heisenberg group or the euclidean space, respectively, may be close to it (to the small part of reality explored). Then, as a consequence of Pansu’s Rademacher theorem, it follows that one cannot make a perfect (read “bi-lipschitz”) map of one part of reality into the other; instead, there is a lower limit on the accuracy of any such map, quantitatively proportional to a Gromov-Hausdorff distance.

In order to explain this in more detail, I took the view that anyway the differential calculus (which is the mathematical base of physics) is an abstraction of the activity of map-making, or chorography.

The “bird’s view” is a play with a quote from Plato, I let you discover it, at the end of the paper.

Combinatorics versus geometric…

… is like using roman numerals versus using a positional numeral system, like the hindu-arabic numerals we all know very well. And there is evidence that our brain is NOT using combinatorial techniques, but geometric ones; see further.

What is this post about? Well, it is about the problem of using knowledge concerning topological groups in order to study discrete approximate groups, as Tao proposes in his new course, it is about discrete finitely generated groups with polynomial growth which, as Gromov taught us, when seen from far away they become nilpotent Lie groups, and so on. Only that there is a long way towards these subjects, so please bear me a little bit more.

This is part of a larger project to try to understand approximate groups, as well as normed groups with dilations, in a more geometric way. One point of interest is understanding the solution to the Hilbert’s fifth problem from a more general perspective, and this passes by NOT using combinatorial techniques from the start, even if they are one of the most beautiful mathematical gems which is the solution given by Gleason-Montgomery-Zippin to the problem.

What is combinatorial about this solution? Well, it reduces (in a brilliant way) the problem to counting, by using objects which are readily at hand in any topological group, namely the one-parameter subgroups. There is nothing wrong about this, only that, from this point of view, Gromov’s theorem on groups with polynomial growth appears as magical. Where is this nilpotent structure coming from?

As written in a previous post, Hilbert’s fifth problem without one-parameter subgroups, Gromov’s theorem is saying a profound geometrical thing about a finitely generated group with polynomial growth: seen from far away this group is self-similar, that is a conical group, or a contractible group w.r.t. any of its dilations. That is all! The rest is just a result of Siebert. This structure is deeply hidden in the proof and one of my purposes is to understand where it comes from. A way of NOT understanding this is to use huge chunks of mathematical candy in order to make this structure appear by magic.

I cannot claim that I understand this, that I have a complete solution, but instead, for this post, I looked for an analogy and I think I have found one.

It is the one from the first lines of the post.

Indeed, what is wrong, by analogy, with the roman numeral system? Nothing, actually: we have generators, like I, V, X, L, C, D, M, and relations, like IIII = IV, and so on (yes, they are not generators and relations exactly in the group sense). The problems appear when we want to do complex operations, like the addition of large numbers. Then we have to be really clever and use very intelligent and profound combinatorial arguments in order to efficiently manage all the cancellations and whatnot coming from the massive use of relations. Relations are at very small scale; we have to bridge the way towards large scales, therefore we have to approximate the errors by counting in different ways and to be very clever about these ways.

Another solution for this, historically preferred, was to use a positional number system, which is more geometric, because it exploits a large scale property of natural numbers, which is that their collection is (approximately) self-similar. Indeed, take (as another kind of generators, again not in a group sense), a small set, like B={0, 1, 2, …, 8, 9} and count in base 10, which goes roughly like this: take a (big, preferably) natural number X and do the following

– initialize i = 1,

– find the smallest natural number a_{i} such that 10^{-a_{i}} X has norm smaller than 10, then pick the element k_{i} of B which minimizes the distance to 10^{-a_{i}} X,

– subtract (from the right or the left, it does not matter here because addition of natural numbers is commutative) 10^{a_{i}} k_{i} from X, and rename the result by X,

– repeat until X \in B and finish the algorithm by taking X as the last digit.

In the end (remember, I said “roughly”) represent X as a string which codes the string of pairs (a_{i}, k_{i}).

The advantage of this representation of natural numbers is that we can do, with controlled precision, the addition of big numbers, approximately. Indeed, take two very big numbers X, Y and take another number, like 10. Then for any natural n define

(X+Y) approx(n) = 10^{n} ( [X]_{n} + [Y]_{n})

where [X]_{n} is the number obtained from the string which represents X by discarding its last n digits.
If 10^{n} is small compared with X, Y, then (X+Y) approx(n) is close to the true X+Y, but the computational effort for calculating (X+Y) approx(n) is much smaller than the one for calculating X+Y.
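A small sketch of this approximate addition (function name mine; with this reading of the truncation, the error is below 2 \cdot 10^{n}):

```python
# Approximate addition of big numbers: truncate, add the short numbers,
# then rescale. The error is at most 2 * 10**n, tiny relative to X + Y.
def approx_add(X, Y, n, base=10):
    truncX = X // base ** n           # [X]_n : discard the last n digits
    truncY = Y // base ** n           # [Y]_n
    return base ** n * (truncX + truncY)

X = 200034156998123039234534530081
Y = 311340006349200600380943586878
S = approx_add(X, Y, 20)
print(S, abs(S - (X + Y)))            # small error, much less work
```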

Once we have this large scale self-similarity, we may exploit it by using the more geometrical positional numeral system instead of the Roman numeral system; that is my analogy. Notice that in this (almost correct) algorithm 10^{a} X is not understood as X+X+....+X, 10^{a} times.

Let me now explain why the positional numeral system is more geometric, by giving a neuroscience argument, besides what I wrote in this previous post: “How not to get bored, by reading Gromov and Tao” (mind the comma!).

I reproduce from the wiki page on “Subitizing and counting”:

Subitizing, coined in 1949 by E.L. Kaufman et al.[1] refers to the rapid, accurate, and confident judgments of number performed for small numbers of items. The term is derived from the Latin adjective subitus (meaning “sudden”) and captures a feeling of immediately knowing how many items lie within the visual scene, when the number of items present falls within the subitizing range.[1] Number judgments for larger set-sizes were referred to either as counting or estimating, depending on the number of elements present within the display, and the time given to observers in which to respond (i.e., estimation occurs if insufficient time is available for observers to accurately count all the items present).

The accuracy, speed, and confidence with which observers make judgments of the number of items are critically dependent on the number of elements to be enumerated. Judgments made for displays composed of around one to four items are rapid,[2] accurate[3] and confident.[4] However, as the number of items to be enumerated increases beyond this amount, judgments are made with decreasing accuracy and confidence.[1] In addition, response times rise in a dramatic fashion, with an extra 250–350 ms added for each additional item within the display beyond about four.[5]

This is a brain competence which is spatial (geometrical) in nature, as evidenced by Simultanagnosia:

Clinical evidence supporting the view that subitizing and counting may involve functionally and anatomically distinct brain areas comes from patients with simultanagnosia, one of the key components of Balint’s syndrome.[14] Patients with this disorder suffer from an inability to perceive visual scenes properly, being unable to localize objects in space, either by looking at the objects, pointing to them, or by verbally reporting their position.[14] Despite these dramatic symptoms, such patients are able to correctly recognize individual objects.[15] Crucially, people with simultanagnosia are unable to enumerate objects outside the subitizing range, either failing to count certain objects, or alternatively counting the same object several times.[16]

From the wiki description of simultanagnosia:

Simultanagnosia is a rare neurological disorder characterized by the inability of an individual to perceive more than a single object at a time. It is one of three major components of Bálint’s syndrome, an uncommon and incompletely understood variety of severe neuropsychological impairments involving space representation (visuospatial processing). The term “simultanagnosia” was first coined in 1924 by Wolpert to describe a condition where the affected individual could see individual details of a complex scene but failed to grasp the overall meaning of the image.[1]

I rest my case.

Baker-Campbell-Hausdorff polynomials and Menelaus theorem

This is a continuation of the previous post on the noncommutative BCH formula. For the “Menelaus theorem” part see this post.

Everything is related to “noncommutative techniques” for approximate groups, which hopefully will apply sometime in the future to real combinatorial problems, like Tao’s project presented here, and also to the problem of understanding curvature (in non-riemannian settings), see a hint here, and finally to the problem of higher order differential calculus in sub-riemannian geometry, see this comment on this blog.

Remark: as everything these days can be retrieved on the net, if you find in this blog something worth including in a published paper, then don’t be shy and mention this. I believe strongly in fair practices in this new age of scientific collaboration opened by the www, even if in the past too often ideas which I communicated freely were taken into published papers without attribution. Hey, I am happy to help! But unfortunately I have an ego too (not only an ergobrain, as any living creature).

For the moment we stay in a Lie group G, with the convention of taking the exponential equal to the identity, i.e. of considering that the group operation can be written in terms of Lie brackets according to the BCH formula:

x y = x + y + \frac{1}{2} [x,y] + \frac{1}{12}[x,[x,y]] - \frac{1}{12}[y,[y,x]]+...

For any \varepsilon \in (0,1] we define

x \cdot_{\varepsilon} y = \varepsilon^{-1} ((\varepsilon x) (\varepsilon y))

and we remark that x \cdot_{\varepsilon} y \rightarrow x+y uniformly with respect to x,y in a compact neighbourhood of the neutral element e=0. The BCH formula for the operation labeled with \varepsilon is the following

x \cdot_{\varepsilon} y = x + y + \frac{\varepsilon}{2} [x,y] + \frac{\varepsilon^{2}}{12}[x,[x,y]] - \frac{\varepsilon^{2}}{12}[y,[y,x]]+...

Let us define the BCH functions. We start with

BCH^{0}_{\varepsilon} (x,y) = x \cdot_{\varepsilon} y

and BCH^{0}_{0}(x,y) = \lim_{\varepsilon \rightarrow 0} BCH^{0}_{\varepsilon}(x,y) = x + y.

Define the “linearized dilation” \delta^{x}_{\varepsilon} y = x + \varepsilon (-x+y) (written like this on purpose, without using the commutativity of the “+” operation; due to the limitations of my latex skills in this environment, I shy away from putting a bar over this dilation, to emphasize that it is different from the “group dilation”, equal to x (\varepsilon(x^{-1}y))).

Consider the family of \beta > 0 such that the following expression has a uniform limit, as \varepsilon \rightarrow 0, w.r.t. x,y in a compact set:

\delta_{\varepsilon^{-\beta}}^{BCH^{0}_{0}(x,y)}  BCH^{0}_{\varepsilon}(x,y)

and remark that this family has a maximum \beta = 1. Call this maximum \alpha_{0} and define

BCH^{1}_{\varepsilon}(x,y) = \delta_{\varepsilon^{-\alpha_{0}}}^{BCH^{0}_{0}(x,y)}  BCH^{0}_{\varepsilon}(x,y)

and BCH^{1}_{0}(x,y) = \lim_{\varepsilon \rightarrow 0} BCH^{1}_{\varepsilon}(x,y).

Let us compute BCH^{1}_{0}(x,y):

BCH^{1}_{0}(x,y) = x + y + \frac{1}{2}[x,y]

and also remark that

BCH^{1}_{\varepsilon}(x,y) = x+y + \varepsilon^{-1} ( -(x+y) + (x \cdot_{\varepsilon} y)).

We recognize in the right hand side an expression which is a relative of what I have called in the previous post an “approximate bracket”, relations (2) and (3). A better name for it is a halfbracket.
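As a numeric sanity check (my sketch, not part of the argument): in a matrix group, where the operation x y = \log (\exp(x) \exp(y)) is computable, BCH^{1}_{\varepsilon} indeed approaches x + y + \frac{1}{2}[x,y]:

```python
import numpy as np
from scipy.linalg import expm, logm

# Check that BCH^1_eps -> x + y + (1/2)[x,y] as eps -> 0, for matrices.
rng = np.random.default_rng(0)
x, y = 0.3 * rng.standard_normal((2, 3, 3))

def dot_eps(x, y, eps):
    # x ._eps y = eps^{-1} ((eps x)(eps y)), the group operation via exp/log
    return logm(expm(eps * x) @ expm(eps * y)).real / eps

target = x + y + 0.5 * (x @ y - y @ x)
for eps in [0.1, 0.01, 0.001]:
    bch1 = x + y + (-(x + y) + dot_eps(x, y, eps)) / eps
    print(eps, np.linalg.norm(bch1 - target))   # error decreases like eps
```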

We may continue this recipe indefinitely. Namely, for any natural number i \geq 1 we first define the maximal number \alpha_{i} among all \beta > 0 with the property that the (uniform) limit below exists:

\lim_{\varepsilon \rightarrow 0} \delta_{\varepsilon^{-\beta}}^{BCH^{i}_{0}(x,y)}  BCH^{i}_{\varepsilon}(x,y)

Generically we shall find \alpha_{i} = 1. We define then

BCH^{i+1}_{\varepsilon}(x,y) = \delta_{\varepsilon^{-\alpha_{i}}}^{BCH^{i}_{0}(x,y)}  BCH^{i}_{\varepsilon}(x,y)

and BCH^{i+1}_{0}(x,y) = \lim_{\varepsilon \rightarrow 0} BCH^{i+1}_{\varepsilon}(x,y).

It is time to use the Menelaus theorem. Take a natural number N > 0. We may write (pretending we don’t know that all \alpha_{i} = 1, for i = 0, ..., N):

x \cdot_{\varepsilon} y = BCH^{0}_{\varepsilon}(x,y) = \delta^{BCH^{0}_{0}(x,y)}_{\varepsilon^{\alpha_{0}}} \delta^{BCH^{1}_{0}(x,y)}_{\varepsilon^{\alpha_{1}}} ... \delta^{BCH^{N}_{0}(x,y)}_{\varepsilon^{\alpha_{N}}} BCH^{N+1}_{\varepsilon}(x,y)

Let us denote \alpha_{0} + ... + \alpha_{N} = \gamma_{N} and introduce the BCH polynomial PBCH^{N}(x,y)(\mu) (the variable of the polynomial is \mu), defined by: PBCH^{N}(x,y)(\mu) is the unique element of the group with the property that for any other element z (close enough to the neutral element) we have

\delta^{BCH^{0}_{0}(x,y)}_{\mu^{\alpha_{0}}} \delta^{BCH^{1}_{0}(x,y)}_{\mu^{\alpha_{1}}} ... \delta^{BCH^{N}_{0}(x,y)}_{\mu^{\alpha_{N}}} z = \delta^{PBCH^{N}(x,y)(\mu)}_{\mu^{\gamma_{N}}} z

Such an element exists and is unique due to (Artin’s version of the) Menelaus theorem.

Remark that PBCH^{N}(x,y)(\mu) is not a true polynomial in \mu, but it is a rational function of \mu which is a polynomial up to terms of order \mu^{\gamma_{N}}. A straightforward computation shows that the BCH polynomial (up to terms of the mentioned order) is a truncation of the BCH formula up to terms containing N-1 brackets, when we take \mu =1.

It looks contorted, but written this way it works verbatim for normed groups with dilations! There are several things which are different in detail. These are:

1. the coefficients \alpha_{i} are not equal to 1, in general. Moreover, I can prove that the \alpha_{i} exist (as a maximum of numbers \beta such that …) for a sub-riemannian Lie group, that is for a Lie group endowed with a left-invariant dilation structure, by using the classical BCH formula, but I don’t think one can prove the existence of these numbers for a general group with dilations! Remark that the numbers \alpha_{i} are defined in a way similar to how the Hausdorff dimension is!

2. one has to define noncommutative polynomials, i.e. polynomials in the frame of Carnot groups (at least). This can be done, it has been sketched in a previous paper of mine, Tangent bundles to sub-riemannian groups, section 6.

UPDATE: (30.10.2011) See the post of Tao

Associativity of the Baker-Campbell-Hausdorff formula

where a (trained) eye may see the appearance of several ingredients, in the particular commutative case, of the mechanism of definition of the BCH formula.

The associativity is rephrased, in a well-known way, in proposition 2 as a commutativity of, say, left and right actions. From there, signs of commutativity (unconsciously assumed) appear: the first obvious ones are the “radial homogeneity identities”, but already at this stage a lot of familiar machinery is put in place, and what follows is more and more of the same. I can only wonder: is all this necessary? My guess is: no. Because for starters, as explained here and in previous posts, Lie algebras are of a commutative blend, like the BCH formula. And (local, well known from the beginning) groups are not.

Principles: randomness/structure or emergent from a common cause?

I think both.

Finally Terence Tao presented a sketch of his project relating Hilbert’s fifth problem and approximate groups. For me the most interesting part is his Theorem 12 and its relation with Gromov’s theorem on finitely generated groups with polynomial growth.

A bit disappointing (for me) is that he seems to choose to rely on “commutative” techniques and, not surprisingly, he is bound to get results valid only in riemannian geometry (or spaces of Alexandrov type) AND also to regard the appearance of nilpotent structures as qualitatively different from smooth structures.

For clarity, I copy-paste his two “broad principles”:

The above three families of results exemplify two broad principles (part of what I like to call “the dichotomy between structure and randomness“):

  • (Rigidity) If a group-like object exhibits a weak amount of regularity, then it (or a large portion thereof) often automatically exhibits a strong amount of regularity as well;
  • (Structure) This strong regularity manifests itself either as Lie type structure (in continuous settings) or nilpotent type structure (in discrete settings). (In some cases, “nilpotent” should be replaced by sister properties such as “abelian“, “solvable“, or “polycyclic“.)

Let me contrast with my

Principle of common cause: a uniformly continuous algebraic structure has a smooth structure because both structures can be constructed from an underlying emergent algebra (introduced here).

(from this previous post).

While his “dichotomy between structure and randomness“ principle is a very profound and mysterious one, the second part (Structure) is only an illusion created by psychological choices (I guess). Indeed, both (Lie) smooth structure and nilpotence are just “noncommutative (local) linearity”, as explained previously. Both smooth structure and “conical algebraic” structure (nilpotent in particular) stem from the same underlying dilation structure. What is most mysterious, in my opinion, be it in the “exact” or “approximate” group structure, is HOW a (variant of) dilation structure appears from non-randomness (presumably as a manifestation of Tao randomness/structure principle), i.e. HOW just a little bit of approximate self-similarity bootstraps to a dilation structure, or emergent algebra, which then leads to various flavors of “smoothness” and “nilpotence”.

Penrose’s combinatorial space-time as chora

Roger Penrose, among other extraordinary things he did, proposed an approach to combinatorial space-time by way of spin-networks. Here is a link to his amazing paper

Roger Penrose, Angular momentum: an approach to combinatorial space-time, in Quantum Theory and Beyond, ed. T. Bastin, Cambridge University Press, Cambridge, 1971.

taken from this page of John Baez.

With the new knowledge gradually constructed lately, I returned to read this classic and it just dawned on me that a strand-network (Penrose’s paper, page 19 in the pdf linked above) can be given the structure of a collection of choroi, see the post

Entering chora, the infinitesimal place

or better go to the paper “Computing with space“.

This is food for thought for me; I just felt the need to communicate this. It is not my intention to chase theories of everything; it is better to understand, as a mathematician, what is worth learning and importing from this field into what interests me.

Menelaus theorem by way of Reidemeister move 3

(Half of) the Menelaus theorem is equivalent to a theorem by Emil Artin, from his excellent book Geometric Algebra, Interscience Publishers (1957), saying that the inverse semigroup generated by dilations (in an affine space) is composed of dilations and translations. More specifically, if \varepsilon, \mu > 0 are such that \varepsilon \mu < 1, then the composition of two dilations, one of coefficient \varepsilon and the other of coefficient \mu, is a dilation of coefficient \varepsilon \mu.

Artin also contributed to braid theory, so it may be a nice idea to give a proof of Artin’s interpretation of the Menelaus theorem by using Reidemeister moves.

This post is related to previous ones, especially these three:

Noncommutative Baker-Campbell-Hausdorff formula

A difference which makes four differences, in two ways

Rigidity of algebraic structure: principle of common cause

which I shall use as references for what a normed group with dilations, conical group and associated decorations of tangle diagrams are.

Let’s start! I use a representation of dilations as decorations of an oriented tangle diagram.

For any \varepsilon > 0, dilations of coefficient \varepsilon and \varepsilon^{-1} provide two operations which give the space (say X) the structure of an idempotent right quasigroup, which is equivalent to saying that decorations of the tangle diagrams by these rules are stable under the Reidemeister moves of type I and II.
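A minimal sketch of these two operations in the simplest case, a vector space with \delta^{x}_{\varepsilon} y = x + \varepsilon (y - x) (the encoding of the moves as assertions is my choice):

```python
import numpy as np

# In a vector space, dilations of coefficients eps and 1/eps form an
# idempotent right quasigroup; the assertions below are exactly the
# stability of decorations under Reidemeister moves I and II.
def dil(x, eps, y):
    return x + eps * (y - x)

rng = np.random.default_rng(1)
x, y = rng.standard_normal((2, 2))
eps = 0.25

assert np.allclose(dil(x, eps, x), x)                    # move I: idempotence
assert np.allclose(dil(x, 1 / eps, dil(x, eps, y)), y)   # move II
assert np.allclose(dil(x, eps, dil(x, 1 / eps, y)), y)   # move II, other side
print("decorations stable under Reidemeister I and II")
```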

A particular example of a space with dilations is a normed group with dilations, where the dilations are left-invariant.

If the decorations that we make are also stable with respect to the Reidemeister move 3, then it can be proved that the space with dilations which I use has to be a conical group! What is a conical group? It is a non-commutative vector space; in particular it could be a real vector space, or the Heisenberg group, or a Carnot group, and so on. Read the previous posts about this.

Graphically, the Reidemeister move 3 is this sliding movement:

of CC' under the crossing AA'-BB' (remark also how the decorations of crossings \varepsilon and \mu switch places).
Further on I shall suppose that we use for decorations a conical group, with distance function denoted by d. Think about a real vector space with the distance given by a euclidean norm, but don’t forget that in fact we don’t need to be so restrictive.

Take now two strings and twist them one around the other, an infinity of times, then pass a third string under the first two, then decorate everything as in the following figure

We can slide the red string (the one which is under) twice, by using the Reidemeister move 3. The decorations do not change. If you want to see the expression of z' as a function of x,y, then we easily write that

z' = \delta^{x}_{\varepsilon} \delta^{y}_{\mu} z = \delta^{x_{1}}_{\varepsilon} \delta^{y_{1}}_{\mu} z

where x_{1}, y_{1} are obtained from x,y according to the rules of decorations.

We may repeat the double slide movement n times and we get that

z' = \delta^{x_{n}}_{\varepsilon} \delta^{y_{n}}_{\mu} z

If we prove that the sequences x_{n}, y_{n} both converge to some point w, then by passing to the limit in the previous equality we would get that

z' = \delta^{w}_{\varepsilon} \delta^{w}_{\mu} z = \delta^{w}_{\varepsilon \mu} z

which is the conclusion of Artin’s result! Otherwise said, if we slide the red string under the whole twisted pair of strings, then the outcome is the same as passing under only one string, decorated with w, with the crossing decorated with \varepsilon \mu.

The only thing left is to prove that indeed the sequences converge. But this is easy: we prove that the recurrence relation between x_{n+1}, y_{n+1} and x_{n},y_{n} is a contraction. See the figure:

Well, that is all.
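As a numeric sanity check, here is the commutative (vector space) case, where the convergence of the sequences is mirrored by the fixed-point iteration of the composed contraction (a sketch under these assumptions, not the braid proof itself):

```python
import numpy as np

# Artin's statement, numerically: the composition of dilations of
# coefficients eps and mu (eps*mu < 1) is a dilation of coefficient eps*mu,
# based at the unique fixed point w of the composed (contraction) map.
def dil(x, eps, y):
    return x + eps * (y - x)

rng = np.random.default_rng(2)
x, y, z = rng.standard_normal((3, 2))
eps, mu = 0.5, 0.7

F = lambda p: dil(x, eps, dil(y, mu, p))   # contraction of factor eps*mu

w = z.copy()
for _ in range(200):                       # iterate to the fixed point w
    w = F(w)

assert np.allclose(F(z), dil(w, eps * mu, z))  # composition = one dilation
print("w =", w)
```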

UPDATE: I was in fact motivated to draw the figures and explain all this after seeing this very nice post of Tao, where an elementary proof of a famous result is given, by using “elementary” graphical means.

Towards aerography, or how space is shaped to comply with the perceptions of the homunculus

In the previous post

The Cartesian Theater: philosophy of mind versus aerography

I explained why the Cartesian Theater does not properly describe the appearance of the homunculus.

A “Cartesian theater”, Dennett proposes, is any theory about what happens in one’s mind which can be reduced to the model of a “tiny theater in the brain where a homunculus … performs the task of observing all the sensory data projected on a screen at a particular instant, making the decisions and sending out commands.”

This leads to infinite regression, therefore any such theory is flawed. One has to avoid the appearance of the homunculus in one’s theory, as a consequence.

The homunculus itself may appear from apparently innocuous assumptions, such as the introduction of any limen (or threshold), like supposing that (from Consciousness Explained (1991), p. 107)

“…there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of “presentation” in experience because what happens there is what you are conscious of.”

As a consequence, such assumptions are flawed. There is no limen, no boundary inside the brain (strangely, an assumption which supposes a boundary separating the individual from the environment disturbs nobody, except Varela, Maturana, or the second order cybernetics).

In the previous post I argued, based on my understanding of the excellent paper of Kenneth R Olwig

“All that is landscape is melted into air: the `aerography’ of ethereal space”, Environment and Planning D: Society and Space 2011, volume 29, pages 519 – 532,

that the “Cartesian theater” model is misleading because it neglects to notice that what happens on stage is as artificial as the homunculus spectator, while, at the same time, the theater itself (a theater in a box) is designed for perception.

Therefore, while everybody (?) accepts that there is no homunculus in the brain, at the same time nobody seems to be bothered that perception data are always modeled as if they come from the stage of the Cartesian theater.

For example, few would disagree that we see a 3-dimensional, euclidean world. But this is obviously not what we see, and the proof is that we can be easily tricked by stereoscopy. There are only visual data (together with other, more subtle cues: auditory, postural and whatnot) which the brain uses to reconstruct the world as seen by a homunculus, created by our illusory image that there is a boundary between us (me, you) and the environment.

You would say: nobody in their right mind denies that the world is 3d, at least our familiar everyday world, not quantum or black holes or other inventions of physicists. I don’t deny it; I just notice, as in this previous post, that space is perceived the way it is based on prior knowledge, that is because prior “controlled hallucinations” consistently led to coherent interpretations.

The idea is that in fact there are two things to avoid: one is the homunculus and the other one is the scenic space.

The “scenic space” is itself a model of the real space (does this exist?) and it leads itself to infinite regression. We “learn space” by relating to it and modeling it in our brains. I suppose that everything (inside and outside of the brain) complies with the same physical laws, and that the rational explanation for the success of the “3d scenic space” (which is consistent with our educated perception, but also with physical phenomena in our world, at least at human scale and range) should come from this understanding that brain processes are as physical as a falling apple and as mathematical as perspective is.

How not to get bored, by reading Gromov and Tao

This is a continuation of the previous post Gromov’s ergobrain, triggered by the update of Gromov paper on July 27. It is also related to the series of posts by Tao on the Hilbert’s fifth problem.

To put you in the right frame of mind, both Gromov and Tao set the stage for upcoming, hopefully extremely interesting (or “beautiful” on Gromov’s scale: interesting, amusing, amazing, funny and beautiful) developments of their work on “ergosystems” and “approximate groups” respectively.

What can be the link between those? In my opinion, both works refer to the unexplored ground between discrete (with not so many elements) and continuous (or going to the limit with the number of elements of a discrete world).

Indeed, along with my excuses for simplifying a very rich text too much, let me start with the example of the bug on a leaf (sections 2.12, 2.13 in Gromov’s paper). I understand that the bug, as any other “ergosystem” (like one’s brain), would get bored to behave like a finite state automaton crawling on a “co-labeled graph” (in particular on a Cayley graph of a discretely generated group). The implication seems to be that an ergosystem has a different behaviour.

I can hardly refrain from copy-pasting the whole page 96 of Gromov’s paper; please use the link and read it instead, especially the part related to the downsides of Turing modeling (it is not geometrical, in a few words). I shall just paste here the end:

The two ergo-lessons one may draw from Turing models are mutually contradictory.
1. A repeated application of a simple operation(s) may lead to something unexpectedly complicated and interesting.
2. If you do not resent the command “repete” and/or are not getting bored by doing the same thing over and over again, you will spend your life in a “Turing loop” of an endless walk in a circular tunnel.

That is because the “stop-function” associated to a class of Turing machines

may grow faster than anything you can imagine, faster than anything expressible by any conceivable formula – the exponential and double exponential functions that appeared so big to you seem as tiny specks of dust compared to this monstrous stop-function. (page 95)

Have I said “Cayley graph”? This brings me to discrete groups and to the work of Tao (and Ben Green and many others). According to Tao, there is something to be learned from the solution of Hilbert’s fifth problem, to the benefit of understanding approximate groups. (I am looking forward to seeing this!) There are some things that I understood from the posts of Tao, especially that a central concept is a Gleason metric and its relations with group commutators. In previous posts (the last is this one) I argue that Gleason metrics are very unlike sub-riemannian distances. It has been unsaid, but obvious to specialists, that sub-riemannian metrics are just like distances on Cayley graphs, so as a consequence Gleason metrics are only a commutative “shadow” of what happens in a Cayley graph when looked at from afar. Moreover, in this post concerning the problem of a non-commutative Baker-Campbell-Hausdorff formula it is said that (in the more general world of groups with dilations, relevant soon in this post) the link between the Lie bracket and group commutators is shallow and due to the commutativity of the group operation in the tangent space.

So let me explain, by using Gromov’s idea of boredom, how not to get bored in a Cayley graph. Remember that I quoted a paragraph (from Gromov’s paper, previous version), stating that an ergosystem “would be bored to death” to add large numbers? Equivalently, an ergosystem would be bored to add (by using the group operation) elements of the group expressed as very long words with letters representing the generators of the group. Just by using “finite state automata” types of reasoning with the relations between generators (expressed by commutators and finitary versions of Gleason-like metrics) an ergosystem would get bored easily. What else can be done?

Suppose that we crawl in the Cayley graph of a group with polynomial growth, therefore we know (by a famous result of Gromov) that seen from afar the group is a nilpotent one, more precisely a group with the algebraic structure completely specified by its dilations. Take one such dilation, of coefficient 10^{-30} say, and (by a yet unknown “finitization” procedure) associate to it a “discrete shadow”, that is an “approximate dilation” acting on the discrete group itself. As this is a genuinely non-commutative object, probably the algorithm for defining it (by using relations between growth and commutators) would be very resource-consuming. But suppose we just have it, inferred from “looking at the forest” as an ergosystem.

What a great object that would be. Indeed, instead of getting bored by adding two group elements, the first expressed as a product of 200034156998123039234534530081 generators, the second expressed as a product of 311340006349200600380943586878 generators, we shall first reduce the elements (apply the dilation of coefficient 10^{-30}) to a couple of elements, the first expressed as a product of 2 generators, the second expressed as a product of 3 generators, then we do the addition 2+3 = 5 (and use the relations between generators), then we use the inverse dilation (which is a dilation of coefficient 10^{30}) to obtain the “approximate sum” of the two elements!

In practice, we probably have a dilation of coefficient 1/2 which could simplify the computation of products of group elements of length 2^{4} at most, for example.

But it looks like a solution to the problem of not getting bored, at least to me.

Braitenberg vehicles, enchanted looms and winnowing-fans

Braitenberg vehicles were introduced in the wonderful book (here is an excerpt which contains enough information for understanding this post):

Vehicles: Experiments in Synthetic Psychology [update: link no longer available]

by Valentino Braitenberg.

In the introduction of the book we find the following:

At times, though, in the back of my mind, while I was counting fibers in the visual ganglia of the fly or synapses in the cerebral cortex of the mouse, I felt knots untie,  distinctions dissolve, difficulties disappear, difficulties I had experienced much earlier when I still held my first naive philosophical approach to the problem of the mind.

This is not the first appearance of knots (and the related weaving craft) as a metaphor for things related to the brain. A famous paragraph by Charles Scott Sherrington compares the brain waking from sleep with an enchanted loom

 The great topmost sheet of the mass, that where hardly a light had twinkled or moved, becomes now a sparkling field of rhythmic flashing points with trains of traveling sparks hurrying hither and thither. The brain is waking and with it the mind is returning. It is as if the Milky Way entered upon some cosmic dance. Swiftly the head mass becomes an enchanted loom where millions of flashing shuttles weave a dissolving pattern, always a meaningful pattern though never an abiding one; a shifting harmony of subpatterns.

Compare with the following passage (Timaeus 52d and following) from Plato:

 …the nurse of generation [i.e. space, chora] …  presented a strange variety of appearances; and being full of powers which were neither similar nor equally balanced, was never in any part in a state of equipoise, but swaying unevenly hither and thither, was shaken by them, and by its motion again shook them; and the elements when moved were separated and carried continually, some one way, some another; as, when grain is shaken and winnowed by fans and other instruments used in the threshing of corn, the close and heavy particles are borne away and settle in one direction, and the loose and light particles in another.

The winnowing-fan (liknon) is important in the Greek mythology, it means also cradle and Plato uses this term with both meanings.

For a mathematician at least, winnowing and weaving are both metaphors of computing with braids: the fundamental group of the configuration space of the grains is the braid group and moreover the grains (trajectories) are the weft, the winnowing-fan is the warp of a loom.

All part of the reason for proposing a tangle formalism for chora and computing with space.

Back to Braitenberg vehicles. Vehicles 2, 3, 4 and arguably 5 are doing computations with space, not logical computations, by using sensors, motors and connections (that is, map-making operations). I cite from the end of the Vehicle 3 section:

But, you will say, this is ridiculous: knowledge implies a flow of information from the environment into a living being or at least into something like a living being. There was no such transmission of information here. We were just playing with sensors, motors and connections: the properties that happened to emerge may look like knowledge but really are not. We should be careful with such words. […]

Meanwhile I invite you to consider the enormous wealth of different properties that we may give Vehicle 3c by choosing various sensors and various combinations of crossed and uncrossed, excitatory and inhibitory, connections.

Gromov’s Ergobrain

Misha Gromov updated his very interesting “ergobrain” paper

Structures, Learning and Ergosystems: Chapters 1-4, 6 

Two quotes I liked: (my emphasis)

The concept of the distance between, say, two locations on Earth looks simple enough, you do not think you need a mathematician to tell you what distance is. However, if you try to explain what you think you understand so well to a computer program you will be stuck at every step.  (page 76)

Our ergosystems will have no explicit knowledge of numbers, except may be for a few small ones, say two, three and four. On the contrary, neurobrains, being physical systems, are run by numbers which is reflected in their models, such as neural networks which sequentially compose addition of numbers with functions in one variable.

An unrestricted addition is the essential feature of “physical numbers”, such as mass, energy, entropy, electric charge. For example, if you bring together \displaystyle 10^{30}  atoms, then, amazingly, their masses add up […]

Our ergosystems will lack this ability. Definitely, they would be bored to death if they had to add one number to another \displaystyle 10^{30} times. But the \displaystyle 10^{30} -addition, you may object, can be implemented by \displaystyle log_{2} 10^{30} \sim 100 additions with a use of binary bracketing; yet, the latter is a non-trivial structure in its own right that our systems, a priori, do not have. Besides, sequentially performing even 10 additions is boring. (It is unclear how Nature performs “physical addition” without being bored in the process.) (page 84)

Where is this going? I look forward to learn.

Two papers on arXiv

I put two papers on arXiv.

The paper Computing with space contains too many ideas and is too dense, therefore much of it will not be read, as I was warned repeatedly. This is the reason to do again what I did with Introduction to metric spaces with dilations, which is a slightly edited part of the paper A characterization of sub-riemannian spaces as length dilation structures. Apparently the part (Introduction to ...), the small detail, is much more read than the whole (A characterization…).

Concerning the second paper “Normed groupoids…”, it is an improvement of the older paper. Why did I not update the older paper? Because I need help; I just don’t understand where this is going (and why such a direction of research was not explored before).

Escape property of the Gleason metric and sub-riemannian distances again

The last post of Tao from his series of posts on Hilbert’s fifth problem contains interesting results which can be used for understanding the differences between Gleason distances and sub-riemannian distances or, more generally, norms on groups with dilations.

For normed groups with dilations see my previous post (where links to articles are also provided). Check my homepage for more details (finally I am online again).

There is also another post of mine on the Gleason metric (distance) and the CC (or sub-riemannian) distance, where I explain why the commutator estimate (definition 3, relation (2) from the last post of Tao) forces “commutativity”, in the sense that a sub-riemannian left invariant distance on a Lie group which has the commutator estimate must be a riemannian distance.

What about the escape property (Definition 3, relation (1) from the post of Tao)?

From his Proposition 10 we see that the escape property implies the commutator estimate, therefore a sub-riemannian left invariant distance with the escape property must be riemannian.

An explanation of this phenomenon can be deduced by using the notion of “coherent projection”, section 9 of the paper

A characterization of sub-riemannian spaces as length dilation structures constructed via coherent projections, Commun. Math. Anal. 11 (2011), No. 2, pp. 70-111

in the very particular case of sub-riemannian Lie groups (or for that matter normed groups with dilations).

Suppose we have a normed group with dilations (G, \delta) which has another left invariant dilation structure on it (in the paper this is denoted by a “\delta bar”, here I shall use the notation \alpha for this supplementary dilation structure).

There is one such dilation structure available for any Lie group (notice that I am not trying to give a proof of the H5 problem), namely for any \varepsilon > 0 (but not too big)

\alpha_{\varepsilon} g = \exp ( \varepsilon \log (g))

(maybe interesting: which famous lemma is equivalent with the fact that (G,\alpha) is a group with dilations?)
Take \delta to be a dilation structure coming from a left-invariant distribution on the group G. Then \delta commutes with \alpha and moreover

(*) \lim_{\varepsilon \rightarrow 0} \alpha_{\varepsilon}^{-1} \delta_{\varepsilon} x = Q(x)

where Q is a projection: Q(Q(x)) = Q(x) for any x \in G.

It is straightforward to check that (the left-translation of) Q (over the whole group) is a coherent projection, more precisely it is the projection on the distribution!

Exercise: take \varepsilon = 1/n and use (*) to prove that the escape property of Tao implies that Q is (locally) injective. This implies in turn that Q = id, therefore the distribution is the tangent bundle, therefore the distance is riemannian!
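A concrete check of (*) in the Heisenberg group (standard coordinates, as in the noncommutative BCH post below; here \alpha is the ordinary scalar multiplication coming from \exp = id, and \delta the group dilation — my concrete instantiation):

```python
import numpy as np

# alpha_eps = ordinary scalar multiplication, delta_eps = group dilation.
alpha = lambda eps, x: eps * x
delta = lambda eps, x: np.array([eps * x[0], eps * x[1], eps**2 * x[2]])

x = np.array([1.0, 2.0, 3.0])
for eps in [0.1, 0.01, 0.001]:
    print(alpha(1 / eps, delta(eps, x)))   # -> (1, 2, 0) = Q(x)

Q = lambda x: np.array([x[0], x[1], 0.0])
assert np.allclose(Q(Q(x)), Q(x))          # Q is a projection, not injective
# Q projects on the horizontal distribution; since it is not injective,
# the escape property (which would force Q = id) cannot hold here.
```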

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for understanding in detail the role of the escape property in the proof of Hilbert’s 5th problem.

Pros and cons of higher order Pansu derivatives

This interesting question from mathoverflow

Higher order Pansu derivative

is asked by nil (no website, no location). I shall try to explain the pros and cons of higher order derivatives in Carnot groups. As for a real answer to nil’s question, I could tell him but then …

For “Pansu derivative” see the paper: (mentioned in this previous post)

Métriques de Carnot-Carathéodory et quasiisométries des espaces symétriques de rang un, The Annals of Mathematics Second Series, Vol. 129, No. 1 (Jan., 1989), pp. 1-60

Such derivatives can be done in any metric space with dilations, or in any normed group with dilations in particular (see definition in this previous post).

Pros/cons: It would be interesting to have a higher order differential calculus with Pansu derivatives, for all the reasons which make higher derivatives interesting in more familiar situations. Three examples come to my mind: convexity, higher order differential operators and curvature.

1. Convexity pro: the positivity of the hessian of a function implies convexity. In the world of Carnot groups the most natural definition of convexity (at least that is what I think) is the following: a function f: N \rightarrow \mathbb{R}, defined on a Carnot group N with (homogeneous) dilations \displaystyle \delta_{\varepsilon}, is convex if for any x,y \in N and for any \varepsilon \in [0,1] we have

f( x \delta_{\varepsilon}(x^{-1} y)) \leq f(x) + \varepsilon (-f(x) + f(y)) .

There are conditions in terms of higher order horizontal derivatives (if the function is derivable in the classical sense) which are sufficient for the function to be convex (in the mentioned sense). Note that the positivity of the horizontal hessian is not enough! It would be nice to have a more intrinsic differential condition, which does not use classical horizontal derivatives. (A small numerical sketch of this convexity definition appears after the general con below.) Con: as in classical analysis, we can do well without second order derivatives when we study convexity. In fact convex analysis is so much fun because we can do it without the need of differentiability.

2. Differential operators Pro: Speaking about higher order horizontal derivatives, notice that the horizontal laplacian is not expressed in an intrinsic manner (i.e. as a combination of higher order Pansu derivatives). It would be interesting to have such a representation for the horizontal laplacian, at least for not having to use “coordinates” (well, these are families of horizontal vector fields which span the distribution) in order to be able to define the operator. Con: nevertheless the horizontal hessian can be defined intrinsically in a weak sense, using only the sub-riemannian distance (and the energy functional associated to it, as in the classical case). Sobolev spaces and others are a flourishing field of research, without the need to appeal to higher order Pansu derivatives. (Pro: this regards the existence of solutions in a weak sense, but to be honest, what about the regularity business?)

3. Curvature Pro: What is the curvature of a level set of a function defined on a Carnot group? Clearly higher order derivatives are needed here. Con: level sets are not even rectifiable in the Carnot world!

Besides all this, there is a general:

Con: There are not many functions, from a Carnot group to itself, which are Pansu derivable everywhere, with continuous derivative. Indeed, for most Carnot groups (excepting the Heisenberg type and the jet type) only left translations are “smooth” in this sense. So even if we could define higher order derivatives, there is not much room to apply them.

However, I think that it is possible to define derivatives of Pansu type such that there are always lots of functions derivable in this sense, and moreover it is possible to introduce higher order derivatives of Pansu type (i.e. which can be expressed with dilations).
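Returning to point 1, here is a hedged numeric sketch of that convexity definition, in the standard Heisenberg group coordinates (the same as in the noncommutative BCH post below); the two test functions are my choices:

```python
import numpy as np

# Test the inequality f(x delta_eps(x^{-1} y)) <= f(x) + eps(-f(x) + f(y))
# in the Heisenberg group N = R^3.
def op(x, y):
    return np.array([x[0] + y[0], x[1] + y[1],
                     x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0])])

def dil(eps, x):                       # homogeneous dilations of N
    return np.array([eps * x[0], eps * x[1], eps**2 * x[2]])

def seg(x, y, eps):                    # the point x delta_eps(x^{-1} y)
    return op(x, dil(eps, op(-x, y)))  # x^{-1} = -x in these coordinates

f_h = lambda x: x[0]**2 + x[1]**2      # a function of horizontal coordinates
f_v = lambda x: x[2]**2                # a function of the vertical coordinate

x, y = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
for eps in np.linspace(0.1, 0.9, 5):
    g = seg(x, y, eps)
    print(eps,
          f_h(g) <= (1 - eps) * f_h(x) + eps * f_h(y),   # True: convex
          f_v(g) <= (1 - eps) * f_v(x) + eps * f_v(y))   # False: not convex
```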

UPDATE: This should be read in conjunction with this post. Please look at Lemma 11 from the last post of Tao and also at the notations made previously in that post. Now, relation (4) contains an estimate of a kind of discretization of a second order derivative. Based on Lemma 11 and on what I explained in the linked post, relation (4) cannot hold in the sub-riemannian world, that is, there is surely no bump function \phi such that d_{\phi} is equivalent to a sub-riemannian distance (unless the metric is riemannian). In conclusion, there are no “interesting” nontrivial C^{1,1} bump functions (say quadratic-like; see in the post of Tao how he constructs his bump function by using the distance).

There must be something going wrong with the “Taylor expansion” from the end of the proof of Lemma 11, if instead of a norm with respect to a bump function we put a sub-riemannian distance. Presumably instead of “n” and “n^{2}” we have to put something else, like “n^{a}” and “n^{b}” respectively, with exponents a, b/2 < 1 which are also functions of (a kind of degree, say, of) g. Well, the exponent b will be very interesting, because it is related to some notion of curvature waiting to be discovered.

Topographica, the neural map simulator

The following speaks for itself:

 Topographica neural map simulator 

“Topographica is a software package for computational modeling of neural maps, developed by the Institute for Adaptive and Neural Computation at the University of Edinburgh and the Neural Networks Research Group at the University of Texas at Austin. The project is funded by the NIMH Human Brain Project under grant 1R01-MH66991. The goal is to help researchers understand brain function at the level of the topographic maps that make up sensory and motor systems.”

From the Introduction to the user manual:

“The cerebral cortex of mammals primarily consists of a set of brain areas organized as topographic maps (Kaas et al. 1997; Van Essen et al. 2001). These maps contain systematic two-dimensional representations of features relevant to sensory, motor, and/or associative processing, such as retinal position, sound frequency, line orientation, or sensory or motor motion direction (Blasdel 1992; Merzenich et al. 1975; Weliky et al. 1996). Understanding the development and function of topographic maps is crucial for understanding brain function, and will require integrating large-scale experimental imaging results with single-unit studies of individual neurons and their connections.”

One of the Tutorials is about the Kohonen model of self-organizing maps, mentioned in the post  Maps in the brain: fact and explanations.
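As a toy illustration of the Kohonen idea (a generic self-organizing map update rule, written by me; this is not Topographica’s API):

```python
import numpy as np

# Minimal Kohonen SOM: a 10x10 grid of units learns a topographic map
# of stimuli drawn from the unit square.
rng = np.random.default_rng(3)
grid = rng.random((10, 10, 2))                  # weight vectors of the units
coords = np.stack(np.meshgrid(np.arange(10), np.arange(10),
                              indexing="ij"), -1)

for t in range(2000):
    x = rng.random(2)                           # a stimulus
    d = np.linalg.norm(grid - x, axis=-1)
    winner = np.unravel_index(d.argmin(), d.shape)
    lr = 0.1 * (1 - t / 2000)                   # decaying learning rate
    sigma = 2.0 * (1 - t / 2000) + 0.5          # decaying neighbourhood size
    h = np.exp(-np.linalg.norm(coords - np.array(winner), axis=-1) ** 2
               / (2 * sigma**2))
    grid += lr * h[..., None] * (x - grid)      # pull neighbours toward x

print(grid[0, 0], grid[9, 9])                   # opposite corners specialize
```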

Numbers for biology, are they enough?

Very impressed by this post:

Numb or numbered?

from the blog of Stephen Curry.

Two reactions, opposite somehow, could be triggered by the parallel between physics (now a field respected by any  layman) and biology (the new challenger).

The glory of physics, as well as the industrial revolution, is a consequence of the discovery of infinitesimal calculus by the Lucasian Professor of Mathematics Isaac Newton and by the philosopher, lawyer and mathematician Gottfried Leibniz. All of this started from the extraordinary creation of a gifted generation of thinkers. We may like this or not, but it is TRUE.

The reactions:

1. Positive: yes, definitely some mathematical literacy would do a lot of good to students in the biological sciences. In fact I am shocked that apparently there is resistance to this in the field. (Yes, mathematicians can be and are arrogant when interacting with other scientists, but in most cases that means either (a) that they are bad mathematicians anyway, except when they are not, or (b) that they react to the misconceptions of the other scientists (who, by manifesting such narrowness of view, are bad scientists, except when they are not).)

2. Negative: Numeracy and preadolescent recipes (at least this is (or was) the level of mathematics knowledge in the school curriculum in the part of the world where I grew up) are not enough. Mathematics was highly developed before infinitesimal calculus, but this was not sufficient for the newtonian revolution.

To finish, Robert Hooke was of the same generation as Newton and Leibniz. So maybe biology could hurry up a bit in this respect.

Noncommutative Baker-Campbell-Hausdorff formula: the problem

I come back to a problem alluded to in a previous post, where the proof of the Baker-Campbell-Hausdorff formula from this post by Tao is characterized as “commutative”, because of the “radial homogeneity” condition in his Theorem 1, which forces commutativity.

Now I am going to try to explain this, as well as what the problem of a “noncommutative” BCH formula would be.

Take a Lie group G and identify a neighbourhood of its neutral element with a neighbourhood of the 0 element of its Lie algebra. This is standard for Carnot groups (connected, simply connected nilpotent groups which admit a one parameter family of contracting automorphisms), where the exponential is bijective, so the identification is global. The advantage of this identification is that we get rid of log’s and exp’s in formulae.

For every s > 0 define a deformation of the group operation (which is denoted multiplicatively), by the formula

(1)                s(x *_{s} y) = (sx) (sy)

Then we have x *_{s} y \rightarrow x+y as s \rightarrow 0.

Denote by [x,y] the Lie bracket of the (Lie algebra of the) group G with initial operation and likewise denote by [x,y]_{s} the Lie bracket of the operation *_{s}.

The relation between these brackets is: [x,y]_{s} = s [x,y].

From the Baker-Campbell-Hausdorff formula we get:

-x + (x *_{s} y) - y = \frac{s}{2} [x,y] + o(s),

(for reasons which will be clear later, I am not using the commutativity of addition), therefore

(2)         \frac{1}{s} ( -x + (x *_{s} y) - y ) \rightarrow \frac{1}{2} [x,y]       as        s \rightarrow 0.

Remark that (2) looks like a valid definition of the Lie bracket which is not related to the group commutator. Moreover, it is a formula where we differentiate only once, so to say. In the usual derivation of the Lie bracket from the group commutator we have to differentiate twice!
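A quick sketch of (2) in the Heisenberg group (anticipating point 4 below), where the BCH series stops at the first bracket, so the quotient equals \frac{1}{2} [x,y] for every s, not only in the limit:

```python
import numpy as np

def op(x, y):   # Heisenberg group operation, as in point 4 below
    return np.array([x[0] + y[0], x[1] + y[1],
                     x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0])])

def star(s, x, y):          # x *_s y = s^{-1}((sx)(sy)), ordinary scalars
    return op(s * x, s * y) / s

x, y = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
bracket = np.array([0.0, 0.0, x[0] * y[1] - x[1] * y[0]])   # [x,y]

for s in [0.5, 0.1, 0.01]:
    print((-x + star(s, x, y) - y) / s, 0.5 * bracket)      # always equal
```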

Let us now pass to a slightly different context: suppose G is a normed group with dilations (the norm is for simplicity, we can do without; in the case of “usual” Lie groups, taking a norm corresponds to taking a left invariant Riemannian distance on the group).

G is a normed group with dilations if

  • it is a normed group, that is there is a norm function defined on G with values in [0,+\infty), denoted by \|x\|, such that

\|x\| = 0 iff x = e (the neutral element)

\| x y \| \leq \|x\| + \|y\|

\| x^{-1} \| = \| x \|

– “balls” \left\{ x \mid \|x\| \leq r \right\} are compact in the topology induced by the distance d(x,y) = \|x^{-1} y\|,

  • and a “multiplication by positive scalars” (s,x) \in (0,\infty) \times G \mapsto sx \in G with the properties:

s(px) = (sp)x , 1x = x and sx \rightarrow e as s \rightarrow 0; also s(x^{-1}) = (sx)^{-1},

– define x *_{s} y as previously, by the formula (1) (only this time use the multiplication by positive scalars). Then

x *_{s} y \rightarrow x \cdot y      as      s \rightarrow 0

uniformly with respect to x, y in an arbitrary closed ball.

\frac{1}{s} \| sx \| \rightarrow \|x \|_{0}, uniformly with respect to x in a closed ball, and moreover \|x\|_{0} = 0 implies x = e.

Comments:

1. In truth, everything is defined in a neighbourhood of the neutral element; also, G has only to be a local group.

2. the operation x \cdot y is a (local) group operation and the function \|x\|_{0} is a norm for this operation, which is also “homogeneous”, in the sense

\|sx\|_{0} = s \|x\|_{0}.

Also we have the distributivity property s(x \cdot y) = (sx) \cdot (sy), but generally the dot operation is not commutative.

3. A Lie group with a left invariant Riemannian distance d and with the usual multiplication by scalars (after making the identification of a neighbourhood of the neutral element with a neighbourhood in the Lie algebra) is an example of a normed group with dilations, with the norm \|x\| = d(e,x).

4. Any Carnot group can be endowed with a structure of a group with dilations, by defining the multiplication by positive scalars with the help of its intrinsic dilations. Indeed, take for example a Heisenberg group G = \mathbb{R}^{3} with the operation

(x_{1}, x_{2}, x_{3}) (y_{1}, y_{2}, y_{3}) = (x_{1} + y_{1}, x_{2} + y_{2}, x_{3} + y_{3} + \frac{1}{2} (x_{1}y_{2} - x_{2} y_{1}))

multiplication by positive scalars

s (x_{1},x_{2},x_{3}) = (sx_{1}, sx_{2}, s^{2}x_{3})

and norm given by

\| (x_{1}, x_{2}, x_{3}) \|^{2} = (x_{1})^{2} + (x_{2})^{2} + \mid x_{3} \mid

Then we have X \cdot Y = XY, for any X,Y \in G and \| X\|_{0} = \|X\| for any X \in G.
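A small numeric check of this example (a sketch; the assertions only verify the stated identities at random points):

```python
import numpy as np

# The Heisenberg dilations are group automorphisms, hence X *_s Y = XY
# for every s, which is why X . Y = XY.
def op(x, y):
    return np.array([x[0] + y[0], x[1] + y[1],
                     x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0])])

def dil(s, x):
    return np.array([s * x[0], s * x[1], s**2 * x[2]])

rng = np.random.default_rng(4)
X, Y = rng.standard_normal((2, 3))
s = 0.3

assert np.allclose(dil(s, op(X, Y)), op(dil(s, X), dil(s, Y)))  # automorphism
star = dil(1 / s, op(dil(s, X), dil(s, Y)))                     # X *_s Y
assert np.allclose(star, op(X, Y))                              # = XY exactly
print("X . Y = XY in the Heisenberg group")
```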

Carnot groups are therefore just a noncommutative generalization of vector spaces, with the addition operation + replaced by a noncommutative operation!

5. There are many groups with dilations which are not Carnot groups. For example, endow any Lie group with a left invariant sub-riemannian structure and hop, this gives a normed group with dilations structure.

In such a group with dilations the “radial homogeneity” condition of Tao implies that the operation x \cdot y is commutative! (see the references given in this previous post). Indeed, this radial homogeneity is equivalent to the following assertion: for any s \in (0,1) and any x \in G

x s( x^{-1} ) = (1-s)x

which is called elsewhere the “barycentric condition”. This condition is false in any noncommutative Carnot group! What is true is the following: let, in a Carnot group, x be any solution of the equation

x s( x^{-1} ) = y

for given y \in G and s \in (0,1). Then

x = \sum_{k=0}^{\infty} (s^{k}) y ,

(so the solution is unique) where the sum is taken with respect to the group operation (noncommutative series).
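A numeric sketch of this noncommutative series in the Heisenberg group from point 4 (the infinite product is truncated; names are mine):

```python
import numpy as np

# Solve x s(x^{-1}) = y by the noncommutative geometric series
# x = y (sy) (s^2 y) ..., the product taken with the group operation.
def op(x, y):
    return np.array([x[0] + y[0], x[1] + y[1],
                     x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0])])

def dil(s, x):                        # multiplication by positive scalars
    return np.array([s * x[0], s * x[1], s**2 * x[2]])

y = np.array([1.0, 2.0, 3.0])
s = 0.5

x = np.zeros(3)                       # neutral element
for k in range(60):                   # truncated noncommutative series
    x = op(x, dil(s**k, y))

residual = op(x, dil(s, -x))          # x s(x^{-1}); here x^{-1} = -x
print(residual, y)                    # the two agree up to rounding
```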

Problem of the noncommutative BCH formula: In a normed group with dilations, express the group operation xy as a noncommutative series, by using instead of “+” the operation “\cdot” and by using a definition of the “noncommutative Lie bracket” in the same spirit as (2), that is something related to the asymptotic behaviour of the “approximate bracket”

(3)         [x,y]_{s} = (s^{-1}) ( x^{-1} \cdot (x *_{s} y) \cdot y^{-1} ).

Notice that there is NO CHANCE to have a limit like the one in (2), so the problem seems hard also from this point of view.

Also notice that if G is a Carnot group then

[x,y]_{s} = e (that is, equal to 0, remember)

which is normal, if we think about G as being a kind of noncommutative vector space, even if G may be noncommutative.

So this noncommutative Lie bracket is not about commutators!
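A check of (3) in the Heisenberg group from point 4, confirming that the approximate bracket is identically the neutral element there (sketch):

```python
import numpy as np

# [x,y]_s = (s^{-1})(x^{-1} . (x *_s y) . y^{-1}) vanishes in a Carnot group,
# because x *_s y = xy and the dot operation coincides with the group one.
def op(x, y):
    return np.array([x[0] + y[0], x[1] + y[1],
                     x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0])])

def dil(s, x):
    return np.array([s * x[0], s * x[1], s**2 * x[2]])

rng = np.random.default_rng(5)
x, y = rng.standard_normal((2, 3))

for s in [0.5, 0.1]:
    xsy = dil(1 / s, op(dil(s, x), dil(s, y)))          # x *_s y
    bracket_s = dil(1 / s, op(op(-x, xsy), -y))         # relation (3)
    print(bracket_s)                                     # the neutral element
```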

Topological substratum of the derivative

Until recently, on my home page was a link to the scan of the paper

The topological substratum of the derivative (I), Math. Reports (Stud. Cerc. Mat.) 45, 6 (1993), 453-465

which is no longer visible now. But maybe it deserves a post here, because it is my oldest attempt to understand differential calculus as an abstract matter and to look for new forms of it.

To me it became clear that differential calculus admits variants, in the same spirit as euclidean geometry admitting non-euclidean variants. At that moment I had no really interesting examples of such a “non-euclidean” differential calculus, so I switched to other research subjects. Nobody pointed me to the huge paper

Métriques de Carnot-Carathéodory et quasiisométries des espaces symétriques de rang un, The Annals of Mathematics, Second Series, Vol. 129, No. 1 (Jan., 1989), pp. 1-60

by Pierre Pansu. It was only luck that in 2000, at Lausanne, I met Sergey Vodop’yanov (from the Sobolev Institute of Mathematics). He started to explain to me what Carnot groups are and I was thrilled to learn that the examples I needed previously are numerous in sub-riemannian geometry.

With the right frame of mind (at least I think so), that of intrinsic dilations, I then started to study sub-riemannian geometry.

Planar rooted trees and Baker-Campbell-Hausdorff formula

Today on arXiv was posted the paper

Posetted trees and Baker-Campbell-Hausdorff product, by Donatella Iacono, Marco Manetti

with the abstract

We introduce the combinatorial notion of posetted trees and we use it in order to write an explicit expression of the Baker-Campbell-Hausdorff formula.

The paper may be relevant (check also the bibliography!) for the subject of writing “finitary“, “noncommutative” BCH formulae, from self-similarity arguments using dilations.

Entering “chora”, the infinitesimal place

There is a whole discussion around the key phrases “The map is not the territory” and “The map is the territory”. From the wiki entry on the map-territory relation, we learn that Korzybski‘s dictum “the map is not the territory” means that:

A) A map may have a structure similar or dissimilar to the structure of the territory,

B) A map is not the territory.

Bateson, in “Form, Substance and Difference” has a different take on this: he starts by explaining the pattern-substance dichotomy

Let us go back to the original statement for which Korzybski is most famous—the statement that the map is not the territory. This statement came out of a very wide range of philosophic thinking, going back to Greece, and wriggling through the history of European thought over the last 2000 years. In this history, there has been a sort of rough dichotomy and often deep controversy. There has been violent enmity and bloodshed. It all starts, I suppose, with the Pythagoreans versus their predecessors, and the argument took the shape of “Do you ask what it’s made of—earth, fire, water, etc.?” Or do you ask, “What is its pattern?” Pythagoras stood for inquiry into pattern rather than inquiry into substance. That controversy has gone through the ages, and the Pythagorean half of it has, until recently, been on the whole the submerged half.

Then he states his point of view:

We say the map is different from the territory. But what is the territory? […] What is on the paper map is a representation of what was in the retinal representation of the man who made the map–and as you push the question back, what you find is an infinite regress, an infinite series of maps. The territory never gets in at all.

Always the process of representation will filter it out so that the mental world is only maps of maps of maps, ad infinitum.

At this point Bateson puts a very interesting footnote:

Or we may spell the matter out and say that at every step, as a difference is transformed and propagated along its pathways, the embodiment of the difference before the step is a “territory” of which the embodiment after the step is a “map.” The map-territory relation obtains at every step.

Inspired by Bateson, I want to explore from the mathematical side the point of view that there is no difference between the map and the territory, but instead the transformation of one into another can be understood by using tangle diagrams.

Let us imagine that the exploration of the territory provides us with an atlas, a collection of maps, mathematically understood as a family of two operations (an “emergent algebra”). We want to organize this spatial information in a graphical form which complies with Bateson’s footnote: map and territory have only local meaning in the graphical representation, being only the left-hand-side (and r-h-s respectively) of the “making map” relation.

Look at the following figure:

In the figure from the left, the “v” which decorates an arc represents a point in the “territory”, that is the l-h-s of the relation; the “u” represents a “pixel in the map”, that is the r-h-s of the relation. The relation itself is represented by a crossing decorated by an epsilon, the “scale” of the map.

The opposite crossing, see the figure from the right, is the inverse relation.
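One possible formalization of the two crossings, as a sketch (the dilation convention is my choice, borrowed from the emergent algebra posts above; in a vector space):

```python
# The "map pixel" u is the territory point v seen at scale eps from the
# basepoint x; the opposite crossing is the inverse relation.
def make_map(x, eps, v):         # crossing decorated by eps
    return x + eps * (v - x)     # u = delta^x_eps v

def make_territory(x, eps, u):   # opposite crossing
    return x + (u - x) / eps     # v = delta^x_{1/eps} u

x, v, eps = 0.0, 5.0, 0.5
u = make_map(x, eps, v)
assert make_territory(x, eps, u) == v   # the two crossings cancel (move II)
```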

Imagine now a complex diagram, with lots of crossings, decorated by various scale parameters, and segments decorated with points from a space X which is seen both as territory (to explore) and map (of it).

In such a diagram the convention map-territory can be only local, around each crossing.

There is, though, a diagram which can unambiguously serve as a symbol for “the place (near) the point x, at scale epsilon”:

In this diagram, all crossings which are not decorated have “epsilon” as a decoration, but this decoration can be unambiguously placed near the decoration “x” of the closed arc. Such a diagram will bear the name “infinitesimal place (or chora) x at scale epsilon”.
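To make the crossing decorations concrete, here is a minimal sketch (in Python; the function names are mine, invented for this post) of the simplest emergent algebra, the dilations of a vector space: the crossing decorated by epsilon takes the territory point v to the map pixel u, and the opposite crossing takes the pixel back to the territory.

```python
import numpy as np

def dilation(x, eps, v):
    """Map-making crossing: the pixel, at scale eps, of the
    territory point v, as seen from the basepoint x."""
    return x + eps * (v - x)

def dilation_inv(x, eps, u):
    """Opposite crossing: recover the territory point from its pixel."""
    return x + (u - x) / eps

x = np.array([0.0, 0.0])   # basepoint of the chora
v = np.array([3.0, 4.0])   # a point in the "territory"
eps = 0.1                  # scale of the map

u = dilation(x, eps, v)            # the "pixel in the map"
print(u)                           # [0.3 0.4]
print(dilation_inv(x, eps, u))     # [3. 4.] -- back in the territory
```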

“Metric spaces with dilations”, the book draft updated

Here is the updated version.

Many things are left to be done and explained properly, like:

  • the word tangent bundle and more flexible notions of smoothness, described a bit hermetically and by means of examples here, in section 6,
  • the relation between curvature and how to perform the Reidemeister 3 move; the story starts here, in section 6 (a coincidence),
  • why the Baker-Campbell-Hausdorff formula can be deduced from self-similarity arguments (showing in particular that there is another interpretation of the Lie bracket than the usual one, which says that the bracket is related to the commutator). This will be posted on arxiv soon. UPDATE: see this post by Tao on the (commutative, say) BCH formula. It is commutative because of his “radial homogeneity” axiom in Theorem 1, which is equivalent to the “barycentric condition” (Af3) in Theorem 2.2 from the “Infinitesimal affine geometry…” article.
  • a gallery of emergent algebras, in particular you shall see what “spirals” are (a kind of generalization of rings, amazingly connecting, by way of an example, with another field of interest of mine, convex analysis).

Gleason metric and CC distance

In the series of posts on Hilbert’s fifth problem, Terence Tao defines a Gleason metric (definition 4 here), which is a very important ingredient of the proof of the solution to the H5 problem.

Here is Remark 1. from the post:

The escape and commutator properties are meant to capture “Euclidean-like” structure of the group. Other metrics, such as Carnot-Carathéodory metrics on Carnot Lie groups such as the Heisenberg group, usually fail one or both of these properties.

I want to explain why this is true. Look at the proof of theorem 7. The problem comes from the commutator estimate (1). I shall reproduce the relevant part of the proof because I don’t yet know how to write good-looking latex posts:

From the commutator estimate (1) and the triangle inequality we also obtain a conjugation estimate

$\| ghg^{-1} \| \sim \|h\|$

whenever $\|g\|, \|h\| \leq \epsilon$. Since left-invariance gives

$d(g,h) = \| g^{-1} h \|$

we then conclude an approximate right invariance

$d(gk,hk) \sim d(g,h)$

whenever $\|g\|, \|h\|, \|k\| \leq \epsilon$.

The conclusion is that the right translations in the group are Lipschitz (with respect to the Gleason metric). Because this distance (I use “distance” instead of “metric”) is also left invariant, it follows that left and right translations are Lipschitz.

Let now G be a connected Lie group with a left-invariant distribution, obtained by left translates of a vector space D included in the Lie algebra of G. The distribution is completely non-integrable if D generates the Lie algebra by using the + and Lie bracket operations. We put a euclidean norm on D and we get a CC distance on the group, defined by: the CC distance between two elements of the group equals the infimum of lengths of horizontal curves (a.e. derivable, with tangents in the distribution) joining the two points.

Tao’s Remark 1 is a consequence of the following fact: if the CC distance is right invariant then D equals the Lie algebra of the group, therefore the distance is riemannian.

Here is why: in a sub-riemannian group (that is, a group with a distribution and a CC distance as explained previously) the left translations are Lipschitz (they are isometries), but not all right translations are Lipschitz, unless D equals the Lie algebra of G. Indeed, let us suppose that all right translations are Lipschitz. Then, by the Margulis-Mostow version (see also this) of the Rademacher theorem, the right translation by an element “a” is Pansu derivable almost everywhere. It follows that the Pansu derivative of the right translation by “a” (in almost every point) preserves the distribution. A simple computation based on invariance (truly, some explanations are needed here) shows that, as a consequence, the adjoint action of “a” preserves D. Because “a” is arbitrary, this implies that D is an ideal of the Lie algebra. But D generates the Lie algebra, therefore D equals the Lie algebra of G.

If you know a shorter proof please let me know.
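For a concrete instance of Tao’s Remark 1, here is a small numerical sanity check, not a proof: in the Heisenberg group (standard group law, and a homogeneous quasi-norm equivalent to the CC distance to the origin; the script itself is mine), conjugation by a small horizontal element creates a vertical component which dominates the norm of a much smaller element, so the conjugation estimate $\| ghg^{-1} \| \sim \|h\|$ fails at small scales.

```python
def mul(p, q):
    """Heisenberg group law on R^3 (standard polarized form)."""
    x, y, z = p; X, Y, Z = q
    return (x + X, y + Y, z + Z + (x * Y - y * X) / 2.0)

def inv(p):
    x, y, z = p
    return (-x, -y, -z)

def norm(p):
    """Homogeneous quasi-norm, equivalent to the CC distance to the
    origin for the standard horizontal distribution."""
    x, y, z = p
    return ((x * x + y * y) ** 2 + z * z) ** 0.25

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    g = (eps, 0.0, 0.0)            # small horizontal element
    h = (0.0, eps ** 2, 0.0)       # much smaller horizontal element
    c = mul(mul(g, h), inv(g))     # the conjugate g h g^{-1} = (0, eps^2, eps^3)
    print(eps, norm(c) / norm(h))  # the ratio blows up like eps^(-1/2)
```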

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for details of the role of the Gleason metric in the proof of the Hilbert 5th problem.

Curvature and Brunn-Minkowski inequality

A beautiful paper by Yann Ollivier and Cedric Villani:

A curved Brunn–Minkowski inequality on the discrete hypercube, or: What is the Ricci curvature of the discrete hypercube?

The Brunn-Minkowski inequality says that the log of the volume (in euclidean spaces) is concave with respect to Minkowski interpolation of sets. The concavity inequality is improved, in riemannian manifolds with Ricci curvature at least K, by a quadratic term with coefficient proportional to K.

The paper is remarkable in many ways. In particular, two roads towards curvature in spaces more general than riemannian are compared: the coarse curvature introduced by Ollivier and the one based on the displacement convexity of the entropy function (Felix Otto, Cedric Villani, John Lott, Karl-Theodor Sturm), studied by many researchers. Both are related to Wasserstein distances. NEITHER works for sub-riemannian spaces, which is very, very interesting.

In a few words, here is the description of the coarse Ricci curvature: take an epsilon and consider the map from the metric space (a riemannian manifold, say) to the space of probabilities which associates to a point the restriction of the volume measure to the epsilon-ball centered at that point (normalized to give a probability). If this map is Lipschitz with constant L(epsilon) (on the space of probabilities take the L^1 Wasserstein distance), then the epsilon-coarse Ricci curvature times epsilon squared equals 1 minus L(epsilon) (thus we get a lower bound on the Ricci curvature function, if we are in a riemannian manifold). The same definition works in a discrete space (this time epsilon is fixed).
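To see the first definition at work, here is a sketch which computes the coarse Ricci curvature between two adjacent vertices of the discrete hypercube. The convention for the ball measure (uniform on the closed neighbourhood of a vertex) is my choice among several; the walk in the Ollivier-Villani paper is lazier, so the constants differ. The L^1 Wasserstein distance is computed by the transport linear program.

```python
import numpy as np
from scipy.optimize import linprog

N = 4  # dimension of the hypercube {0,1}^N

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def ball_measure(x):
    """Uniform probability on x and its N neighbours (one convention
    for the epsilon-ball measure; not the paper's exact walk)."""
    pts = [x] + [x[:i] + (1 - x[i],) + x[i+1:] for i in range(N)]
    return pts, np.full(len(pts), 1.0 / len(pts))

def w1(px, mx, py, my):
    """L^1 Wasserstein distance via the transport linear program."""
    n, m = len(px), len(py)
    cost = np.array([[hamming(a, b) for b in py] for a in px], float).ravel()
    A_eq, b_eq = [], []
    for i in range(n):                       # row marginals = mx
        row = np.zeros(n * m); row[i*m:(i+1)*m] = 1.0
        A_eq.append(row); b_eq.append(mx[i])
    for j in range(m):                       # column marginals = my
        col = np.zeros(n * m); col[j::m] = 1.0
        A_eq.append(col); b_eq.append(my[j])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

x = (0,) * N
y = (1,) + (0,) * (N - 1)                    # a neighbour of x
px, mx = ball_measure(x)
py, my = ball_measure(y)
kappa = 1.0 - w1(px, mx, py, my) / hamming(x, y)
print(kappa)   # positive, of order 1/N: the hypercube is "positively curved"
```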
The second definition of Ricci curvature comes from reverse engineering of the displacement convexity inequality discovered in many particular spaces. The downside of this definition is that it is hard to “compute”.

Initially, this second definition was related to the L^2 Wasserstein distance which, according to Otto calculus, gives the space of probabilities (in the L^2 frame) the structure of an infinite-dimensional riemannian manifold.

Concerning sub-riemannian spaces: in the first definition the said map cannot be Lipschitz, and in the second definition there is (I think) a manifestation of the fact that we cannot put, in a metrically acceptable way, a sub-riemannian space into a riemannian-like one, even an infinite-dimensional one.

A difference which makes four differences, in two ways

Gregory Bateson, speaking about the map-territory relation:

“What is in the territory that gets onto the map? […] What gets onto the map, in fact, is difference.

A difference is a very peculiar and obscure concept. It is certainly not a thing or an event. This piece of paper is different from the wood of this lectern. There are many differences between them, […] but if we start to ask about the localization of those differences, we get into trouble. Obviously the difference between the paper and the wood is not in the paper; it is obviously not in the wood; it is obviously not in the space between them.

A difference, then, is an abstract matter.

Difference travels from the wood and paper into my retina. It then gets picked up and worked on by this fancy piece of computing machinery in my head.

… what we mean by information — the elementary unit of information — is a difference which makes a difference.

(from “Form, Substance and Difference”, Nineteenth Annual Korzybski Memorial Lecture delivered by Bateson on January 9, 1970, under the auspices of the Institute of General Semantics, reprinted from the General Semantics Bulletin, no. 37, 1970, in Steps to an Ecology of Mind (1972))

This “difference which makes a difference” statement is quite famous, although it is sometimes considered only a figure of speech.

I think it is not, let me show you why!

For me a difference can be interpreted as an operator which relates images of the same thing (from the territory) viewed in two different maps, as in the following picture:

This figure is taken from “Computing with space…” , see section 1 “The map is the territory” for drawing conventions.

Forget now about maps and territories and concentrate on this diagram, viewed as a decorated tangle. The rules of decoration are the following: arcs are decorated with “x,y,…”, points from a space, and the crossings are decorated with epsilons, elements of a commutative group (secretly we use an emergent algebra, or a uniform idempotent right quasigroup, to decorate arcs AND crossings of a tangle diagram).

What we see is a tangle which appears in the Reidemeister move 3 from knot theory. When epsilons are fixed, this diagram defines a function called (approximate) difference.
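Here is a minimal computational sketch of this approximate difference: compose the dilations as the crossings of the diagram dictate, that is Δ^x_ε(u,v) = δ^{δ^x_ε u}_{ε^{-1}} (δ^x_ε v). With the linear dilations of a vector space the result does not depend on ε, so, to see the scale genuinely at work, I conjugate them by a diffeomorphism; this toy example is mine, not from the paper.

```python
import numpy as np

# Dilations of an emergent algebra obtained by conjugating the linear
# dilations of R through the diffeomorphism phi = exp (my toy example):
#   delta^x_eps(v) = phi^{-1}( phi(x) + eps * (phi(v) - phi(x)) )
phi, phi_inv = np.exp, np.log

def delta(x, eps, v):
    return phi_inv(phi(x) + eps * (phi(v) - phi(x)))

def approx_difference(x, eps, u, v):
    """The tangle of the figure: an eps-crossing based at x applied to
    u and v, then an inverse crossing based at delta^x_eps(u)."""
    a = delta(x, eps, u)
    return delta(a, 1.0 / eps, delta(x, eps, v))

x, u, v = 0.0, 1.0, 2.0
for eps in [1.0, 0.1, 0.01, 0.001]:
    print(eps, approx_difference(x, eps, u, v))
# the values converge as eps -> 0: the "difference" exists in the limit
```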

Is this a difference which makes a difference?

Yes, in two ways:

1. We could add to this diagram an elementary unknot passing under all arcs, thus obtaining the diagram

Now we see four differences in this equivalent tangle: the initial one is made of three others. The fact that a difference is self-similar is equivalent to the associativity of the INVERSE of the approximate difference operation, called the approximate sum (see the numerical check below).
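With the same toy emergent algebra as above (linear dilations conjugated by exp; again my invented example, a sketch rather than a proof), one can check numerically that the approximate sum Σ^x_ε(u,v) = δ^x_{ε^{-1}} (δ^{δ^x_ε u}_ε (v)) is associative only in the limit: the associativity defect vanishes as ε goes to zero.

```python
import numpy as np

phi, phi_inv = np.exp, np.log   # same toy conjugation as before

def delta(x, eps, v):
    return phi_inv(phi(x) + eps * (phi(v) - phi(x)))

def approx_sum(x, eps, u, v):
    """Sigma^x_eps(u, v): the inverse of the approximate difference."""
    return delta(x, 1.0 / eps, delta(delta(x, eps, u), eps, v))

x, u, v, w = 0.0, 1.0, 2.0, 0.5
for eps in [0.5, 0.1, 0.01, 0.001]:
    lhs = approx_sum(x, eps, u, approx_sum(x, eps, v, w))
    rhs = approx_sum(x, eps, approx_sum(x, eps, u, v), w)
    print(eps, abs(lhs - rhs))   # the associativity defect goes to 0 with eps
```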

2. Let us add an elementary unknot over the arcs of the tangle diagram, like in the following figure

called “difference inside a chora” (you have to read the paper to see why). According to the rules of tangle diagrams, adding unknots does not change the tangle topologically (although this is not quite true in the realm of emergent algebras, where the Reidemeister move 3 is an acceptable move only in the limit, when the crossing decorations go to “zero”).

By using only Reidemeister moves 1 and 2, we can turn this diagram into the celtic-looking figure

which shows again four differences: the initial one in the center and three others around.

This time we got a statement saying that a difference is preserved under “infinitesimal parallel transport”.

So, indeed, a difference makes four differences, in at least two ways, for a mathematician.

If you want to understand more from this crazy post, read the paper 🙂

Rigidity of algebraic structure: principle of common cause

I follow with a lot of interest the stream of posts by Terence Tao on Hilbert’s fifth problem and I am waiting impatiently to see how it connects with the field of approximate groups.

In his latest post Tao writes that

… Hilbert’s fifth problem is a manifestation of the “rigidity” of algebraic structure (in this case, group structure), which turns weak regularity (continuity) into strong regularity (smoothness).

This is something amazing and worthy of exploration!
I propose the following “explanation” of this phenomenon, in the form of a principle:

Principle of common cause: a uniformly continuous algebraic structure has a smooth structure because both structures can be constructed from an underlying emergent algebra (introduced here).

Here are more explanations (adapted from the first paper on emergent algebras):

A differentiable algebra is an algebra (a set of operations A) over a manifold X, with the property that all the operations of the algebra are differentiable with respect to the manifold structure of X. Let us denote by D the differential structure of the manifold X. From a more computational viewpoint, we may think about the calculus which can be done in a differentiable algebra as being generated by the elements of a toolbox with two compartments, “A” and “D”:

– “A” contains the algebraic information, that is the operations of the algebra, as well as algebraic relations (like, for example, “the operation ∗ is associative”, or “the operation ∗ is commutative”, and so on),

– “D” contains the differential structure information, that is the information needed in order to formulate the statement “the function f is differentiable”.

The compartments “A” and “D” are compatible, in the sense that any operation from “A” is differentiable according to “D”.

I propose a generalization of differentiable algebras, where the underlying differential structure is replaced by a uniform idempotent right quasigroup (irq).

Algebraically, irqs are related to racks and quandles, which appear in knot theory (the axioms of an irq correspond to the first two Reidemeister moves). A uniform irq is a family of irqs indexed by the elements of a commutative group (with an absolute), such that the third Reidemeister move is related to a statement in terms of uniform limits of composites of operations of the family of irqs.

An emergent algebra is an algebra A over the uniform irq X such that all operations and algebraic relations from A can be constructed or deduced from combinations of operations in the uniform irq, possibly by taking limits which are uniform with respect to a set of parameters. In this approach, the usual compatibility condition between algebraic information and differential information, expressed as the differentiability of algebraic operations with respect to the differential structure, is replaced by the “emergence” of algebraic operations and relations from the minimal structure of a uniform irq.

Thus, for example, algebraic operations and the differentiation operation (taking the triple (x,y,f) to Df(x)y, where “x, y” are points and “f” is a function) are expressed as uniform limits of composites of more elementary operations. The algebraic operations appear to be differentiable because of algebraic abstract nonsense (obtained by exploitation of the Reidemeister moves) and because of the uniformity assumptions, which allow us to freely permute limits with respect to parameters in the commutative group (as they tend to the absolute).
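Here is the one-dimensional, vector-space version of that last claim, as a sketch (the composite below is the emergent-algebra way of writing the derivative; the code is only my illustration): the composite δ^{f(x)}_{ε^{-1}} ∘ f ∘ δ^x_ε converges, as ε → 0, to the affine map y ↦ f(x) + f′(x)(y − x).

```python
import math

def delta(x, eps, v):
    # dilation of the real line based at x, of coefficient eps
    return x + eps * (v - x)

def rescaled(f, x, eps, y):
    """delta^{f(x)}_{1/eps} ( f ( delta^x_eps (y) ) ): the function f
    seen through an eps-magnifying glass at x."""
    return delta(f(x), 1.0 / eps, f(delta(x, eps, y)))

f, x, y = math.sin, 1.0, 2.0
for eps in [1.0, 0.1, 0.01, 0.001]:
    print(eps, rescaled(f, x, eps, y))
print(f(x) + math.cos(x) * (y - x))  # the limit: f(x) + f'(x)(y - x)
```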

The structure of visual space

Mark Changizi has an interesting post, “The Visual Nerd in You Understands Curved Space”, where he explains that spherical geometry is relevant for visual perception.

At some point he writes a paragraph which triggered my post:

Your visual field conforms to an elliptical geometry!

(The perception I am referring to is your perception of the projection, not your perception of the objective properties. That is, you will also perceive the ceiling to objectively, or distally, be a rectangle, each angle having 90 degrees. Your perception of the objective properties of the ceiling is Euclidean.)

Is it true that our visual perception senses euclidean space?

Look at this very interesting project

The structure of optical space under free viewing conditions

and especially at this paper:

The structure of visual spaces by J.J. Koenderink, A.J. van Doorn, Journal of mathematical imaging and vision, Volume: 31, Issue: 2-3 (2008), pp. 171-187

In particular, one of the very nice things this group is doing is to experimentally verify the perception of true facts of projective geometry (like this Pappus theorem).

From the abstract of the paper: (boldfaced by me)

The “visual space” of an optical observer situated at a single, fixed viewpoint is necessarily very ambiguous. Although the structure of the “visual field” (the lateral dimensions, i.e., the “image”) is well defined, the “depth” dimension has to be inferred from the image on the basis of “monocular depth cues” such as occlusion, shading, etc. Such cues are in no way “given”, but are guesses on the basis of prior knowledge about the generic structure of the world and the laws of optics. Thus such a guess is like a hallucination that is used to tentatively interpret image structures as depth cues. The guesses are successful if they lead to a coherent interpretation. Such “controlled hallucination” (in psychological terminology) is similar to the “analysis by synthesis” of computer vision.

So, the space is perceived to be euclidean based on prior knowledge, that is, because prior controlled hallucinations consistently led to coherent interpretations.

Maps in the brain: fact and explanations

From wikipedia

Retinotopy describes the spatial organization of the neuronal responses to visual stimuli. In many locations within the brain, adjacent neurons have receptive fields that include slightly different, but overlapping portions of the visual field. The position of the center of these receptive fields forms an orderly sampling mosaic that covers a portion of the visual field. Because of this orderly arrangement, which emerges from the spatial specificity of connections between neurons in different parts of the visual system, cells in each structure can be seen as forming a map of the visual field (also called a retinotopic map, or a visuotopic map).

See also tonotopy for sounds and the auditory system.

The existence of retinotopic maps is a fact, the problem is to explain how they appear and how they function without falling into the homunculus fallacy, see my previous post.

One of the explanations of the appearance of these maps is given by Teuvo Kohonen.

Browse this paper (for precise statements), The Self-Organizing Map, or get a blurry impression from this wiki page. The last paragraph of section B, Brain Maps, reads:

It thus seems as if the internal representations of information in the brain are generally organized spatially.

Here are some quotes from the same section, which should raise the attention of a mathematician to the sky:

Especially in higher animals, the various cortices in the cell mass seem to contain many kinds of “map” […] The field of vision is mapped “quasiconformally” onto the primary visual cortex. […] in the visual areas, there are line orientation and color maps. […] in the auditory cortex there are the so-called tonotopic maps, which represent pitches of tones in terms of the cortical distance […] at the higher levels the maps are usually unordered, or at most the order is a kind of ultrametric topological order that is not easy interpretable.

Typical for self-organizing maps is that they use (see wiki page) “a neighborhood function to preserve the topological properties of the input space”.

From the connectionist viewpoint, this neighbourhood function is implemented by lateral connections between neurons.
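For illustration, here is a bare-bones self-organizing map in the spirit of Kohonen’s algorithm (a minimal sketch of mine, with arbitrary parameters, not Kohonen’s reference implementation): a chain of twenty units learns a topology-preserving map of the unit square, and it is precisely the neighbourhood function h which plays the role of the lateral connections.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 20
w = rng.random((n_units, 2))          # weights: a chain of units in the square

def train(w, steps=5000, lr0=0.5, sigma0=5.0):
    for t in range(steps):
        x = rng.random(2)             # input sampled from the unit square
        frac = t / steps
        lr = lr0 * (1 - frac)         # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best matching unit
        d = np.abs(np.arange(n_units) - bmu)          # distance along the chain
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))      # neighbourhood function
        w += lr * h[:, None] * (x - w)                # move bmu and its neighbours
    return w

w = train(w)
# after training, consecutive units of the chain are close in the square:
print(np.linalg.norm(np.diff(w, axis=0), axis=1).mean())
```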

For more details see for example Maps in the Brain: What Can We Learn from Them? by Dmitri B. Chklovskii and Alexei A. Koulakov. Annual Review of Neuroscience 27: 369-392 (2004).

Also browse Sperry versus Hebb: Topographic mapping in Isl2/EphA3 mutant mice by Dmitri Tsigankov and Alexei A. Koulakov.

Two comments:

1. The use of a neighbourhood function is much more than just preserving topological information. I tentatively propose that such neighbourhood functions appear out of the need to organize spatial information, as explained in the pedagogical paper from the post Maps of metric spaces.

2. Reasoning only on discretizations of the plane (hexagonal or otherwise) is plain wrong, but this is a problem encountered in many, many places elsewhere. It is wrong because it introduces the (euclidean) space through the back door (well, this and happily using an L^2 space).

Maps of metric spaces

This is a pedagogical introduction covering maps of metric spaces, Gromov-Hausdorff distance and its “physical” meaning, and dilation structures as a convenient simplification of an exhaustive database of maps of a metric space into another:

Maps of metric spaces

The material is taken and slightly adapted from the long paper “Computing with space”; check for updates of it on this page.

Hilbert’s fifth problem without one-parameter subgroups

In the following I reproduce, with small modifications, a comment to the post

Locally compact groups with faithful finite-dimensional representations

by Terence Tao.

My motivation lies in the project described for the first time in public here. In fact, one of the reasons to start this blog is to have a place where I can leisurely explain stuff.

Background: The answer to Hilbert’s fifth problem is: a connected locally compact group without small subgroups is a Lie group.

The key idea of the proof is to study the space of one-parameter subgroups of the topological group. This space turns out to be a good model of the tangent space at the neutral element of the group (eventually), and the effort goes towards turning this fact upside-down, namely towards proving that this space is a locally compact topological vector space and that the “exponential map” gives a chart of (a neighbourhood of the neutral element of) the group into this space.

Because I am a fan of differential structures (well, I think they are only the commutative, boring side of dilation structures, or here, or emergent algebras), I know a situation where one can prove that a topological group is a Lie group without using one-parameter subgroups!

Here starts the original comment, slightly modified:

Contractive automorphisms may be as relevant as one-parameter subgroups for building a Lie group structure (or even more relevant), as shown by the following result from E. Siebert, Contractive Automorphisms on Locally Compact Groups, Math. Z. 191, 73-90 (1986):

5.4. Proposition. For a locally compact group G the following assertions are equivalent:
(i) G admits a contractive automorphism group;
(ii) G is a simply connected Lie group whose Lie algebra g admits a positive graduation.

The corresponding result for local groups is proved in L. van den Dries, I. Goldbring, Locally Compact Contractive Local Groups, arXiv:0909.4565v2.

I used Siebert result for proving the Lie algebraic structure of the metric tangent space to a sub-riemannian manifold in M. Buliga, Infinitesimal affine geometry of metric spaces endowed with a dilatation structure, Houston Journal of Mathematics, 36, 1 (2010), 91-136. arXiv:0804.0135v2

(added here: see Corollary 6.3 from the “Infinitesimal affine…” paper, as well as Proposition 5.9 and Remark 5.10 from the paper A characterization of sub-riemannian spaces as length dilation structures constructed via coherent projections, Commun. Math. Anal. 11 (2011), No. 2, pp. 70-111, arXiv:0810.5042v4)

When saying that contractive automorphisms, or approximately contractive automorphisms [i.e. dilation structures], may be more relevant than one-parameter subgroups, I am thinking about sub-riemannian geometry again, where a one-parameter subgroup of a group, endowed with a left-invariant distribution and a corresponding Carnot-Caratheodory distance, is “smooth” (with respect to the Pansu-type derivative) if and only if its generator is in the distribution. Metrically speaking, if the generator is not in the distribution, then any trajectory of the one-parameter group has Hausdorff dimension greater than one. That means lots of problems with the definition of the exponential and with any reasoning based on differentiating flows.
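Here is a quick numerical check of that last metric statement, in the Heisenberg group with a homogeneous quasi-norm equivalent to the CC distance (a sanity check of the scaling, not a proof): the length of a polygonal approximation of the vertical one-parameter subgroup t ↦ (0,0,t) grows like the square root of the number of subdivision points, consistent with Hausdorff dimension 2.

```python
# Vertical one-parameter subgroup t -> (0, 0, t) in the Heisenberg group.
# Group differences of consecutive points (0,0,k/n) and (0,0,(k+1)/n)
# are the central elements (0, 0, 1/n), of quasi-norm (1/n)^(1/2).
def N(x, y, z):
    """Homogeneous quasi-norm, equivalent to the CC distance to the origin."""
    return ((x * x + y * y) ** 2 + z * z) ** 0.25

for n in [10, 100, 1000, 10000]:
    length = n * N(0.0, 0.0, 1.0 / n)   # n equal steps of the polygonal curve
    print(n, length)   # grows like sqrt(n): the curve has infinite length
```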

Book project: Metric spaces with dilations

On the mathematics front, here is a link to a book project

Metric spaces with dilations

(working version! check for updates)

As it is now, it’s not much more than a merger of previous research papers, without the nice figures from “Computing with space…”, but from seeing it all in one place you may get a sense of what this is all about.

This first version has been prepared as a basis for the minicourse

“Carnot-Caratheodory spaces as metric spaces with dilations”

held at a January 2011 Summer School in Rio de Janeiro.

In the same place I started being interested in exploring this frontier between neuroscience and mathematics…

Computing with space

This is the first in a series of postings concerning computing with space. I shall try to give a gentle introduction to – and later discussion around – the ideas from the paper

Computing with space: a tangle formalism for chora and difference

We shall talk about:

– mathematics of metric spaces with dilations

– Bateson’s viewpoint that the map is the territory, as opposed to Korzybski’s dictum “the map is not the territory”

– Plato’s Timaeus 48e-53c, where he introduces the concept of “chora”, which means “space” or “place”

– research in the neuroscience of vision, like Jan Koenderink’s paper “Brain a geometry engine”

and many others.

Older papers of mine on this subject: arXiv:1009.5028, “What is a space? Computations in emergent algebras and the front end visual system”, and arXiv:1007.2362, “Introduction to metric spaces with dilations”.