How not to get bored, by reading Gromov and Tao

This is a continuation of the previous post Gromov’s ergobrain, triggered by the July 27 update of Gromov’s paper. It is also related to Tao’s series of posts on Hilbert’s fifth problem.

To put you in the right frame of mind, both Gromov and Tao set the stage for upcoming, hopefully extremely interesting (or “beautiful” on Gromov’s scale: interesting, amusing, amazing, funny and beautiful) developments of their work on “ergosystems” and “approximate groups” respectively.

What could be the link between the two? In my opinion, both works refer to the unexplored ground between the discrete (with not so many elements) and the continuous (the limit as the number of elements of a discrete world grows).

Indeed, with my apologies for oversimplifying a very rich text, let me start with the example of the bug on a leaf (sections 2.12, 2.13 in Gromov’s paper). I understand that the bug, like any other “ergosystem” (such as one’s brain), would get bored behaving like a finite state automaton crawling on a “co-labeled graph” (in particular on a Cayley graph of a discretely generated group). The implication seems to be that an ergosystem behaves differently.

I can hardly refrain from copy-pasting the whole of page 96 of Gromov’s paper; please use the link and read it instead, especially the part about the downsides of Turing modeling (in a few words: it is not geometrical). I shall just paste the end here:

The two ergo-lessons one may draw from Turing models are mutually contradictory.
1. A repeated application of a simple operation(s) may lead to something unexpectedly complicated and interesting.
2. If you do not resent the command “repete” and/or are not getting bored by doing the same thing over and over again, you will spend your life in a “Turing loop” of an endless walk in a circular tunnel.

That is because the “stop-function” associated to a class of Turing machines

may grow faster than anything you can imagine, faster than anything expressible by any conceivable formula – the exponential and double exponential functions that appeared so big to you seem as tiny specks of dust compared to this monstrous stop-function. (page 95)

Have I said “Cayley graph”? This brings me to discrete groups and to the work of Tao (and Ben Green and many others). According to Tao, there is something to be learned from the solution of Hilbert’s fifth problem, for the benefit of understanding approximate groups. (I am looking forward to seeing this!) One thing I understood from Tao’s posts is that a central concept is the Gleason metric and its relations with group commutators. In previous posts (the last one is this) I argue that Gleason metrics are very unlike sub-riemannian distances. It has been left unsaid, though it is obvious to specialists, that sub-riemannian metrics are just like distances on Cayley graphs; as a consequence, Gleason metrics are only a commutative “shadow” of what happens in a Cayley graph when looked at from afar. Moreover, in this post concerning the problem of a non-commutative Baker-Campbell-Hausdorff formula it is said that (in the more general world of groups with dilations, relevant soon in this post) the link between the Lie bracket and group commutators is shallow, due to the commutativity of the group operation in the tangent space.

So let me explain, using Gromov’s idea of boredom, how not to get bored in a Cayley graph. Remember that I quoted a paragraph (from the previous version of Gromov’s paper) stating that an ergosystem “would be bored to death” adding large numbers? Equivalently, an ergosystem would get bored adding (by using the group operation) elements of the group expressed as very long words in the letters representing the generators of the group. Using only “finite state automaton” reasoning with the relations between generators (expressed by commutators and finitary versions of Gleason-like metrics), an ergosystem would easily get bored. What else can be done?

Suppose that we crawl in the Cayley graph of a group with polynomial growth; then we know (by a famous result of Gromov) that, seen from afar, the group is nilpotent, more precisely a group whose algebraic structure is completely specified by its dilations. Take one such dilation, of coefficient 10^{-30} say, and (by a yet unknown “finitization” procedure) associate to it a “discrete shadow”, that is, an “approximate dilation” acting on the discrete group itself. As this is a genuinely non-commutative object, the algorithm for defining it (using relations between growth and commutators) would probably be very resource-consuming. But suppose we just have it, inferred from “looking at the forest” as an ergosystem.

What a great object that would be! Indeed, instead of getting bored by adding two group elements, the first expressed as a product of 200034156998123039234534530081 generators, the second expressed as a product of 311340006349200600380943586878 generators, we first reduce the elements (apply the dilation of coefficient 10^{-30}) to a pair of elements, the first expressed as a product of 2 generators, the second as a product of 3 generators; then we do the addition 2 + 3 = 5 (using the relations between generators), and finally we apply the inverse dilation (the dilation of coefficient 10^{30}) to obtain the “approximate sum” of the two elements!

In practice, we probably have a dilation of coefficient 1/2, which could simplify the computation of products of group elements of length at most 2^{4}, for example.

But it looks like a solution to the problem of not getting bored, at least to me.

Braitenberg vehicles, enchanted looms and winnowing-fans

Braitenberg vehicles were introduced in the wonderful book (here is an excerpt which contains enough information for understanding this post):

Vehicles: Experiments in Synthetic Psychology [update: link no longer available]

by Valentino Braitenberg.

In the introduction of the book we find the following:

At times, though, in the back of my mind, while I was counting fibers in the visual ganglia of the fly or synapses in the cerebral cortex of the mouse, I felt knots untie,  distinctions dissolve, difficulties disappear, difficulties I had experienced much earlier when I still held my first naive philosophical approach to the problem of the mind.

This is not the first appearance of knots (and the related weaving craft) as a metaphor for things related to the brain. A famous paragraph by Charles Scott Sherrington compares the brain waking from sleep with an enchanted loom:

 The great topmost sheet of the mass, that where hardly a light had twinkled or moved, becomes now a sparkling field of rhythmic flashing points with trains of traveling sparks hurrying hither and thither. The brain is waking and with it the mind is returning. It is as if the Milky Way entered upon some cosmic dance. Swiftly the head mass becomes an enchanted loom where millions of flashing shuttles weave a dissolving pattern, always a meaningful pattern though never an abiding one; a shifting harmony of subpatterns.

Compare with the following passage (Timaeus 52d and following) from Plato:

 …the nurse of generation [i.e. space, chora] …  presented a strange variety of appearances; and being full of powers which were neither similar nor equally balanced, was never in any part in a state of equipoise, but swaying unevenly hither and thither, was shaken by them, and by its motion again shook them; and the elements when moved were separated and carried continually, some one way, some another; as, when grain is shaken and winnowed by fans and other instruments used in the threshing of corn, the close and heavy particles are borne away and settle in one direction, and the loose and light particles in another.

The winnowing-fan (liknon) is important in Greek mythology; it also means “cradle”, and Plato uses the term with both meanings.

For a mathematician at least, winnowing and weaving are both metaphors of computing with braids: the fundamental group of the configuration space of the grains is the braid group; moreover, the grains (trajectories) are the weft, and the winnowing-fan is the warp of a loom.

All this is part of the reason for proposing a tangle formalism for chora and computing with space.

Back to Braitenberg vehicles. Vehicles 2, 3, 4 and arguably 5 are doing computations with space, not logical computations, by using sensors, motors and connections (that is, map-making operations). I quote from the end of the Vehicle 3 section:

But, you will say, this is ridiculous: knowledge implies a flow of information from the environment into a living being or at least into something like a living being. There was no such transmission of information here. We were just playing with sensors, motors and connections: the properties that happened to emerge may look like knowledge but really are not. We should be careful with such words. […]

Meanwhile I invite you to consider the enormous wealth of different properties that we may give Vehicle 3c by choosing various sensors and various combinations of crossed and uncrossed, excitatory and inhibitory, connections.

Gordon Pask: An essay on the kinetics of language, behaviour and thought

It looks to me that there is a considerable quantity of mathematical structure hidden in the following internal paper by Gordon Pask at System Research:

An essay on the kinetics of language, behaviour and thought

I am not impressed by authority or fashion arguments; my question is the following: has anybody said interesting mathematical things about this work?

Thanks to Nick Green for sending the link to the file and for very interesting discussions during the writing of Computing with space.

Gromov’s Ergobrain

Misha Gromov updated his very interesting “ergobrain” paper

Structures, Learning and Ergosystems: Chapters 1-4, 6 

Two quotes I liked: (my emphasis)

The concept of the distance between, say, two locations on Earth looks simple enough, you do not think you need a mathematician to tell you what distance is. However, if you try to explain what you think you understand so well to a computer program you will be stuck at every step.  (page 76)

Our ergosystems will have no explicit knowledge of numbers, except may be for a few small ones, say two, three and four. On the contrary, neurobrains, being physical systems, are run by numbers which is reflected in their models, such as neural networks which sequentially compose addition of numbers with functions in one variable.

An unrestricted addition is the essential feature of “physical numbers”, such as mass, energy, entropy, electric charge. For example, if you bring together 10^{30} atoms, then, amazingly, their masses add up […]

Our ergosystems will lack this ability. Definitely, they would be bored to death if they had to add one number to another 10^{30} times. But the 10^{30}-addition, you may object, can be implemented by \log_{2} 10^{30} \sim 100 additions with a use of binary bracketing; yet, the latter is a non-trivial structure in its own right that our systems, a priori, do not have. Besides, sequentially performing even 10 additions is boring. (It is unclear how Nature performs “physical addition” without being bored in the process.) (page 84)
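The binary bracketing mentioned in the quote can be sketched as follows. The function name and the addition counter are mine, added only to confirm that n-fold addition costs on the order of log2(n) additions when done by repeated doubling:

```python
# Binary bracketing: n-fold addition of x needs not n but about log2(n)
# additions, by repeated doubling along the binary digits of n.

def times(n: int, x: int) -> tuple[int, int]:
    """Compute n*x using only additions; return (result, additions used)."""
    result, power, adds = 0, x, 0
    while n > 0:
        if n & 1:                 # this binary digit of n contributes
            result += power
            adds += 1
        n >>= 1
        if n > 0:
            power += power        # doubling: one addition
            adds += 1
    return result, adds

value, additions = times(10**30, 7)
print(value == 7 * 10**30, additions)  # True, and on the order of 100 additions (vs 10^30 naively)
```

The counter lands between 100 and 199 for n = 10^30 (99 doublings plus one addition per set binary digit), which is Gromov’s ~100 figure up to a small factor.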

Where is this going? I look forward to learning.

Two papers on arXiv

I put two papers on arXiv.

The paper Computing with space contains too many ideas and is too dense, therefore much of it will not be read, as I was warned repeatedly. This is the reason for doing again what I did with Introduction to metric spaces with dilations, which is a slightly edited part of the paper A characterization of sub-riemannian spaces as length dilation structures. Apparently the part (Introduction to…), the small detail, is read much more than the whole (A characterization…).

Concerning the second paper, “Normed groupoids…”, it is an improvement of the older paper. Why did I not update the older paper? Because I need help; I just don’t understand where this is going (and why this direction of research was not explored before).

Escape property of the Gleason metric and sub-riemannian distances again

The last post of Tao in his series on Hilbert’s fifth problem contains interesting results which can be used for understanding the differences between Gleason distances and sub-riemannian distances or, more generally, norms on groups with dilations.

For normed groups with dilations see my previous post (where links to articles are also provided). Check my homepage for more details (finally I am online again).

There is also another post of mine on the Gleason metric (distance) and the CC (or sub-riemannian) distance, where I explain why the commutator estimate (Definition 3, relation (2) from the last post of Tao) forces “commutativity”, in the sense that a sub-riemannian left invariant distance on a Lie group satisfying the commutator estimate must be a riemannian distance.

What about the escape property (Definition 3, relation (1) from the post of Tao)?

From his Proposition 10 we see that the escape property implies the commutator estimate, therefore a sub-riemannian left invariant distance with the escape property must be riemannian.

An explanation of this phenomenon can be deduced by using the notion of “coherent projection”, section 9 of the paper

A characterization of sub-riemannian spaces as length dilation structures constructed via coherent projections, Commun. Math. Anal. 11 (2011), No. 2, pp. 70-111

in the very particular case of sub-riemannian Lie groups (or for that matter normed groups with dilations).

Suppose we have a normed group with dilations (G, \delta) which has another left invariant dilation structure on it (in the paper this is denoted by a “\delta bar”, here I shall use the notation \alpha for this supplementary dilation structure).

There is one such dilation structure available for any Lie group (notice that I am not trying to give a proof of the H5 problem): for any \varepsilon > 0 (but not too big)

\alpha_{\varepsilon} g = \exp ( \varepsilon \log (g))

(maybe interesting: which famous lemma is equivalent with the fact that (G,\alpha) is a group with dilations?)
Take \delta to be a dilation structure coming from a left-invariant distribution on the group. Then \delta commutes with \alpha and moreover

(*) \lim_{\varepsilon \rightarrow 0} \alpha_{\varepsilon}^{-1} \delta_{\varepsilon} x = Q(x)

where Q is a projection: Q(Q(x)) = Q(x) for any x \in G.

It is straightforward to check that (the left-translation of) Q (over the whole group) is a coherent projection, more precisely it is the projection on the distribution!
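As a sanity check (my own toy computation, not from the paper), in the Heisenberg group in exponential coordinates exp and log are the identity, so \alpha_{\varepsilon} scales all three coordinates by \varepsilon, while the homogeneous dilation \delta_{\varepsilon} scales (x, y, z) to (\varepsilon x, \varepsilon y, \varepsilon^{2} z); the limit (*) is then visible numerically:

```python
# Heisenberg group in exponential coordinates: alpha_eps scales all
# coordinates by eps (exp and log are the identity here), while the
# homogeneous dilation delta_eps scales (x, y, z) to (eps x, eps y, eps^2 z).
# Then alpha_eps^{-1} delta_eps (x, y, z) = (x, y, eps z) -> (x, y, 0),
# the projection Q on the horizontal distribution.

def alpha(eps, g):
    x, y, z = g
    return (eps * x, eps * y, eps * z)

def delta(eps, g):
    x, y, z = g
    return (eps * x, eps * y, eps**2 * z)

g = (1.5, -0.7, 2.0)
for eps in (1e-1, 1e-3, 1e-6):
    print(alpha(1 / eps, delta(eps, g)))   # tends to Q(g) = (1.5, -0.7, 0.0)
```

Note that Q(Q(g)) = Q(g), and the image of Q is exactly the horizontal distribution at the identity.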

Exercise: set \varepsilon = 1/n and use (*) to prove that Tao’s escape property implies that Q is (locally) injective. This in turn implies that Q = id, therefore the distribution is the whole tangent bundle, therefore the distance is riemannian!

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for understanding in detail the role of the escape property in the proof of Hilbert’s fifth problem.

Pros and cons of higher order Pansu derivatives

This interesting question from mathoverflow

Higher order Pansu derivative

is asked by nil (no website, no location). I shall try to explain the pros and cons of higher order derivatives in Carnot groups. As for a real answer to nil’s question, I could tell him but then …

For the “Pansu derivative” see the paper (mentioned in this previous post):

Métriques de Carnot-Carathéodory et quasiisométries des espaces symétriques de rang un, The Annals of Mathematics Second Series, Vol. 129, No. 1 (Jan., 1989), pp. 1-60

Such derivatives can be defined in any metric space with dilations, in particular in any normed group with dilations (see the definition in this previous post).
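As a hedged illustration (my own sketch, not from Pansu’s paper), here is the Pansu difference quotient computed numerically in the simplest nontrivial example, the Heisenberg group with its homogeneous dilations; all function names are mine:

```python
# Numerical sketch of the Pansu difference quotient in the Heisenberg group,
# with dilations delta_eps(x, y, z) = (eps*x, eps*y, eps^2 * z).

def mul(g, h):
    """Heisenberg group law on R^3 (polarized coordinates)."""
    x, y, z = g; a, b, c = h
    return (x + a, y + b, z + c + (x * b - y * a) / 2)

def inv(g):
    x, y, z = g
    return (-x, -y, -z)

def delta(eps, g):
    x, y, z = g
    return (eps * x, eps * y, eps**2 * z)

def pansu_diff_quotient(f, g, u, eps):
    """delta_eps^{-1}( f(g)^{-1} * f(g * delta_eps(u)) ); its limit as
    eps -> 0, when it exists, is the Pansu derivative of f at g applied to u."""
    return delta(1 / eps, mul(inv(f(g)), f(mul(g, delta(eps, u)))))

# A left translation by a fixed element: its Pansu derivative is the identity.
a = (1.0, 2.0, 3.0)
f = lambda g: mul(a, g)
print(pansu_diff_quotient(f, (0.5, -1.0, 0.25), (1.0, 1.0, 1.0), 1e-4))
# approximately (1.0, 1.0, 1.0)
```

For a left translation the quotient is exactly u for every eps (up to floating point), which matches the remark below that left translations are always “smooth” in this sense.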

Pros/cons: It would be interesting to have a higher order differential calculus with Pansu derivatives, for all the reasons which make higher derivatives interesting in more familiar situations. Three examples come to my mind: convexity, higher order differential operators and curvature.

1. Convexity. Pro: the positivity of the hessian of a function implies convexity. In the world of Carnot groups the most natural definition of convexity (at least, that is what I think) is the following: a function f: N \rightarrow \mathbb{R}, defined on a Carnot group N with (homogeneous) dilations \delta_{\varepsilon}, is convex if for any x, y \in N and for any \varepsilon \in [0,1] we have

f( x \delta_{\varepsilon}(x^{-1} y)) \leq f(x) + \varepsilon (-f(x) + f(y)) .

There are conditions in terms of higher order horizontal derivatives (if the function is derivable in the classical sense) which are sufficient for the function to be convex (in the mentioned sense). Note that the positivity of the horizontal hessian is not enough! It would be nice to have a more intrinsic differential condition, one which does not use classical horizontal derivatives. Con: as in classical analysis, we can do well without second order derivatives when we study convexity. In fact, convex analysis is such fun because we can do it without the need of differentiability.
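The convexity definition above can be tested numerically. Here is a sketch (my own check, with standard Heisenberg conventions): the function f(x, y, z) = x^2 + y^2 satisfies the inequality, because the horizontal part of x \delta_{\varepsilon}(x^{-1} y) interpolates the horizontal coordinates linearly, so ordinary convexity in those two coordinates suffices:

```python
import random

# Numerical check of the convexity definition above, in the Heisenberg group
# (a step-2 Carnot group) with dilations delta_eps(x, y, z) = (eps x, eps y, eps^2 z).

def mul(g, h):
    x, y, z = g; a, b, c = h
    return (x + a, y + b, z + c + (x * b - y * a) / 2)

def inv(g):
    return (-g[0], -g[1], -g[2])

def delta(eps, g):
    return (eps * g[0], eps * g[1], eps**2 * g[2])

def f(g):
    return g[0]**2 + g[1]**2     # depends only on the horizontal coordinates

random.seed(0)
ok = True
for _ in range(1000):
    x = tuple(random.uniform(-5, 5) for _ in range(3))
    y = tuple(random.uniform(-5, 5) for _ in range(3))
    eps = random.random()
    lhs = f(mul(x, delta(eps, mul(inv(x), y))))
    rhs = f(x) + eps * (f(y) - f(x))
    ok = ok and lhs <= rhs + 1e-9    # the defining inequality, with float slack
print(ok)  # True
```

A function depending on the vertical coordinate z would behave very differently, since z moves along a quadratic, not linear, path.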

2. Differential operators. Pro: speaking about higher order horizontal derivatives, notice that the horizontal laplacian is not expressed in an intrinsic manner (i.e. as a combination of higher order Pansu derivatives). It would be interesting to have such a representation for the horizontal laplacian, at least so as not to have to use “coordinates” (well, these are families of horizontal vector fields which span the distribution) in order to define the operator. Con: nevertheless the horizontal hessian can be defined intrinsically in a weak sense, using only the sub-riemannian distance (and the energy functional associated to it, as in the classical case). Sobolev spaces and the like are a flourishing field of research, without the need to appeal to higher order Pansu derivatives. (Pro: this concerns the existence of solutions in a weak sense, but to be honest, what about the regularity business?)

3. Curvature. Pro: what is the curvature of a level set of a function defined on a Carnot group? Clearly higher order derivatives are needed here. Con: level sets are not even rectifiable in the Carnot world!

Besides all this, there is a general:

Con: There are not many functions, from a Carnot group to itself, which are Pansu derivable everywhere with continuous derivative. Indeed, for most Carnot groups (except those of Heisenberg type and jet type) only left translations are “smooth” in this sense. So even if we could define higher order derivatives, there is not much room to apply them.

However, I think it is possible to define derivatives of Pansu type such that there are always plenty of functions derivable in this sense, and moreover it is possible to introduce higher order derivatives of Pansu type (i.e. ones which can be expressed with dilations).

UPDATE: This should be read in conjunction with this post. Please look at Lemma 11 from the last post of Tao, and also at the notation introduced previously in that post. Now, relation (4) contains an estimate of a kind of discretization of a second order derivative. Based on Lemma 11 and on what I explained in the linked post, relation (4) cannot hold in the sub-riemannian world; that is, there is surely no bump function \phi such that d_{\phi} is equivalent to a sub-riemannian distance (unless the metric is riemannian). In conclusion, there are no “interesting” nontrivial C^{1,1} bump functions (say quadratic-like; see in the post of Tao how he constructs his bump function by using the distance).

There must be something going wrong with the “Taylor expansion” from the end of the proof of Lemma 11,  if instead of a norm with respect to a bump function we put a sub-riemannian distance. Presumably instead of “n”  and  “n^{2}” we have to put something else, like   “n^{a}”    and  “n^{b}” respectively, with coefficients  a, b/2 <1 and also functions of (a kind of  degree,  say) of g. Well, the coefficient b will be very interesting, because related to some notion of curvature to be discovered.