Gromov’s Ergobrain

Misha Gromov updated his very interesting “ergobrain” paper:

Structures, Learning and Ergosystems: Chapters 1-4, 6 

Two quotes I liked (my emphasis):

The concept of the distance between, say, two locations on Earth looks simple enough, you do not think you need a mathematician to tell you what distance is. However, if you try to explain what you think you understand so well to a computer program you will be stuck at every step.  (page 76)

Our ergosystems will have no explicit knowledge of numbers, except maybe for a few small ones, say two, three and four. On the contrary, neurobrains, being physical systems, are run by numbers which is reflected in their models, such as neural networks which sequentially compose addition of numbers with functions in one variable.

An unrestricted addition is the essential feature of “physical numbers”, such as mass, energy, entropy, electric charge. For example, if you bring together 10^{30} atoms, then, amazingly, their masses add up […]

Our ergosystems will lack this ability. Definitely, they would be bored to death if they had to add one number to another 10^{30} times. But the 10^{30}-addition, you may object, can be implemented by \log_{2} 10^{30} \sim 100 additions with a use of binary bracketing; yet, the latter is a non-trivial structure in its own right that our systems, a priori, do not have. Besides, sequentially performing even 10 additions is boring. (It is unclear how Nature performs “physical addition” without being bored in the process.) (page 84)
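To unpack the binary bracketing remark (a sketch of my own, not from the paper): to add a quantity to itself N times one can build partial sums of sizes 1, 2, 4, 8, ... by doubling, which needs on the order of \log_{2} N additions instead of N-1 sequential ones. In Python:

```python
def times(x, n):
    """Compute n*x using only additions, via binary bracketing (repeated doubling).

    Needs on the order of log2(n) additions instead of n - 1 sequential ones.
    """
    result = None
    power = x                      # x, x+x, (x+x)+(x+x), ... built by doubling
    while n > 0:
        if n & 1:
            result = power if result is None else result + power
        power = power + power      # one addition per doubling step
        n >>= 1
    return result

# 10**30 copies of (roughly) one atomic mass unit, with ~200 additions instead of 10**30
mass_of_atom = 1.66e-27   # kg, an illustrative value
print(times(mass_of_atom, 10**30))   # about 1.66e3 kg
```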

Where is this going? I look forward to finding out.

Two papers on arXiv

I have put two papers on the arXiv.

The paper Computing with space contains too many ideas and is too dense, therefore much of it will not be read, as I was warned repeatedly. This is the reason to do again what I did with Introduction to metric spaces with dilations, which is a slightly edited part of the paper A characterization of sub-riemannian spaces as length dilation structures. Apparently the part (Introduction to ...), the small detail, is read much more than the whole (A characterization ...).

Concerning the second paper, “Normed groupoids…”, it is an improvement of the older paper. Why did I not update the older paper? Because I need help: I just don’t understand where this is going (and why such a direction of research was not explored before).

Escape property of the Gleason metric and sub-riemannian distances again

The last post of Tao from his series of posts on Hilbert’s fifth problem contains interesting results which can be used for understanding the differences between Gleason distances and sub-riemannian distances or, more generally, norms on groups with dilations.

For normed groups with dilations see my previous post (where links to articles are also provided). Check my homepage for more details (finally I am online again).

There is also another post of mine on the Gleason metric (distance) and the CC (or sub-riemannian) distance, where I explain why the commutator estimate (Definition 3, relation (2) from the last post of Tao) forces “commutativity”, in the sense that a sub-riemannian left-invariant distance on a Lie group which satisfies the commutator estimate must be a riemannian distance.

What about the escape property (Definition 3, relation (1) from the post of Tao)?
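For convenience, here is roughly what these two properties say (I am paraphrasing from memory, so check Tao’s post for the exact formulation and constants): writing \| g \| = d(e,g) for the norm associated to a left-invariant distance d, the escape property asks for a constant C > 0 such that \| g^{n} \| \geq \frac{1}{C} n \| g \| whenever n \| g \| \leq \frac{1}{C}, while the commutator estimate asks that \| [g,h] \| \leq C \| g \| \| h \| for g, h close enough to the identity.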

From his Proposition 10 we see that the escape property implies the commutator estimate; therefore a sub-riemannian left-invariant distance with the escape property must be riemannian.

An explanation of this phenomenon can be deduced by using the notion of “coherent projection”, section 9 of the paper

A characterization of sub-riemannian spaces as length dilation structures constructed via coherent projections, Commun. Math. Anal. 11 (2011), No. 2, pp. 70-111

in the very particular case of sub-riemannian Lie groups (or for that matter normed groups with dilations).

Suppose we have a normed group with dilations (G, \delta) which has another left-invariant dilation structure on it (in the paper this is denoted by \bar{\delta}; here I shall use the notation \alpha for this supplementary dilation structure).

There is one such dilation structure available for any Lie group (notice that I am not trying to give a proof of the H5 problem), namely, for any \varepsilon > 0 (but not too big),

\alpha_{\varepsilon} g = \exp ( \varepsilon \log (g))

(maybe interesting: which famous lemma is equivalent to the fact that (G,\alpha) is a group with dilations?)
Take \delta to be a dilation structure coming from a left-invariant distribution on the group. Then \delta commutes with \alpha and moreover

(*) \lim_{\varepsilon \rightarrow 0} \alpha_{\varepsilon}^{-1} \delta_{\varepsilon} x = Q(x)

where Q is a projection: Q(Q(x)) = Q(x) for any x \in G.

It is straightforward to check that (the left-translation of) Q (over the whole group) is a coherent projection; more precisely, it is the projection onto the distribution!

Exercise: take \varepsilon = 1/n and use (*) to prove that the escape property of Tao implies that Q is (locally) injective. This implies in turn that Q = id, therefore the distribution is the whole tangent bundle, therefore the distance is riemannian!
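To see (*) at work in the simplest noncommutative example, take G to be the Heisenberg group in exponential coordinates of the first kind (a toy illustration of my own; the normalization of the group law is one common choice). There \alpha_{\varepsilon} is the ordinary scalar multiplication, while \delta_{\varepsilon}(a,b,c) = (\varepsilon a, \varepsilon b, \varepsilon^{2} c) is the dilation structure of the horizontal distribution spanned by the first two directions. A small numerical check in Python:

```python
import numpy as np

def delta(eps, p):
    """Sub-riemannian dilation of the Heisenberg group: homogeneous of degree 2 in the center."""
    a, b, c = p
    return np.array([eps * a, eps * b, eps**2 * c])

def alpha(eps, p):
    """alpha_eps g = exp(eps log g); in exponential coordinates this is scalar multiplication."""
    return eps * np.asarray(p, dtype=float)

g = np.array([1.0, 2.0, 3.0])

# delta commutes with alpha (as maps)
print(np.allclose(delta(0.3, alpha(0.7, g)), alpha(0.7, delta(0.3, g))))   # True

# the limit (*): alpha_eps^{-1} delta_eps g  -->  Q(g) = (1, 2, 0)
for n in [10, 100, 1000, 10000]:
    eps = 1.0 / n
    print(n, alpha(1.0 / eps, delta(eps, g)))

def Q(p):
    """The limit in (*): the projection onto the horizontal distribution."""
    a, b, c = p
    return np.array([a, b, 0.0])

print(np.allclose(Q(Q(g)), Q(g)))   # True: Q is idempotent, a projection (and clearly not injective)
```

Here Q kills the center, so it is certainly not injective; the point of the exercise is that the escape property would force injectivity, hence Q = id, hence a riemannian distance.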

UPDATE: See the recent post 254A, Notes 4: Building metrics on groups, and the Gleason-Yamabe theorem by Terence Tao, for understanding in detail the role of the escape property in the proof of the Hilbert 5th problem.

Pros and cons of higher order Pansu derivatives

This interesting question from mathoverflow

Higher order Pansu derivative

is asked by nil (no website, no location). I shall try to explain the pros and cons of higher order derivatives in Carnot groups. As for a real answer to nil’s question, I could tell him but then …

For the “Pansu derivative” see the paper (mentioned in this previous post):

Métriques de Carnot-Carathéodory et quasiisométries des espaces symétriques de rang un, The Annals of Mathematics, Second Series, Vol. 129, No. 1 (Jan., 1989), pp. 1-60

Such derivatives can be defined in any metric space with dilations, and in particular in any normed group with dilations (see the definition in this previous post).

Pros/cons: It would be interesting to have a higher order differential calculus with Pansu derivatives, for all the reasons which make higher derivatives interesting in more familiar situations. Three examples come to my mind: convexity, higher order differential operators and curvature.

1. Convexity Pro: the positivity of the hessian of a function implies convexity. In the world of Carnot groups the most natural definition of convexity (at least that is what I think) is the following: a function f: N \rightarrow \mathbb{R}, defined on a Carnot group N with (homogeneous) dilations \delta_{\varepsilon}, is convex if for any x,y \in N and for any \varepsilon \in [0,1] we have

f( x \delta_{\varepsilon}(x^{-1} y)) \leq f(x) + \varepsilon (f(y) - f(x)).

There are conditions in terms of higher order horizontal derivatives (if the function is differentiable in the classical sense) which are sufficient for the function to be convex (in the mentioned sense). Note that the positivity of the horizontal hessian is not enough! It would be nice to have a more intrinsic differential condition, one which does not use classical horizontal derivatives. Con: as in classical analysis, we can do well without second order derivatives when we study convexity. In fact convex analysis is so much fun precisely because we can do it without needing differentiability.
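As a toy illustration of the definition above (my own example): on the Heisenberg group, with the group law (a,b,c)(x,y,z) = (a+x, b+y, c+z+(ay-bx)/2) and dilations \delta_{\varepsilon}(a,b,c) = (\varepsilon a, \varepsilon b, \varepsilon^{2} c), the function f(a,b,c) = a^{2} + b^{2} is convex in this sense, because along \varepsilon \mapsto x \delta_{\varepsilon}(x^{-1} y) the first two coordinates are the usual affine interpolation between those of x and y. A quick numerical check in Python:

```python
import numpy as np

def mul(p, q):
    """Heisenberg group law in exponential coordinates of the first kind."""
    a, b, c = p
    x, y, z = q
    return np.array([a + x, b + y, c + z + (a * y - b * x) / 2.0])

def inv(p):
    return -np.asarray(p, dtype=float)

def delta(eps, p):
    a, b, c = p
    return np.array([eps * a, eps * b, eps**2 * c])

def f(p):                 # candidate convex function: depends only on horizontal coordinates
    a, b, _ = p
    return a**2 + b**2

rng = np.random.default_rng(0)
ok = True
for _ in range(10000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    eps = rng.uniform(0.0, 1.0)
    lhs = f(mul(x, delta(eps, mul(inv(x), y))))
    rhs = f(x) + eps * (f(y) - f(x))
    ok &= bool(lhs <= rhs + 1e-12)
print(ok)   # True for f(a,b,c) = a**2 + b**2
```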

2. Differential operators Pro: Speaking about higher order horizontal derivatives, notice that the horizontal laplacian is not expressed in an intrinsic manner (i.e. as a combination of higher order Pansu derivatives). It would be interesting to have such a representation of the horizontal laplacian, at least in order not to have to use “coordinates” (well, these are families of horizontal vector fields which span the distribution) to define the operator. Con: nevertheless the horizontal hessian can be defined intrinsically in a weak sense, using only the sub-riemannian distance (and the energy functional associated to it, as in the classical case). Sobolev spaces and the like are a flourishing field of research, without the need to appeal to higher order Pansu derivatives. (Pro: this concerns the existence of solutions in a weak sense, but to be honest, what about the regularity business?)
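To fix ideas about what “coordinates” means here (again only a sketch, for the Heisenberg group with the group law used above): one standard left-invariant horizontal frame is X = \partial_{a} - \frac{b}{2} \partial_{c}, Y = \partial_{b} + \frac{a}{2} \partial_{c}, and the horizontal laplacian is \Delta_{H} f = XXf + YYf, written through this choice of frame rather than intrinsically:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
f = sp.Function('f')(a, b, c)

# Left-invariant horizontal vector fields on the Heisenberg group
# (for the group law (a,b,c)(x,y,z) = (a+x, b+y, c+z+(a*y-b*x)/2))
X = lambda u: sp.diff(u, a) - b / 2 * sp.diff(u, c)
Y = lambda u: sp.diff(u, b) + a / 2 * sp.diff(u, c)

# Horizontal laplacian: not intrinsic, it is written through the chosen frame X, Y
laplacian_H = sp.simplify(X(X(f)) + Y(Y(f)))
print(laplacian_H)
```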

3. Curvature Pro: What is the curvature of a level set of a function defined on a Carnot group? Clearly higher order derivatives are needed here. Con: level sets are not even rectifiable in the Carnot world!

Besides all this, there is a general:

Con: There are not many functions, from a Carnot group to itself, which are Pansu differentiable everywhere, with continuous derivative. Indeed, for most Carnot groups (except those of Heisenberg type and jet type) only left translations are “smooth” in this sense. So even if we could define higher order derivatives, there is not much room to apply them.

However, I think that it is possible to define derivatives of Pansu type such that there are always plenty of functions differentiable in this sense, and moreover that it is possible to introduce higher order derivatives of Pansu type (i.e. derivatives which can be expressed with dilations).

UPDATE: This should be read in conjunction with this post. Please look at Lemma 11 from the last post of Tao and also at the notation introduced previously in that post. Now, relation (4) contains an estimate of a kind of discretization of a second order derivative. Based on Lemma 11 and on what I explained in the linked post, relation (4) cannot hold in the sub-riemannian world, that is, there is surely no bump function \phi such that d_{\phi} is equivalent to a sub-riemannian distance (unless the metric is riemannian). In conclusion, there are no “interesting” nontrivial C^{1,1} bump functions (say quadratic-like; see in the post of Tao how he constructs his bump function by using the distance).

There must be something going wrong with the “Taylor expansion” from the end of the proof of Lemma 11 if, instead of a norm coming from a bump function, we put a sub-riemannian distance. Presumably instead of “n” and “n^{2}” we have to put something else, like “n^{a}” and “n^{b}” respectively, with exponents a, b/2 < 1 which are also functions of (a kind of degree, say, of) g. The exponent b would be very interesting, because it should be related to some notion of curvature yet to be discovered.

Topographica, the neural map simulator

The following speaks for itself:

 Topographica neural map simulator 

“Topographica is a software package for computational modeling of neural maps, developed by the Institute for Adaptive and Neural Computation at the University of Edinburgh and the Neural Networks Research Group at the University of Texas at Austin. The project is funded by the NIMH Human Brain Project under grant 1R01-MH66991. The goal is to help researchers understand brain function at the level of the topographic maps that make up sensory and motor systems.”

From the Introduction to the user manual:

“The cerebral cortex of mammals primarily consists of a set of brain areas organized as topographic maps (Kaas et al. 1997; Van Essen et al. 2001). These maps contain systematic two-dimensional representations of features relevant to sensory, motor, and/or associative processing, such as retinal position, sound frequency, line orientation, or sensory or motor motion direction (Blasdel 1992; Merzenich et al. 1975; Weliky et al. 1996). Understanding the development and function of topographic maps is crucial for understanding brain function, and will require integrating large-scale experimental imaging results with single-unit studies of individual neurons and their connections.”

One of the Tutorials is about the Kohonen model of self-organizing maps, mentioned in the post Maps in the brain: fact and explanations.
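For readers who have not met it, the Kohonen self-organizing map is itself a very small algorithm: a grid of units with weight vectors, where each input pulls the best-matching unit and its grid neighbours towards it. A minimal sketch (my own paraphrase of the textbook algorithm, not Topographica code):

```python
import numpy as np

def train_som(data, grid=(10, 10), steps=5000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen self-organizing map on a 2-D grid of units."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # grid coordinates of the units, used by the neighbourhood function
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    for t in range(steps):
        x = data[rng.integers(len(data))]
        # best-matching unit
        dist = np.linalg.norm(weights - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(dist), dist.shape)
        # learning rate and neighbourhood radius decay over time
        lr = lr0 * np.exp(-t / steps)
        sigma = sigma0 * np.exp(-t / steps)
        neigh = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
        # pull the winner and its neighbours toward the input
        weights += lr * neigh[..., None] * (x - weights)
    return weights

# toy usage: organize random 3-D colours on a 10x10 map
colours = np.random.default_rng(1).random((500, 3))
som = train_som(colours)
print(som.shape)   # (10, 10, 3)
```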

Numbers for biology: are they enough?

Very impressed by this post:

Numb or numbered?

from the blog of Stephen Curry.

Two reactions, somewhat opposite, could be triggered by the parallel between physics (now a field respected by any layman) and biology (the new challenger).

The glory of physics, as well as the industrial revolution, is a consequence of the discovery of infinitesimal calculus by the Lucasian Professor of Mathematics Isaac Newton and by the philosopher, lawyer and mathematician Gottfried Leibniz. All of this started from the extraordinary creation of a gifted generation of thinkers. We may like this or not, but it is TRUE.

The reactions:

1. Positive: yes, definitely some mathematical literacy would do a lot of good to students in the biological sciences. In fact I am shocked that apparently there is resistance to this in the field. (Yes, mathematicians can be and are arrogant when interacting with other scientists, but in most cases that means either (a) that they are bad mathematicians anyway, except when they are not, or (b) that they are reacting to the misconceptions of the other scientists (who, by manifesting such narrowness of view, are bad scientists, except when they are not).)

2. Negative: Numeracy and preadolescent recipes (at least this is, or was, the level of mathematical knowledge in the school curriculum in the part of the world where I grew up) are not enough. Mathematics was highly developed before infinitesimal calculus, but that was not sufficient for the Newtonian revolution.

To finish: Robert Hooke was of the same generation as Newton and Leibniz. So maybe biology could hurry up a bit in this respect.
