# Emergent algebras as combinatory logic (Part II)

This post continues Emergent algebras as combinatory logic (Part I).  My purpose is to introduce the calculus standing behind Theorem 1 from the mentioned post.

We have seen (Definition 2) that there are approximate sum and difference operations associated to an emergent algebra. Let me add to them a third operation, the approximate inverse. For clarity I repeat Definition 2 here, supplemented with the definition of the approximate inverse. This gives:

Definition 2′.   For any $\varepsilon \in \Gamma$ we give the following names to several combinations of operations of emergent algebras:

• the approximate sum operation is $\Sigma^{x}_{\varepsilon} (u,v) = x \bullet_{\varepsilon} ((x \circ_{\varepsilon} u) \circ_{\varepsilon} v)$,
• the approximate difference operation is $\Delta^{x}_{\varepsilon} (u,v) = (x \circ_{\varepsilon} u) \bullet_{\varepsilon} (x \circ_{\varepsilon} v)$,
• the approximate inverse operation is $inv^{x}_{\varepsilon} u = (x \circ_{\varepsilon} u) \bullet_{\varepsilon} x$.
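As a minimal sketch, Definition 2′ can be tried out in the simplest possible emergent algebra: $\mathbb{R}^{n}$ with the usual dilations $x \circ_{\varepsilon} y = x + \varepsilon (y-x)$ and $x \bullet_{\varepsilon} y = x + \varepsilon^{-1}(y-x)$. (This concrete model, and all names below, are my illustrative choices, not part of the definition.)

```python
# Definition 2' in the linear model on R^n:
#   x o_eps y = x + eps*(y - x),   x •_eps y = x + (1/eps)*(y - x).
# This is only an illustrative model; the definition itself is abstract.

def circ(x, u, eps):
    """x o_eps u : dilation of coefficient eps based at x."""
    return tuple(xi + eps * (ui - xi) for xi, ui in zip(x, u))

def bullet(x, u, eps):
    """x •_eps u : dilation of coefficient 1/eps based at x."""
    return circ(x, u, 1.0 / eps)

def approx_sum(x, u, v, eps):
    """Sigma^x_eps(u, v) = x •_eps ((x o_eps u) o_eps v)."""
    return bullet(x, circ(circ(x, u, eps), v, eps), eps)

def approx_diff(x, u, v, eps):
    """Delta^x_eps(u, v) = (x o_eps u) •_eps (x o_eps v)."""
    return bullet(circ(x, u, eps), circ(x, v, eps), eps)

def approx_inv(x, u, eps):
    """inv^x_eps(u) = (x o_eps u) •_eps x."""
    return bullet(circ(x, u, eps), x, eps)

# In this model, as eps -> 0 the three operations converge to
# u + v - x, x + v - u and 2x - u: the vector space operations based at x.
x, u, v = (1.0, 2.0), (3.0, 5.0), (4.0, -1.0)
for eps in (0.1, 0.01, 0.001):
    print(eps, approx_sum(x, u, v, eps))
```

A direct computation gives $\Sigma^{x}_{\varepsilon}(u,v) = u + v - x - \varepsilon(u-x)$ in this model, so the convergence is visibly of order $\varepsilon$.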

The justification for these names comes from the explanations given in the post “The origin of emergent algebras (part II)“, where I discussed the sketch of a solution to the question “What makes the (metric) tangent space (to a sub-riemannian regular manifold) a group?”, given by Bellaiche in the last two sections of his article The tangent space in sub-riemannian geometry, in the book Sub-riemannian geometry, eds. A. Bellaiche, J.-J. Risler, Progress in Mathematics 144, Birkhauser 1996. We have seen there that the group operation (the addition of vectors, noncommutative in principle) can be seen as the limit of compositions of intrinsic dilations, as $\varepsilon$ goes to $0$. It is important that this limit exists and that it is uniform, according to Gromov’s hint.

Well, with the notation $\delta^{x}_{\varepsilon} y = x \circ_{\varepsilon} y$, $\delta^{x}_{\varepsilon^{-1}} y = x \bullet_{\varepsilon} y$, it becomes clear, for example, that the composition of intrinsic dilations described in the figure from the post “The origin of emergent algebras (part II)” is nothing but the approximate sum from Definition 2′. (That is, if we formally replace the emergent algebra operations by the respective intrinsic dilations, then the approximate sum $\Sigma^{x}_{\varepsilon}(y,z)$ appears as the red point E from the mentioned figure. It still remains to prove that intrinsic dilations on regular sub-riemannian spaces give rise to emergent algebras; this was done in arXiv:0708.4298.)
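Indeed, unfolding Definition 2′ in the dilation notation shows the approximate sum as a composition of three dilations, based at $x$, at $\delta^{x}_{\varepsilon} y$, and again at $x$: $\Sigma^{x}_{\varepsilon}(y,z) = \delta^{x}_{\varepsilon^{-1}} \, \delta^{\delta^{x}_{\varepsilon} y}_{\varepsilon} \, z$.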

We therefore recognize the two ingredients of Bellaiche’s solution in the definition of an emergent algebra:

• approximate operations, which are just clever compositions of intrinsic dilations in the realm of sub-riemannian spaces, which
• converge in a uniform way to the exact operations which give the algebraic structure of the tangent space.

Therefore, a rigorous formulation of Bellaiche’s solution is Theorem 1 from the previous post, provided that we extract, from the long differential geometric work done by Bellaiche, only the part which is necessary for proving that intrinsic dilations produce an emergent algebra structure.

Nevertheless, Theorem 1 shows that the “emergence of operations” phenomenon is not at all specific to sub-riemannian geometry. In fact, once we get the idea of the right definition of approximate operations (from sub-riemannian geometry), we can simply try to prove the theorem by “abstract nonsense”, i.e. algebraically, with a dash of uniform convergence at the end.

For this we have to identify the algebraic relations satisfied by these approximate operations. For example: Is the approximate sum associative? Is the approximate difference the inverse of the approximate sum? Is the approximate inverse of an element its inverse with respect to the approximate sum? And so on. The answer to these questions is “approximately yes”.
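The "approximately yes" can be witnessed numerically. In the linear model on $\mathbb{R}^{n}$ with dilations $x \circ_{\varepsilon} y = x + \varepsilon(y-x)$ (again my illustrative choice, not from the post), the approximate sum fails associativity only by a term of order $\varepsilon$:

```python
# In the linear model x o_eps y = x + eps*(y - x), the defect of
# associativity of the approximate sum shrinks linearly with eps.
# (Illustrative model and helper names, not from the post.)

def circ(x, u, eps):
    return tuple(xi + eps * (ui - xi) for xi, ui in zip(x, u))

def bullet(x, u, eps):
    return circ(x, u, 1.0 / eps)

def approx_sum(x, u, v, eps):
    return bullet(x, circ(circ(x, u, eps), v, eps), eps)

def dist(a, b):
    return max(abs(ai - bi) for ai, bi in zip(a, b))

x, u, v, w = (0.0, 0.0), (1.0, 2.0), (3.0, -1.0), (-2.0, 4.0)
for eps in (0.1, 0.01, 0.001):
    lhs = approx_sum(x, u, approx_sum(x, v, w, eps), eps)
    rhs = approx_sum(x, approx_sum(x, u, v, eps), w, eps)
    print(eps, dist(lhs, rhs))  # the associativity defect, of order eps
```

A symbolic computation in this model gives the defect exactly as $\varepsilon(1-\varepsilon)(u-x)$, consistent with the linear decay seen in the output.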

It is clear that in order to find the right relations (approximate associativity and so on) between these approximate operations we need to reason in a clearer way. Just by looking at the expressions of the operations from Definition 2′, it is obvious that if we start with a brute force “shut up and compute” approach then we will quickly end up with a mess of parentheses and coefficients. There has to be an easier way to deal with these approximate operations than brute force.

The way I have found has to do with a graphical representation of these operations, a way which eventually led me to graphic lambda calculus. This is for next time.