… “full of sound and fury, signifying nothing.” And the editor, after sitting on the article for half a year, believes it, even if it is self-contradictory.
There are two problems:
Both problems could be easily solved by open peer-review. See Open peer-review as a service.
The referee may well remain anonymous, if he wishes, but a dialogue with the author and, more importantly, with other participants could only improve the quality of the review (and, consequently, the quality of the article).
Below I reproduce such a review, with comments. It is about the article “Sub-riemannian geometry from intrinsic viewpoint”, arXiv:1206.3093. You don’t need to read it, except perhaps the title, abstract and contents pages, which I reproduce here:
Sub-riemannian geometry from intrinsic viewpoint
Institute of Mathematics, Romanian Academy
P.O. BOX 1-764, RO 014700
This version: 14.06.2012
Gromov proposed to extract the (differential) geometric content of a sub-riemannian space exclusively from its Carnot-Caratheodory distance. One of the most striking features of a regular sub-riemannian space is that it has at any point a metric tangent space with the algebraic structure of a Carnot group, hence a homogeneous Lie group. Siebert characterizes homogeneous Lie groups as locally compact groups admitting a contracting and continuous one-parameter group of automorphisms. Siebert's result does not have a metric character.
In these notes I show that sub-riemannian geometry may be described by about 12 axioms, without using any a priori given differential structure, but using dilation structures instead.
Dilation structures bring forth the other intrinsic ingredient, namely the dilations, thus blending Gromov's metric point of view with Siebert's algebraic one.
MSC2000: 51K10, 53C17, 53C23
1 Introduction 2
2 Metric spaces, groupoids, norms 4
2.1 Normed groups and normed groupoids 5
2.2 Gromov-Hausdorff distance 7
2.3 Length in metric spaces 8
2.4 Metric profiles. Metric tangent space 10
2.5 Curvdimension and curvature 12
3 Groups with dilations 13
3.1 Conical groups 14
3.2 Carnot groups 14
3.3 Contractible groups 15
4 Dilation structures 16
4.1 Normed groupoids with dilations 16
4.2 Dilation structures, definition 18
5 Examples of dilation structures 20
5.1 Snowflakes, nonstandard dilations in the plane 20
5.2 Normed groups with dilations 21
5.3 Riemannian manifolds 22
6 Length dilation structures 22
7 Properties of dilation structures 24
7.1 Metric profiles associated with dilation structures 24
7.2 The tangent bundle of a dilation structure 26
7.3 Differentiability with respect to a pair of dilation structures 29
7.4 Equivalent dilation structures 30
7.5 Distribution of a dilation structure 31
8 Supplementary properties of dilation structures 32
8.1 The Radon-Nikodym property 32
8.2 Radon-Nikodym property, representation of length, distributions 33
8.3 Tempered dilation structures 34
9 Dilation structures on sub-riemannian manifolds 37
9.1 Sub-riemannian manifolds 37
9.2 Sub-riemannian dilation structures associated to normal frames 38
10 Coherent projections: a dilation structure looks down on another 41
10.1 Coherent projections 42
10.2 Length functionals associated to coherent projections 44
10.3 Conditions (A) and (B) 45
11 Distributions in sub-riemannian spaces as coherent projections 45
12 An intrinsic description of sub-riemannian geometry 47
12.1 The generalized Chow condition 47
12.2 The candidate tangent space 50
12.3 Coherent projections induce length dilation structures 53
Now the report:
“Referee report for the paper
One of the important theorems in sub-riemannian geometry is a result
credited to Mitchell that says that Gromov-Hausdorff metric tangents
to sub-riemannian manifolds are Carnot groups.
For riemannian manifolds, this result is an exercise, while for
sub-riemannian manifolds it is quite complicate. The only known
strategy is to define special coordinates and using them define some
approximate dilations. With this dilations, the rest of the argument
becomes very easy.
Initially, Buliga isolates the properties required for such dilations
more general settings (groupoids instead of metric spaces).
However, all the theory is discussed for metric spaces, and the
groupoids leave only confusion to the reader.
His claims are that
1) when this dilations are present, then the tangents are Carnot groups,
[Rmk. The dilations are assumed to satisfy 5 very strong conditions,
e.g., A3 says that the tangent exists – A4 says that the tangent has a
2) the only such dilation structures (with other extra assumptios) are
the riemannian manifolds.
He misses to discuss the most important part of the theory:
sub-riemannian manifolds admit such dilations (or, equivalently,
His exposition is not educational and is not a simplification of the
paper by Mitchell (nor of the one by Bellaiche).
The paper is a cut-and-past process from previous papers of the
author. The paper does not seem reorganised at all. It is not
consistent, full of typos, English mistakes and incomplete sentences.
The referee (who is not a spellchecker nor a proofread) thinks that
the author himself could spot plenty of things to fix, just by reading
the paper (below there are some important things that needs to be
The paper contains 53 definitions – fifty-three!.
There are 15 Theorems (6 of which are already present in other papers
by the author of by other people. In particular 3 of the theorems are
already present in .)
The 27 proofs are not clear, incomplete, or totally obvious.
The author consider thm 8.10 as the main result. However, after
unwrapping the definitions, the statement is: a length space that is
locally bi-lipschitz to a commutative Lie group is locally
bi-lipschitz to a Riemannian manifold. (The proof refers to Cor 8.9,
which I was unable to judge, since it seems that the definition of
“tempered” obviously implies “length” and “locally bi-lipschitz to the
The author confuses the reader with long definitions, which seems very
general, but are only satisfied by sub-riemannian manifolds.
The definitions are so complex that the results are tautologies, after
having understood the assumptions. Indeed, the definitions are as long
as the proofs. Just two examples: thm 7.1 is a consequence of def 4.4,
thm 9.9 is a consequence of def 9.7.
Some objects/notions are not defined or are defined many pages after
they are used.
Small remarks for the author:
def 2.21 is a little o or big O?
page 13 line 2. Which your convention, the curvdim of a come in infinite.
page 13 line -2. an N is missing in the norm
page 16 line 2, what is \nu?
prop 4.2 What do you mean with separable norm?
page 18 there are a couple of “dif” which should be fixed.
in the formula before (15), A should be [0,A]
pag 19 A4. there are uncompleted sentences.
Regarding the line before thm 7.1, I don’t agree that the next theorem
is a generalisation of Mitchell’s, since the core of his thm is the
existence of dilation structures.
Prop 7.2 What is a \Gamma -irq
Prop 8.2 what is a geodesic spray?
Beginning of sec 8.3 This is a which -> This is a
Beginning of sec 9 contains a lot of English mistakes.
Beginning of sec 9.1 “we shall suppose that the dimension of the
distribution is globally constant..” is not needed since the manifold
thm 9.2 rank -> step
In the second sentence of def 9.4, the existence of the orthonormal
frame is automatic.”
Now, besides some of the typos, the report is simply crap:
There is a part of the text which is especially perverse: “The paper is a cut-and-past process from previous papers of the author.”
Mind you, this is a course based on several papers, most of them unpublished! Moreover, every contribution from previous papers is mentioned.
Tell me what to do with these papers: being unpublished, can I use them for a paper submitted for publication? Or can they be safely ignored because they are not published? Hmm.
This shows to me that the referee knows what I am doing, but he does not like it.
Fortunately, all the papers, published or not, are available on the arXiv with the submission dates and versions.
I come back to the idea from the post Click and zip with bacterial conjugation, with a bit more detail. It is strange, maybe, but perhaps it is less strange than many other ideas circulating on the Net around brains and consciousness.
The thing is that bacteria can’t act based on semantics; they are more stupid than us. They have physical or chemical mechanisms which obviate the need for semantic filters.
Bacteria are much simpler than brains, of course, but the discussion is relevant to brains as collections of cells.
On one side we have a biological phenomenon, bacterial conjugation. On the other side we have a concept from logic, beta reduction, which is the engine that moves lambda calculus, one of the two pillars of computation.
What is the relation between semantics, bacterial conjugation and beta reduction?
Lambda calculus is a rewrite system, with the main rewrite being beta reduction. A rewrite system basically says that whenever you see a certain pattern in front of you, you can replace this pattern by another.
Graphic lambda calculus is a graph rewrite system which is more general than lambda calculus. A graph rewrite system is like a rewrite system which uses graphs instead of lines of text, or words. If you see certain graphical patterns, then you can replace them by others.
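To make the rewrite idea concrete, here is a minimal sketch of beta reduction as a pattern-replacement rule. The tuple encoding of terms is my own choice for illustration, and capture-avoiding renaming is deliberately omitted to keep the sketch short; this is not a full lambda calculus implementation.

```python
# Terms are nested tuples: ('var', name), ('lam', name, body), ('app', f, x).
# This encoding is a toy choice, not part of lambda calculus itself.

def substitute(term, name, value):
    """Replace free occurrences of ('var', name) in term by value.
    (Capture-avoiding renaming is omitted for brevity.)"""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        # A binder with the same name shadows it; stop there.
        if term[1] == name:
            return term
        return ('lam', term[1], substitute(term[2], name, value))
    return ('app', substitute(term[1], name, value),
                   substitute(term[2], name, value))

def beta_step(term):
    """If the term matches the pattern (lam x. body) arg, rewrite it."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, (_, x, body), arg = term
        return substitute(body, x, arg)
    return term

# (lam x. x) y  rewrites to  y
identity_applied = ('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))
print(beta_step(identity_applied))  # ('var', 'y')
```

The whole of beta reduction is nothing more than this: recognize the pattern, then replace it.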
Suppose that Nature uses (graphical) rewrite systems in the biological realm; for example, suppose that bacteria interactions can be modeled by a graph rewrite system. Then there has to be a mechanism which performs the recognition of a pattern involving two bacteria in interaction.
When two bacteria interact there are at least two ingredients: spatial proximity (SP) and chemical interaction (CI).
SP is something we can describe and think about easily, but from the point of view of a microbe our easy description is void. Indeed, two bacteria in SP can’t be described as pairs of coordinate numbers which are numerically close, unless each of the microbes has an internal representation of a coordinate system, which is stupid to suppose. Moreover, I think it is too much to suppose that each microbe has an internal representation of itself and of its neighbouring microbes. This would be a kind of bacterial cartesian theater.
You see, even trying to describe what could be SP for a pair of bacteria does not make much sense.
CI happens when SP is satisfied (i.e. for bacteria in spatial proximity). There is of course a lot of randomness in this, which has to be taken into account, but that does not change the fact that SP is something hard to make sense of from the point of view of bacteria.
In Distributed GLC we think about bacteria as actors (and not agents) and about SP as connections between actors. Those connections between actors change in a local, asynchronous way, during the CI (which is the proper graph rewrite, after the pattern between two actors in SP is identified).
In this view, SP between actors, this mysterious, almost philosophical relation which is forced upon us after we renounce the God's eye point of view, is described as an edge in the actors diagram.
Such an edge, in Distributed GLC, is always related to an oriented edge (arrow) in the GLC (or chemlambda) graph which is doing the actual computation. Therefore we see that arrows in GLC or chemlambda graphs (may) have more interpretations than being chemical bonds in (artificial) chemistry molecules.
Actually, this is very nice, but hard to grasp: there is no difference between CI and SP!
Now, according to the hypothesis from this post and from the previous one, the mechanism which is used by bacteria for graph rewrite is to grow pili.
The following image (done with the tools I have access to right now) explains more clearly how bacterial conjugation may be (graphic) beta reduction.
In the upper part of the figure we see the lambda abstraction node (red) and the application node (green) as encoded by crossings. They are strange crossings, see the post Zipper logic and knot diagrams. Here the crossings are represented with half of the upper passing thread erased.
Now, suppose that the lambda node is (or is managed by) a bacterial cell and that the application node is (managed by) another bacterial cell. The fact that they are in SP is represented in the first row under the blue separation line in the picture. At the left of that row, SP is represented by the arrow which goes from the lambda node (of the bacterium at left) to the application node (of the bacterium at right). At the right of that row, this SP arrow is represented as the upper arc which connects the two crossings.
Now the process of pattern recognition begins. In Nature, this is asymmetric: one of the bacterial cells grows a pilus. In this diagrammatic representation, things are symmetric (maybe a weakness of the description). The pilus growth is represented as the CLICK move.
This brings us to the last row of the image. Once the pattern is recognized (or in place), the graph reduction may happen by the ZIP move. In the crossing diagram this is represented by an R2 move, which is itself one of the ways to represent (graphic) beta moves.
Remark that in this process we have two arcs: the upper arc from the right-hand crossing diagram (i.e. the arc which represents the SP) and the lower arc which appeared after the CLICK move (i.e. the pilus connecting the two bacteria).
After the ZIP move we get two (physical) pili; this corresponds to the last row in the diagram of bacterial conjugation, which I reproduce again here from the wiki source:
After the ZIP move the arc which represents SP is representing a pilus as well!
The description of bacterial conjugation, as taken from the linked wiki page:
Conjugation diagram:
1- Donor cell produces pilus.
2- Pilus attaches to recipient cell and brings the two cells together.
3- The mobile plasmid is nicked and a single strand of DNA is then transferred to the recipient cell.
4- Both cells synthesize a complementary strand to produce a double-stranded circular plasmid and also reproduce pili; both cells are now viable donors.
Step 2 looks like a CLICK move from zipper logic:
Step 4 looks like a ZIP move:
Not convinced? Look then at the CLICK move as seen when zippers are made of crossing diagrams:
On the other side, the ZIP move is a form of graphic beta move, which is involved in behaviour 1 of GLC actors.
Imagine that each bacterium is an actor. You have a pair of (bacteria/actors) which (are in proximity/are connected in the actor diagram) and they proceed to (bacterial conjugation/behaviour 1). In the most general form, which actually involves up to 6 actors, the bacteria :a and :b interact like this:
(In this figure we see only two nodes, each belonging to one actor.) The case of bacterial conjugation is when there are only two actors, i.e. :a = :c = :f and :b = :d = :e. Each of the new arrows which appear after the graphic beta move could be seen as a pilus.
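The rewiring performed by the move can be sketched in a few lines. This is a minimal sketch under my own assumptions: I assume the usual wiring of the graphic beta move (the lambda node's input end is reconnected to the application node's output end, and the application node's argument end to the lambda node's variable end), and the port names and the assignment of external ends to :a and :b are hypothetical, chosen only to illustrate the two-actor case.

```python
# Toy sketch of the graphic beta move between actors (encoding mine).
# 'wires' maps each external port of the lambda/application pair to
# the actor holding the other end of that arrow.

def graphic_beta(wires):
    """Delete both nodes and reconnect the external ends pairwise:
       lambda.in -> application.out  and  application.arg -> lambda.var.
    Returns the two new arrows created by the move."""
    return [(wires['lam_in'], wires['app_out']),
            (wires['app_arg'], wires['lam_var'])]

# Two-actor case (:a = :c = :f, :b = :d = :e): all external ends belong
# to the two bacteria themselves.
wires = {'lam_in': ':a', 'lam_var': ':a', 'app_arg': ':b', 'app_out': ':b'}
print(graphic_beta(wires))  # [(':a', ':b'), (':b', ':a')]
```

In the two-actor case both new arrows connect the same pair of bacteria, matching the two pili obtained after the ZIP move.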
It is easy to describe, but the mechanism of bacterial conjugation is fascinating. Can it be harnessed for (this type of) computation?
UPDATE: searching for “plasmid computing”, I found Computing with DNA by operating on plasmids by T. Head, G. Rozenberg, R.S. Bladergroen, C.K.D. Breek, P.H.M. Lommerse, H.P. Spaink, BioSystems 57 (2000) 87-93.
They have a way to compute with plasmids. In this post it is argued that bacterial conjugation itself (regardless of the plasmid involved) can be used as the main graph rewrite for doing (a graphic version of) lambda calculus, basically.
Zipper logic is a graph rewrite system. It consists of a class of graphs, called zipper graphs, and a collection of moves (graph rewrites) acting on zipper graphs.
Let’s start by defining the zipper graphs. Such a graph is made of the basic ingredients described in the next two figures.
First there are two types of half-zippers and one type of zipper. For any natural number there is a half-zipper of each of the two types (first row) and a zipper.
At the right you see that these are just nodes with oriented arrows. At the left you see a more intuitive notation, which will be used further.
The numbering of the arrows indicates that there is an order on those arrows.
Besides the half-zippers and zippers, there are the already familiar nodes from chemlambda: (a) the fanout and (b) the fanin. To them we add (c) arrows, termination nodes and loops.
A zipper graph is formed by a finite number of those ingredients, which are connected according to the arrow orientations. Note that there might be arrows with one, or both ends free, and that a zipper graph does not have to be connected.
The zipper moves, now. There are the TOWER moves, which serve to stack half-zippers on top of others.
There is the CLICK move, described in the next figure for a particular case. In general, the CLICK move creates a zipper from two opposed half-zippers, possibly also with a rest, which is a half-zipper. It is very intuitive.
You shall see later that CLICK is a very funny move, one which formalizes the identification of a pattern.
The ZIP move is the one which gives the name to zipper logic. It looks like the action of zipping or unzipping a zipper.
The composite CLICK + ZIP plays the role of the graphic beta move, but here is a more subtle thing: CLICK is like identifying the good pattern for the graphic beta move and ZIP is like actually applying the graphic beta move.
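The two-phase idea can be sketched in code. This is a toy sketch under my own encoding (a half-zipper as a plain list of free arrow ends); real zipper graphs carry oriented arrows and node types, so this captures only the pattern-then-apply structure of CLICK + ZIP.

```python
# CLICK identifies the pattern; ZIP applies the rewrite.

def click(plus_half, minus_half):
    """CLICK: two opposed half-zippers form a zipper by pairing their
    arrow ends; if the sizes differ, the rest stays as a smaller half-zipper."""
    n = min(len(plus_half), len(minus_half))
    zipper = list(zip(plus_half[:n], minus_half[:n]))
    rest = plus_half[n:] or minus_half[n:]
    return zipper, rest

def zip_move(zipper):
    """ZIP: each matched pair of arrow ends becomes one direct connection,
    like pulling the zipper closed."""
    return [a + '-' + b for a, b in zipper]

zipper, rest = click(['a1', 'a2', 'a3'], ['b1', 'b2'])
print(zip_move(zipper), rest)  # ['a1-b1', 'a2-b2'] ['a3']
```

Note that `click` only matches and pairs; no connection is made until `zip_move` runs, mirroring the separation between recognizing the good pattern and actually performing the graphic beta move.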
Then you have the DIST moves, like in chemlambda, but for half-zippers:
And then there are the LOCAL PRUNING moves, some for half-zippers and others just like those in chemlambda.
Finally, there are some moves (among them the very important FAN-IN move) which involve only the familiar nodes from chemlambda.
It looks very much like chemlambda, right? That is true, with the subtlety of CLICK added, which is exploited when we find models of the zipper logic outside chemlambda.
Your computer could be sitting alone and still be completely outnumbered, for your operating system is home to millions of tiny passengers: chemlambda molecules.
The programs making up the operating system of your computer consist of around ten million lines of code, but you harbour a hundred million artificial life molecular beings. For every code line in your ancient Windows OS, there are 100 virtual bacterial ones. This is your ‘microbiome’ OS and it has a huge impact on your social life, your ability to interact with the Internet of Things and more. The way you use your computer, in turn, affects them. Everything from the forums we visit to the way we use the Internet for our decentralized computations influences the species of bacteria that take up residence in our individual microbiome OS.
Text adapted from the article Microbiome: Your Body Houses 10x More Bacteria Than Cells, which I found by reading this G+ post by Lacerant Plainer.
This is a first example of a post which responds to the challenge from Alife vs AGI. For the reader's convenience I reproduce it here:
In this post I want to propose a challenge. What I have in mind, rather vague but possibly fun, would be to develop through exchanges a “what if” world where, for example, not AI is the interesting thing when it comes to computers, but artificial biology. Not consciousness, but metabolism; not problem solving, but survival. Also related is the IoT, which is a bridge between two worlds. Now, the virtual world could be as alive as the real one. Alive in the Avida sense, in the sense that it might be like a jungle, with self-reproducing, metabolic artificial beings occupying all virtual niches, beings which are designed by humans, for various purposes. The behaviour of these virtual creatures is not limited to the virtual, due to the IoT bridge. Think that if I can play a game in a virtual world (i.e. interact both ways with a virtual world), then why couldn't a virtual creature interact with the real world? Humans and social manipulations included.
If you start to think about this possibility, then it looks a bit like this. OK, let’s write such autonomous, decentralized, self-sustained computations to achieve a purpose. It may be any purpose which can be achieved by computation, be it secure communications, money replacements, or low-level AI city management. What stops others from writing their own creatures, one, for example, just for the fun of writing the name Justin across half of the world, by building at the right GPS coordinates sticks with small mirrors on top, so that from orbit they all shine as the pixels of that name? Recall the IoT bridge and the many effects in the real world which can be achieved by really distributed, but cooperative, computations and human interactions. Next: why not write a virus to get rid of all these distributed jokes of programs which run low level in all phones, antennas and fridges? A virus to kill those viruses. A super quick self-reproducer to occupy as much as possible of the cheap computing capabilities. A killer of it. And so on. A seed, like in Neal Stephenson, only that the seed is not real but virtual, and it does not work on nanotechnology, but on any technology connected to the net via the IoT.
Stories? Comics? Fake news? Jokes? Should be fun!
Together with Louis Kauffman, we submitted the following article:
M. Buliga, L.H. Kauffman, Chemlambda, universality and self-multiplication, arXiv:1403.8046
to the ALIFE 14: THE FOURTEENTH INTERNATIONAL CONFERENCE ON THE SYNTHESIS AND SIMULATION OF LIVING SYSTEMS .
The article abstract is:
We present chemlambda (or the chemical concrete machine), an artificial chemistry with the following properties: (a) is Turing complete, (b) has a model of decentralized, distributed computing associated to it, (c) works at the level of individual (artificial) molecules, subject of reversible, but otherwise deterministic interactions with a small number of enzymes, (d) encodes information in the geometrical structure of the molecules and not in their numbers, (e) all interactions are purely local in space and time.
This is part of a larger project to create computing, artificial chemistry and artificial life in a distributed context, using topological and graphical languages.
Please enjoy the nice text and the 21 figures!
In this post I want to explain in a few words what this larger project is, because it is something which is open to anybody to contribute to and play with.
We look at the real living world as something ruled by chemistry. Everywhere in the real world there are local chemical interactions, local in space and time. There is no global, absolute point of view which is needed to give meaning to this living world. Viruses, bacteria and even prebiotic chemical entities formed the scaffold of this world, until very recently, when the intelligent armchair philosophers appeared and invented what they call “semantics”. Before the meaning of objects, there was life.
Likewise, we may imagine a near future where the virtual world of the Internet is seamlessly interlinked with the real world, by means of the Internet of Things and artificial chemistry.
Usually we are presented with a future where artificial intelligences and rational machines give expert advice or make decisions based on statistics of real-life interactions between us humans or between us and the objects we manipulate. This future is one of gadgets, useful objects and virtual bubbles for the Bayesian generic human. Marginally, in this future, we humans may chit-chat and ask corporations for better gadgets or for more useful objects. This is the future of cloud computing, that is, centralized distributed computing.
This future world does not look at all like the real world.
Because the real world is not centralized. Because individual entities which participate in the real world do live individual lives and have individual interactions.
Because we humans want to discuss and interact with others more than we want better gadgets.
We think instead about a future virtual world based on decentralized computing with an artificial chemistry, a world where individual entities, real or virtual, interact by means of an artificial chemistry, instead of being babysat by statistically benevolent artificial intelligences.
Moreover, the Internet of Things, the bridge between the real and the virtual world, should be designed as a translation tool between real chemistry and artificial chemistry. Translation of what? Of decentralized purely local computations.
This is the goal of the project, to see if such a future is possible.
It is a fun goal and there is much to learn and play with. It is by no means something which appeared from nowhere; instead it is a natural project, based on many bits of past and present research.