Discuss Holochain, white paper and stuff

By default this post will stay open for 1 month; after that it will either be deleted or I'll keep it, depending on the quality and quantity of comments. So, if it turns out to be useful, at least a little, then I'll keep it, like I did with the old discussions about Euclideon/Unlimited Detail.

I recently got interested in Holochain, as a tangential effect of this happening. I discovered that I like their (human) language a lot. There are some parts which trigger warnings for the professional reader persona inside me.

Then, why not ask? This is the Holochain white paper. Let's take it as the basis of the discussion (if any).

To open the discussion: I like that they start with a generous model which has, as particular cases, git, bitcoin and ethereum, and holochain.

__________

I remind you that your first comment on this blog is moderated.

__________

 


Authors: hodl your copyright or be filtered

For me this is the only sane reaction to the EU Copyright Directive. The only thing to do is to keep your copyright. Never give it to another. You can give non-exclusive rights of dissemination, but not the copyright of your work.

So: if you care about your piece of work then hodl the copyright; if you don't care about it (you produced it to satisfy a job demand, for example) then proceed as usual, it's trash anyway.

For my previous comments see this and this.

If you have other ideas then share them.

 

The second Statebox Summit – Category Theory Camp uses my animation

with attribution.

UPDATE: the post was initially written as a reaction to the fact that the Open Science project chemlambda requires attribution when a product related to it is used (in this case an animation obtained from a dodecahedron molecule which produces 4 copies; it works because it is a Petersen graph). As can be seen in the comments, everything was fixed with great speed, thank you Jelle. Here's the new page look

Screenshot from 2018-09-09 15:18:06.png

Wishing the best to the participants, I’d like to learn more about Holochain in particular.

The rest of the post follows. It may be nice because it made me think about two unrelated little facts: (1) it was pointed out to me before that there is a resemblance between chemlambda molecules and the "vajra chains"; (2) the structure and rewrites of the I Ching hexagrams are close to the two families of chemlambda rewrites, especially as seen in the "genes" shadow of a molecule. So, putting these two things together, stimulated to find an even more hallucinatory application of chemlambda, I arrived at algorithmic divination. Interested? Write to me!

__________________________________________________

I hope they’ll fix this; the animation is probably taken from the slides I prepared for TED, Chemlambda for the people (html+js).

Here’s a gif I made from what I see today, Saturday, at 20:20 Bucharest time.

test_s

Otherwise I’m interested in the subject and open to discussions, if any, which are not category theory PR but of substance.

UPDATE: second thoughts

  • the hallucinatory power of chemlambda manifests again 🙂
  • my face was good enough for a TED conference (source); now my animation is good enough for a CT conference, but not my charming personality and ideas
  • here is a very lucrative idea, contact me if you like it, chemlambda OS research could be financed from it: I was told about the resemblance between chemlambda molecules and the vajra chains of awareness, therefore what about making an app which would use chemlambda as a divination tool? Better than a horoscope, if well made; huge market. I can design some molecules and the algorithm for divination.

The mystery of dissipation and hamiltonians which share the same mathematical formalism

I love the “Transparency is better than trust” idea so much that I put it at the top of my page. Like I did for em, I want to announce the start of a new draft where a general mathematical treatment is formulated, which aims to explain a growing collection of coincidences between dissipation as treated in convex analysis and the hamiltonian formalism.

This form of dissipation function, discovered by De Saxce, called by him “bipotential”, shares some very intriguing features with hamiltonians. In past articles about the mathematical treatment of bipotentials these features were noted as curiosities (for example the resemblance between convex lagrangian covers (remark 6.1 here) and lagrangian fibrations from quantization).
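For readers who have not seen it, a minimal reminder of the notion, as I recall it from the bipotential articles (the draft has the precise setting): a bipotential on a dual pair (X, Y, \langle \cdot , \cdot \rangle) is a function

    b : X \times Y \to \mathbb{R} \cup \{+\infty\}, \quad \text{convex and lower semicontinuous in each argument separately},

such that

    b(x,y) \geq \langle x , y \rangle \quad \text{for all } (x,y) \in X \times Y,

with the implicit constitutive law given by the equality case:

    b(x,y) = \langle x , y \rangle \iff y \in \partial b(\cdot , y)(x) \iff x \in \partial b(x , \cdot)(y).

The separable case b(x,y) = \phi(x) + \phi^*(y), with \phi^* the Fenchel conjugate, recovers the classical dissipation potential; the non-separable examples are the intriguing ones.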

But with the formalism from the draft, which extends the one from arXiv:1807.10480, dissipation will indeed be treated with the same mathematical formalism as hamiltonians (from a stochastic point of view).

I don’t say that dissipation is a sort of hamiltonian, mind you, I say that once again nature likes to repeat a winning pattern, in a different context.

So follow that link from time to time, because I am going to update it until it reaches the final form.

Comments welcome!

On the origin of artificial species

I had read Newton but not Darwin’s On the origin of species, until now. By chance, looking for new things to read in the tired landscape of libraries, I stumbled upon a translation of Darwin’s famous book. It is wonderful.

While reading it I was struck by the fact that genetics was unknown to him. Still, what a genius. I’m almost a professional reader (if you understand what I mean) and I passed through Newton, as I said, in the original, and through some of the ancient Greek philosophers (an even greater experience). Now, as I’m reading Darwin in a translation, I am aware of the translation’s limitations, but I can’t stop thinking that, before reading it, I had already lived this experience.

The main gain of the chemlambda project was, for me, the building of a world which undoubtedly has an autonomous existence, whatever your opinions may be. In my dreams, as I read Darwin, I see a rewrite of this book based on observations of chemlambda’s 427 valid molecules (eliminate from the chemlambda library of molecules those from this list; what you get are all the valid molecules).

What I don’t see discussed, perhaps because of my ignorance, is the last logical implication of Darwin’s work: that the theory of evolution refutes any semantics, in particular the semantics of species.

In probabilities lies the possible blend of individual evolution and species evolution into a new theory, not unlike the theory of evolution, but as different as possible from any actual political theory. A dream, of course, a Hari Seldon dream 🙂 because probabilities look as much like semantics as they look like space.

Who really knows? Funding bodies, especially those private high risk takers, don’t seem to have the balls to take risks in the field of fundamental research, the riskiest activity ever invented. Who knows? I may know, if this little cog in the evolution machine ever had a chance to.

Summer report 2018, part 2

Continues from Summer report 2018, part 1.

On the evolution of the chemlambda project and social context. 

Stories about the molecular computer. The chemlambda project evolved in a highly unexpected way, from a scientific quest done completely in the open to a frantic exploration of a new territory. It became a story-generating machine. I was “in the zone” for almost two years. Instead of the initial goal of understanding the computational content of emergent algebras, the minimalistic chemlambda artificial chemistry concentrated on the molecular computer idea.

This idea can be stated as: identify real molecules and chemical reactions which work like the interaction nets style rewrites of chemlambda. See the article Chemlambda strings for a simple explanation, as well as a recent presentation of the newest (available) version of chemlambda: v3. (It is conservative in the number of nodes and links; the presentation is aimed at a larger audience.)

This idea is new. Indeed, there are many other efforts towards molecular computing. There is the old ALCHEMY (algorithmic chemistry), where lambda calculus serves as inspiration, by taking the application operation as a chemical reaction and the lambda abstraction as a reactive site in a molecule. There is the field of DNA and RNA computing, where computations are embodied as molecular machines made of DNA or RNA building blocks. There is the pi calculus formalism, as pure in a sense as lambda calculus, based exclusively on names of communication channels, which can be applied to chemistry. There is the idea of metabolic networks based on graph grammars.

But nowhere is there the idea to embed interaction net rewrites into real chemical reactions. So: not arbitrary graph grammars, but a highly selected class. Not metabolic networks in general, but molecules designed so that they individually compute. Not solutions well stirred in a lab. Not static or barely dynamic lego-like molecules. Not boolean gate computing, but functional programming like computing.

From the CS side this is also new, because instead of concentrating on these rewrites as a tool for understanding lambda calculus reductions, we go far outside the realm of lambda calculus terms, into a purely random calculus with graphs.
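To make the “random and local” side concrete, here is a toy sketch in Python. It is only an illustration of the kind of algorithm I mean, with an invented two-node rule; it is not the actual chemlambda graph encoding, node set or rewrite family.

    import random

    # Toy "molecule": nodes have a type; edges are pairs of node ids.
    # Invented for illustration only, not the chemlambda mol format.
    def make_graph():
        nodes = {0: "A", 1: "B", 2: "A", 3: "B"}
        edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
        return {"nodes": nodes, "edges": edges}

    # One local rule: an edge between an "A" node and a "B" node is removed
    # and both endpoints become "C". The rule sees only two nodes and one edge.
    def matches_ab(g, a, b):
        return {g["nodes"][a], g["nodes"][b]} == {"A", "B"}

    def apply_ab(g, a, b):
        g["edges"].remove((a, b))
        g["nodes"][a] = g["nodes"][b] = "C"

    RULES = [(matches_ab, apply_ab)]

    def run(g, rules, steps=100, seed=0):
        """The dumbest algorithm: pick a random edge, pick a random rule,
        apply it if it matches, repeat. No global plan, no semantics."""
        rng = random.Random(seed)
        for _ in range(steps):
            if not g["edges"]:
                break
            a, b = rng.choice(g["edges"])
            pred, action = rng.choice(rules)
            if pred(g, a, b):
                action(g, a, b)
        return g

    print(run(make_graph(), RULES))

The real chemlambda rewrites act on graphs with typed nodes and ports (lambda, application, fan-out and so on), but the driving loop is of exactly this dumb kind: pick a random site, do a purely local pattern match, rewrite, repeat.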

But it has to be tried, right? Somebody has to try to identify this chemistry. Somebody has to try to use the basic concepts of functional programming from the point of view of the machine, not the programmer.

For the mathematical and computing aspects see this mathoverflow question and answers.

For the general idea of the molecular computer see these html/js slides. They were prepared for a TED talk with, in my opinion, a very weird story.

For the story side and ethical concerns see, for example, these two short stories posted at telegra.ph: Internet of smells, Home remodeling (a reuse of Proust).

In order to advance there is the need to find either, or rather both, funding and brain time from a team dedicated to this. Otherwise this project is stalled.

I tried very hard to find the funding and I have not succeeded (other weird stories, maybe some day I will tell them).

I was stalled and I had to go back to my initial purpose: emergent algebras. However, being so close to reverse engineering nature’s OS gives new ideas.

After a year of efforts I understood that it all comes down to stochastic luck, which can be groomed and used (somehow). This brings me to the stories of the present, for another post.

 

Summer report 2018, part 1

In this report I intend to present explanations about the scientific evolution of the chemlambda project, about the social context, and about new projects (em and stochastic evolution). I shall also write about my motivations and future intentions.

On the evolution of the chemlambda project and social context. 

Inception and initial motivations. This open notebook contains many posts witnessing the inception of chemlambda. I started to learn about and understand some aspects of the theory of computation as a geometer. My goal was to understand the computational content of working with the formalism of emergent algebras. I thought that basically any differential geometric computation reduces to a graph rewrite automaton, without passing through the usual road, which is non-geometrical, i.e. reduction to cartesian numerical manipulations. The interest is obvious to me, although I soon discovered that it is not obvious to many. A preferred analogy which I used was the one concerning the fly and the researcher who tries to understand the visual system of the fly. The fly’s brain, a marvel of nature, does not work with, nor does it contain a priori knowledge of, cartesian geometry, while at the same time the explanations of the researcher are almost completely based on cartesian geometry considerations. What, then, is the way of the fly’s brain? And why can it be explained by recourse to sophisticated (for a fly) abstractions?

That’s why I advanced the idea that there is an embedded mechanism in nature which makes abstractions concrete and runs them by the dumbest algorithm ever: random and local. In this sense, if all differential geometric computations can be executed by a graph rewrite automaton, using only random rewrites, applied only locally (i.e. touching only a small number of nodes and arrows), then the fly’s brain way and the researcher’s brain way are simply the same; only the semantics (which the researcher has) is different, being only a historical construction based on centuries of reverse engineering techniques called geometry, physics, mathematics.

The emergent algebras formalism actually has two parts: the first can easily be reduced to graph rewrites, the second concerns passing to the limit in a precise sense, thereby obtaining new, “emergent” rewrites and equivalences of rewrites. At that initial point I had nothing, neither the pure graph rewrites formalism nor the passing to the limit formalism, except for some particular results (in metric geometry and, intriguingly, in some problems related to approximate groups, then a hot topic in the work of Tao and collaborators).
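To give a taste of the second part, here is the trivial Euclidean model case (only an illustration, in the notation of the dilation structures papers; the interesting cases are the non-commutative ones). The dilations and the approximate sum are

    \delta^x_\varepsilon y = x + \varepsilon (y - x), \qquad
    \Sigma^x_\varepsilon (y,z) = \delta^x_{\varepsilon^{-1}} \, \delta^{\delta^x_\varepsilon y}_\varepsilon z = y + z - x - \varepsilon (y - x),

so in the limit \varepsilon \to 0 the emergent operation is the addition with base point x, namely y + z - x. In the general formalism the limit is a genuinely new, emergent rewrite, and the operation it encodes may be a non-commutative local group operation; that is where the geometry enters.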

That is how GLC (graphic lambda calculus) appeared. It was, in retrospect, a particular formulation analogous with interaction graphs, with the new idea that it is applicable to geometry, via the fact that the emergent algebra rewrites are of the same kind as the interaction graph rewrites. Interaction graphs are an old subject in CS, only my point of view was completely different from the classical one. Where the functional programming wizards were interested in semantics, global concepts and the power of humanly designed abstractions, I was interested in the minimal, machine (or fly’s brain) like, random and automatic aspects.

Because approximate groups were a hot subject then, I embedded a little part of what I was thinking about into a grant collaboration financed locally. Recall that I was always an Open Science researcher, therefore I concentrated on openly (i.e. back then via arXiv) constructing the fundamentals from which particular applications to approximate groups would have been low hanging fruits. However, for what I believe are political reasons (I publicly expressed, as usual, my strong feelings against academic and political corruption, which debases me as a citizen of my country, which I have always loved very much), my grant funding was cancelled even though I did a lot of relevant work, by far the most publicly visible and original. Oh well, that’s life, I was never much interested in these political aspects. I learned a truth the hard way: my country has a great pool of talents which makes me proud, but at the same time talent here is choked by a group of mediocre and opportunistic managers. They thrive not because of their scientific talent, which is not nonexistent, only modest; they thrive because of their political choices. This state of affairs created an inverted pyramid of power (as seen from the talent point of view).

I therefore filed in my notebooks the problem of understanding how a linear dilation structure emerges from an approximate group. There was nothing to stop me from going full OS.

I therefore wrote the Chemical concrete machine paper because I meant it: there should be a way to make a machine, the dumbest of all, which works like Nature. This was an advance over GLC, because it had almost all rewrites local (except the global fan-out) and because it advanced the idea of the dumbest algorithm, and that the dumbest algorithm is the way Nature works.

Moreover, the interest in GLC soared and I had the occasion to talk a lot with Louis Kauffman, a wonderful researcher whom I have always admired, the king of knot theory. There were also lots of CS guys interested in GLC and they tried to convince me that maybe GLC holds the key to true decentralized computing. A project with some of them and with Louis (contained in this arXiv paper) was submitted to an American agency. Unfortunately, even though the theoretical basis was appreciated, the IT part was not well done; actually it was almost nonexistent. My problem was that the ideas I advanced were not always accepted (sometimes not even by Louis); I needed somebody (I am a mathematician, not a programmer, see?) to write some pretty simple programs and let them run, to see whether I’m right and semantics is just human BS, or not.

For an artificial life conference I wrote with Louis another presentation of chemlambda, after the GLC project was not accepted for US funding. The formalism was still not purely local. There, Louis presented his older and very interesting points of view about computation and knot theory. These were actually different from mine, because for me knot theory is yet another graph rewriting automaton (without a defined algorithm for functioning). Moreover, recall emergent algebras: I did not manage to get Louis interested in my point of view that the Reidemeister 3 move is emergent, not fundamental.

Louis Kauffman is the first programmer of chemlambda. Indeed, he succeeded in making some reductions in chemlambda using Mathematica. I don’t have Mathematica, as I never use anything which is not open on my computers. I longed for somebody, a real programmer, to make those darned simple programs for chemlambda.

I was interested back then in understanding chemlambda quines and complex reductions. On paper it was very, very hard to make progress.

Also, I did not succeed in gathering interest for the emergent algebras aspect. Chemlambda simplified the emergent algebra side by choosing a minimal set of nodes, some of which had an emergent algebra interpretation, but nobody cared. It is hard, though, to find anybody familiar with modern metric geometry and analysis and also familiar with interaction nets.

After some depressing months I wrote the programs in two weeks and got the first chemlambda reduction made with a combination of awk programs and d3.js.  The final repository is here.

The version of chemlambda (call it v2) used is explained in the article Molecular computers. It is purely local.

From there my choice was to make chemlambda a flagship of Open Science. You know much of the story, but you may not know how and why I built more than 400 chemlambda molecules. The truth is that, behind the pretty animations, almost every molecule deserves a separate article; or, otherwise stated, when you look at a chemlambda molecule in action you see a visual version of a mathematical proof.

The chemlambda formalism has been externally validated, first by chemlambda-py (which has, though, one rewrite wrongly implemented, but is otherwise OK), then by chemlambda-hask, which is much more ambitious, being a platform for a Haskell version.

As for the connection with knot theory, you have the Zipper Logic article (though, like chemlambda v1, it is not a purely local algorithm, but it can easily be made so by the same techniques as chemlambda v2).

I also used figshare for the chemlambda collection of simulations (which covers the animations shown in the chemlambda collection on G+, see them starting from an independent list).

As far as the social communication aspects of this OS project are concerned, it was a huge success.
