Support the YODA bill!

This looks like a tremendously important bill!

“A bill introduced Sept. 18 would make clear that consumers actually owned the electronic devices, and any accompanying software on that device, that they purchased, according to its sponsor, Rep. Blake Farenthold (R-Texas).

The You Own Devices Act (H.R. 5586) would amend the Copyright Act “to provide that the first sale doctrine applies to any computer program that enables a machine or other product to operate.”

[taken from Own Your Own Devices You Will, Under Rep. Farenthold’s YODA Bill, by Tamlin Bason]

I just learned about YODA from reading the link in this G+ post by Charles Hofacker.  The link points to the article

How an eBay bookseller defeated a publishing giant at the Supreme Court, at Ars Technica.

From this article:

Ted Olson stepped to the podium on behalf of Wiley and launched into an argument that Congress had amended the copyright law in 1976 in part to stop unauthorized importation of copyrighted works. Soon he began facing questions that put him on the defensive:

Justice Breyer: Now, under your reading…the millions of Americans who buy Toyotas could not resell them without getting the permission of the copyright holder of every item in that car which is copyrighted.

Olson seemed to have difficulty with the question, answering “that is not this case.” Justice Breyer continued to press:

Justice Breyer: Now, explain to me, because there are… millions of dollars’ worth of items with copyrighted indications of some kind in them that we import every year; libraries with three hundred million books bought from foreign publishers…; museums that buy Picassos… and they can’t display it without getting permission from the five heirs who are disputing ownership of the Picasso copyrights….

Again Olson tried to deflect the question, arguing that “we’re not talking about this case…” But Justice Anthony Kennedy wasn’t satisfied.

Justice Kennedy: You’re aware of the fact that if we write an opinion… with the rule that you propose, that we should, as a matter of common sense, ask about the consequences of that rule.

Olson countered that the “parade of horribles” was exaggerated. Justice Breyer observed wryly that “[s]ometimes horribles don’t occur because no one can believe it.”


With Justice Breyer authoring the majority opinion, the Court decided that the phrase “lawfully made under this title” wasn’t intended by Congress to impose a “geographical limitation.” Regarding market segmentation, the Court found no support for the notion that copyright “should include a right to divide markets or… to charge different purchasers different prices for the same book…”



Now you see why this is very very interesting.



Where’s the ship? (lots of questions part II)

I explained in Lots of questions, part I how Plato and Brazil made me want to switch from math to biology.  Eventually it seems I ended up in the fundamentals of computing, but there is this strange phenomenon. I can’t figure out how it works, or why, or even whether it is widespread or rare. I think it is widespread, but I don’t have clear evidence for it other than the old saying that people don’t change.

So Plato+Rio gives geometry+biology gives artificial chemistry+distributed computing.


I don’t get how this functions.

Makes no sense.

Now I have a hint that we are the computation; we execute ourselves during our lifetime, and our brains are just part of the seed, part of the program. We don’t really have billions of neurons and cells, everything is just the state of a computation.  Part of the seed is our genetic inheritance; another part is our geographical and, more broadly, cultural inheritance. We are not separated from the external medium; there is no external medium, exactly as there is no me and the Net, only many actors interacting asynchronously and locally according to some protocols. In the case of real life the protocols are cast in real molecules, themselves at a finer scale only emergent phenomena of a much faster and wider computation of a geometrical nature. But the principle is scale independent; that is how we manage space (perception and interaction) in our brains.

So we don’t change.

Take this blog; from time to time I make some counts. For the last 3 months the numbers are these. There are 491 posts on chorasimilarity. In the last week 78 of them have been read, in the last month 219, in the last quarter 331. This series makes no sense unless there are very long range relations between the posts, relations which are perceived by enough readers of this blog.

Oh, great!

Two mysteries. The first is that I have no idea why exactly these long range correlations appear in my writing. The second mystery is why you perceive them too.

So there is this strange phenomenon, which I can’t explain.

I remark though that there has to be something that starts the new computation cycle, the new turn of the spiral, the new chamber of the snail shell.

It is stimulation.

Last time was Plato and Rio.

I feel that I lack something in order to tell you more and for me to learn more in the absence of enough external stimuli.

I know I can build really new and also classical stuff, but I lose interest in time without stimuli. That is why I change what I do every few years. It is not rewarding for me to see that after I leave a field somebody picks up an idea and makes it stronger; it is not rewarding to see that I was right when nobody believed it.  Maybe I just have a nose for good ideas which float in the air, and I detect them before many others, but I don’t have the right space and cultural position to make them grow really big. You know, just an explorer who comes back home after a lonely expedition and tells you about blue seas and wide skies with strange constellations. Yeah, OK creep. But then, after some years the trend is to go bathe in those blue seas. And where is the creep? Just coming home, telling about that new jungle and the road from there to the clouds.

Stimulation. Trust. New worlds await. Need my ship, now.




The clients of publishers. Before: readers. Now: authors.

There are two services offered by any publisher:

  • to help you learn what others have written
  • to help you show to others what you have written.

These are the two sides of the publishing business.

A long time ago, the main reason for the existence of publishers was to spread knowledge, i.e. to offer the first service. They used to multiply the author’s work  and to distribute it to libraries and bookstores.

Libraries and bookstores offered the space for the books to be examined by the people. Some of the books sold well, some not. Some books impressed a selected few, who then wrote other books which sold well. And so on and so forth.

Publishers, libraries and bookstores used to be a very efficient medium of spreading and selecting viable info.

But now, when publish is a button, it looks like the second service is more wanted. Now the publishers offer to authors the service of giving an authority stamp to … anything an author writes. The publishers seek the profit from their main clients, the authors.

Libraries and bookstores, the previous partners, try to find a way to survive.  Because publishers no longer need readers.



Mol language and chemlambda gui instead of html and web browsers gives new Net service?

The WWW is an Internet system, based on the following ingredients:

  • web pages (written in html)
  • a (web) browser
  •  a web server (because of the choice of client-server architecture)

Tim Berners-Lee wrote those programs. Then the WWW appeared and exploded.

The force behind this explosion comes from the separation of the system into independent parts. Anybody can write a web page, anybody who has the browser program can navigate the web, anybody who wants to make a web server needs basically nothing more than the program for that (and the  previously existing  infrastructure).

In principle it works because of the lack of control over the structure and functioning.

It works because of the separation of form from content, among other clever separations.

It is so successful, it is under our noses, but apparently very few people think about the applications of the WWW ideas in other parts of the culture.

Separation of form from content means that you have to acknowledge that meaning is not what rules the world. Semantics has only a local, very fragile existence; you can’t go too far if you build on semantics.

Leave the meaning to the user, let the web client build his meaning from the web pages he can access via his browser. He can access and get the info because the meaning has been separated from the form.

How about another Net service, like the WWW, but which does something different, which goes to the roots of computation?

It would need:

  • artificial molecules instead of web pages; these are files written in a fictional language called “Mol”
  • a gui for the chemlambda artificial chemistry, instead of a web browser;  one should think about it as a Mol compiler & gui,
  • a chemical server which makes chemical soups, or broths, leaving the reduction algorithm to the users;

This Mol language is an idea which holds some potential, but which needs a lot of pondering, because the “language” idea has bad effects on computation.



Hindley-Milner for chemlambda

Some notes about Hindley-Milner in #chemlambda . It turns out that it is pretty simple to do something very intuitive.
The starting point is the mol file which encodes a graph molecule.
Let me recall how this works.
The ingredients of a molecule are some nodes, which have ports. Nodes can be red, green, yellow or blue (until now, but you are free to add your own) and they appear as, say, 4px atoms with various colors.
Their ports are yellow if they are “in” ports and blue if they are “out” ports.
There is more.
Because the trivalent nodes (i.e. nodes with 3 ports) always have either two “in” ports and 1 “out” port, or the other way around (invert “in” with “out”), there is a need to differentiate the two “in” (or the two “out”) ports of a trivalent node; that is why one of them is at the “left”, represented by a 2px atom.
So, for example, the application node appears as
A [left.in, in, out]
and in the mol file is represented by a line which looks like this:
A 14 abc 3
where the first field is “A” (application), 14 is the value of the left.in port, abc is the value of the in port and 3 is the value of the out port.
A mol file is a list of such lines, which satisfies the condition that every port value appears at most twice, and if it appears twice then it has to appear once in an “in” port and once in an “out” port.
That’s it.
Oh, maybe it is good to say that the L (lambda) node has one in port and two out ports (and not two in ports and one out port!) and it appears as
L a b c
where a is “in”, b is “left.out” and c is “out”. For example if you want to write Lx.T then probably the L node will appear as
L a x y
and “a” will be a port value which appears as the out port value of “T”, whatever that means in your particular example.

Two colors are enough to distinguish the main nodes

  • A green, two ports in one port out
  • FO green, two ports out, one port in  (that’s a fanout node)
  • L red, two ports out, one port in
  • FI red, two ports in, one port out (that’s a fanin node)

When represented in the chemlambda gui, you don’t see any port variables, and indeed they do not matter: their only purpose is to represent an edge from an “in” port to an “out” port, or a free “in” edge, or a free “out” edge.
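To make the mol file conventions concrete, here is a minimal validator sketch in Python. The port signatures are my reading of the node descriptions above (A and FI with two “in” ports and one “out”; L, FO and FOE the other way; T, FRIN, FROUT univalent), so treat them as assumptions rather than the official chemlambda conventions.

```python
# Port signatures assumed from the node descriptions above:
# "i" marks an "in" port, "o" an "out" port, listed in mol-file order.
SIGNATURES = {
    "A":     ("i", "i", "o"),   # application: two in, one out
    "L":     ("i", "o", "o"),   # lambda: one in, two out
    "FO":    ("i", "o", "o"),   # fanout
    "FOE":   ("i", "o", "o"),   # the extra fanout
    "FI":    ("i", "i", "o"),   # fanin: two in, one out
    "T":     ("i",),            # termination
    "FRIN":  ("o",),            # free in
    "FROUT": ("i",),            # free out
}

def validate_mol(text):
    """Check that every port value appears at most twice, and if twice,
    then once in an 'in' port and once in an 'out' port."""
    seen = {}  # port value -> list of directions where it occurred
    for lineno, line in enumerate(text.strip().splitlines(), 1):
        fields = line.split()
        if not fields:
            continue
        node, ports = fields[0], fields[1:]
        sig = SIGNATURES.get(node)
        if sig is None:
            raise ValueError(f"line {lineno}: unknown node {node!r}")
        if len(ports) != len(sig):
            raise ValueError(f"line {lineno}: {node} takes {len(sig)} ports")
        for value, direction in zip(ports, sig):
            seen.setdefault(value, []).append(direction)
    for value, dirs in seen.items():
        if len(dirs) > 2:
            raise ValueError(f"port value {value!r} used more than twice")
        if len(dirs) == 2 and set(dirs) != {"i", "o"}:
            raise ValueError(f"port value {value!r} used twice as {dirs[0]!r}")
    return True
```

For example, validate_mol("L a a c") accepts the one-line molecule where the lambda’s left.out port closes onto its in port (plausibly the identity Lx.x), while validate_mol("L a x x") is rejected because x would occur twice as an “out” port.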

Now, let’s go to types. In the previous post  I gave a link to

where a simple procedure is explained. Just take a graph (in graphic lambda calculus, therefore easy to translate into chemlambda) and give labels to all the edges, then use the “function” constructor -> and express relations between the edge labels according to the rules at each node. You get a magma, with the operation -> and the edge labels as generators. If the magma is free then the term represented by the graph is well typed.
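The free-magma condition can be checked at the term level by unification with an occurs check: reading a mol line “A a b c” as the relation a = b -> c and “L a b c” as c = b -> a (which matches the FUN translation given further below in this post), the magma is free exactly when all the relations unify without cycles. A minimal sketch, under that assumed representation; the handling of FO/FI/FOE is omitted for brevity:

```python
# Type terms: strings are type-name variables; ("->", t1, t2) is an arrow.

def find(subst, t):
    """Follow the substitution until a non-bound term is reached."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(subst, v, t):
    """Occurs check: does variable v appear inside term t?"""
    t = find(subst, t)
    if t == v:
        return True
    if isinstance(t, tuple):
        return occurs(subst, v, t[1]) or occurs(subst, v, t[2])
    return False

def unify(subst, t1, t2):
    t1, t2 = find(subst, t1), find(subst, t2)
    if t1 == t2:
        return True
    if isinstance(t1, str):
        if occurs(subst, t1, t2):
            return False          # cyclic relation: the magma is not free
        subst[t1] = t2
        return True
    if isinstance(t2, str):
        return unify(subst, t2, t1)
    return unify(subst, t1[1], t2[1]) and unify(subst, t1[2], t2[2])

def well_typed(mol_lines):
    """Each 'A a b c' imposes a = b->c, each 'L a b c' imposes c = b->a."""
    subst = {}
    for line in mol_lines:
        node, a, b, c = line.split()
        if node == "A":
            ok = unify(subst, a, ("->", b, c))
        elif node == "L":
            ok = unify(subst, c, ("->", b, a))
        else:
            continue
        if not ok:
            return False
    return True
```

With this, well_typed(["L a a c"]) succeeds (the identity), while the self-application fragment ["L a x c", "A x x a"] fails on the occurs check, as expected.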

With the FI and FOE nodes which are present in chemlambda, one has to use a FUN constructor, as previously, and also a PAIR constructor.

The edge labels (which are type names, repeat: names, not types, so there is no problem related to polymorphism) are, of course, exactly the port variables from the mol file which represents the graph.

So the first thing to do is to translate the mol file into another file, line by line. Like this: use the same style and introduce the “type node”

FUN in out

(which represents c=a->b like this: FUN a b c)

and the “type node”

PAIR in out

(which represents c=(a,b) like this: PAIR a b c)

Use also the FO node as previously.

Then translate

  • A a b c  >> FUN b c a
  • L a b c  >> FUN b a c
  • FI a b c >> PAIR b a c
  • FOE a b c >> PAIR b c a
  • FO a b c >> FO a b c

and delete all other nodes (like T, the termination node, or FRIN or FROUT).

The translation changes some properties of the ports, in the sense that some ports which were “in” become “out” ports, and conversely.

So what happens is that even if each port value appears at most twice in the new file, the following are possible for those values “a” which appear twice:

  • if it appears once as a “in” and once as a “out”, do nothing more
  • if it appears twice as a “in”, then delete the two occurrences and replace them by new names a1 and a2 and add “FO a a1 a2” to the file
  • if it appears twice as an “out” then add a new 2-valent node with two “in” ports, call it EQ, so add “EQ a a”

EQ nodes represent relations between generators; the generators are the port values.
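The whole translation, including the FO insertion and EQ bookkeeping for duplicated port values, fits in a few lines. A sketch under my own assumptions about the port directions of the new nodes (first two ports of FUN and PAIR as “in”, the last as “out”; the post leaves the directions after translation implicit):

```python
# Translation rules from the list above, as mol line -> new node tuple.
TRANSLATE = {
    "A":   lambda a, b, c: ("FUN", b, c, a),
    "L":   lambda a, b, c: ("FUN", b, a, c),
    "FI":  lambda a, b, c: ("PAIR", b, a, c),
    "FOE": lambda a, b, c: ("PAIR", b, c, a),
    "FO":  lambda a, b, c: ("FO", a, b, c),
}
# Assumed port directions in the translated file ("i" in, "o" out).
DIRS = {"FUN": ("i", "i", "o"), "PAIR": ("i", "i", "o"), "FO": ("i", "o", "o")}

def translate(mol_lines):
    out = []
    for line in mol_lines:
        fields = line.split()
        rule = TRANSLATE.get(fields[0])
        if rule:                  # T, FRIN, FROUT and the rest are deleted
            out.append(list(rule(*fields[1:])))
    # Collect occurrences of each port value, with its new direction.
    occ = {}
    for i, node in enumerate(out):
        for j, v in enumerate(node[1:], 1):
            occ.setdefault(v, []).append((i, j, DIRS[node[0]][j - 1]))
    extra = []
    for v, places in occ.items():
        if len(places) != 2:
            continue
        dirs = {d for _, _, d in places}
        if dirs == {"i"}:         # twice "in": rename and insert a FO node
            (i1, j1, _), (i2, j2, _) = places
            a1, a2 = v + "_1", v + "_2"
            out[i1][j1], out[i2][j2] = a1, a2
            extra.append(["FO", v, a1, a2])
        elif dirs == {"o"}:       # twice "out": record an EQ relation
            extra.append(["EQ", v, v])
    return [" ".join(node) for node in out + extra]
```

For instance, translating the single line “L a a c” gives “FUN a_1 a_2 c” plus “FO a a_1 a_2”, because a now occurs twice as an “in” port.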

By this simple procedure starting from the mol file, the rest consists only in giving the local moves for the new graphs obtained.

What can these local moves be, other than the translations of the old moves of chemlambda? That is the essence of the HM algorithm, as seen in chemlambda, of course. (Mind that in chemlambda it may not stop, because chemlambda molecules do not represent lambda terms in general, even if any lambda term has a correspondent in chemlambda.)

Lots of interesting things may happen, even if restricted to molecules which represent lambda terms. If the term has no normal form, of course the algorithm does not stop, for example.

Wait, what algorithm?

The algorithm has two ingredients:

  • the moves, which are translations of the chemlambda moves (that’s pretty intriguing: even if the translation is bad, i.e. non invertible, the translated moves are still good, in the sense that it does not happen that for the same left pattern two different right patterns are proposed)
  • and the reduction strategy which is left at your choice, from “stupid” (the one I’m currently using) to more intelligent artificial chemistry style or Actor Model ones, as described here.

The meaning of this is that one can imagine a whole lot of different HM-like algorithms for (a limited form of) type inference, which can be as sequential or as concurrent as you want.

What could be the purpose of this HM like algorithm for chemlambda?

It is a means to extract some very limited “objective” information from the molecule, even while it is in the process of reduction. Usually it will make no global sense (meaning, roughly, that the algorithm will not stop) and moreover the algorithm will not be much less resource consuming than the reduction algorithm itself (which shows that already, theoretically, the chemlambda reduction algorithm, whichever variant of the reduction strategy you choose, should be very fast in its class).

I shall come back to this with lots of pictures and details, recurrently.
If you can’t make sense of what I’m talking about, then go visit the chemlambda github repo and follow the links.

More detailed argument for the nature of space built from artificial chemistry

There are some things which were left unclear. For example, I have never suggested using networks of computers as a substitute for space, with computers as nodes, etc. That is one of the ideas which are too trivial. In the GLC actors article a different thing is proposed.

First, associate to an initial partition of the graph (molecule) another graph, with nodes being the partition pieces (thus each node, called an actor, holds a piece of the graph) and edges being those edges of the original whole molecule which link nodes from different pieces of the partition. This is the actors diagram.
Then interpret the existence of an edge between two actor nodes as a replacement for a spatial statement (these two actors are close). Then remark that the partition can be made such that the edges of the actors diagram correspond to active edges of the original graph (an active edge is one which connects two nodes of the molecule which form a left pattern), so that a graph rewrite applied to a left pattern consisting of a pair of nodes, each in a different actor part, produces not only a change of the state of each actor (i.e. a change of the piece of the graph held by each actor), but also a change of the actors diagram itself. Thus, this very simple mechanism produces by graph rewrites two effects:

  • “chemical” where two molecules (i.e. the states of two actors) enter into reaction “when they are close” and produce two other molecules (the result of the graph rewrite as seen on the two pieces held by the actors), and
  • “spatial” where the two molecules, after the chemical interaction, change their spatial relation with the neighboring molecules, because the actors diagram itself has changed.

This was the proposal from the GLC actors article.
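The actors diagram described above is straightforward to compute. A sketch, assuming the molecule is given as mol lines (a shared port value being an edge of the molecule) and the partition as a map from node index to actor name; the names and the dict-based representation are illustrative, not from the GLC actors article:

```python
def actors_diagram(mol_lines, actor_of):
    """actor_of: node index -> actor name.  Two actors are 'close' when
    an edge of the molecule joins nodes held by different actors."""
    # A port value shared by two mol lines is an edge of the molecule.
    port_owners = {}
    for i, line in enumerate(mol_lines):
        for v in line.split()[1:]:
            port_owners.setdefault(v, []).append(i)
    edges = set()
    for owners in port_owners.values():
        if len(owners) == 2:
            a, b = actor_of[owners[0]], actor_of[owners[1]]
            if a != b:
                edges.add(frozenset((a, b)))   # undirected actor edge
    return edges

# Two nodes sharing ports 2 and 3, split between two actors:
mol = ["L 1 2 3", "A 3 2 4"]
diagram = actors_diagram(mol, {0: "alice", 1: "bob"})
```

A rewrite that changes which ports are shared across actors then changes this diagram too, which is exactly the “spatial” effect described above.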

Now, the first remark is that this explanation has a global side, namely that we look at a global big molecule which is partitioned; but obviously there is no global state of the system, if we think that each actor resides in a computer and each edge of the actors diagram describes the fact that each actor knows the mail address of the other, which is used as a port name. But for explanatory purposes it is OK, on the condition of knowing well what to expect from this kind of computation: nothing more than the state of a finite number of actors, say up to 10, known in advance, a priori bound, as is usual in the philosophy of the local-global view used here.

The second remark is that this mechanism is of course only a very simplistic version of what the right mechanism should be. And here enter the emergent algebras, i.e. the abstract nonsense formalism with trees and nodes and graph rewrites which I found while trying to understand sub-riemannian geometry (and noticing that it does not apply only to the sub-riemannian case, but seems to be something more general, of a computational nature, but which computation, etc.). The closeness, i.e. the neighbourhood relations themselves, are a global, a posteriori view, a static view of the space.

In the Quick and dirty argument for space from chemlambda I propose the following. Because chemlambda is universal, for any program there is a molecule such that the reductions of this molecule simulate the execution of the program. Or, think about the chemlambda gui, and suppose even that I have as much computational power as needed. The gui has two sides: one which processes mol files and outputs mol files of reduced molecules, and another (based on d3.js) which visualizes each step. “Visualizes” means that there is a physics simulation of the molecule graphs, as particles with bonds which move in the space or plane of the screen. Imagine that with enough computing power and time we can visualize things in as much detail as we need, of course according to some physics principles which are implemented in the visualization program. Take now a molecule (i.e. a mol file) and run the program with its two sides, reduction/visualization. Then, because of chemlambda universality, we know that there exists another molecule which admits chemlambda reductions that simulate the reductions of the first molecule AND the running of the visualization program.

So there is no need to have a spatial side different from the chemical side!

But of course, this is an argument which shows something which can be done in principle but maybe is not feasible in practice.

That is why I propose to concentrate a bit on the pure spatial part. Let’s do a simple thought experiment: take a system with a finite number of degrees of freedom, see its state as a point in a space (typically a symplectic manifold) and its evolution as described by a 1st order equation. Then discretize this correctly (w.r.t. the symplectic structure) and you get a recipe which describes the evolution of the system, a recipe which has roughly the following form:

  • starting from an initial position (i.e. state), interpret each step as a computation of the new position based on a given algorithm (the equation of evolution), which is always an algebraic expression giving the new position as a function of the older one,
  • throw out the initial position and keep only the algorithm for passing from a position to the next,
  • use the same treatment as in chemlambda or GLC, where all the variables are eliminated, thereby renouncing all reference to coordinates, points of the manifold, etc.,
  • remark that the algebraic expressions used always consist of affine (or projective) combinations of points (and notice that the combinations themselves can be expressed as trees or other graphs made of dilation nodes, as in the emergent algebras formalism),
  • indeed, that is because the differential operators of the evolution equation are always limits of conjugations of dilations, and because of the algebraic structure of the space, which is also described as a limit of combinations of dilations (notice that I speak about the vector addition operation and its properties, like associativity, etc., not about the points in the space), and finally because of an a priori assumption that functions like the hamiltonian are themselves computable.
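As a toy instance of this recipe, take the harmonic oscillator with H = (q² + p²)/2 and the semi-implicit (symplectic) Euler discretization. The point of the sketch is that after discretization the dynamics is just an update rule, an affine expression in the previous state, which is the part that could be turned into graphs of dilation-like nodes; the concrete step function is of course only an illustration, not the graph rewrite system itself:

```python
def make_update(h):
    """Semi-implicit (symplectic) Euler step for H = (q**2 + p**2) / 2.
    The state is thrown away; only this rule, an affine expression in
    the previous position, encodes the dynamics."""
    def update(q, p):
        p_new = p - h * q        # kick:  dp/dt = -dH/dq
        q_new = q + h * p_new    # drift: dq/dt = +dH/dp
        return q_new, p_new
    return update

update = make_update(0.01)
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = update(q, p)
energy = (q ** 2 + p ** 2) / 2   # stays close to the initial value 0.5
```

Because the scheme is symplectic, the discrete flow exactly preserves a modified quadratic invariant, so the energy stays close to its initial value instead of drifting; that is what “discretize correctly w.r.t. the symplectic structure” buys.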

This recipe itself is like a chemlambda molecule, but consisting not only of A, L, FI, FO, FOE nodes but also of some (two, perhaps) dilation nodes, with moves, i.e. graph rewrites, which allow passing from one step to another. The symplectic structure itself is only a shadow of a Heisenberg group structure, i.e. of a contact structure on a circle bundle over the symplectic manifold, as geometric prequantization proposes (this is a mathematical fact which is, in itself, independent of any interpretation or speculation). I know what is to be added (i.e. which graph rewrites particularize this structure among all possible ones), because it connects precisely to sub-riemannian geometry. You may want to browse the old series on Gromov-Hausdorff distances and the Heisenberg group part 0, part I, part II, part III, or to start from the other end, The graphical moves of projective conical spaces (II).

Hence my proposal, which consists in thinking about space properties as embodied into graph rewriting systems, inspired by the abstract nonsense of emergent algebras, combining the pure computational side of A, L, etc. with the spatial computational side of dilation nodes into one whole.

In this sense space as an absolute or relative vessel does not exist any more than the Marius creature does (what does exist is a twirl of atoms, some going in, some out, but too complex for my human brain to understand); instead, the fact that all beings and inanimate objects seem to agree collectively when it comes to moving spatially is in reality a manifestation of the universality of this graph rewrite system.

Finally, I’ll go to the main point, which is that I don’t believe that it is that simple. It may be, but it may as well be something which only contains these ideas as a small part, the tip of the nose of a monumental statue. What I believe is that it is possible to make the argument, by example, that it is possible that nature works like this. I mean that chemlambda shows that there exists a formalism which can do this, albeit perhaps in a very primitive way.

The second belief I have is that regardless if nature functions like this or not, at least chemlambda is a proof of principle that it is possible that brains process spatial information in this chemical way.


Lots of questions, part I

There is a button for “publish”. So what?

I started this open notebook with the goal of disseminating some of my work and ideas.  There are always many subjects to write about; this open notebook has almost 500 posts. Lately the rhythm of about a post every 3 days has slowed down to a post a week.  I have not run out of ideas, or opinions. It is only that I don’t get anything in return.

I explain what I mean by getting something in return. I don’t believe that one should expect a compensation for the time and energy converted into a post. There are always a million posts to read.  There is not a lot of time to read them. It is costly in brain time to understand them, and probably, from the point of view of the reader, the result of this investment is not worth the effort.

So it’s completely unreasonable to think that my posts should have any treatment out of the usual.

Then, what can be the motivation to have an open notebook, instead of just a notebook? Besides vanity, there is not much.

But vanity was not my motivation, although it feels very good to have a site like this one. Here is why:   from the hits I can see that people read old posts as frequently as new posts. You have to agree that this is highly unusual for a blog. So, incidentally,  perhaps this is not a blog, doh.

I put vanity aside and I am now closer to the real motivations for maintaining this open notebook.

Say you have a new idea, product, anything which behaves like a happy virus that’s looking for hosts to multiply in. This is pretty much the problem of any creator: to find hosts.  OK, what is available for a creator who is not a behemoth selling sugar solutions or other BRILLIANT, really simple viruses like phones, political ideas, content for lazy thinking trolls and stuff like this?

What if I don’t want to sell ideas, but instead I want to find those rare people with similar interests?

I don’t want to entertain anybody; instead, this is a small fishing net in the big sea.

OK, this was the initial idea. That compared to the regular ways, meaning writing academic articles, going to conferences, etc, there might be more chances to talk with interesting people if I go fishing in the high seas, so to say.

These are my expectations. That I might find interesting people to work with, based on common passions, and to avoid the big latency of the academic world, so that we can do really fast really good things now.

I know that it helps a lot to write simple. To dilute the message. To appeal to authority, popularity, etc.

But I expect that there is a small number of you guys who really think as fast as I do. And then reply to me, simultaneously to and .

Now that my expectations are explained, let’s look at the results. I have to put things in context a bit.

This site was called initially . I wanted to start a blog about what it is like to live in Rio with a wife and two small kids. Not a bad subject, but I have not found the time for that side project, because I was just in the middle of an epiphany. I wanted to switch fields, I wanted to move from pure and applied mathematics to somewhere as close as possible to biology and neuroscience. But mind you that I also wanted to bring the math with me. Not to make a show of it, but to use the state of mind of a mathematician in these great emerging fields. So, instead of writing about my everyday life experiences, I started to write to everybody I found on the Net who was not (apparently) averse to mathematics and who was also somebody in neuroscience. You can imagine that my choices were not very well informed, because these fields were so far from what I knew before. Nevertheless I found interesting people, and told them about why I wanted to switch. Yes, why? Because of the following reasons: (1) I am passionate about making models of reality, (2) I’m really good at finding unexpected points of view, (3) I learn very fast, (4) I understood that pure or applied math needs a challenge beyond the Cold War ones (i.e. theories of everything, rocket science, engineering).  OK, I’ll stop here with the list, but there were about 100 more reasons, among them being to understand what space is from the point of view of a brain.

I got fast into pretty weird stuff. I started to read philosophy, getting hooked by Plato. Not in the way the usual American thinker does. They believe that they are platonic but they are empiricists, which is exactly the poor (brain) version of platonism. I shall stop giving kicks to empiricists, because they have advanced science in many ways in the last century.  Anyway, empiricism looks more and more like black magic these days. Btw, have you read anything by Plato? If you do, then try to go to the source. Look for several sources; you are not a good reader of ancient Greek.  Take your time, compare versions, spell out the originals (so to say), discover the usual phenomenon that the more something is appreciated, the more shit is inside.

Wow, so here it is: a mathematician who wants to move to biology, and he uses Plato as a vehicle. That’s perhaps remarkabl…y stupid to do, Marius. What happened, have you run out of the capacity to do math? Are you out in the field where people go when they can no longer stand the beauty and hardness of mathematics? Everybody knows, since that guy who wrote with Ramanujan and later, after R was dead, told us, that mathematics is for young people. (And probably white wealthy ones.)

No, what happened was that the air of Rio gave me back the guts I had lost during the education process. Plato’s Timaeus spoke to me in nontrivial ways, in particular. I understood that I am really on the side of the geometers, not on the side of the language people. And that there is more chance to understand brains if we try to model what the language people assume works by itself: the low level, non rational processes of the brain. Those which need no names, no language, those highly parallel ones. For those, I discovered, there was no math to apply.  You may say that, for example, vision is one of the most studied subjects and that really there is a lot of math already used for that. But if you say so then you are wrong.  There is no model of vision up to now which explains how biological vision works without falling into the internal or external homunculus fallacies. As for computer vision, you know, you can do anything with computers, provided you have enough of them and enough time. There is a huge gap between computer vision and biological vision, a fundamental one.

OK, when I returned home to Bucharest I thought: what if I reuse the  and transform it into chorasimilarity? This word is made of “chora”, which is the feminine version of “choros”, which means place or space. Plato invented “chora” as a term he used in his writings. “Similarity” was because of my background in math: I was playing with “emergent algebras”, which I invented before going off on the biology tangent. In fact these emergent algebras made me think first that a new math is needed, and that maybe they are relevant for biological vision.

I stop a bit to point to the post Scale is a place in the brain, which is about research on grid cells and place cells (research which got the 2014 Nobel Prize in medicine).

Emergent algebras are about similarity. They make visible that behind similarity a hidden abstract graph rewrite system is at work. Which in turn can be made concrete by transforming it into chemistry. An artificial chemistry.  But also, perhaps, a real one. Or, the brain is mostly chemistry. Do you see how everything falls into place?  Chora is just chemistry in the brain. That chemistry being universal, it is not surprising that we humans distilled a notion of space from it.

There is a lot of infrastructure to build in order to link all these in a coherent way.