All posts by chorasimilarity


Update the Panton Principles please

There is a big contradiction between the text of The Panton Principles and the List of Recommended Conformant Licenses. It appears to be intentional; I’ll explain in a moment why I say this.

This contradiction is very bad for the Open Science movement. That is why I ask: please update your principles.

Here is the evidence.

1. The second of the Panton Principles is:

“2. Many widely recognized licenses are not intended for, and are not appropriate for, data or collections of data. A variety of waivers and licenses that are designed for and appropriate for the treatment of data are described [here](http://opendefinition.org/licenses#Data). Creative Commons licenses (apart from CCZero), GFDL, GPL, BSD, etc are NOT appropriate for data and their use is STRONGLY discouraged.

*Use a recognized waiver or license that is appropriate for data.* ”

As you can see, the authors clearly state that “Creative Commons licenses (apart from CCZero) … are NOT appropriate for data and their use is STRONGLY discouraged.”

2. However, if you look at the List of Recommended Licenses, surprise:

Creative Commons Attribution Share-Alike 4.0 (CC-BY-SA-4.0) is recommended.

3. CC-BY-SA-4.0 is important because it has a very clear anti-DRM clause:

“You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.” [source CC 4.0 licence: in Section 2/Scope/a. Licence grant/5]

4. The anti-DRM clause is not a “must” in the Open Definition 2.1. Indeed, the Open Definition clearly uses “must” in some places and “may” in other places. See

“2.2.6 Technical Restriction Prohibition

The license may require that distributions of the work remain free of any technical measures that would restrict the exercise of otherwise allowed rights. ”

5. I asked why this is here. Rufus Pollock, one of the authors of The Panton Principles and of the Open Definition 2.1, answered:

“Hi that’s quite simple: that’s about allowing licenses which have anti-DRM clauses. This is one of the few restrictions that an open license can have.”

My reply:

“Thanks Rufus Pollock but to me this looks like allowing as well any DRM clauses. Why don’t include a statement as clear as the one I quoted?”

Rufus:

“Marius: erm how do you read it that way? “The license may prohibit distribution of the work in a manner where technical measures impose restrictions on the exercise of otherwise allowed rights.”

That’s pretty clear: it allows licenses to prohibit DRM stuff – not to allow it. “[Open] Licenses may prohibit …. technical measures …”

Then:

“Marius: so are you saying your unhappy because the Definition fails to require that all “open licenses” explicitly prohibit DRM? That would seem a bit of a strong thing to require – its one thing to allow people to do that but its another to require it in every license. Remember the Definition is not a license but a set of principles (a standard if you like) that open works (data, content etc) and open licenses for data and content must conform to.”

I gather from this exchange that anti-DRM is indeed not one of the main concerns!

6. So, until now, what do we have? Principles and definitions which aim to regulate what Open Data means, yet avoid taking an anti-DRM stance. At the same time, they strongly discourage the use of an anti-DRM license like CC-BY-SA-4.0. However, on a less visible page they recommend, among others, CC-BY-SA-4.0.

There is one thing they could say: “you may use anti-DRM licenses for Open Data”. It means almost nothing; it’s up to you, not important for them. Meanwhile they write that all CC licenses except CCZero are bad! Notice that CC0 does not have anything anti-DRM.

Conclusion. This ambiguity has to be settled by the authors. Or not; it is up to them. For me this is a strong signal that we witness one more attempt to tweak a well-intended movement for cloudy purposes.

The Open Definition 2.1 ends with:

Richard Stallman was the first to push the ideals of software freedom which we continue.

You don’t say, really? Maybe it is the moment for a less ambiguous Free Science.

The price of publishing with GitHub, Figshare, G+, etc

Three years ago I posted The price of publishing with arXiv. If you look at my arXiv articles you’ll notice that I have barely posted on arXiv.org since then. Instead I went into territory which is even less recognized as serious by a big part of academia: I used GitHub, Figshare, G+, etc.

The effects of this choice are laid out on my homepage, so go there to read them. (Besides, it is a good exercise to remember how to click on links and use them, that lost art from the age when the internet was free.)

In this post I want to explain the price I paid for these choices and what I think about them now.

First, it is a very stressful way of living. I am not joking: as you know, stress comes from realizing that there are many choices and one has to choose. Random reward from social media is addictive. The discovery that there is a way out of the situation which keeps us locked into the legacy publishing system (validation). The realization that the problem is not technical but social. A much more cynical view of the undercurrents of the social life of researchers.

The feeling that I can really change the world with my research. The worries that some possible changes might be very dangerous.

The debt I owe concerning the scarcity of my explanations. The effort to show only the aspects I think are relevant, putting aside those which are not. (Btw, if you look at my About page you’ll read “This blog contains ideas from the future”. It is true, because I have already pruned the 99% of paths leading nowhere interesting.)

The desire to go much deeper, the desire to explain once again what and why, to people who seem either to lack the capability for long-term attention or to hold shallow pet theories.

It is like fishing for Moby Dick.

Synergistics talks through his chemlambda Haskell version

… in a very nice and clear 9:30 presentation. I especially enjoyed the part from 5:32 onward, where he describes what enzymes are, but all of the presentation is instructive because it starts from 0.

The video talk is this

His GitHub repository chemlambda-hask is here:

https://github.com/synergistics/chemlambda-hask

Thank you J, very nice!

Pharma meets the Internet of Things

Pharma meets the Internet of Things: some commented references for this future trend. Use them to understand:

[0] After the IoT comes Gaia
https://chorasimilarity.wordpress.com/2015/10/30/after-the-iot-comes-gaia/

There are two realms of computation which should, and will, become one: IT technology and biochemistry.

General stuff

The notion of computation is now well known: we speak about what is computable and about various models of computation (i.e. how we compute), which have always turned out to be equivalent, in the sense that they give the same class of computable things (that is the content of the Church–Turing thesis).

It is interesting though how we compute, not only what is computable.

In IT, perhaps the biggest (and most socially relevant) problem is decentralized asynchronous computing. Until now there is no really working model of computation which is:
– local in space (decentralized)
– local in time (asynchronous)
– with no pre-imposed hierarchy or external authority which forces coherence

In biochemistry, we know that anything living is a molecular assembly which works:
– local in space (all chemical interactions are local)
– local in time (there is no external clock which synchronizes the reactions)
– random (everything happens without any external control)

Useful links for an aerial view on molecular computing, seen as the biochemistry side of computation:

[1] https://www.britannica.com/technology/DNA-computing

Some history and details are provided. A quote from the end of the section “Biochemistry-based information technology”:

“Other experiments have shown that basic computations may be executed using a number of different building blocks (for example, simple molecular “machines” that use a combination of DNA and protein-based enzymes). By harnessing the power of molecules, new forms of information-processing technology are possible that are evolvable, self-replicating, self-repairing, and responsive. The possible applications of this emerging technology will have an impact on many areas, including intelligent medical diagnostics and drug delivery, tissue engineering, energy, and the environment.”

[2] http://www.owlnet.rice.edu/~Cyrus.Mody/MyPubs/Molecular%20Electronics.pdf

A detailed historical view (written in 2000) of the efforts toward “molecular electronics”. Mind that this is not the same subject as [1], because the effort here is to use biochemistry to mimic silicon computers. While [1] also contains such efforts (building logic gates with DNA, etc.), DNA computing also proposes a more general view: building structure from structure, as nature does.

[3] https://www.extremetech.com/tag/molecular-computer

Two easy-to-read articles about real applications of molecular computing:
– “Microscopic machine mimics the ribosome, forms molecular assembly line”
– “Biological computer can decrypt images stored in DNA”

[4] https://www.technologyreview.com/s/601842/inside-genomics-pioneer-craig-venters-latest-production/

An article about Craig Venter from 2016, found by searching for “Craig Venter Illumina”. Other informative searches would be “Digital biological converter” or anything “Craig Venter”.

[5] https://www.ted.com/talks/lee_cronin_print_your_own_medicine/transcript?language=en

An interesting talk by an interesting researcher, Lee Cronin.

[6] The Molecular Programming Project http://molecular-programming.org/

Worth browsing in detail to see the various trends and results.

Sitting in the middle, between biochemistry and IT:

[1] Algorithmic Chemistry (Alchemy) of Fontana and Buss
http://fontana.med.harvard.edu/www/Documents/WF/Papers/alchemy.pdf

Walter Fontana today: http://fontana.med.harvard.edu/www/index.htm

[2] The Chemical Abstract Machine by Berry and Boudol

http://www.lix.polytechnique.fr/~fvalenci/papers/cham.pdf

[3] Molecular Computers (by me, part of an Open Science project, see also my homepage http://imar.ro/~mbuliga/ and the chemlambda github page https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md )

http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

On the IT side there’s a beautiful research field, starting of course with the lambda calculus of Church. Later on this evolved in the direction of rewriting systems, then graph rewriting systems. I can’t even begin to list all that’s been done in this direction, other than:

[1] Y. Lafont, Interaction Combinators
http://iml.univ-mrs.fr/~lafont/pub/combinators.ps

but see as well Alchemy, which uses lambda calculus!

However, it would be misleading to reduce everything to lambda calculus. I came to the conclusion that lambda calculus and Turing machines are only two among the vast possibilities, and not very important ones. My experience with chemlambda shows that the most relevant mechanism turns around the triple of nodes FI, FO, FOE and their rewrites. Lambda calculus is obtained by the addition of a pair of A (application) and L (lambda) nodes, along with the standard compatible moves. One might as well use nodes related to a Turing machine instead, as explained in

http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

Everything works just the same. The center, what makes things work, is not related to Logic or Computation as they are usually considered. More on this later.
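To illustrate the general mechanism of local graph rewrites, here is a minimal Python sketch of a rewrite on a port graph. The node names and the “annihilation” rule below are simplified stand-ins chosen for the example, not the actual FI/FO/FOE moves, which live in the chemlambda repository:

```python
# Toy port-graph rewrite in the spirit of chemlambda / interaction nets.
# The FI-FO annihilation rule is a simplified stand-in, NOT one of the
# real chemlambda moves. Edges are pairs of (node, port) endpoints.

def annihilate(nodes, edges):
    """One rewrite step: if an FI node and an FO node meet on their
    principal ports (port 0), delete both nodes and wire their port-1
    neighbours directly together. Returns True if a rewrite fired."""
    def peer(n):
        # the endpoint connected to port 1 of node n
        for (u, pu), (v, pv) in edges:
            if (u, pu) == (n, 1):
                return (v, pv)
            if (v, pv) == (n, 1):
                return (u, pu)

    for (a, pa), (b, pb) in list(edges):
        if pa == pb == 0 and {nodes[a], nodes[b]} == {"FI", "FO"}:
            na, nb = peer(a), peer(b)
            # drop every edge touching a or b, then add the direct wire
            edges[:] = [e for e in edges
                        if e[0][0] not in (a, b) and e[1][0] not in (a, b)]
            edges.append((na, nb))
            del nodes[a], nodes[b]
            return True
    return False

# x --(FI)--(FO)-- y  reduces to  x -- y
nodes = {"fi1": "FI", "fo1": "FO", "x": "W", "y": "W"}
edges = [(("x", 0), ("fi1", 1)),
         (("fi1", 0), ("fo1", 0)),
         (("fo1", 1), ("y", 0))]
annihilate(nodes, edges)
print(edges)   # [(('x', 0), ('y', 0))]
```

Note that the rewrite is purely local: it inspects only the two nodes meeting on their principal ports and their immediate neighbours, which is what makes this style of computation decentralized and asynchronous by nature.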

How to use the chemlambda collection of simulations

The chemlambda_casting folder (1GB) of simulations is now available on Figshare [1].

How to use the chemlambda collection of simulations? Here’s an example. The synthesis from a tape video [2] is reproduced here with a cheap animated gif. The movie records the simulation file 3_tape_long_5346.html which is available for download at [1].

That simple.

If you want to run it on your computer, here is all you have to do:

1. Download 3_tape_long_5346.html from [1].

2. Download, from the same place, d3.min.js and jquery.min.js (which are there for your convenience) and put the js libs in the same folder as the html file.

3. Open the html file with a browser. I strongly recommend Safari or Chrome, not Firefox, which blocks on these d3.js animations for reasons related to d3.

4. If your computer has trouble with the simulation (I used a MacBook Pro with Safari), slow it down like this: edit the html file (with any editor) and look for the line starting with

return 3000 + (4*(step+(Math.random()*

then replace the “4” by “150”; it should be enough.
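If you prefer not to edit the file by hand, the same replacement can be scripted. A minimal Python sketch; for the sake of a runnable demo it first writes a stand-in file containing just the quoted line, while on your machine you would point `path` at the real 3_tape_long_5346.html downloaded from [1]:

```python
# Slow down the d3.js animation: replace the delay factor 4 by 150.
path = "3_tape_long_5346.html"

# Demo stand-in for the real file (skip this part on your machine).
with open(path, "w") as f:
    f.write("return 3000 + (4*(step+(Math.random()*\n")

# The actual edit: a plain text substitution on the delay factor.
html = open(path).read()
html = html.replace("(4*(step", "(150*(step")   # 150 = much slower
with open(path, "w") as f:
    f.write(html)
```

Any value between 4 and 150 gives an intermediate speed, so you can tune it to your machine.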

Here is a longer explanation. The best would be to read the README [4] carefully.
“Advanced”: if you want to make another simulation for the same molecule, follow these steps.

1. The molecule used is 3_tape_long_5346.mol which is available at the library of chemlambda molecules [3].

2. Download the content of the gh-pages branch of the chemlambda repository at GitHub [4], as explained in that link.

3. Then follow the steps explained there and you’ll get a shiny new 3_tape_long_5346.html, which of course may differ in details from the initial one (it depends on the script used; with the random-rewrites scripts the order of rewrites may differ).

[1] The Chemlambda collection of simulations
https://doi.org/10.6084/m9.figshare.4747390.v1

[2] Synthesis from a tape
https://plus.google.com/+MariusBuliga/posts/Kv5EUz4Mdyp

[3] The library of chemlambda molecules
https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic/mol

[4] Chemlambda repository (readme) https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

The chemlambda collection is a social hack, here’s why

 

People from data-deprived places turn to available sources for scientific information. They have the impression that Social Media may be useful for this. The reality is that it is not, by design.

But we can socially hack the Social Media for the benefit of Open Science.

Social Media is not fit for Open Science, by design. Social media companies are Big Data gatherers, therefore they are interested not in the content per se but in the metadata. The huge quantity of metadata they suck from their users tells them about instantaneous interests and social links or preferences. That is why cat pics are everywhere: the awww moment is data poor but metadata rich.

Open Science aims to share scientific data and rigorous means of validation. For free! Therefore Open Science is data rich. It is also, by design, metadata poor: at least while a piece of research is not yet popular, there is not much interaction (useful, for example, to advertisers, tech companies or governments) to be encoded in metadata.

The public impression is that science is hard and many times boring. There are, however, many people interested in science, like smart kids or creative people living in data-deprived places. So many people have access to Social Media that, in principle, even the most seemingly boring science project may gather the attention of tens of thousands of them. If well done!

Such science projects may never see the light of media attention, because classical media works with big numbers and very low-level content. Classical media has still to adapt to the new realities of the Net. One of them is that Net people are in such great numbers that there is no need to tailor a message for the majority, which is not, generically, interested in science.

Likewise, Social Media is by design driven by big numbers (of metadata, this time). They couldn’t care less about the content provided that it generates big data exhaust (Zuboff, Big other: surveillance capitalism and the prospects of an information civilization).

They can be tricked!

This was the purpose of the chemlambda collection: beautiful animations, data rich content hidden behind for those interested. My previous attempts to use classical channels for Open Science gave only very limited results. Indeed, the same is true for a smart kid or a creative person from Africa.

If you were not born in the right place, did not study at the right university and did not make the right friends, then your ideas will not spread through the classical channels, unless your ideas are useful to a privileged team. You, smart kid or creative person from Africa, will never advance your ideas to the world unless they are useful first not to you, but to privileged people from far away places. If this happens, the best you can expect is to be a useful servant for them.

So, with these ideas and experiences, I tried to socially hack the Big Data gatherers. I presented short animations (under 10s) obtained from real scientific simulations. I chose them among those which are visually appealing. Each of them can be reproduced and researched by anybody interested via a GitHub repository.

It worked. The Algorithmic Gods from Google decided to make chemlambda a featured collection. I had more than 50 000 followers and more than 50 million views of these scientific, original simulations.

To compare, another collection, dedicated to censorship on social media, had no views!

I shall make, within the limits of my access to data, an analysis of the people who saw the collection.

It seems to me that there were far more women than men. Probably the algorithms used the prior that women, stupid as they are, are more interested in pictures than text. Great, let’s hack this stupid prior and turn it into a chance to help women access science 🙂

There were far more people from Asia and Africa than from the West. Because, of course, they are stupid and don’t speak the language (English), but they can look at the pictures. Great, let’s turn this snobbery into an advantage, because they are the main public which could benefit from Open Science.

The amazing (for me) popularity of this experiment showed that there is something more to dig in this direction!

Science can be made interesting and remain rigorous too.

Science and art are not as different as they look, in particular for this project the visual arts.

And the chemlambda project is very interesting, of course, because it is a take on life at the molecular level done by a mathematician. Biologists need this: not only mathematical tools, but also mathematical minds. Biologists, like the Social Media companies, sit on heaps of Big Data.

Finally, there is a question I’d like to ask.
Scientific data is, in bits, a tiny proportion of the Big Data gathered every day. It is tiny, ridiculously tiny.

Question: where to put it, freely, so that it stays free and is treated properly, I mean as visible and easy to access as a cat pic? Would it be so hard to dedicate something like 1/10 000 of the servers used for Big Data to keeping Open Science alive? To not letting it rot along with older cat pics?