ArXiv is 3 times bigger than all megajournals taken together

 How big are the “megajournals” compared to arXiv?
I use data from the article

[1] Have the “mega-journals” reached the limits to growth? by Bo-Christer Björk, https://dx.doi.org/10.7717/peerj.981, table 3

and the arXiv monthly submission rates

[2] http://arxiv.org/stats/monthly_submissions

To have a clear comparison I shall look at the window 2010-2014.

Before showing the numbers, there are some things to add.

1.  I saw the article [1] via the post by +Mike Taylor

[3] Have we reached Peak Megajournal? http://svpow.com/2015/05/29/have-we-reached-peak-megajournal/

I invite you to read it; it is interesting, as usual.

2. Usually, counting articles is the dumb exercise that managers hide behind in order not to be accountable for their decisions.
Counting articles is a very lossy compression technique, which associates a very small number of bits with each article.
I indulged in this activity because of the discussions in the G+ post

[4] https://plus.google.com/+MariusBuliga/posts/efzia2KxVzo

and its clone

[4′] Eisen’s “parasitic green OA” is the apt name for Harnad’s flawed definition of green OA, but all that is old-timers’ disputes; the future is here and different from both green and gold OA https://chorasimilarity.wordpress.com/2015/05/28/eisen-parasitic-green-oa-is-the-apt-name-for-harnad-flawed-definition-of-green-oa-but-all-that-is-old-timers-disputes-the-future-is-here-and-different-than-both-green-and-gold-oa/

These discussions made me realize that the arXiv model is carefully edited out of reality by the creators and core supporters of green OA and gold OA.

[see more about this in the G+ variant of the post https://plus.google.com/+MariusBuliga/posts/RY8wSk3wA3c ]
Now, let’s see those numbers. Just how big is that arXiv thing compared to “megajournals”?

From [1]  the total number of articles per year for “megajournals” is

2010: 6,913
2011: 14,521
2012: 25,923
2013: 37,525
2014: 37,794
2015: 33,872

(for 2015 the number represents “the articles published in the first quarter of the year multiplied by four” [1])

ArXiv: (based on counting the monthly submissions listed in [2])

2010: 70,131
2011: 76,578
2012: 84,603
2013: 92,641
2014: 97,517
2015: 100,628 (by the same procedure as in [1])
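
For the record, here is a minimal sketch in Python (my own illustration, using only the yearly totals listed above; the 2015 entries are the first-quarter-times-four estimates) which recomputes the arXiv to megajournal ratio year by year:

```python
# Yearly totals copied from the two lists above
# (the 2015 values are the "first quarter multiplied by four" estimates).
megajournals = {2010: 6913, 2011: 14521, 2012: 25923,
                2013: 37525, 2014: 37794, 2015: 33872}
arxiv = {2010: 70131, 2011: 76578, 2012: 84603,
         2013: 92641, 2014: 97517, 2015: 100628}

for year in sorted(arxiv):
    ratio = arxiv[year] / megajournals[year]
    print(f"{year}: arXiv / megajournals = {ratio:.2f}")
# The ratio falls from about 10 in 2010 to roughly 2.5-3 in 2013-2015,
# as the megajournals grow and then plateau.
```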

This shows that arXiv is about 3 times bigger than all the megajournals taken together, despite the fact that:
– it is not a publisher
– it does not ask for APCs
– it covers fields far less attractive and prolific than the megajournals.

And that is because:
– arXiv answers a real demand from researchers: to communicate their work fast and reliably to their peers, in a way which respects their authorship
– it is also a show of support for what most of them think “green OA” means, namely to put their work somewhere away from the publishers’ locks.

_____________________________________

Eisen’s “parasitic green OA” is the apt name for Harnad’s flawed definition of green OA, but all that is old-timers’ disputes; the future is here and different from both green and gold OA

See this post and the replies on G+ at [archived post].

My short description of the situation: the future is here, and it is not gold OA (nor the flawed green OA definition which ignores arXiv). So, visually:

[image: imageedit_34_6157098125]

It has never occurred to me that putting an article in a visible place (like arXiv.org) is parasitic green OA. +Michael B. Eisen calls it parasitic because he supposes that this has to come along with the real publication. But what if it does not?

[Added: Eisen writes in the body of the post that he uses the definition of green OA given by Harnad, which ignores the reality. It is very convenient for gold OA to have a definition of green OA which does not apply to the oldest (1991) and fully functional example of a research communication experiment which is OA and green: arXiv.org.]
Then, compared to that, gold OA appears as progress.
http://www.michaeleisen.org/blog/?p=1710

I think gold OA, in the best of cases, is a waste of money for nothing.

A more future-oriented reply comes from +Mike Taylor
http://svpow.com/2015/05/26/green-and-gold-the-possible-futures-of-open-access/
who sees two possible futures, green (without the assumption from Eisen’s post) and gold.

I think that the future comes faster. It is already here.

Relax. Try validation instead of peer review. It is more scientific.

Definition. Peer-reviewed article: published by the man who saw the man who claims to have read it, but does not back the claim with his name.

The reviewers are not supermen. They use the information from the traditional article. The only thing they are supposed to do is to read it. This is what they use to give their approval stamp.

Validation means that the article provides enough means so that the readers can reproduce the research by themselves. This is almost impossible with an article in the format inherited from the time when it was printed on paper. But when the article is replaced by a program which runs in the browser, which uses databases, simulations, whatever means facilitate the validation, then the reader can, if he so wishes, form a scientifically motivated opinion about it.

Practically, the future has already come and we see it on GitHub. Today. Non-exclusively. Tomorrow? Who knows?

Going back to the green-gold OA dispute, and to Elsevier’s recent change of its article sharing and hosting policy (which of course should have been the real subject of discussion, instead of waxing poetic about OA, only a straw man).

This is not even interesting. The discussion about OA revolves around who has the copyright and who pays (for nothing).

I would be curious to see discussions about DRM; who cares who has the copyright?

But then I realised that, as I wrote at the beginning of the post, the future is here.

Here to invent it. Open for everybody.

I took the image from this post by +Ivan Pierre and modified the text.
https://plus.google.com/+IvanPierreKilroySoft/posts/BiPbePuHxiH

_____________

Don’t forget to read the replies from the G+ post. I archived this G+ post because the platform went down. Read here why I deleted the chemlambda collection from G+.

____________________________________________________

Real or artificial chemistries? Questions about experiments with ring replication

The replication mechanism for circular bacterial chromosomes is known. There are two replication forks which propagate in two directions, until they meet again somewhere and the replication is finished.

[image: Bidirectionalrep2]

[source, found starting from the wiki page on circular bacterial chromosomes]

In the artificial chemistry chemlambda something similar can be done. This leads to some interesting questions. But first, here is a short animation which describes the chemlambda simulation.

[animation: ringduplication]

The animation has two parts, in which the same simulation is shown. In the first part some nodes are fixed, in order to ease the observation of the propagation of the replication, which is like the propagation of the replication forks. In the second part no node is fixed, which makes it easy to notice that eventually we get two ring molecules from one.

____________

If the visual evidence convinced you, then it is time to go to the explanations and questions.

But notice that:

  • The replication of circular DNA molecules is done with real chemistry
  • The replication of the circular molecule from the animation is done with an artificial chemistry model.

The natural question to ask is: are these chemistries the same?

The answer may be more subtle than a simple yes or no. As more visual food for thought, take a look at a picture from the Nature Nanotechnology Letter “Self-replication of DNA rings” http://www.nature.com/nnano/journal/vaop/ncurrent/full/nnano.2015.87.html by Junghoon Kim, Junwye Lee, Shogo Hamada, Satoshi Murata & Sung Ha Park

[image: nnano.2015.87-f1]

[this is Figure 1 from the article]

This is a real ring molecule, made of patterns which are themselves made of DNA. The visual similarity with the start of the chemlambda simulation is striking.

But this DNA ring is not a DNA ring as in the first figure. It is made by humans, with real chemistry.

Therefore the question above can be rephrased as:

Are there real chemical assemblies which react as if they were nodes and bonds of the artificial chemistry?

Like actors in a play, there could be a real chemical assembly which plays the role of a red atom in the artificial chemistry, another real chemical assembly which plays the role of a green atom, another for a small atom (called “port”) and another for a bond between these artificial chemistry atoms.

On one side, this is not surprising: for example a DNA molecule is written as a string of letters A, C, T, G, but each letter stands for a real molecule. Take A (adenine)

[image: 800px-Adenine-3D-balls]

[source]

Likewise, each atom from the artificial chemistry (like A (application), L (lambda abstraction), etc) could be embodied in real chemistry by a real molecule. (I am not suggesting that the DNA bases are to be put into correspondence with artificial chemistry atoms.)

Similarly, there are real molecules which could play the role of bonds. As an illustration (only), I’ll take D-deoxyribose, which is involved in the backbone structure of a DNA molecule.

[image: D-deoxyribose_chain-3D-balls]

[source]

So it turns out that it is not so easy to answer the question, although for a chemist it may be much easier than for a mathematician.

___________

0. (a few words about validation) If you have the alternative to validate what you read, then that is preferable to authority statements or hearsay from editors. Most often they rely on anonymous peer reviews which are unavailable to the readers.

Validation means that the reader can form an opinion about a piece of research by using the means which the author provides. Of course, if the means are not enough for the reader, then it is the author who takes the blame.

The artificial chemistry animation has been done by screen recording the result of a computation. As the algorithm is random, you can produce another result by following the instructions from
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md
I used the mol file model1.mol and quiner.sh. The mol file contains the initial molecule, in the mol format. The script quiner.sh calls the program quiner.awk, which produces an html and javascript file (called model1.html), which you can view in a browser.

I added text to such a result and made a demo some time ago

http://chorasimilarity.github.io/chemlambda-gui/dynamic/model1.html

(when? look at the history in the github repo, for example: https://github.com/chorasimilarity/chemlambda-gui/commits/gh-pages/dynamic/model1.html)

1. Chemlambda is a model of computation based on artificial chemistry which is claimed to be very close to the way Nature computes (chemically), especially when it comes to the chemical basis of computation in living organisms.
This is a claim which can be validated through examples. This is one of the examples. There are many more; the chemlambda collection shows some of them in a way which is hopefully easy to read (and validate).
A stronger claim, made in the article Molecular computers (link further down), is that chemlambda is real chemistry in disguise, meaning that there exist real chemicals which react according to the chemlambda rewrites and also according to the reduction algorithm (which does random rewrites, as if produced by random encounters of the parts of the molecule with invisible enzymes, one enzyme type per rewrite type).
This claim can be validated by chemists.
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

2. In the animation the duplication of the ring molecule is achieved, mostly, by the propagation of chemlambda DIST rewrites. A rewrite from the DIST family typically doubles the number of nodes involved (from 2 to 4 nodes), which vaguely suggests that DIST rewrites might be achieved by a DNA replication mechanism.
(List of rewrites here:
http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html )

So, from my point of view, the question I have is: are there DNA assemblies for the nodes, ports and bonds of chemlambda molecules?

3. In the first part of the animation you see the ring molecule with some (free in FRIN and free out FROUT) nodes fixed. Actually you may attach more graphs at the free nodes (yellow and magenta 1-valent nodes in the animation).

You can clearly see the propagation of the DIST rewrites. In the process, if you look closer, there are two nodes which disappear. Indeed, the initial ring has 9 nodes, while the two copies have 7 nodes each. That is because the site where the replication is initiated (made of two nodes) is not replicated itself. You can see its traces as a pair of bonds, each connecting a free in node with a free out node.

In the second part of the animation, the same run is repeated, this time without fixing the FRIN and FROUT nodes before the animation starts. Now you can see the two copies, each with 7 nodes, and the remnants of the initiation site, as two pairs of FRIN-FROUT nodes.

_________________________________________

Asynchronous and decentralized teleportation of Turing Machines

The idea is that it is possible to copy a computer which executes a program, during execution, and to produce working clones. All this without any central control (hence decentralized) and without any synchronization.

More than that, the multiple copies are made at the same time as the computers compute.

I think it is crazy; there’s nothing like this on offer. This is a proof of principle. The post is a (hopefully) light introduction to the subject.

If you look for validation, then follow the instructions from
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md
and use the file bbdupli.mol
This animation starts with one Turing Machine and ends with three Turing Machines.
[animation: bbdupli]
Huh?
Let’s take it slower.
1. Everybody knows what a TM is. Computers are the real thing which most resembles a TM. There is a lot of stuff added, like a screen, a keyboard, a designer box, antennas and wires, but essentially a computer has a memory which is like a tape full of bits (0 and 1), and there is a processor which is like a read/write head with an internal state. When the head reads a bit from the tape, the internal state changes, the head writes to the tape and then moves, according to some internal rules.
2. These rules define the TM. Any rule goes, but among them there are rules which make the TM universal. That means that there are well-chosen rules such that if you first put on the tape a string of bits, which is like the OS of the computer, and if you place the head at the beginning of this string, then the TM (which obeys those clever rules) uploads the OS and becomes the TM of your choice.
3. From here it becomes clear that these rules are very important. Everything else is built on them. There are many examples of rules for universal TMs; the more or less precise estimate is that, for a tape written with bits (0 and 1), the known universal TMs need about 20 rules.
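
To make points 1-3 concrete, here is a minimal sketch in Python of one TM (a tape of bits, a head, an internal state and a finite rule table); the rule table is a toy example of my own, not one of the universal ones.

```python
# A rule table maps (state, read_bit) -> (new_state, write_bit, head_move).
# This toy machine just walks right, setting bits to 1, until it meets a 1.
rules = {
    ("scan", 0): ("scan", 1, +1),
    ("scan", 1): ("halt", 1, 0),
}

def run(tape, state="scan", head=0):
    while state != "halt":
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return tape

print(run([0, 0, 0, 1]))   # -> [1, 1, 1, 1]
```
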
4. Just a little bit earlier than the invention of the TM, lambda calculus was invented as well, by Alonzo Church. Initially lambda calculus was a monumental formalism, but after the TM invention a variant of it appeared, called untyped lambda beta calculus, which has been proved to be able to compute the same things a TM can compute.
The untyped lambda beta calculus is famous among CS nerds and much less known by others (synthetic biologists, I’m looking at you) who think that TMs are the natural thing to try to emulate with biology and chemistry, because a TM is obviously something everybody understands, not like lambda calculus, which goes into higher and higher abstractions.
There are exceptions; the most famous, in my opinion, is the Algorithmic Chemistry (or Alchemy) of Fontana and Buss. They say that lambda calculus is chemistry, that one operation of LC (called application) is like a chemical reaction and that the other operation (called abstraction) is like a reactive site. (Then they change their mind, trying to fit types into the story, speaking about chemicals as functions, then they use pi calculus; all very interesting but outside the local goal of this post. I will come back to that later.)
Such exceptions aside, the general idea is: TM easy, LC hard. That is false, and I shall prove it. The side bonus is that I can teleport Turing machines.
5. If we want to compare TM with LC then we have to put them on the same footing, right? This same footing is to take the rules of TM and the reductions of LC as rewrites.
But, from the posts about chemlambda, rewrites are chemical reactions!
So let’s put all in the chemlambda frame.

http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

Conclusions:

  • LC is actually much simpler than the TM, because it uses only one rewrite which is specific to it (the BETA rewrite), instead of the roughly 20 rules of the TM
  • LC and TM are both compatible with the chemical approach a la chemlambda, in the sense that chemlambda can be enhanced by the addition of “bits” (red and green 2-valent nodes), a head move (the Arrow element) and TM internal states (other 2-valent nodes), and by some “propagation” rewrites, such that chemlambda can now do both TM and LC at the same time!

In the animation you see, at the beginning, something like:
– a tree made of green fanout (FO) nodes (with yellow FRIN and magenta FROUT leaves), and
– a small ring of green (2-valent, bit 0) and pale green (a state of the TM) nodes. That small ring is a TM tape which is “abstracted”, i.e. connected to a lambda abstraction node, which is itself “applied” (by an application A node) to the fanout tree.

What happens? Eventually 3 TMs are produced (you see 3 tapes) which function independently.

They are actually in different states, because the algorithm which does all this is random (it does a rewrite only if a coin flip happens to land in the right state).

The algorithm is random (simulating asynchronous behaviour) and all rewrites are local (simulating decentralization).

This is only a simulation, but it shows that the thing is perfectly possible. The real step would be to turn this into a protocol.

____________________________________________________________

Screen recording of the reading experience of an article which runs in the browser

The title probably needs parsing:

SCREEN RECORDING {

READING {

PROGRAM EXECUTION {

RESEARCH ARTICLE }}}

An article which runs in the browser is a program (e.g. html and javascript) which is executed by the browser. The reader has access to the article as a program, to the data and other programs which have been used for producing the article, and to all other articles which are cited.

The reader becomes the reviewer. The reader can validate, if he wishes, any piece of research which is communicated in the article.

The reader can see or interact with the research communicated. By having access to the data and programs which have been used, the reader can produce other instances of the same research (i.e virtual experiments).

In the case of the article presented as an example, embedded in the article are animations of the Ackermann function computation and another concerning the building of a molecular structure. These are produced using an algorithm which has randomness in its composition; therefore the reader may produce OTHER instances of these examples, which may or may not be consistent with the text of the article. The reader may change parameters or produce completely new virtual experiments, or he may use the programs as part of the toolbox for another piece of research.

The experience of the reader is therefore:

  • unique, because of the complete freedom to browse, validate, produce, experiment
  • not limited to reading
  • active, not passive
  • leading to trust, in the sense that the reader does not have to rely on hearsay from anonymous reviewers

In the following video there is a screen recording of these possibilities, done for the article

M. Buliga, Molecular computers, 2015, http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

This is the future of research communication.

____________________________________________________________

A neuron-like artificial molecule

This is a neuron-like artificial molecule, simulated in chemlambda.

[animation: neuron]

It has been described in the older post

https://chorasimilarity.wordpress.com/2015/03/04/how-to-put-a-y-combinator-into-a-neuron-and-lock-it-with-a-quine/

Made with quiner.sh and the file neuron.mol as input.
If you want to make your own, then follow the instructions from here:

https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

It is on the list of demos: http://chorasimilarity.github.io/chemlambda-gui/dynamic/neuron.html

__________
This needs explanation, because it is not exactly the image used for a neuron in a neural network.

_________
1. In neural networks a neuron is a black box with some inputs and some outputs, which computes and sends (the same) signal at the outputs as a function of the signals from the inputs. A connection between an output of one neuron and an input of another has a weight.

The interpretation is that the outputs represent the axon of a neuron, which is like a tree with the root in the neuron’s soma. The inputs of the neuron represent the dendrites. For a neuron in a neural network, a connection between an input (leaf of a dendrite) and an output (leaf of an axon) has a weight because it models an average of many contact points (synapses) between dendrites and axons.

The signals sent represent electrical activity in the neural network.
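
For contrast with the chemical picture described next, here is a minimal sketch in Python of such a black-box neuron (standard textbook material, not part of the chemlambda model; the logistic activation is my own arbitrary choice): it computes one output value from weighted inputs and sends the same value along every outgoing connection.

```python
import math

def neuron(inputs, weights, bias=0.0):
    # Weighted sum of the input signals, squashed by a logistic activation;
    # the same output value is sent along every outgoing connection.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

print(neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.3]))   # one value, fanned out to all targets
```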

2. Here, the image is different.

Each neuron is seen as a bag of chemicals. The electrical signals are only an (easier to measure) aspect of the cascades of chemical reactions which happen inside the neuron’s soma, in the dendrites, the axon and the synapses.

Therefore a neuron in this image is the chemical reactions which happen.

From here we go a bit abstract.

3. B-type unorganised machines. In his fundamental research “Intelligent Machinery” Turing introduced his B-type neural networks
http://www.alanturing.net/turing_archive/pages/Reference%20Articles/connectionism/Turing%27s%20neural%20networks.html
or B-type unorganised machines, which are made of more or less identical neurons (each computes a boolean function), connected via connection-modifier boxes.

A connection modifier box has an input and an output, which are part of the connection between two neurons. But it also has some training connections, which can modify the function computed by the connection modifier.

What is great is that Turing explains that the connection modifier can be made of neurons
http://www.alanturing.net/turing_archive/archive/l/l32/L32-007.html
so that actually a B-type unorganised machine is an A-type unorganised machine (the same machine, without connection modifiers), where we see the connection modifiers as certain, well-chosen patterns of neurons.

Nevertheless, the B-type machines compute by signals sent and received by neurons. They compute boolean values.

4. What would happen if we pass from Turing to Church?

Let’s imagine the same thing, but in lambda calculus: neurons which reduce lambda terms, in networks which have some recurring patterns which are themselves made by neurons.

Further: replace the signals (which are now lambda terms) by their chemical equivalent — chemlambda molecules — and replace the connection between them by chemical bonds. This is what you see in the animation.

Connected to the magenta dot (which is the output of the axon) is a pair of nodes which is related to the Y combinator, as seen as a molecule in chemlambda. ( Y combinator in chemlambda explained in this article http://arxiv.org/abs/1403.8046 )

The soma of this neuron is a single node (green); it represents an application node.

There are two dendrites (strings of red nodes) which each have 3 inputs (yellow dots). Then there are two small molecules at the ends of the dendrites which are chemical signals that the computation stops there.

Now, what happens: the neuron uses the Y combinator to build a tree of application nodes with the leaves being the input nodes of the dendrites.

When there is no more work to do, the molecules which signal this interact with the soma and the axon and transform everything into a chemlambda quine (i.e. an artificial bug of the sort I explained previously) which is short-lived, so it dies after expelling some “bubbles” (closed strings of white nodes).

5. Is that all? You can take “neurons” whose soma is any syntactic tree of a lambda term, for example. You can take neurons which have axons other than the Y combinator. You can take networks of such neurons which build and then reduce any chemlambda molecule you wish.
__________________________________________

Artificial life, standard computation tests and validation

In previous posts from the [update: revived] chemlambda collection I wrote about the simulation of various behaviours of living organisms by the artificial chemistry called chemlambda.
UPDATE: the links to the Google+ animations are no longer viable. There are two copies of the collection, which moreover are enhanced.

______________

There is more to show in this direction, but there is already an accumulation of examples:
[animation: jellyfish]
[animation: 20_20_hyb]
[animation: 9_9_hyb]
As the story is told backwards, from the present to the past, there will be more about reproduction later.
Now, that is one side of the story: these artificial microbes or molecules manifest life characteristics, stemming from a universal, dumbly simple algorithm, which does random rewrites as if the molecule encounters invisible rewriting enzymes.

 

So, the mystery is not in the algorithm. The algorithm is only a sketchy physics.

But originally this formalism has been invented for computation.

 

It passes standard computation tests very well, as well as nonstandard ones (from the point of view of biologists, who perhaps don’t hold sophisticated enough views to differentiate between boring boolean logic gates and recursive but not primitive recursive functions like the Ackermann function).

In the following animation you see a few seconds’ screen capture of the computation of a factorial function.

[animation: facto]

Recall that the factorial is a bit more sophisticated than an AND boolean gate, but it is primitive recursive, so it is less sophisticated than the Ackermann function.

 

During these few seconds there are about 5-10 rewrites. The whole process has several hundred of them.

How are they done? Who decides the order? How are criteria satisfied or checked, what is incremented, when does the computation stop?

That is why it is wonderful:

  • the rewrites are random
  • nobody decides the order, there’s no plan
  • there are no criteria to check (like equality of values), there are no values to be incremented or otherwise to be passed
  • the computation stops when there are no possible further rewrites (thus, according to the point of view used with the artificial life forms, the computation stops when the organism dies)

Then how does it work? Everything necessary is in the graphs and the rewrite patterns they show.
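
The actual implementation is the awk script in the repository; the following is only a schematic sketch in Python of the loop described above (find the local rewrite patterns, accept them by coin flips, stop when none are left), simplified to at most one rewrite per iteration.

```python
import random

def reduce_molecule(molecule, find_matches, apply_rewrite, accept_prob=0.5):
    """Schematic random-rewrite loop: no plan, no order, no values passed around.

    find_matches(molecule) lists the local patterns that could be rewritten
    right now; apply_rewrite(molecule, match) performs one of them. The
    computation stops when no rewrite is possible any more.
    """
    while True:
        matches = find_matches(molecule)
        if not matches:
            return molecule                    # nothing left to do: it "dies"
        match = random.choice(matches)         # nobody decides the order
        if random.random() < accept_prob:      # the invisible enzyme shows up
            molecule = apply_rewrite(molecule, match)
```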

It is like in Nature, really. In Nature there is no programmer, no function, no input and output, no higher level. All these are in the eyes of the human observer, who then creates a story which has some predictive power if it is a scientific one.

All I am writing can be validated by anybody wishing to do it. Do not believe me, it is not at all necessary to appeal to authority here.

So I conclude: this is a simplified world which manifests life-like behaviours and universal computation power, in the presence of randomness. In this world there is no plan, there is no control and there are no externally imposed goals.

Very unlike the Industrial Revolution thinking!

This thread of posts can be seen at once in the chemlambda collection
https://plus.google.com/u/0/collection/UjgbX

If you want to validate this wordy post yourself, then go to the GitHub repository and read the instructions
https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/README.md

The live computation of the factorial is here
http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_2_mod.html

This means that, if you want to produce another random computation of the factorial, you have to use the file
lisfact_2_mod.mol and follow the instructions.

________________________________________________________

It is time to cast doubt on any peer-reviewed but not validated research article

Any peer-reviewed article which does not come with at least the reviews has only a social validation. With reviews which contain only value judgements, grammar corrections and impossible-to-validate assertions, not much more trust is added.

As to the recourse to experts… what are we, a guild of wizards? Is it true because somebody says some anonymous experts have been consulted and they say it’s right or wrong?

Would you take a pill based on the opinion of an anonymous expert that it cures your disease?

Would you fly in a plane whose flight characteristics have been validated by the hear-say of unaccountable anonymous experts?

What is more than laughable is that mathematics seems to be the field with the most wizards, full of experts who willingly exchange private value judgements, but who are reluctant to make them in public.

Case by case, building on concrete examples, in an incremental manner, it is possible to write articles which can be validated by using the means they provide (and any other available), by anyone willing to do it.

It is time to renounce this wizardry called peer review and to pass to a more rigorous approach.

Hard, but possible. Of course the wizards will complain. After all, they are in a material conflict of interest, because they are both gatekeepers and arbiters, in both academic and editorial committees.

But again, why should we be happy with “it’s worthy of publication or not because I say so, but do not mention my name” when validation is possible?

The wizardry costs money directed to compliant students, produces no progress other than in management metrics, and kills or stalls research fields, where advance is made harder than it should be because of the mediocrity of these high-placed, but oh so shy in public, experts who are where they are because in their youth the world was more welcoming to researchers.

Enough!

_____________________________________________________________

Bemis and the bull

Bemis said:

“I fell at the foot of the only solitary tree there was in nine counties adjacent (as any creature could see with the naked eye), and the next second I had hold of the bark with four sets of nails and my teeth, and the next second after that I was astraddle of the main limb and blaspheming my luck in a way that made my breath smell of brimstone. I had the bull, now, if he did not think of one thing. But that one thing I dreaded. I dreaded it very seriously. There was a possibility that the bull might not think of it, but there were greater chances that he would. I made up my mind what I would do in case he did. It was a little over forty feet to the ground from where I sat. I cautiously unwound the lariat from the pommel of my saddle——”

“Your saddle? Did you take your saddle up in the tree with you?”

“Take it up in the tree with me? Why, how you talk. Of course I didn’t. No man could do that. It fell in the tree when it came down.”

“Oh—exactly.”

“Certainly. I unwound the lariat, and fastened one end of it to the limb. It was the very best green raw-hide, and capable of sustaining tons. I made a slip-noose in the other end, and then hung it down to see the length. It reached down twenty-two feet—half way to the ground. I then loaded every barrel of the Allen with a double charge. I felt satisfied. I said to myself, if he never thinks of that one thing that I dread, all right—but if he does, all right anyhow—I am fixed for him. But don’t you know that the very thing a man dreads is the thing that always happens? Indeed it is so. I watched the bull, now, with anxiety—anxiety which no one can conceive of who has not been in such a situation and felt that at any moment death might come. Presently a thought came into the bull’s eye. I knew it! said I—if my nerve fails now, I am lost. Sure enough, it was just as I had dreaded, he started in to climb the tree——”

“What, the bull?”

“Of course—who else?”

[ Mark Twain, Roughing It, chapter VII]

Like Bemis, legacy publishers hope you’ll not think the unthinkable.

That we can pass to a new form of research sharing.

In advertising they say that the public is like a bull.

When you read an article you are like a passive couch potato in front of the TV. They (the publishers, hand in hand with academic managers) cast the shows; you have the dubious freedom to tap the remote control.

Now it is possible, hard but possible, and doable on a case by case basis, to do more. It is comparable to the experience you have in a computer game vs the one you have in front of the TV.

You can experience research actively, via research works which run in the browser. I’ll call them “articles” for lack of a better name, but articles they are not.

An article which runs in the browser should have the following features:

  • you, the reader-gamer, can verify the findings by running (playing) the article
  • so there has to be some part, if not all, of the content in a form which is executed during gameplay, not only as an attached library of programs which can be downloaded and run by the interested reader (although such an attachment is already a huge advance over the legacy publishers’ pitiful offer)
  • verification (aka validation) is up to you, and not limited to a yes/no answer. By playing the game (as well as other related articles) you can, and you’ll be interested to, discover more, different, or opposing results than the ones present in the passive version of the article and, why not, in the mind of the author
  • as validation is an effect of playing the article, peer review becomes an obsolete, much weaker form of validation
  • peer review is anyway a very weird form of validation: the publisher, by the fact that it publishes an article, implies that some anonymous members of the research guild have read the article. So when you read the article in the legacy journal you are not even told, only hinted, that somebody from the editorial staff exchanged messages with somebody who’s a specialist, who perhaps read the article and thought it is worthy of publication. This is ridiculous, and that is why you’ll find in many reviews, which you see as an author, so many irrelevant remarks from the reviewer, like my pet example of the reviewer who’s put off by my use of quotation marks. That is because what the reviewer can do is very limited, so in order to give the impression he/she did something, to give some proof that he/she read the article, he/she comes up with this sort of circumstantial proof. Actually, for the most honest reviewer, the ideally patient and clever fellow who validates the work of the author, there is not much else to do. The reviewer has to decide whether he believes it or not, from the passive form of the article received from the editor, and in the presence of the conflict of interest which comes from extreme specialisation and the low number of experts on a tiny subject. Peer review is not even a bad joke.
  • the licence should be something comparable to CC-BY-4.0, and surely not CC-BY-NC-ND. Something which leaves free both the author and the reader/gamer/author of derivative works, and at the same time allows the propagation of the authorship of the work
  • finally, the article which runs in the browser does not need a publisher, nor a DRM manager. What for?

So, bulls, let’s start to climb the tree!

Related: https://chorasimilarity.wordpress.com/2015/04/28/one-of-the-first-articles-with-means-for-complete-validation-by-reproducibility/

_________________________________________

Visit the chemlambda collection

UPDATE: the Chemlambda collection of animations is the version of the collection hosted on GitHub. The original site is under very heavy traffic (in Jan 2020). Small images, about half of the collection, due to memory limitations. But you can play the simulations in js!

__________

You don’t have to possess a Google+ account to visit the new

chemlambda collection

It is kind of a micro-blogging place where you can read and see animated gifs about autonomous computing molecules and about the MicrobiomeOS in the making, and find easy, clear intros to the details of chemlambda.

If you are on G+ then don’t be shy and add it to one of your circles!

__________________________________________

let x=A in B in chemlambda

The beta reduction from lambda calculus is easily understood as the command
let x=B in A
It corresponds to the term (Lx.A)B, which reduces to A[x=B], i.e. to the term A where all instances of x have been replaced by B.
(The substitution is done by an algorithm which is left to be specified; there are many choices among them!)
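
As an illustration only, here is a minimal sketch in Python of that rewrite, with terms encoded as nested tuples and a naive substitution which ignores variable capture (choosing how to avoid capture is precisely one of the many possible algorithms).

```python
# Terms: a variable is a string, ("lam", x, body) is Lx.body,
# ("app", f, a) is the application f a.

def subst(term, x, value):
    """Naive A[x=B]: replace every free occurrence of x by value.
    (No renaming, so it is only safe when no variable capture can occur.)"""
    if isinstance(term, str):
        return value if term == x else term
    if term[0] == "lam":
        _, y, body = term
        return term if y == x else ("lam", y, subst(body, x, value))
    _, f, a = term
    return ("app", subst(f, x, value), subst(a, x, value))

def beta(term):
    """One beta step at the root: (Lx.A) B  -->  A[x=B]."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, x, body), arg = term
        return subst(body, x, arg)
    return term

# let x = B in A, written as the term (Lx.A) B, with A = x x:
print(beta(("app", ("lam", "x", ("app", "x", "x")), "B")))
# -> ('app', 'B', 'B')
```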

In Christopher P. Wadsworth, Semantics and Pragmatics of the Lambda Calculus, DPhil thesis, Oxford, 1971, and later John Lamping, An algorithm for optimal lambda calculus reduction, POPL ’90 Proceedings of the 17th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, p. 16-30, there are proposals to replace the beta rule by a graph rewrite on the syntactic tree of the term.

These proposals opened many research paths, related to call-by-name evaluation strategies, and of course leading to Interaction Nets.

The beta rule, as seen on the syntactic tree of a term, is a graph rewrite which is simple, but the context of its application is complex if one tries to stay in the realm of syntactic trees (or that of some graphs which are associated to lambda terms).

This is one example of blindness caused by semantics. Of course it is very difficult to conceive strategies of application of this local rule (it involves only two nodes and 5 edges) such that the graph after the rewrite has a global property (i.e. it represents a lambda term).

But a whole world is missed this way!

In chemlambda the rewrite is called BETA or A-L.
see the list of rewrites
http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html

In mol notation this rewrite is

L 1 2 c , A c 3 4  -->  Arrow 1 4 , Arrow 3 2

(and then it is the turn of the COMB rewrite to eliminate the Arrow elements, if possible)
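
To see how this pattern can be matched mechanically, here is a sketch in Python written only for this post (the actual scripts in the repository are awk programs); a mol file is treated here simply as a list of node entries, each a node type followed by its port variables.

```python
def beta_rewrite(mol):
    """Apply one BETA (A-L) rewrite, if the pattern is present:
       L 1 2 c , A c 3 4  -->  Arrow 1 4 , Arrow 3 2
    Each entry of mol is a list: node type followed by its port variables."""
    for L in mol:
        if L[0] != "L":
            continue
        for A in mol:
            # the third port of L must be the first port of A
            if A[0] == "A" and A[1] == L[3]:
                rest = [e for e in mol if e is not L and e is not A]
                return rest + [["Arrow", L[1], A[3]], ["Arrow", A[2], L[2]]]
    return mol   # no BETA pattern found

mol = [["L", "1", "2", "c"], ["A", "c", "3", "4"]]
print(beta_rewrite(mol))
# -> [['Arrow', '1', '4'], ['Arrow', '3', '2']]
```
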

As 1, 2, 3, 4, c are only port variables in chemlambda, let’s use others:

L A x in , A in B let  -->  Arrow A let , Arrow B x

So if we interpret Arrow (via the COMB rewrite) as meaning “=”, we get to the conclusion that

let x=B in A

is in chemlambda

L A x in

A in B let

Nice, it is almost verbatim the same thing. Remark that the command “let” appears as a port variable too.

Visually the rewrite/command is this:

[image: beta]
As you see, this is a Wadsworth-Lamping kind of graph rewrite, with the distinctions that:
(a) x, A, B are only port variables, not terms
(b) there is no constraint for x to be linked to A

The price is that even if we start with a graph which is related to a lambda term, after performing such careless rewrites we get out of the realm of lambda terms.

But there is a more subtle difference: the nodes of the graphs are not gates and the edges are not wires which carry signals.

The reduction works well for many fundamental examples,
http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html
by a simple combination of the beta rewrite and those rewrites from the DIST family I wrote about in the post about duplicating trees.
https://chorasimilarity.wordpress.com/2015/05/04/the-illustrated-shuffle-trick-used-for-tree-duplication/

So we get out of lambda calculus; what’s wrong with that? Nothing, actually: it turns out that the world outside lambda, but inside chemlambda, has very interesting features. Nobody explored them; that is why it is so hard to discuss them without falling into one’s preconceptions (it has to be functional programming, it has to be lambda calculus, it has to be a language, it has to have a global semantics).
___________________________________________

Nothing vague in the “no semantics” point of view

I’m a supporter of “no semantics” and I’ll try to convince you that there is nothing vague in it.

Take any formalism. To any term built from this formalism there is an associated syntactic tree. Now, look at the syntactic tree and forget about the formalism. Because it is a tree, no matter how you choose to decorate its leaves, you can progress from the leaves to the root by decorating each edge. At each node of the tree you follow a decoration rule which says: take the decorations of the input edges and use them to decorate the output edge. If you suppose that the formalism is one which uses operations of bounded arity, then you can say the following: strictly by following rules of decoration which are local (you need to know only at most N edge decorations in order to decorate another edge) you can arrive at a decoration of the whole tree. All the graph! And the meaning of the graph has something to do with this decoration. Actually the formalism turns out to be not about graphs (trees), but about the static decorations which appear at the root of the syntactic tree.
But, you see, these static decorations are global effects of local rules of decoration. Here enters the semantic police: thou shalt accept only trees whose roots accept decorations from a given language. Hard problems ensue, which are heavily loaded with semantics.
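
A minimal sketch in Python, only to illustrate the point, for a toy formalism with binary operations: the static decoration at the root emerges strictly from the local rule applied at each node.

```python
# A syntactic tree: a leaf is ("leaf", decoration),
# an internal node is (operation_name, left_subtree, right_subtree).

def decorate(tree, rule):
    """Decorate every edge from the leaves to the root using only the local
    rule: the decoration of the output edge of a node is a function of the
    decorations of its (at most N = 2) input edges."""
    if tree[0] == "leaf":
        return tree[1]
    op, left, right = tree
    return rule(op, decorate(left, rule), decorate(right, rule))

# Example: decorations are integers, the local rule interprets the node label.
arith = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
tree = ("+", ("leaf", 2), ("*", ("leaf", 3), ("leaf", 4)))
print(decorate(tree, lambda op, a, b: arith[op](a, b)))   # -> 14
```
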
Now, let’s pass from trees to other graphs.
The same phenomenon (a static global decoration emerging from local rules of decoration) holds for any DAG (directed acyclic graph). It is telling that people LOVE DAGs, so much so that they go to the extreme of excluding other graphs from their thinking. These are the ones who put everything in a functional frame.
Nothing wrong with this!
Decorated graphs have a long tradition in mathematics, think for example at knot theory.
In knot theory the knot diagram is a graph (with 4-valent nodes) which surely is not acyclic! However, one of the fundamental objects associated to a knot is the algebraic object called a “quandle”, which is generated from the edges of the graph, with certain relations coming from the nodes. It is of course a very hard, semantically fully loaded problem to try to identify the knot from the associated quandle.
The difference from the syntactic trees is that the graph does not admit a static global decoration, generically. That is why the associated algebraic object, the quandle, is generated by (and not equal to) the set of edges.

There are beautiful problems related to the global objects generated by local rules. They are also difficult, because of the global aspect. It is perhaps as difficult to find an algorithm which builds an isomorphism between two graphs which have the same associated family of decorations as it is to find a decentralized algorithm for the graph reduction of a distributed syntactic tree.

But these kind of problems do not cover all the interesting problems.

What if this global semantic point of view makes things harder than they really are?

Just suppose you are a genius who found such an algorithm, by amazing, mind bending mathematical insights.

Your brilliant algorithm, because it is an algorithm, can be executed by a Turing Machine.

But Turing machines are purely local. The head of the machine has only local access to the tape at any given moment (forget about indirection, I’ll come back to this in a moment). The number of states of the machine is finite and the number of rules is finite.

This means that the brilliant work served to edit out the global from the problem!

If you are not content with TMs, because of indirection, then look no further than chemlambda (if you wish, combined with TM, as in
http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html , if you love TM), which is definitely local and Turing universal. It works by the brilliant algorithm: do all the rewrites which you can do, never mind their global meaning.

Oh, wait, what about a living cell: does it have a way to manage the semantics of the correct global chemical reaction networks which ARE the cell?

What about a brain, made of many neural cells, glia cells and whatnot? By the homunculus fallacy, it can’t have static, external, globally selected functions and terms (aka semantics).

On the other side, of course the researcher who studies the cell or the brain, and the mathematician who finds the brilliant algorithm, all use heavy semantic machinery.

TO TELL THE STORY!

Not that the cell or the brain needs the story in order to live.

In the animated gif there is a chemlambda molecule called the 28 quine, which satisfies the definition of life in the sense that it randomly replenishes its atoms while approximately keeping its global shape (thus it has a metabolism). It does this under the algorithm: do all rewrites you can do, but you can do a rewrite only if a random coin flip accepts it.

[animation: semtree]
Most of the atoms of the molecule are related to operations (application and abstraction) from lambda calculus.

I modified a script a bit (sorry, this one is not in the repo) so that, whenever possible, the edges of this graph which MAY be part of a syntactic tree of a lambda term turn GOLD while the others are dark grey.

They mean nothing, there’s no semantics, because for one thing the golden graphs are not DAGs, and because the computation consists of rewrites of graphs which don’t preserve the “correct” decorations from before the rewrite.

There’s no semantics, but there are still some interesting questions to explore, the main one being: how does life work?

http://chorasimilarity.github.io/chemlambda-gui/dynamic/28_syn.html

UPDATES:

Louis Kauffman reply to this:

Dear Marius,
There is no such thing as no-semantics. Every system that YOU deal with is described by you and observed by you with some language that you use. At the very least the system is interpreted in terms of its own actions and this is semantics. But your point is well-taken about not using more semantic overlay than is needed for any given situation. And certainly there are systems like our biology that do not use the higher level descriptions that we have managed to observe. In doing mathematics it is often the case that one must find the least semantics and just the right syntax to explore a given problem. Then work freely and see what comes.
Then describe what happened and as a result see more. The description reenters the syntactic space and becomes ‘uninterpreted’ by which I mean  open to other interactions and interpretations. It is very important! One cannot work at just one level. You will notice that I am arguing both for and against your position!
Best,
Lou Kauffman
My reply:
Dear Louis,
Thanks! Looks that we agree in some respects: “And certainly there are systems like our biology that do not use the higher level descriptions that we have managed to observe.” Not in others; this is the base of any interesting dialogue.
Then I made another post
Related to the “no semantics” earlier g+ post [*], here is a passage from Rodney Brooks “Intelligence without representation”

“It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors.  Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.  […]
… we are not claiming that chaos is a necessary ingredient of intelligent behavior.  Indeed, we advocate careful engineering of all the interactions within the system.  […]
We do claim however, that there need be no  explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.
I claim there is more than this, however. Even at a local  level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological  connections between processes somehow encode.
However we are not happy with calling such things a representation. They differ from standard  representations in too many ways.  There are no variables (e.g. see [1] for a more  thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon  [14] noted that the complexity of behavior of a  system was not necessarily inherent in the complexity of the creature, but Perhaps in the complexity of the environment. He made this  analysis in his description of an Ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of even human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.”

This also brings to mind a quote from the end of the Vehicle 3 section of V. Braitenberg’s book Vehicles: Experiments in Synthetic Psychology:

“But, you will say, this is ridiculous: knowledge implies a flow of information from the environment into a living being or at least into something like a living being. There was no such transmission of information here. We were just playing with sensors, motors and connections: the properties that happened to emerge may look like knowledge but really are not. We should be careful with such words.”

Louis Kauffman reply to this post:
Dear Marius,
It is interesting that some people (yourself it would seem) get comfort from the thought that there is no central pattern.
I think that we might ask Cookie and Parabel about this.
Cookie and Parabel and sentient text strings, always coming in and out of nothing at all.
Well guys what do you think about the statement of MInsky?

Cookie. Well this is an interesting text string. It asserts that there is no central locus of control. I can assert the same thing! In fact I have just done so in these strings of mine.
the strings themselves are just adjacencies of little possible distinctions, and only “add up” under the work of an observer.
Parabel. But Cookie, who or what is this observer?
Cookie. Oh you taught me all about that Parabel. The observer is imaginary, just a reference for our text strings so that things work out grammatically. The observer is a fill-in.
We make all these otherwise empty references.
Parabel. I am not satisfied with that. Are you saying that all this texture of strings of text is occurring without any observation? No interpreter, no observer?
Cookie. Just us Parabel and we are not observers, we are text strings. We are just concatenations of little distinctions falling into possible patterns that could be interpreted by an observer if there were such an entity as an observer?
Parabel. Are you saying that we observe ourselves without there being an observer? Are you saying that there is observation without observation?
Cookie. Sure. We are just these strings. Any notion that we can actually read or observe is just a literary fantasy.
Parabel. You mean that while there may be an illusion of a ‘reader of this page’ it can be seen that the ‘reader’ is just more text string, more construction from nothing?
Cookie. Exactly. The reader is an illusion and we are illusory as well.
Parabel. I am not!
Cookie. Precisely, you are not!
Parabel. This goes too far. I think that Minsky is saying that observers can observe, yes. But they do not have control.
Cookie. Observers seem to have a little control. They can look here or here or here …
Parabel. Yes, but no ultimate control. An observer is just a kind of reference that points to its own processes. This sentence observes itself.
Cookie. So you say that observation is just self-reference occurring in the text strings?
Parabel. That is all it amounts to. Of course the illusion is generated by a peculiar distinction that occurs where part of the text string is divided away and named the “observer” and “it” seems to be ‘reading’ the other part of the text. The part that reads often has a complex description that makes it ‘look’ like it is not just another text string.
Cookie. Even text strings is just a way of putting it. We are expressions in imaginary distinctions emanated from nothing at all and returning to nothing at all. We are what distinctions would be if there could be distinctions.
Parabel. Well that says very little.
Cookie. Actually there is very little to say.
Parabel. I don’t get this ‘local chaos’ stuff. Minsky is just talking about the inchoate realm before distinctions are drawn.
Cookie. lakfdjl
Parabel. Are you becoming inchoate?
Cookie. &Y*
Parabel. Y
Cookie.
Parabel.

Best,
Lou

My reply:
Dear Louis, I see that the Minsky reference in the beginning of the quote triggered a reaction. But recall that Minsky appears in a quote by Brooks, which itself appears in a post by Marius, which is a follow up of an older post. That’s where my interest is. This post only gathers evidence that what I call “no semantics” is an idea which is not new, essentially.
So let me go back to the main idea, which is that there are positive advances which can be made under the constraint to never use global notions, semantics being one of them.
As for the story about Cookie and Parabel, why is it framed in a text-strings universe and why does it discuss a “central locus of control”? I can easily imagine Cookie and Parabel having a discussion before writing was invented, say for example in a cave which much later will be discovered by modern humans in Lascaux.
I don’t believe that there is a central locus of control. I do believe that semantics is a mean to tell the story, any story, as if there is a central locus of control. There is no “central” and there is very little “control”.
This is not a negative stance, it is a call for understanding life phenomena from points of view which are not ideologically loaded by “control” and “central”. I am amazed by the life variety, beauty and vastness, and I feel limited by the semantics point of view. I see in a string of text thousands of years of cultural conventions taken for granted, I can’t forget that a string of text becomes so to me only after a massive processing which “semantics” people take as granted as well, that during this discussion most of me is doing far less trivial stuff, like collaborating and fighting with billions of other beings in my gut, breathing, seeing, hearing, moving my fingers. I don’t forget that the string of text is recreated by my brain 5 times per second.
And what is an “illusion”?
A third post
In the last post https://plus.google.com/+MariusBuliga/posts/K28auYf69iy I gave two quotes, one from Brooks’ “Intelligence without representation” (where he quotes Minsky in passing, but which contains much more than this brief Minsky quote) and the other from Braitenberg’s “Vehicles: Experiments in Synthetic Psychology”.
Here is another quote, from a reputed cognitive science specialist, who convinced me about the need for a no semantics point of view with his article “Brain a geometry engine”.
The following quote is by Jan Koenderink “Visual awareness”
http://www.gestaltrevision.be/pdfs/koenderink/Awareness.pdf

“What does it mean to be “visually aware”? One thing, due to Franz Brentano (1838-1917), is that all awareness is awareness of something. […]
The mainstream account of what happens in such a generic case is this: the scene in front of you really exists (as a physical object) even in the absence of awareness. Moreover, it causes your awareness. In this (currently dominant) view the awareness is a visual representation of the scene in front of you. To the degree that this representation happens to be isomorphic with the scene in front of you the awareness is veridical. The goal of visual awareness is to present you with veridical representations. Biological evolution optimizes veridicality, because veridicality implies fitness.  Human visual awareness is generally close to veridical. Animals (perhaps with exception of the higher primates) do not approach this level, as shown by ethological studies.
JUST FOR THE RECORD these silly and incoherent notions are not something I ascribe to!
But it neatly sums up the mainstream view of the matter as I read it.
The mainstream account is incoherent, and may actually be regarded as unscientific. Notice that it implies an externalist and objectivist God’s Eye view (the scene really exists and physics tells how), that it evidently misinterprets evolution (for fitness does not imply veridicality at all), and that it is embarrassing in its anthropocentricity. All this should appear to you as in the worst of taste if you call yourself a scientist.”  [p. 2-3]

[Remark: all these quotes appear in previous posts at chorasimilarity]

_______________________________________________________________

The illustrated shuffle trick, used for tree duplication

Duplication of trees of fanouts is done in chemlambda by the shuffle trick.
The actors are the nodes FO (fanout), FOE (the other fanout) and FI (fanin). They are related by the rewrites FI-FOE (which resembles the beta rewrite, but is a bit different), FO-FOE (which is a distributivity rewrite of the FO node wrt the FOE node) and FI-FO (which is a distributivity rewrite of the FI node wrt the FO node).
In the shuffle trick there are used the rewrites FO-FOE and FI-FOE.
Properly speaking there is no trick at all, in the sense that it is unstaged. It happens when there is a FO-FOE pattern for a rewrite, which in turn, after application, creates a FI-FOE pattern.
The effects are several:
  • FO nodes migrate from the root to the leaves
  • FOE nodes migrate from the leaves to the root
  • and there is a shuffle of the leaves which untangles correctly the copies of the tree.

All this is achieved without setting duplication as a goal! As I wrote, there is no scenario behind it; it is just an emergent effect of the local rewrites.

Here is the shuffle trick illustrated

[animation: shuffle]

For explanatory reasons the free out ports (i.e. the FROUT nodes) are coloured with red and blue.

Watch carefully to see the 3 effects of the shuffle trick!
Before: there are two pairs of free out ports, each pair made of a blue and a red out port. After the shuffle trick there is a pair of blue ports and a pair of red ports.

Before: there is a green node (FO fanout) and two pale yellow nodes (FOE fanouts).
After: there is one pale yellow node instead (FOE fanout) and two green nodes (FO fanouts)

OK, let’s see how this trick induces the duplication of a tree made of FO nodes.
First, we have to add a FI node at the root and FOE nodes at the leaves. In the following illustrations I coloured the free in ports (of the FI node) and the free out ports (of the FROUT nodes) with red and blue, for explanatory purposes.
There are two extremal cases of a tree duplication.
The first is the duplication of a tree made of FO nodes, such that all right ports are leaves (thus the tree extends only in the left direction).
In this case the shuffle trick is applied (unstaged!) all over the tree at once!
[animation: fotreeleft]
In the opposite extremal case we want to duplicate a tree made of FO nodes, such that all LEFT ports are leaves (thus the tree extends only in the RIGHT direction).
In this case the shuffle trick propagates towards the root.
[animation: fotreeright]
______________________________________________________________

Lambda calculus and other “secret alien technologies” translated into real chemistry

That is the shortest description.
A project in chemical computing: lambda calculus and other “secret alien technologies” translated into real chemistry.
http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html
[animation: factor4]
Question:
  • which programming language is based on “secret alien technology”?

The gif is a detail from the story of the factorial
http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_2_mod.html

Join me in erasing the distinction between virtual and real!

And do not believe me, nor trust authority (because it has its own social agenda). Use instead a validation technique:

  1. download the gh-pages branch of the repo from this link https://github.com/chorasimilarity/chemlambda-gui/archive/gh-pages.zip
  2. unzip it and go to the “dynamic” folder
  3. edit your copy of the latest main script https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/check_1_mov2_rand_metabo_bb.awk to change the parameters (number of cycles, weights of moves, visualisation parameters)
  4. use the command “bash moving_random_metabo_bb.sh”   (see what it contains https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/moving_random_metabo_bb.sh )
  5. you shall see the list of all .mol files from the “dynamic” folder. If you want to reproduce a demo, or better a new random run of a computation shown in a demo, then choose the file.mol which corresponds to the file.html name of the demo page http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html

An explanation of the algorithm embedded in the main script is here

https://chorasimilarity.wordpress.com/2015/04/27/a-short-complete-description-of-chemlambda-as-a-universal-model-of-computation/

but in the latest main script new moves for a busy beaver Turing machine have been added; see the explanations in the work in progress

http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

On the other side, if you are a lazy person who wishes to rely on anonymous people who declare they read the work, then go play elsewhere. Here the highest standards for validity checks are used.

___________________________________________________