Take any formalism. To any term built in this formalism there is an associated syntactic tree. Now, look at the syntactic tree and forget about the formalism. Because it is a tree, no matter how you choose to decorate its leaves, you can progress from the leaves to the root by decorating each edge. At each node of the tree you follow a decoration rule which says: take the decorations of the input edges and use them to decorate the output edge. If you suppose that the formalism uses operations of bounded arity, then you can say the following: strictly by following decoration rules which are local (you need to know at most N edge decorations in order to decorate another edge) you arrive at a decoration of the whole tree. The whole graph! And the meaning of the graph has something to do with this decoration. Actually the formalism turns out to be not about graphs (trees), but about the static decorations which appear at the root of the syntactic tree.
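To make the local-to-global point concrete, here is a minimal sketch (my own illustration, not tied to any particular formalism): a syntactic tree decorated from the leaves up, where the rule at each node sees only the decorations of that node's input edges.

```python
def decorate(tree, leaf_deco, rule):
    """Decorate a syntactic tree from the leaves to the root.

    tree: either a leaf label, or a tuple (op, child1, child2, ...).
    rule: the local decoration rule; it sees only the operation at a node
          and the decorations of that node's input edges.
    """
    if not isinstance(tree, tuple):            # a leaf: use its given decoration
        return leaf_deco[tree]
    op, *children = tree
    return rule(op, [decorate(c, leaf_deco, rule) for c in children])

# Example: arithmetic terms; the "meaning" at the root emerges
# purely from local rules applied edge by edge.
term = ("+", ("*", "x", "y"), "z")
value = decorate(term, {"x": 2, "y": 3, "z": 4},
                 lambda op, vs: vs[0] + vs[1] if op == "+" else vs[0] * vs[1])
# value == 10
```

The static decoration at the root (here, the number 10) is exactly the "meaning" the formalism cares about.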

But, you see, these static decorations are global effects of local decoration rules. Here enters the semantic police. Thou shalt accept only trees whose roots accept decorations from a given language. Hard problems ensue, heavily loaded with semantics.

Now, let’s pass from trees to other graphs.

The same phenomenon (a static global decoration emerging from local decoration rules) happens for any DAG (directed acyclic graph). It is telling that people LOVE DAGs, so much so that they go to the extreme of excluding other graphs from their thinking. These are the ones who put everything in a functional frame.

Nothing wrong with this!

Decorated graphs have a long tradition in mathematics; think for example of knot theory.

In knot theory the knot diagram is a graph (with 4-valent nodes) which surely is not acyclic! However, one of the fundamental objects associated to a knot is the algebraic object called a “quandle”, which is generated by the edges (arcs) of the graph, with certain relations coming from the nodes (crossings). It is of course a very hard, semantically loaded problem to try to identify the knot from the associated quandle.
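The construction can be sketched in a few lines (a hedged illustration of the standard presentation, not code from this post): one generator per arc, one relation per crossing, where the over-arc acts on the incoming under-arc.

```python
def quandle_presentation(crossings):
    """Return generators and relations of the knot quandle of a diagram.

    crossings: list of triples (under_in, over, under_out) of arc labels;
    at each crossing the relation is under_out = under_in * over,
    where * is the quandle operation.
    """
    generators = sorted({arc for crossing in crossings for arc in crossing})
    relations = [f"{out} = {a} * {b}" for (a, b, out) in crossings]
    return generators, relations

# The trefoil diagram: three arcs a, b, c and three crossings.
gens, rels = quandle_presentation([("a", "b", "c"),
                                   ("b", "c", "a"),
                                   ("c", "a", "b")])
# gens == ["a", "b", "c"]
```

The presentation is local data; deciding what algebraic object it actually presents is the hard, global part.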

The difference from syntactic trees is that, generically, the graph does not admit a static global decoration. That is why the associated algebraic object, the quandle, is generated by (and not equal to) the set of edges.

There are beautiful problems related to the global objects generated by local rules. They are also difficult, because of the global aspect. It is perhaps as difficult to find an algorithm which builds an isomorphism between two graphs which have the same associated family of decorations, as it is to find a decentralized algorithm for graph reduction of a distributed syntactic tree.

But these kinds of problems do not cover all the interesting problems.

**What if this global semantic point of view makes things harder than they really are?**

Just suppose you are a genius who found such an algorithm, by amazing, mind bending mathematical insights.

Your brilliant algorithm, because it is an algorithm, can be executed by a Turing Machine.

But Turing machines are purely local. The head of the machine has only local access to the tape at any given moment (forget about indirection, I’ll come back to it in a moment). The number of states of the machine is finite and the number of rules is finite.

This means that the brilliant work served to edit out the global aspect from the problem!

If you are not content with TMs, because of indirection, then look no further than chemlambda (combined with TMs if you wish, like in

http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html , if you love TMs), which is definitely local and Turing universal. It works by the brilliant algorithm: do all the rewrites you can do, never mind their global meaning.

Oh, wait, what about a living cell: does it have a way to manage the semantics of the correct global chemical reaction networks which ARE the cell?

What about a brain, made of many neural cells, glial cells and whatnot? By the homunculus fallacy, it can’t have static, external, globally selected functions and terms (aka semantics).

On the other side, of course, the researcher who studies the cell or the brain, and the mathematician who finds the brilliant algorithm, all use heavy semantic machinery.

TO TELL THE STORY!

Not that the cell or the brain need the story in order for them to live.

In the animated gif there is a chemlambda molecule called the 28 quine, which satisfies the definition of life in the sense that it randomly replenishes its atoms while approximately keeping its global shape (thus it has a metabolism). It does this under the algorithm: do all the rewrites you can do, but you can do a rewrite only if a random coin flip accepts it.

Most of the atoms of the molecule are related to operations (application and abstraction) from lambda calculus.

I modified a script a bit (sorry, not in the repo, this one) so that whenever possible the edges of this graph which MAY be part of a syntactic tree of a lambda term turn GOLD, while the others are dark grey.

They mean nothing, there’s no semantics, because for once the golden graphs are not DAGs, and because the computation consists of graph rewrites which do not preserve the “correct” decorations from before the rewrite.

**There’s no semantics, but there are still some interesting questions to explore, the main one being: how does life work?**

http://chorasimilarity.github.io/chemlambda-gui/dynamic/28_syn.html

_______________________________________________________________

Filed under: Uncategorized Tagged: homunculus fallacy, knot diagrams, no semantics, quine, Turing machine

See the shuffle trick live at:

http://chorasimilarity.github.io/chemlambda-gui/dynamic/shuffle_trick.html


See the list of rewrites at:

http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html


The actors are the nodes FO (fanout), FOE (the other fanout) and FI (fanin). They are related by the rewrites FI-FOE (which resembles the beta rewrite, but is a bit different), FO-FOE (a distributivity rewrite of the FO node with respect to the FOE node) and FI-FO (a distributivity rewrite of the FI node with respect to the FO node).

The shuffle trick uses the rewrites FO-FOE and FI-FOE.

Properly speaking there is no trick at all, in the sense that it is unstaged. It happens when there is a FO-FOE pattern for a rewrite which, after application, creates a FI-FOE pattern.

The effects are several:

- FO nodes migrate from the root to the leaves
- FOE nodes migrate from the leaves to the root
- and there is a shuffle of the leaves which correctly untangles the copies of the tree.

All this is achieved without setting duplication as a goal! As I wrote, there is no scenario behind it; it is just an emergent effect of local rewrites.

Here is the shuffle trick illustrated

For explanatory reasons the free out ports (i.e. the FROUT nodes) are coloured red and blue.

Watch carefully to see the 3 effects of the shuffle trick!

Before: there are two pairs of free out ports, each pair made of a blue and a red out port. After the shuffle trick there is a pair of blue ports and a pair of red ports.

Before: there is a green node (FO fanout) and two pale yellow nodes (FOE fanouts).

After: there is one pale yellow node (FOE fanout) and two green nodes (FO fanouts) instead.

OK, let’s see how this trick induces the duplication of a tree made of FO nodes.

First, we have to add a FI node at the root and FOE nodes at the leaves. In the following illustrations I coloured the free in ports (of the FI node) and the free out ports (of the FROUT nodes) with red and blue, for explanatory purposes.

There are two extremal cases of a tree duplication.

The first is the duplication of a tree made of FO nodes, such that all right ports are leaves (thus the tree extends only in the left direction).

In this case the shuffle trick is applied (unstaged!) all over the tree at once!

In the opposite extremal case we want to duplicate a tree made of FO nodes, such that all LEFT ports are leaves (thus the tree extends only in the RIGHT direction).

In this case the shuffle trick propagates towards the root.

See the list of rewrites at

http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html


______________________________________________________________

Filed under: Uncategorized Tagged: chemlambda, shuffle trick, tree duplication

A project in chemical computing: lambda calculus and other “secret alien technologies” translated into real chemistry.

http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html


Question:

- which programming language is based on “secret alien technology”?

The gif is a detail from the story of the factorial

http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_2_mod.html

Join me on erasing the distinction between virtual and real!

And do not believe me, nor trust authority (because it has its own social agenda). Use instead a validation technique:

- download the gh-pages branch of the repo from this link https://github.com/chorasimilarity/chemlambda-gui/archive/gh-pages.zip
- unzip it and go to the “dynamic” folder
- edit the copy you have of the latest main script https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/check_1_mov2_rand_metabo_bb.awk to change the parameters (number of cycles, weights of moves, visualisation parameters)
- use the command “bash moving_random_metabo_bb.sh” (see what it contains https://github.com/chorasimilarity/chemlambda-gui/blob/gh-pages/dynamic/moving_random_metabo_bb.sh )
- you will see the list of all .mol files from the “dynamic” folder. If you want to reproduce a demo (or better, a new random run of a computation shown in a demo), then choose the file.mol which corresponds to the file.html name of the demo page http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html

An explanation of the algorithm embedded in the main script is here

but in the latest main script new moves have been added for a busy beaver Turing machine; see the explanations in the work in progress

http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

On the other side, if you are a lazy person who wishes to rely on anonymous people who declare they have read the work, then go play elsewhere. The highest standards for validity checks are used here.

___________________________________________________

Filed under: Uncategorized Tagged: artificial chemistry, chemlambda

M. Buliga, Molecular computers

is one of the first articles which comes with complete means of validation by reproducibility.

This means that along with the content of the article, which contains animations and links to demonstrations, comes a github repository with the scripts which can be used to validate (or invalidate, of course) this work.

I can’t show you here how the article looks, but I can show you a gif created from this video of a demonstration which also appears in the article (however, with simpler settings, in order not to punish the browser too much).

This is a chemistry-like computation of the Ackermann(2,2) function.

In itself, it is intended to show that if autonomous computing molecules can be created by the means proposed in the article, then impressive feats can be achieved.

This is part of the discussion about peer review and the need to pass to a more evolved way of communicating science. There are several efforts in this direction, like for example PeerJ’s paper-now, commented in this post. See also the post Fascinating: micropublications, hypothes.is for more!

Presently one of the most important pieces of this is peer review, the social practice consisting of declarations by one, two, four, etc. anonymous professionals that they have checked the work and consider it valid.

Instead, the ideal should be the article which runs in the browser, i.e. one which comes with means allowing anybody to validate it, up to external resources like the works of other authors.

(For example, if I write in my article that “According to the work [1], A is true. Here we prove that B follows from A.”, then I should provide means to validate the proof that A implies B, but it would be unrealistic to ask me to provide means to validate A.)

This is explained in more detail in Reproducibility vs peer review.

Therefore, if you care about evolving the form of the scientific article, then you have a concrete, short example of what can be done in this direction.

Mind that I am stubborn enough to cling to this form of publication, not because I am afraid to submit these beautiful ideas to legacy journals, but because I want to promote new ways of sharing research by using the best content I can make.

_________________________________________

Filed under: Uncategorized Tagged: github, open access, open peer review

satisfies the conditions written below. Take a countable collection of variables, denoted by a, b, c, … . There is a list of symbols: L, A, FI, FO, FOE, Arrow, T, FRIN, FROUT. A mol file is a list made of lines of the form:

– L a b c (called abstraction)

– A a b c (called application)

– FI a b c (called fanin)

– FO a b c (called fanout)

– FOE a b c (called other fanout)

– Arrow a b (called arrow)

– T a (called terminal)

– FRIN a (called free in)

– FROUT a (called free out)

The variables which appear in a mol file are called port names.

Condition 1: any port name appears exactly twice.

Depending on the symbol (L, A, FI, FO, FOE, Arrow, T, FRIN, FROUT), every port name has two types, the first from the list (left, right, middle), the second from the list (in, out). The convention is to write this pair of types as middle.in or left.out, for example.

Below I repeat the list of possible lines in a mol file, with a, b, c, … replaced by their types:

– L middle.in left.out right.out

– A left.in right.in middle.out

– FI left.in right.in middle.out

– FO middle.in left.out right.out

– FOE middle.in left.out right.out

– Arrow middle.in middle.out

– T middle.in

– FRIN middle.out

– FROUT middle.in

Condition 2: each port name (which appears in exactly two places, according to Condition 1) appears in one place with a type *.in and in the other place with a type *.out.
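Conditions 1 and 2 are easy to check mechanically. Here is a sketch in Python (the repo itself uses awk and bash scripts; this checker is only an illustration of the definition):

```python
# Port types per symbol, copied from the list above.
SIGNATURE = {
    "L":     ["middle.in", "left.out", "right.out"],
    "A":     ["left.in", "right.in", "middle.out"],
    "FI":    ["left.in", "right.in", "middle.out"],
    "FO":    ["middle.in", "left.out", "right.out"],
    "FOE":   ["middle.in", "left.out", "right.out"],
    "Arrow": ["middle.in", "middle.out"],
    "T":     ["middle.in"],
    "FRIN":  ["middle.out"],
    "FROUT": ["middle.in"],
}

def check_mol(lines):
    """Check Conditions 1 and 2 on a mol file given as a list of token lists."""
    occurrences = {}                      # port name -> list of "in"/"out"
    for line in lines:
        symbol, ports = line[0], line[1:]
        types = SIGNATURE[symbol]
        if len(ports) != len(types):
            return False                  # wrong arity for this symbol
        for port, t in zip(ports, types):
            occurrences.setdefault(port, []).append(t.split(".")[1])
    # Condition 1: exactly twice; Condition 2: once as in, once as out.
    return all(sorted(occ) == ["in", "out"] for occ in occurrences.values())

# A small well-formed molecule: an L node and an A node joined on port c,
# closed off with free in/out nodes.
mol = [["L", "a", "b", "c"], ["A", "c", "d", "e"],
       ["FRIN", "a"], ["FROUT", "b"], ["FRIN", "d"], ["FROUT", "e"]]
# check_mol(mol) == True
```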

Two mol files define the same molecule up to the following:

– a renaming of port names from one mol file to the other

– any permutation of the lines in a mol file gives an equivalent mol file

(The reason is that a mol file is a notation for an oriented graph with trivalent, 2-valent or univalent nodes: the locally planar trivalent nodes L, A, FI, FO, FOE, the 2-valent node Arrow and the 1-valent nodes T, FRIN, FROUT. In this notation the port names come from an arbitrary naming of the arrows; the mol file is then a list of the nodes, in arbitrary order, along with port names coming from the names of the arrows.)

(In the visualisations these graph-molecules are turned into undirected graphs, made of nodes of various radii and colours. To any line from a mol file, thus to any node of the oriented graph, correspond up to 4 nodes in the graphs of the visualisations. Indeed, the symbols L, A, FI, FO appear as nodes of radius main_const and colour red_col or green_col; FOE, T and Arrow have different colours. Their respective ports appear as nodes of colour in_col or out_col, for the types “in” or “out”, and radius “left”, “right” or “middle” for the corresponding types. FRIN has the colour in_col, FROUT has the colour out_col.)

The chemlambda rewrites have the following form: replace a left pattern (LT) consisting of 2 lines from a mol file by a right pattern (RT) which may consist of one, two, three or four lines. This is written as LT – – > RT.

The rewrites are listed below (with the understanding that the port names 1, 2, …, c from the LT represent port names which exist in the mol file before the rewrite, while j, k, … from the RT, not appearing in the LT, represent new port names):

http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html

– A-L (or beta):

L 1 2 c , A c 4 3 – – > Arrow 1 3 , Arrow 4 2

– FI-FOE (or fan-in):

FI 1 4 c , FOE c 2 3 – – > Arrow 1 3 , Arrow 4 2

– FO-FOE :

FO 1 2 c , FOE c 3 4 – – > FI j i 2 , FO k i 3 , FO l j 4 , FOE 1 k l

– FI-FO:

FI 1 4 c , FO c 2 3 – – > FO 1 i j , FI i k 2 , FI j l 3 , FO 4 k l

– L-FOE:

L 1 2 c , FOE c 3 4 – – > FI j i 2, L k i 3 , L l j 4 , FOE 1 k l

– L-FO:

L 1 2 c , FO c 3 4 – – > FI j i 2 , L k i 3 , L l j 4 , FOE 1 k l

– A-FOE:

A 1 4 c , FOE c 2 3 – – > FOE 1 i j , A i k 2 , A j l 3 , FOE 4 k l

– A-FO:

A 1 4 c , FO c 2 3 – – > FOE 1 i j , A i k 2 , A j l 3 , FOE 4 k l

– A-T: A 1 2 3 , T 3 – – > T 1 , T 2

– FI-T: FI 1 2 3 , T 3 – – > T 1 , T 2

– L-T: L 1 2 3 , T 3 – – > T 1 , T c , FRIN c

– FO2-T: FO 1 2 3 , T 2 – – > Arrow 1 3

– FOE2-T: FOE 1 2 3 , T 2 – – > Arrow 1 3

– FO3-T: FO 1 2 3 , T 3 – – > Arrow 1 2

– FOE3-T: FOE 1 2 3 , T 3 – – > Arrow 1 2

– COMB: for any node M (excepting Arrow) with an out port c: M … c , Arrow c d – – > M … d
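As an illustration of how a rewrite acts on a mol file, here is the A-L (beta) rewrite sketched in Python (the actual implementation is the awk script in the repo; this only makes the definition above executable):

```python
def apply_beta(mol):
    """Apply one A-L (beta) rewrite, if a pattern exists:
    L 1 2 c , A c 4 3  - - >  Arrow 1 3 , Arrow 4 2
    mol is a list of token lists; returns a new mol list."""
    for i, lam in enumerate(mol):
        if lam[0] != "L":
            continue
        p1, p2, c = lam[1], lam[2], lam[3]
        for j, app in enumerate(mol):
            if app[0] == "A" and app[1] == c:   # A's first port matches L's last
                p4, p3 = app[2], app[3]
                rest = [line for k, line in enumerate(mol) if k not in (i, j)]
                return rest + [["Arrow", p1, p3], ["Arrow", p4, p2]]
    return mol                                   # no A-L pattern found

# (L a b c , A c d e) rewrites to (Arrow a e , Arrow d b).
```

Note how everything is local: the rewrite looks at two lines sharing a port name and never consults the rest of the molecule.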

Each of these rewrites is seen as a chemical interaction of the mol file (molecule) with an invisible enzyme, which rewrites the LT into the RT.

(Actually one can group the rewrites into families, so enzymes are needed for (A-L, FI-FOE), for (FO-FOE, L-FOE, L-FO, A-FOE, A-FO, FI-FO), for (A-T, FI-T, L-T) and for (FO2-T, FO3-T, FOE2-T, FOE3-T). COMB rewrites, as well as the Arrow elements, have a less clear chemical interpretation and may be not needed, or even eliminated; see further how they appear in the reduction algorithm.)

The algorithm (called “stupid”) for applying the rewrites has two variants: deterministic and random. I explain both below. The random version needs some weights, denoted by wei_* where * is the name of the rewrite.

(The algorithms actually used in the demos have a supplementary family of weights, used in relation with the count of the enzymes used; I’ll pass over this.)

The algorithm takes a mol file as input. Then there is a cycle which repeats (indefinitely, or a prescribed number of times specified at the beginning of the computation, or until no rewrites are possible).

First it marks all lines of the mol file as unblocked.

Then it creates an empty file of replacement proposals.

Then the cycle is:

1 – identify all LTs for the move FO-FOE which do not contain blocked lines, and mark the lines from the identified LTs as “blocked”. Propose to replace the LTs by RTs. In the random version, flip a coin with weight wei_FOFOE for each identified instance of the LT and, according to the coin drop, block or ignore the instance.

2 – identify all LTs for the moves (L-FOE, L-FO, A-FOE, A-FO, FI-FOE, FI-FO) which do not contain blocked lines, mark the lines from these LTs as “blocked” and propose to replace them by the respective RTs. In the random version, flip a coin with the respective weight before deciding to block and propose the replacement.

3 – identify all LTs for the moves (A-L, FI-FOE) which do not contain blocked lines, mark the lines from these LTs as “blocked” and propose to replace them by the respective RTs. In the random version, flip a coin with the respective weight before deciding to block and propose the replacement.

4 – identify all LTs for the moves (A-T, FI-T, L-T) and (FO2-T, FO3-T, FOE2-T, FOE3-T) which do not contain blocked lines, mark the lines from these LTs as “blocked” and propose to replace them by the respective RTs. In the random version, flip a coin with the respective weight before deciding to block and propose the replacement.

5 – erase all LTs which have been blocked and replace them by the proposals. Empty the proposals file.

6 – the COMB cycle: repeat the application of COMB moves, in the same style (blocks, proposals and updates), until no COMB move is possible.

7 – mark all lines as unblocked.

The main cycle ends here.

The algorithm ends if the number of cycles is attained, or if there were no rewrites performed in the last cycle (in the deterministic version), or if there were no rewrites in the last N cycles, with N predetermined (in the random version).
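The cycle above can be compressed into a sketch like the following (my own paraphrase in Python; the real implementation is the awk script, and the pattern finder and rewrite table are passed in as parameters here, since they are defined elsewhere):

```python
import random

# The four phases of the cycle, in the order given above, with
# illustrative default weights (the real wei_* values live in the script).
PHASES = [
    (["FO-FOE"], 0.5),
    (["L-FOE", "L-FO", "A-FOE", "A-FO", "FI-FOE", "FI-FO"], 0.5),
    (["A-L", "FI-FOE"], 0.5),
    (["A-T", "FI-T", "L-T", "FO2-T", "FO3-T", "FOE2-T", "FOE3-T"], 0.5),
]

def one_cycle(mol, find_patterns, rewrite, weights=None):
    """One main cycle: block, propose, then apply all proposals at once.

    find_patterns(mol, moves) yields (move, set_of_line_indices) pairs;
    rewrite(move, mol, indices) returns the RT lines."""
    blocked = set()          # indices of mol lines claimed in this cycle
    proposals = []
    for moves, default_w in PHASES:
        for move, lt in find_patterns(mol, moves):
            if lt & blocked:
                continue     # the pattern overlaps an already blocked line
            w = (weights or {}).get(move, default_w)
            if random.random() > w:
                continue     # the coin flip rejected this rewrite
            blocked |= lt
            proposals.append(rewrite(move, mol, lt))
    # step 5: erase blocked lines, add all proposed RTs
    new_mol = [line for i, line in enumerate(mol) if i not in blocked]
    for rt in proposals:
        new_mol += rt
    return new_mol           # step 6, the COMB cycle, would follow here
```

Setting every weight to 1.0 recovers the deterministic variant; anything below 1.0 gives the random one.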

___________________________________

Filed under: Uncategorized Tagged: chemlambda

Analysis: SPiM side

– The nodes of the graphs are molecules, the arrows are channels.

– As described there the procedure is to take a (huge perhaps) CRN and to reformulate it more economically, as a collection of graphs where nodes are molecules, arrows are channels.

– There is a physical chemistry model behind which tells you which probability has each reaction.

– During the computation the reactions are known, all the molecules are known, the graphs don’t change and the variables are concentrations of different molecules.

– During the computation one may interpret the messages passed by the channels as decorations of a static graph.

The big advantage is that, indeed, when compared with a Chemical Reaction Network approach, the stochastic pi calculus transforms the CRN into a much more realistic model. And a much more economical one.

chemlambda side:

Take the pet example with the Ackermann(2,2)=7 from the beginning of http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

(or go to the demos page for more http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html )

– You have one molecule (it does not matter if it is a connected graph or not, so you may think about it as being a collection of molecules instead).

– The nodes of the molecule are atoms (or perhaps codes for simpler molecular bricks). The nodes are not species of molecules, like in the SPiM.

– The arrows are bonds between atoms (or perhaps codes for other simple molecular bricks). The arrows are not channels.

– I don’t know all the intermediary molecules in the computation. To know them would mean to know the result of the computation beforehand. Also, there may be thousands of possible intermediary molecules. There may be, in principle, an infinity of possible intermediary molecules for some initial molecules.

– During the computation the graph (i.e. the molecule) changes. The rewrites are done on the molecule, and they can be interpreted as chemical reactions where invisible enzymes identify a very small part of the big molecule and rewrite it, randomly.

Conclusion: there is no relation between those two models.

Now, chemlambda may be related to Tim Hutton’s artificial chemistry.

Here is a link to a beautiful javascript chemistry

http://htmlpreview.github.io/?https://github.com/ModelingOriginsofLife/ParticleArtificialChemistry/blob/master/index.html

where you see that

– nodes of molecules are atoms

– arrows of molecules are bonds

– the computation proceeds by rewrites which are random

– but, distinctly from chemlambda, the chemical reactions (rewrites) happen in a physical 2D space (i.e. based on the spatial proximity of the reactants)

As something in the middle between SPiM and jschemistry, there is a button which shows you a kind of instantaneous reaction graph!

Filed under: Uncategorized Tagged: artificial chemistry, chemlambda, SPiM, stochastic pi calculus, Tim Hutton

To be exactly sure about the factor, I need to know the answer to the following question:

**What is the most complex chemical computation done without external intervention, from the moment when the (solution, DNA molecule, whatnot) is prepared, up to the moment when the result is measured?**

Attention, I know that there are several Turing complete chemical models of computation, but they all involve some manipulations (heating a solution, splitting one in two, adding something, etc.).

I believe, but I may be wrong, depending on the answer to this question, that the said complexity is not bigger than a handful of boolean gates, or perhaps some simple Turing Machines, or a simple CA.

If I am right, then compare with my pet example: the Ackermann function. How many instructions does a TM or a CA need, or how big does a circuit have to be, to do this? 1000 times bigger is a clement estimate. This can be done easily in my proposal.

So, instead of trying to convince you that my model is interesting because it is related to lambda calculus, maybe I can make you more interested if I tell you that for the same material input, the computational output is way bigger than in the best model you have.

Thank you for answering the question, and possibly for showing me wrong.

___________________________

Filed under: Uncategorized Tagged: chemical computing, chemlambda, complexity

“It is no good just finding particular instances where peer review has failed because I can point you to specific instances where peer review has been very successful,” she said.

She feared that abandoning peer review would make scientific literature no more reliable than the blogosphere, consisting of an unnavigable mass of articles, most of which were “wrong or misleading”.


This is a quote from one of the most interesting articles I read these days: “Slay peer review ‘sacred cow’, says former BMJ chief” by Paul Jump.

http://www.timeshighereducation.co.uk/news/slay-peer-review-sacred-cow-says-former-bmj-chief/2019812.article#.VTZxYhJAwW8.twitter


I commented previously about replacing peer-review with validation by reproducibility

but now I want to concentrate on this quote, which, according to the author of the article, was made by “Georgina Mace, professor of biodiversity and ecosystems at University College London”. This is the pro argument in favour of the actual peer review system. Opposed to it, and the main subject of the article, is “Richard Smith, who edited the BMJ between 1991 and 2004”, who “told the Royal Society’s Future of Scholarly Scientific Communication conference on 20 April that there was no evidence that pre-publication peer review improved papers or detected errors or fraud.”

I am very much convinced by this, but let’s think coldly.

**Pro peer review** is that the majority of peer-reviewed articles are correct, while a majority of “the blogosphere [is] consisting of an unnavigable mass of articles, most of which were ‘wrong or misleading’”.

**Contrary to peer review** is that, according to Richard Smith, who edited the BMJ between 1991 and 2004:

“there was no evidence that pre-publication peer review improved papers or detected errors or fraud.”

“Referring to John Ioannidis’ famous 2005 paper “Why most published research findings are false”, Dr Smith said “most of what is published in journals is just plain wrong or nonsense”. […]

“If peer review was a drug it would never get on the market because we have lots of evidence of its adverse effects and don’t have evidence of its benefit.””

and moreover:

“peer review was too slow, expensive and burdensome on reviewers’ time. It was also biased against innovative papers and was open to abuse by the unscrupulous. He said science would be better off if it abandoned pre-publication peer review entirely and left it to online readers to determine “what matters and what doesn’t”.”

Which I interpret as confidence in the blogosphere-like medium.

**Where is the truth? In the middle, as usual.**

Here is my opinion, please form yours.

The new medium comes with new, relatively better means to do research. An important part of the research involves communication, and it is clear that the old system is already obsolete. It is kept artificially alive by authority and business interests.

However, it is also true that a majority of productions which are accessible via the new medium are of a very bad quality and unreliable.

To make another comparison, in the continuation of the one about the fall of academic painters and the rise of impressionists

https://chorasimilarity.wordpress.com/2013/02/16/another-parable-of-academic-publishing-the-fall-of-19th-century-academic-art/

a majority of the work of academic painters was good but not brilliant (reliable but not innovative enough), while a majority of non-academic painters produce crappy cute paintings which average people LOVE to see and comment about.

You can’t accuse a non-affiliated painter of showing his work in the same venue where you find all the cats, kids, wrinkled old people and cute places.


Science side, we live in a sea of crappy content which is loved by the average people.

The so called attention economy consists mainly in shuffling this content from a place to another. This is because liking and sharing content is a different activity than creating content. Some new thinking is needed here as well, in order to pass over the old idea of scarce resources which are made available by sharing them.

It is difficult for a researcher, who is a particular species of creator, to find other people willing to spend time not only to share original ideas (which are not liked, because strange, by default), but also to invest work into understanding them, into validating them, which is akin to an act of creation.

That is why I believe that:

– there have to be social incentives for these researchers (and that attention economy thinking is not helping this, being instead a vector of propagation for big budget PR and lolcats and life wisdom quotes)

– and that the creators of new scientific content have to provide as many as possible means for self-validation of their work.

_________________________________

Filed under: Uncategorized Tagged: peer review, reproducibility, validation

Proceed.

Chemlambda is a collection of rules about rewritings done on pieces of files in a certain format. Without an algorithm which tells which rewrite to use, where and when, chemlambda does nothing.

In the sophisticated version of the Distributed GLC proposal, this algorithmic part uses the Actor Model idea. Too complicated! Let’s go simpler!

The simplest algorithm for using the collection of rewrites from chemlambda is the following:

- take a file (in the format called “mol”, see later)
- look for all patterns in the file which can be used for rewrites
- if there are different patterns which overlap, then pick a side (by using an ordering of graph rewrites, like the precedence rules in arithmetic)
- apply all the rewrites at once
- repeat (either until there is no rewrite possible, or a given number of times, or forever)

To spice things just a bit, consider the next simple algorithm, which is like the one described, only that we add at step 2:

- for every identified pattern flip a coin to decide to keep it or ignore it further in the algorithm

The reason is that randomness is the simplest way to say: who knows if I can do this rewrite when I want; maybe I have only a part of the file in my computer; maybe a friend has half of the pattern and I have the other half, so I have to talk with him first, then agree to make the rewrite together. Who knows? Flip a coin, then.

Now, proven facts.

Chemlambda with the stupid deterministic algorithm is Turing universal. This means that it is implicitly a model of computation. Everything is prescribed from top to bottom. It is on a par with a Turing machine, or with a RAM model.

Chemlambda with the stupid random algorithm seems to be Turing universal as well, but I don't yet have a proof of this. There is a reason why it is as powerful as the stupid deterministic algorithm, but I won't go there.

Moreover, there are many examples of computations which work.

So the right image to have is that chemlambda with the described algorithm can do anything any computer can.

The first question is: how? For example, how does chemlambda compare with a Turing machine? If it works at this basic level, then it is incomprehensible, because we humans can't make sense of a scroll of bytecode unless we are highly trained in this very specialised task.

All computers do the same thing: they crunch machine code. No matter which high-level language you use to write a program, it is then compiled, and eventually there is machine code which is executed; that is the level we speak of.

It does not matter which language you use; eventually all is machine code. There is a huge architectural tower and we are on top of it, but in the basement all looks the same. The tower is there to make it easy for us to use the superb machine. It is not otherwise needed; it is only for our comfort.

This is very puzzling when we look at chemlambda, because it is claimed that chemlambda has something to do with lambda calculus, and lambda calculus is the prototype of a functional programming language. So it appears that chemlambda should be associated with higher meaning and clever thinking, and abstraction of the abstraction of the abstraction.

No, from the point of view of the programmer.

Yes, from the point of view of the machine.

In order to compare chemlambda with a TM we have to put them in the same terms. One can easily put a TM in terms of a rewrite system, such that it works with the same stupid deterministic algorithm. http://chorasimilarity.github.io/chemlambda-gui/dynamic/turingchem.html

It is not yet put there, but the conclusion is obvious: chemlambda can do lambda calculus with one rewrite, while a Universal Turing Machine needs about 20 rewrites to do what TMs do.

Unbelievable!

Wait, what about distributivity, propagation, the fanin, all the other rewrites?

They are common, they just form a mechanism for signal transduction and duplication!

Chemlambda is much simpler than TM.

So you can use chemlambda directly, at this metal level, to perform lambda calculus. It is explained here:

https://chorasimilarity.wordpress.com/2015/04/21/all-successful-computation-experiments-with-lambda-calculus-in-one-list/


And I highly recommend trying to play with it by following the instructions.

You need a linux system, or any system where you have sh and awk.

Then

1. click on that https://github.com/chorasimilarity/chemlambda-gui/archive/gh-pages.zip to get a copy of the repo

2. unzip it and go to the directory “dynamic”

3. open a shell and write: bash moving_random_metabo.sh

4. you will get a prompt and a list of files with the extension .mol; write the name of one of them, in the form file.mol

5. you get file.html. Open it with a browser with js enabled. For reasons I don't understand, it works much better in Safari, Chromium and Chrome than in Firefox.

When you look at the result of the computation you see an animation, which is the equivalent of seeing a TM head running here and there on a tape. It does not make much sense at first, but you can convince yourself that it works and get a feeling for how it does it.

Read about how to use it with lambda calculus here https://chorasimilarity.wordpress.com/2015/04/21/all-successful-computation-experiments-with-lambda-calculus-in-one-list/

Look at a collection of animations here http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html

See more at the chemlambda index http://chorasimilarity.github.io/chemlambda-gui/index.html

Once you get this feeling I will be very glad to discuss more!

Recall that all this is related to the most stupid algorithm. But I believe it helps a lot to understand how to build on it.

____________________________________________________

Filed under: Uncategorized Tagged: chemlambda, graph rewriting systems, Turing machine

“Richard Smith, who edited the BMJ between 1991 and 2004, told the Royal Society’s Future of Scholarly Scientific Communication conference on 20 April that there was no evidence that pre-publication peer review improved papers or detected errors or fraud. […]

“He said science would be better off if it abandoned pre-publication peer review entirely and left it to online readers to determine “what matters and what doesn’t”.

“That is the real peer review: not all these silly processes that go on before and immediately after publication,” he said.”

That’s just a part of the article, go read the counter arguments by Georgina Mace.

Make your opinion about this.

Here is mine.

In the post Reproducibility vs peer review I write

“The only purpose of peer review is to signal that at least one, two, three or four members of the professional community (peers) declare that they believe that the said work is valid. Validation by reproducibility is much more than this peer review practice. […]

Compared to peer review, which is only a social claim that somebody from the guild checked it, validation through reproducibility is much more, even if it does not provide means to absolute truths.”

There are several points to mention:

- the role of journals is irrelevant to anybody else than publishers and their fellow academic bureaucrats who work together to maintain this crooked system, for their own $ advantage.
- indeed, an article should give by itself the means to validate its content
- which means that the form of the article has to change from the paper version to a document which contains data, programs, everything which may help to validate the content written with words
- and the validation process (aka post review) has to be put on a par with the activity of writing articles. Even if an article comes with all the means to validate it (like the process described in Reproducibility vs peer review), the validation supposes work, and it is by itself an activity akin to the one reported in the article. More than this, the validation may or may not function according to what the author of the work supposes, but in any case it leads to new scientific content.

In theory this sounds great, but in practice it may be very difficult to provide a work with the means of validation (of course up to the external resources used in the work, like for example other works).

My answer is that, concretely, it is possible to do this, and I offer as an example my article Molecular computers, which is published on github.io and comes with a repository containing all the means to confirm or refute what is written in the article.

The real problem is social. In such a system the bored researcher has to spend more than the 10 minutes, tops, that he or she now spends reading an article before using it.

Then it is much easier, socially, to use the present, unscientific system of replacing validation by authority arguments.

As well, the monkey system (you scratch my back and I'll scratch yours) which is behind most peer reviews (only think about the extreme specialisation of research, which makes it almost sure that a reviewer competes or collaborates with the author), well, that monkey system will no longer function.

This is an even bigger problem than the fact that publishing and academic bean counting will soon be obsolete.

So my forecast is that we shall keep a mix of authority-based practices (read "peer review") and reproducibility-based validation, for some time.

The authority, though, will take another blow.

Which is in favour of research. It is also economically sound, if you think that today probably a majority of research funding goes to researchers whose work passes peer review, but not validation.

______________________________________________

Filed under: Uncategorized Tagged: cost of knowledge, peer review, reproducibility, validation

You don’t have to believe me, because you can check it independently, by using the programs available in the github repo.

Here is the list of demos where lambda terms are used.

- experiments with lambda terms from a little lisper tutorial http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_4.html
- factorial of 4 = 24 http://chorasimilarity.github.io/chemlambda-gui/dynamic/lisfact_2_mod.html
- computation of the 4th and 5th Fibonacci numbers http://chorasimilarity.github.io/chemlambda-gui/dynamic/fibo.html
- Ackermann(2,2)=7 http://chorasimilarity.github.io/chemlambda-gui/dynamic/random_ackermann_2_2.html
- Ackermann(3,2)=29 http://chorasimilarity.github.io/chemlambda-gui/dynamic/ackermann_3_2.html (for the heck of it; you need more than 1h to see the visualisation)
- Ackermann(2,2)=7 and simultaneously a self-multiplication of the graph (i.e. the root of the syntactic tree of Ackermann(2,2) is connected to a fanout node (FO), so the reduction happens at the same time as the whole shebang is duplicated) http://chorasimilarity.github.io/chemlambda-gui/dynamic/fo_ackermann_2_2.html
- the predecessor function http://chorasimilarity.github.io/chemlambda-gui/dynamic/pred_2.html
- related to the predecessor, but outside lambda calculus: two entwined predecessors http://chorasimilarity.github.io/chemlambda-gui/dynamic/toorings.html
- the Y combinator, written with the S, K, I combinators as Y = S (K (S I I)) (S (S (K S) K) (K (S I I))), applied to I. The out port of this graph (i.e. the root of the syntactic tree) is connected to a fanout node (FO), so the reduction of S (K (S I I)) (S (S (K S) K) (K (S I I))) I happens at the same time as the self-multiplication. http://chorasimilarity.github.io/chemlambda-gui/dynamic/yfrcombtreefo.html
- the Y-v combinator and a reduction related to it http://chorasimilarity.github.io/chemlambda-gui/dynamic/yv_combinator.html

Now, details of the process:

– I take a lambda term and I draw the syntactic tree

– this tree has as leaves the variables, bound and free. These are eliminated by two tricks, one for the bound variables, the other for the free ones. The bound variables are eliminated by replacing them with new arrows in the graph, going from one leg of a lambda abstraction node to the leaf where the variable appears. If there are more places where the same bound variable appears, then insert some fanout (FO) nodes. For the free variables do the same, by adding for each free variable a tree of FO nodes. If a bound variable does not appear anywhere else, then add a termination (T) node.

– in this way the graph which is obtained is no longer a tree. It is a mostly trivalent graph, with some free ends. It is an oriented graph. There is a free end corresponding to an "in" arrow for each free variable. There is only one end which corresponds to an "out" arrow, coming from the root of the syntactic tree.

– I give a unique name to each arrow in the graph

– then I write the "mol file" which represents the graph, as a list of nodes and the names of the arrows connected to the nodes (thus an application node A which has the left leg connected to the arrow "a", the right leg connected to the arrow "b" and the out leg connected to "c" is described by one line, "A a b c", for example).

OK, now I have the mol file, I run the scripts on it and then I look at the output.

What is the output?

The scripts take the mol file and transform it into a collection of associative arrays (that’s why I’m using awk) which describe the graph.
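For illustration, here is roughly what that parsing step looks like, sketched in Python instead of awk (the function name and the two arrays are my own choices, not the repo's):

```python
# Parse a mol file: each line is "NodeType arrow1 arrow2 ...".
# An arrow name appears twice in the file (once on an "in" port, once on
# an "out" port), which is how two ports get linked.
def parse_mol(text):
    nodes = []   # list of (node type, list of arrow names)
    ports = {}   # arrow name -> list of (node index, port position)
    for line in text.strip().splitlines():
        fields = line.split()
        if not fields:
            continue
        node_type, arrows = fields[0], fields[1:]
        idx = len(nodes)
        nodes.append((node_type, arrows))
        for position, arrow in enumerate(arrows):
            ports.setdefault(arrow, []).append((idx, position))
    return nodes, ports
```

An arrow name listed under two entries of `ports` is exactly one edge of the graph.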

Then they apply the algorithm which I call "stupid", because it really is stupidly simplistic: do a predetermined number of cycles, where in each cycle do the following:

– identify the places (called patterns) where a chemlambda rewrite is possible (these are pairs of lines in the mol file, so pairs of nodes in the graph)

– then, as you identify a pattern, flip a coin; if the coin gives "0" then block the pattern and propose a change in the graph

– when you finish all this, update the graph

– some rewrites involve the introduction of some 2-valent nodes, called "Arrow". Eliminate them in an inner cycle called the "COMB cycle", i.e. comb the arrows

– repeat
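The COMB cycle can be sketched as follows (Python for readability; the function name, the tuple representation and the assumption that Arrow nodes form no cycles are mine):

```python
# Eliminate 2-valent "Arrow" nodes: an Arrow with in-arrow a and out-arrow b
# is removed and b is renamed to a everywhere, gluing the two arrows into one.
def comb(nodes):
    renames = {}
    for node_type, arrows in nodes:
        if node_type == "Arrow":
            src, dst = arrows
            renames[dst] = src          # the out-name collapses onto the in-name
    def resolve(name):                  # follow chains of consecutive Arrows
        while name in renames:
            name = renames[name]
        return name
    return [(t, [resolve(a) for a in arrows])
            for t, arrows in nodes if t != "Arrow"]
```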

As you see, there is absolutely no care about the correctness of the intermediary graphs. Do they represent lambda terms? Generically no!

Are there any variables which are passed, or evaluations of terms which are done in some clever order (eager, lazy, etc.)? Not at all. There are no variables other than the names of the arrows of the graph, and these have the property that they appear twice in the mol file (once in an "in" port, once in an "out" port). When a pattern is replaced these names disappear, and the names of the arrows from the new pattern are generated on the fly, for example by a counter of arrows.

The scripts do the computation and then they stop. There is a choice to be made over the way of seeing the computation and the results.

One obvious choice would be to see the computation as a sequence of mol files, corresponding to the sequence of graphs. Then one could use another script to transform each mol file into a graph (via, say, a json file) and use some graph visualiser to see the graph. This was the choice in the first scripts I made.

Another choice is to make an animation of the computation, by using d3.js. Nodes which are eliminated are first freed of links and then they vanish, while new nodes appear, are linked with their ports, then linked with the rest of the graph.

This is what you see in the demos. The scripts produce a html file, which has inside a js script which uses d3.js. So the output of the scripts is the visualisation of the computation.

Recall that the algorithm of computation is random, therefore it is highly likely that different runs of the algorithm give different animations. In the demos you see one such animation, but you can take the scripts from the repo and make your own.

What is amazing is that they give the right results!

It is perhaps bizarre to look at the computation and not make any sense of it. What happens? Where is the evaluation of this term? Who calls whom?

Nothing of this happens. The algorithm just does what I explained. And since there are no calls, no evaluations, no variables passed from here to there, that means that you won’t see them.

That is because the computation does not work by the IT paradigm of signals sent by wires, through gates, but it works by what chemists call signal transduction. This is a pervasive phenomenon: a molecule enters in chemical reactions with others and there is a cascade of other chemical reactions which propagate and produce the result.

About what you see in the visualisations.

Because the graph is oriented, and because the legs of the trivalent nodes are differentiated (i.e. for example there might be a left.in leg, a right.in leg and an out leg, which for symmetry is described as a middle.out leg), I want to turn it into an unoriented graph.

This is done by replacing each trivalent node by 4 nodes, and each free end or termination node by 2 nodes each.

For trivalent nodes there will be one main node and 3 other nodes which represent the legs. These are called "ports". There is a colour-coded notation; the choice made is to represent the nodes A (application) and FO by a green main node, and L (lambda) and FI (fanin, which exists in chemlambda only) by red (actually in the demos this is a purple), and so on. The port nodes are coloured yellow for the "in" ports and blue for the "out" ports. The "left", "right", "middle" types are encoded by the radius of the ports.
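The expansion step can be sketched like this (a hypothetical Python rendering; the real scripts emit d3.js data, and the field names here are invented):

```python
# Replace one trivalent node by 4 nodes: a central node carrying the type,
# plus one "port" node per leg, keeping the leg's side and direction.
def expand_node(name, node_type, legs):
    # legs: e.g. [("left", "in"), ("right", "in"), ("middle", "out")]
    center = {"id": name, "kind": node_type}
    port_nodes = [{"id": f"{name}.{side}", "dir": direction, "of": name}
                  for side, direction in legs]
    return [center] + port_nodes
```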

__________________________________________________

Filed under: Uncategorized Tagged: artificial chemistry, chemlambda, demos, lambda calculus

I want to understand how a single molecule interacts with others, chemically. You have to agree that this is a worthy goal.

What I say is this. By using a collection of made-up molecules and made-up chemical reactions, I proved that with the stupid deterministic algorithm I can do anything, and by experiment it seems that if I design the initial molecule well then I can do anything I propose with the stupid random algorithm (a molecule which randomly encounters enzymes which rewrite it by chemical reactions). For me, the molecule is not the process; it is just a bunch of atoms and bonds. But I proved I can do anything with it, without any lab supervision. Which is relevant because any real cell does that. It has no supervision, no goals, no understanding; it is nothing else than a collection of chemicals which interact randomly.

My hypothesis is the following. There is a transformation from the made-up chemlambda molecules which TRANSFORMS:

– node into real molecule

– port into real molecule

– bond into real molecule

and some other real molecules, called here "enzymes", one per type of graph rewrite,

such that

– the graph rewrite G which replaces a configuration LT of two nodes and 1 bond with a configuration RT TRANSFORMS into the chemical reaction between the enzyme G and the transformation of LT into real chemicals, which gives the transformation of RT into real chemicals plus the enzyme G (perhaps carrying away some other reaction products, so as to have conservation of the number of atoms)

The argument for this hypothesis is that the rewrites are so simple, compared with the real chemistry of biomolecules, that such reactions have to exist.

This is explained in the Molecular computers article:

http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html

Suppose that the hypothesis is confirmed, either by identifying the TRANSFORM from scratch (i.e. by using chemistry knowledge to identify classes of reactions and chemicals which can model chemlambda), or by finding the enzymes G and the real molecules corresponding to node, port and bond in some fundamental biochemical processes (that would be even more wonderful).

Suppose this. What then?

Then, say I have the goal to design a molecule which does something inside a cell, when injected in the body. It does it by itself, in the cell medium. What it does can always (or in principle) be expressed as an algorithm, as a computation.

I use chemlambda and the TRANSFORM to design the molecule and check that, once I have it, it does the job. It is of course a problem to build it in reality, but for this there is Craig Venter's printer, the digital biological converter https://youtu.be/DVL1nL3SU6I .

So I print it and that’s all. When injected in the body, once arrived in the cell, it does the job.

Other possibilities would open up in case some formalism like chemlambda (i.e. using individual molecules and rewrites, along with trivial random algorithms) is identified in real biochemistry. This would help the understanding of biochemical processes enormously, because instead of working empirically, as now, at the level of functions of molecules (knowing well that the same molecule does different things in different contexts and that the molecule-function association is very fragile in biology), we might work inversely, from using functions as black boxes to being able to build functions. Even to go outside functions and understand chemistry as computation directly, not only as a random medium for encoding our theoretical notions of computation.

See more about this at the chemlambda index

http://chorasimilarity.github.io/chemlambda-gui/index.html

_________________________________________________________________

Filed under: Uncategorized Tagged: algorithmic chemistry, artificial chemistry, biological computing, chemlambda, Craig Venter

One computer, WHILE IT WORKS, is multiplied into 3 computers, each of which continues to work. This feat is done without any higher control, and no conflicts ever appear. All the computers work asynchronously (mimicked here by randomness). Moreover, eventually they arrive at reading and writing on the same tape.

There are no calls, nothing you have seen before. Everything is local, no matter how you slice it into separate parts.

_________________________________________

Filed under: distributed GLC Tagged: busy beaver, chemlambda, randomness, Turing machine

What happens when you apply the Church number 3 to a busy beaver? You get 3 busy beavers on the same tape.

Details will be added into the article Turing machines, chemlambda style.

If you want to experiment, then click on "fork me on github" and copy the gh-pages branch of the repo. Then look in the dynamic folder for the script moving_random_metabo_bb.sh. In a terminal type "bash moving_random_metabo_bb.sh", then type "church3bb.mol". You shall get the file church3bb.html, which you can see by using a js enabled browser.

The sh script calls an awk script, which produces the html file. The awk script is check_1_mov2_rand_metabo_bb.awk. Open it with a text editor and you shall see at the beginning all kinds of parameters which you can change (before calling the sh script), so that you may alter the duration, the speed, change between deterministic and random algorithms.

Finally, you also need a mol file to play. For this demo has been used the mol file church3bb.mol. You can also open it with a text editor and play with it.

**UPDATE:** Will tweak it more in the next days, but the idea I want to communicate is that a TM can be seen as chemistry, like in chemlambda, and it can interact very well with the rest of the formalism. So you have these two pillars of computation on the same footing, together, despite the impression that they are somehow very different, one like hardware and the other like software.

______________________________________

Filed under: Uncategorized

Once again: why do chemists try to reproduce silicon computers? Because that’s the movie.

- The two pillars of computation are:
- the Turing Machine (Alan Turing 1936)
- untyped lambda calculus (Alonzo Church 1936)

- At first sight they look very different. Therefore, if we want to compare them, we have to reformulate them in ways which are similar. Rewrite systems are appropriate here.

- The Turing Machine (TM) is a model of computation which is well known as a formalisation of what a computer is. It is a machine which has some internal states (from a set S) and a head which reads/writes symbols (from a set A) on a tape. The tape is seen as an infinite word made of letters from A. The set A has a special letter (call it "b", for blank), and the infinite words which describe tapes have to be such that all but a finite number of letters of the word are equal to "b". Imagine a tape, infinite in both directions, which has symbols written on it, with "b" representing an empty space. The tape has only a finite part of it filled with letters from A other than the blank letter.

- The action of the machine depends on the internal state and on the symbol read by the head. It is therefore a function of the internal state of the machine (an element of S) and the letter read from the tape (an element of A): the machine writes a letter from the alphabet A in the place of the letter which has been read, changes its internal state, and the head moves one step along the tape, to the left (L) or right (R), or does not move at all (N).

- The TM can be seen as a rewrite system.

- For example, one could see a TM as follows. (Pedantically, this is a multi-headed TM without internal states; the only interest of this distinction is that it raises the question whether there is really any meaning in discerning internal states from tape symbols.) We start from a set (or type) of internal states S. Such states are denoted by A, B, C (thus exhibiting their type by the type of the font used). There are 3 special symbols: < is the symbol of the beginning of a list (i.e. word), > is the symbol of the end of a list (word), and M is the special symbol for the head move (or of the type associated to head moves). There is an alphabet A of external states (i.e. tape symbols), with b (the blank) being in A.

- A tape is then a finite word (list) of one of the forms < w A w' > , < w M A w' > , < w A M w' > , where A is an internal state and w and w' are finite words written in the alphabet A, possibly empty.

- A rewrite replaces a left pattern (LT) by a right pattern (RT); these are denoted as LT – – > RT . Here LT and RT are sub-words of the tape word. It is supposed that all rewrites are context independent, i.e. LT is replaced by RT regardless of the place where LT appears in the tape word. The rewrite is called "local" if the lengths (i.e. numbers of letters) of LT and RT are bounded a priori.

- A TM is given as a list of Turing instructions, which have the form (current internal state, tape letter read, new internal state, tape letter written, move of the head). In the terms explained here, all this can be expressed via local rewrites.
- Rewrites which introduce blanks at the extremities of the written tape:
- < A – – > < b A
- A > – – > A b >

- Rewrites which describe how the head moves:
- A M a – – > a A
- a M A – – > A a

- Turing instructions rewrites:
- a A c – – > d B M c , for the Turing instruction (A, a, B, d, R)
- a A c – – > d M B c , for the Turing instruction (A, a, B, d, L)
- a A c – – > d B c , for the Turing instruction (A, a, B, d, N)

- Together with the algorithm "at each step apply all the rewrites which are possible, else stop", we obtain the deterministic TM model of computation. For any initial tape word, the algorithm explains what the TM does to that tape. Other algorithms are of course possible. Before mentioning some very simple variants of the basic algorithm, let's see when it works.

- If we start from a tape word as defined here, there is never a conflict of rewrites. This means that it is never the case that two LTs from two different rewrites overlap. It might be the case, though, if we formulate some rewrites a bit differently. For example, suppose that the Turing rewrites are modified to:

- a A – – > d B M , for the Turing instruction (A, a, B, d, R)
- a A – – > d M B , for the Turing instruction (A, a, B, d, L)

- Therefore, the LT of the Turing rewrites is no longer of the form "a A c" but of the form "a A". Then it may enter into conflict with the other rewrites, as in the cases:
- a A M c where two overlapping rewrites are possible
- Turing rewrite: a A M c – – > d M B M c   which will later produce two possibilities for the head moves rewrites, due to the string M B M
- head moves rewrite: a A M c – – > a c A   which then produces a LT for a Turing rewrite for c A, instead of the previous Turing rewrite for a A

- a A > where one may apply a Turing rewrite on a A, or a blank rewrite on A >

- The list is not exhaustive. Let's turn back to the initial formulation of the Turing rewrites and instead change the definition of a tape word. For example, suppose we allow multiple TM heads on the same tape; more precisely, suppose that we accept initial tape words of the form < w1 A w2 B w3 C … wN >. Then we shall surely encounter conflicts between head move rewrites, for patterns such as a A M B c.

- The simplest solution for resolving these conflicts is to introduce a priority of rewrites. For example, we may impose that blank rewrites take precedence over head move rewrites, which take precedence over Turing rewrites. More such structure can be imposed (like some head move rewrites having precedence over others). New rewrites may even be introduced, for example rewrites which allow multiple TMs on the same tape to switch places.

- Let's see an example: the 2-symbol, 3-state busy beaver. Following the conventions from this work, the tape letters (i.e. the alphabet A) are "b" and "1", and the internal states are A, B, C, HALT. (The state HALT may get special treatment, but this is not mandatory.) The rewrites are:
- Rewrites which introduce blanks at the extremities of the written tape:
- < X – – > < b X for every internal state X
- X > – – > X b > for every internal state X

- Rewrites which describe how the head moves:
- X M a – – > a X , for every internal state X and every tape letter a
- a M X – – > X a , for every internal state X and every tape letter a

- Turing instructions rewrites:
- b A c – – > 1 B M c , for every tape letter c
- b B c – – > b C M c , for every tape letter c
- b C c – – > 1 M C c , for every tape letter c
- 1 A c – – > 1 HALT M c , for every tape letter c
- 1 B c – – > 1 B M c , for every tape letter c
- 1 C c – – > 1 M A c , for every tape letter c

- We can enhance this by adding the priority of rewrites, for example in the previous list, any rewrite has priority over the rewrites written below it. In this way we may relax the definition of the initial tape word and allow for multiple heads on the same tape. Or for multiple tapes.

- Suppose we put the machine to work with an infinite tape with all symbols being blanks. This corresponds to the tape word < A >. Further are the steps of the computation:

- < A > – – > < b A >
- < b A > – – > < b A b >
- < b A b > – – > < 1 B M b >
- < 1 B M b > – – > < 1 b B >
- < 1 b B > – – > < 1 b B b >
- < 1 b B b > – – > < 1 b C M b >
- < 1 b C M b > – – > < 1 b b C >
- < 1 b b C > – – > < 1 b b C b >
- < 1 b b C b > – – > < 1 b 1 M C b >
- < 1 b 1 M C b > – – > < 1 b C 1 b >
- < 1 b C 1 b > – – > < 1 1 M C 1 b >
- < 1 1 M C 1 b > – – > < 1 C 1 1 b >
- < 1 C 1 1 b > – – > < 1 M A 1 1 b >
- < 1 M A 1 1 b > – – > < A 1 1 1 b >
- < A 1 1 1 b > – – > < b A 1 1 1 b >
- < b A 1 1 1 b > – – > < 1 B M 1 1 1 b >
- < 1 B M 1 1 1 b > – – > < 1 1 B 1 1 b >
- < 1 1 B 1 1 b > – – > < 1 1 B M 1 1 b >
- < 1 1 B M 1 1 b > – – > < 1 1 1 B 1 b >
- < 1 1 1 B 1 b > – – > < 1 1 1 B M 1 b >
- < 1 1 1 B M 1 b > – – > < 1 1 1 1 B b >
- < 1 1 1 1 B b > – – > < 1 1 1 1 B M b >
- < 1 1 1 1 B M b > – – > < 1 1 1 1 b B >
- < 1 1 1 1 b B > – – > < 1 1 1 1 b B b >
- < 1 1 1 1 b B b > – – > < 1 1 1 1 b C M b >
- < 1 1 1 1 b C M b > – – > < 1 1 1 1 b b C >
- < 1 1 1 1 b b C > – – > < 1 1 1 1 b b C b >
- < 1 1 1 1 b b C b > – – > < 1 1 1 1 b 1 M C b >
- < 1 1 1 1 b 1 M C b > – – > < 1 1 1 1 b C 1 b >
- < 1 1 1 1 b C 1 b > – – > < 1 1 1 1 1 M C 1 b >
- < 1 1 1 1 1 M C 1 b > – – > < 1 1 1 1 C 1 1 b >
- < 1 1 1 1 C 1 1 b > – – > < 1 1 1 1 M A 1 1 b >
- < 1 1 1 1 M A 1 1 b > – – > < 1 1 1 A 1 1 1 b >
- < 1 1 1 A 1 1 1 b > – – > < 1 1 1 HALT M 1 1 1 b >
- < 1 1 1 HALT M 1 1 1 b > – – > < 1 1 1 1 HALT 1 1 b >

- At this stage there are no possible rewrites; otherwise said, the computation stops. Remark that the priority of rewrites imposed a path of rewrite applications. Also, at each step there was only one rewrite possible, even if the algorithm does not ask for this.
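The rewrite system can be checked mechanically. The sketch below (Python, mine, not part of the original work) applies at each step the highest-priority rule at its leftmost match; since at each step only one rewrite was possible, this reproduces the trace:

```python
# The busy beaver rewrite system from the list above, with rule priority
# given by the order in which the rules are appended.
def make_rules():
    states, letters = ["A", "B", "C", "HALT"], ["b", "1"]
    rules = []
    for X in states:                          # blank rewrites
        rules.append((["<", X], ["<", "b", X]))
        rules.append(([X, ">"], [X, "b", ">"]))
    for X in states:                          # head move rewrites
        for a in letters:
            rules.append(([X, "M", a], [a, X]))
            rules.append(([a, "M", X], [X, a]))
    for c in letters:                         # Turing instruction rewrites
        rules.append((["b", "A", c], ["1", "B", "M", c]))
        rules.append((["b", "B", c], ["b", "C", "M", c]))
        rules.append((["b", "C", c], ["1", "M", "C", c]))
        rules.append((["1", "A", c], ["1", "HALT", "M", c]))
        rules.append((["1", "B", c], ["1", "B", "M", c]))
        rules.append((["1", "C", c], ["1", "M", "A", c]))
    return rules

def step(tape, rules):
    for pattern, replacement in rules:        # priority order
        for i in range(len(tape) - len(pattern) + 1):
            if tape[i:i + len(pattern)] == pattern:
                return tape[:i] + replacement + tape[i + len(pattern):]
    return None                               # no rewrite possible: stop

def run(tape, max_steps=10000):
    rules = make_rules()
    for _ in range(max_steps):
        nxt = step(tape, rules)
        if nxt is None:
            break
        tape = nxt
    return tape
```

Starting from the tape word < A >, the run stops on < 1 1 1 1 HALT 1 1 b >, the last line of the trace above.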

- More possibilities appear if we see the tape words as graphs. In this case we pass from rewrites to graph rewrites. Here is a proposal for this.

- I shall use the same kind of notation as in the article mentioned above. It goes like this, explained for the busy beaver TM example. We have 9 symbols, which can be seen as nodes in a graph:
- < , which is a node with one "out" port. Use the notation FRIN out
- > , which is a node with one "in" port. Use the notation FROUT in
- b, 1, A, B, C, HALT, M , which are nodes with one "in" and one "out" port. Use the notation: name of the node, in, out

- The rule is to connect “in” ports with “out” ports, in order to obtain a tape word. Or a tape graph, with many busy beavers on it. (TO BE CONTINUED…)
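As a toy illustration of the connection rule, a tape word can be turned into such a graph by chaining "out" ports to "in" ports (a hypothetical Python sketch; the node and edge representations are made up):

```python
# Encode a tape word as a graph: "<" becomes a FRIN node, ">" a FROUT node,
# every other symbol keeps its name; edge (i, i+1) connects the "out" port
# of node i to the "in" port of node i+1.
def tape_graph(word):
    special = {"<": "FRIN", ">": "FROUT"}
    nodes = [special.get(symbol, symbol) for symbol in word]
    edges = [(i, i + 1) for i in range(len(word) - 1)]
    return nodes, edges
```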

Filed under: Uncategorized Tagged: artificial chemistry, chemlambda, graph rewriting systems, Turing machine ]]>

- a researcher makes public (i.e. “publishes”) a body of work, call it W. The work contains text, links, video, databases, experiments, anything. By making it public, the work is claimed to be valid, provided that the external resources used (as other works, for example) are valid. In itself, validation has no meaning.
- a second part (anybody) can also publish a validation assessment of the work W. The validation assessment is a body of work as well, and thus is potentially submitted to the same validation practices described here. In particular, by publishing the validation assessment, call it W1, it is also claimed to be valid, provided the external resources (other works used, excepting W) are valid.
- the validation assessment W1 makes claims of the following kind: provided that the external works A, B, C are valid, then this piece D of the work W is valid because it has been reproduced in the work W1. Alternatively, under the same hypothesis about the external works, the work W1 claims that another piece E of the work W cannot be reproduced there.
- the means for reproducibility have to be provided by each work. They can be proofs, programs, experimental data.
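The relative character of validation can be made concrete with a tiny dependency-graph sketch. The works W, A, B below are hypothetical examples of mine: a work's validity can be grounded only if its chain of external resources contains no cycle.

```python
# Sketch: relative validation as a dependency graph (an illustration,
# not a specification). deps maps a work to the external works its
# validity is conditioned on.
deps = {
    "W": ["A", "B"],  # W is valid provided A and B are valid
    "A": [],          # A depends on nothing external
    "B": ["W"],       # a cycle: B's validity assumes W
}

def groundable(work, deps, seen=()):
    # a work's validity grounds out only if no cycle is reached
    if work in seen:
        return False  # cyclic dependency: validity stays only relative
    return all(groundable(d, deps, seen + (work,)) for d in deps.get(work, []))
```

In the cyclic case the validity of W is well defined only relative to B, and vice versa: no absolute ground exists.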

As you can see the validation can be only relative, not absolute. I am sure that scientific results are never amenable to an acyclic graph of validations by reproducibility. Compared to peer review, which is only a social claim that somebody from the guild checked it, validation through reproducibility is much more, even if it does not provide means to absolute truths. What is preferable: to have a social claim that something is true, or to have a body of works where “relative truth” dependencies are exposed? This is moreover technically possible, in principle. However, this is not easy to do, at least because:

- traditional means of publication and its practices are based on social validation (peer review)
- there is this illusion that there is somehow an absolute semantical categorification of knowledge, pushed forward by those who are technically able to implement a validation reproducibility scheme at a large scale.

**UPDATE:** The mentioned illusion is related to outdated parts of the cartesian method. It is therefore a manifestation of the “cartesian disease”.

In what follows I use the post More on the cartesian method and its associated disease. In that post the cartesian method is parsed like this:

- (1a) “never to accept anything for true which I did not clearly know to be such”
- (1b) “to comprise nothing more in my judgement than what was presented to my mind”
- (1c) “so clearly and distinctly as to exclude all ground of doubt”

- (2a) “to divide each of the difficulties under examination into as many parts as possible”
- (2b) “and as might be necessary for its adequate solution”

- (3a) “to conduct my thoughts in such order that”
- (3b) “by commencing with objects the simplest and easiest to know, I might ascend […] to the knowledge of the more complex”
- (3c) “little and little, and, as it were, step by step”

- (3d) “assigning in thought a certain order even to those objects which in their own nature do not stand in a relation of antecedence and sequence”

Let’s take several researchers who produce works, some works related to others, as explained in the validation procedure.

Differently from the time of Descartes, there are plenty of researchers who think at the same time, and moreover the body of works they produce is huge.

Every piece of the cartesian method has to be considered relative to each researcher and this is what causes many problems.

Parts (1a), (1b), (1c) can be seen as part of the validation technique, but with the condition to see “true” and “exclude all grounds of doubt” as relative to the reproducibility of the work W1 by a reader who tries to validate it up to external resources.

Parts (2a), (2b) are clearly researcher dependent; in an interconnected world these parts may introduce far more complexity than the original research work W1.

Combined with (1c), this leads to the illusion that the algorithm which embodies the cartesian method, when run in a decentralized and asynchronous world of users, HALTS.

There is no ground for that.

But the most damaging is (3d). First, every researcher embeds a piece of work into a narrative in order to explain the work. There is nothing “objective” about that. In a connected world, with the help of Google and the like, which impose or seek global coherence, the parts (3d), (2a) and (2b) transform the cartesian method into a global echo chamber. The management of work bloats and spills over the work itself, and at the same time the cartesian method always HALTS, but for no scientific reason at all.

__________________________________

Filed under: Uncategorized Tagged: cartesian disease, cartesian method, cost of knowledge, peer community, peer review, reproducibility, validation ]]>

I took the lambda term from there and I modified slightly the part which describes the IFTHEN (figured by an arrow in the wiki explanation)

IFTHEN a b appears in chemlambda as

A 1 a2 out

A a1 b 1

FO a a1 a2

which, if you think about it a little, behaves like IFTHENELSE a b a.

Once I built a term like the “r” from the wiki explanation, instead of using rr, I made a graph by the following procedure:

– take the graph of r applied to something (i.e. suppose that the free out port of r is “1” then add A 1 in out)

– make a copy of that graph (i.e. in mol notation, duplicate the mol file of the previous graph and change the port variables — here by adding the “a” postfix)

– then apply one to the other (i.e. modify

A 1 in out

A 1a ina outa

into

A 1 outa out,

A 1a out outa)
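The copy step of the procedure can be sketched as follows. This is a hypothetical helper of mine, not part of the chemlambda scripts: it duplicates a mol file and renames every port variable with a postfix so the two copies stay disjoint.

```python
# Sketch: duplicate a mol file, renaming port variables with a postfix.
# A mol line is "NODE port1 port2 ..." ; only the ports get renamed.
def copy_mol(mol_lines, postfix="a"):
    out = []
    for line in mol_lines:
        parts = line.split()
        node, ports = parts[0], parts[1:]
        out.append(" ".join([node] + [p + postfix for p in ports]))
    return out
```

After copying, the two graphs are applied to each other by hand-editing the A lines, as described above.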

The initial mol file is:

A 1 outa out

A 1a out outa

L 2 3 1

A 4 7 2

A 6 5 4

FO 8 6 7

FO 3 9 10

A 9 10 8

L 2a 3a 1a

A 4a 7a 2a

A 6a 5a 4a

FO 8a 6a 7a

FO 3a 9a 10a

A 9a 10a 8a

The parameters are: cycounter=8; wei_FOFOE=0; wei_LFOELFO=0; wei_AFOAFOE=0; wei_FIFO=0; wei_FIFOE=0; wei_AL=0;

i.e. it is a deterministic run of 8 steps.

Done in chemlambda.

______________________________________________________________

Filed under: Uncategorized Tagged: chemlambda, Curry's paradox ]]>

[update: github.io version]

The chemlambda project proposes the following. Chemlambda is a model of computation based on individual molecules, which compute alone, by themselves (in a certain well defined sense). Everything is formulated from the point of view of ONE molecule which interacts randomly with a family of enzymes.

So what?

Bad detail: chemlambda is not a real chemistry, it’s artificial.

Good detail: it is Turing universal in a very powerful sense. It does not rely on boolean gates kind of computation, but on the other pillar of computation which led to functional programming: lambda calculus.

So instead of molecular assemblies which mimic a silicon computer hardware, chemlambda can do sophisticated programming stuff with chemical reactions. (The idea that lambda calculus is a sort of chemistry appeared in the ALCHEMY (i.e. algorithmic chemistry) proposal by Fontana and Buss. Chemlambda is far more concrete and simple than Alchemy, and different in principle, but it nevertheless owes to Alchemy the idea that lambda calculus can be done chemically.)

From here, the following reasoning.

(a) Suppose we can make this chemistry real, as explained in the article Molecular computers. This looks reasonable, based on the extreme simplicity of chemlambda reactions. The citizen science part is essential for this step.

(b) Then it is possible to take further Craig Venter’s Digital Biological Converters idea (the converters already exist) and enhance it to the point of being able to “print” autonomous computing molecules. Which can do anything (amenable to a computation, so literally anything). Anything, in the sense that they can do it alone, once printed.

The first step of such an ambitious project is a very modest one: identify the ingredients in real chemistry.

The second step would be to recreate with real chemistry some of the examples which have been already shown as working, such as the factorial, or the Ackermann function.

Already this second step would be a huge advance over the actual state of the art in molecular computing. Indeed, compare a handful of boolean gates with a functional programming like computation.

If it is, for example, a big deal to build some simple assemblies of boolean gates with DNA, then surely it is a bigger deal to be able to compute the Ackermann function (which, unlike the factorial, is not primitive recursive) as the result of a random chemical process acting on individual molecules.
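As a reference point, here is the Ackermann function mentioned above, in its standard textbook form:

```python
# The Ackermann function: total, computable, but not primitive recursive.
def ack(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))
```

Even for tiny arguments the recursion depth explodes, which is what makes it a strong stress test for any model of computation.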

It looks perfect for a citizen science project, because what is missing is a human distributed search in existing databases, combined with a call for the realization of possibly simple proof-of-principle chemical experiments based on an existing simple and rigorous formalism.

Once these two steps are realized, then the proof of principle part ends and more practical directions open.

Nobody wants to compute factorials with chemistry, silicon computers are much better for this task. Instead, chemical tiny computers as described here are good for something else.

If you examine what happens in this chemical computation, then you realize that it is in fact a means towards self-building of chemical or geometrical structure at the molecular level. The chemlambda computations are done not with numbers, or bits, but by structure processing. And this structure processing is the real goal!

Universal structure processing!

In the chemlambda vision page this is taken even further, towards the interaction with the future Internet of Things.

Filed under: Uncategorized Tagged: algorithmic chemistry, autonomous computing molecules, biological computing, chemlambda, digital biological converters, molecular computers ]]>

Craig Venter’s Digital Biological Converter

**which could print** autonomous computing molecules.

That’s more than just teleportation.

Suppose you want to do anything in the real world. You can design a microscopic molecular computer for that, by using chemlambda. Send it by the web to a DBC like device. Print it. Plant it. Watch it doing the job.

Anything.

___________________________________________________________

Filed under: Uncategorized Tagged: biological teleportation, chemlambda, Craig Venter, Digital Biological, digital biological converter, molecular computer, molecular computers, teleportation ]]>

-starting from this issue https://github.com/PeerJ/paper-now/issues/2 which proposes hypothes.is

– via the roadmap https://hypothes.is/roadmap/

– via the issue https://github.com/hypothesis/vision/issues/87

this:

Read and spread it if you think it is interesting. I do, because of what I want: a realization of the idea of an article which runs in the browser. They already formulated it, and they have been close to solving this since 2013 or earlier, in a vastly more precise and interesting way than what I described, without being aware of their work, in the half-joking posts The shortest OA and new forms of publication question, The journal of very short papers and The journal of uncalled advices.

I’m a content producer and I have a direct interest into this. The first fully autonomous article I have, which runs in the browser, is Molecular computers. I want to see it reviewed, annotated, discussed.

I don’t want to use time and space by giving all the answers to all possible questions, leaving them instead for eventual replies to interested people. This way the new content is maximized, instead of wasting time for the benefit of the lazy reader who does not want to understand the new, but wants only to find confirmation of his ideas.

[this is http://abstrusegoose.com/354 ]

Filed under: Uncategorized Tagged: cost of knowledge, hypothes.is, JVSP, micropublications ]]>

This is a short and hopefully clear explanation, targeted for chemists (or bio-hackers), for the molecular computer idea.

http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular.html


I would be grateful for any input, including reviews, suggestions for improvements, demands of clarifications, etc.

(updated to fit phones formats)

____________________________________________

Filed under: Uncategorized Tagged: artificial chemistry, bio-hacker, chemistry, chemlambda, molecular computer ]]>

The moves, or graph rewrites, are visualised at the moves page.

The expression of the moves in mol format can be inferred from the main script, but perhaps this is tedious, therefore I shall give them here directly.

The graphical elements are L, A, FI, FO, FOE, Arow, T, FRIN, FROUT, each with a certain number of ports, as explained in the mentioned index page. Each port has two types:

- one can be “in” or “out”
- the other can be “middle”, “left” or “right”

and there is a convention of writing the ports of any graphical element in a given order.

Here I shall write a graphical element in the following form. Instead of the line “L a b c” from the mol file I shall write “L[a,b,c]”, and the same for all other graphical elements. Then the mol file will be seen as a commutative and associative product of the graphical elements.

This goes back to the initial proposal by Louis Kauffman, who tried to use the formal reduction from Mathematica for the graphic lambda calculus.

OK?

Now, we can improve the notation L[a,b,c] by thinking about a, b, c as indices of a tensor, of course. This is reasonable because in any mol file which represents a chemlambda molecule, any port variable appears at most twice, so it is like a summing variable.
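That “summing variable” property is easy to check mechanically; here is a sketch (my own helper, not part of the chemlambda scripts):

```python
# Sketch: in a mol file representing a chemlambda molecule, every port
# variable must appear at most twice (like a repeated, summed index).
from collections import Counter

def port_counts(mol_lines):
    c = Counter()
    for line in mol_lines:
        for p in line.split()[1:]:  # skip the node name
            c[p] += 1
    return c

def is_valid_mol(mol_lines):
    return all(n <= 2 for n in port_counts(mol_lines).values())
```

A port appearing once is a free port; a port appearing twice is an internal edge, exactly like a contracted tensor index.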

Let’s do this by writing

in order to emphasize that “a” has type “in” and “b”, “c” have type “out”, but otherwise preserving the order a, b, c in the notation (this order allows one to infer, from the symbol “L” of the graphical element, the other types of the ports, namely that for the L element “a” has also type “middle”, “b” has type “left” and “c” has type “right”).

Same for all other elements.

Here are the moves, then, with the convention that “U=V” means “transform U into V”.

**COMB.** Denote by any graphical element with an out port “a”, then

**L-A (i.e. the BETA move).**

**FI-FOE(i.e. the FAN-IN move).**

**L-FO, L-FOE (aka DIST-L).**

and the same for L-FOE, where the left hand side … is replaced by and the rest stays the same.

**A-FO, A-FOE (aka DIST-A).**

and the same for A-FOE, where the left hand side … is replaced by and the rest stays the same.

**FI-FO (aka DIST-FI).**

**FO-FOE (aka DIST-FO).**

**PRUNING MOVES.**

___________________________

**The question, of course, is: if we see the moves as equalities and the graphical elements as tensors in a vector space, then how many solutions exist for the moves equations (perhaps eliminating the PRUNING moves, or by seeing as an element of the vector space)?**

________________________________________________

Filed under: Uncategorized Tagged: chemlambda ]]>

There are interesting reactions to this:

- Journal publishes 200-word papers by Chris Woolston in Nature
- Is this the ‘Twitter’ of STM publications? by Steve Odart in Ixxus

OK, what is this, in just a few words?

From the About page of the journal:

The Journal of Brief Ideas is a research journal, composed entirely of ‘brief ideas’. The goal here is to provide a place for short ideas to be described – in 200 words or less – for these ideas to be archived (courtesy of Zenodo), searchable and citable.

_______________________

I submitted the following: Build a molecular computer.

A visualisation for the Ackermann function here:

_______________________

In my opinion this is part of the exploration of new ways to communicate, do collaborative work and explore in the research world.

The article format is obsolete, even if put in digital form. More is needed; one of the ideas is to eventually arrive at running the article in the browser.

It is very encouraging to see that in only a few days two excellent, **different initiatives** concerning new ways and new meanings of publication appeared: the Journal of Brief Ideas and PeerJ/paper-now.

_______________________

This new journal reminds me of the proposal of a Journal of very short papers.

The idea behind JVSP was to use the legacy format for journals in order to peer-review articles from OA repositories, like arXiv.

After writing that article I got replies, resulting in an update which I reproduce here:

” Helger Lipmaa points to the journal “Tiny ToCS“. However, the real purpose of JVSP is not to be brief, but to create a “subversive”, but with **rigorous and solid results**, old-school like journal for promoting free open access. Another journal could be “The RXI Journal of Mathematics”, which is as rigorous as any journal, only it asks to have at least 3 occurrences of the string ‘rxi’ in the text. David Roberts discusses fitting a paper into a **refereed tweet**. It is an interesting idea; some statements are too long, but some of them are not. Off the top of my head, here is one: “A Connected Lie Group Equals the Square of the Exponential Image, Michael Wüstner, Journal of Lie Theory. Volume 13 (2003) 307–309 Proof: http://emis.math.ca/journals/JLT/vol.13_no.1/wuestla2e.pdf “, and here is another which also satisfies the requirements of JVSP: “W is a monad, David Roberts, Theorem: W:sGrp(S)->sS lifts to a monad. Proof: http://arxiv.org/abs/1204.4886 “, which ~~will~~ appeared in the New York Journal of Mathematics, an open access journal.” [my comment: in a 10 pages long form which obsoletes arXiv:1204.4886] Interesting that the Twitter idea appears here also.

But this is not about Twitter, nor about peer-reviews. It is a NEW idea.

The Journal of Brief Ideas makes the excellent proposal to attach DOI to ideas, in a short format (up to 200 words), but with enough place for using the power of the Net.

________________________

Can’t resist pointing also to the Journal of Uncalled Advices; will it appear some day?

________________________________________________________________________

Filed under: Uncategorized Tagged: cost of knowledge, Journal of Brief Ideas, JVSP, Open Science, Twitter ]]>

Same in open access. How many blends of open access are there? It is unbelievable that there is anybody, excepting those with a (conflict of) interest(s) in it, who believes that Gold OA is anything else than a stupid idea. Dress it as you wish, but it is still the idea of taking money from the authors, for doing nothing, because anyway you can’t take money from the readers anymore. Still, if you don’t know, there are blends and blends and blends of Gold OA, and politicians discuss at length what their relative advantages are and why we can’t change fast this useless publication service.

Why can’t we change it? Because of the politicians from the academic management. How will they avoid being accountable for their decisions if they can’t hide behind numbers? Again, people can’t be that stupid when they pretend that the number of articles and the places where they appear are relevant. So there has to be something else: hidden interest. They are the product of the system. To say that, because it sits under the publishers’ locks, their life work is as good as crap, is offensive to them. They made bad choices and they want to impose them on you.

It is a society effect. The bosses are in conflict of interests or straightly corrupt. So they invent rules for you, rules which they change from time to time, but every time they avoid looking at the root of the evil. Which is: they are, and they try to transform you, researcher, passionate student, into an interchangeable unit of thinking person. They are dust and they want to transform you into dust.

It happened before. Favourite example: the painters in France, before the impressionist revolution; read more about this comparison here. Were they stupid to fight over the number of paintings accepted in academic exhibitions, or the HEIGHT at which the paintings were hung during those exhibitions? No, the products of the system who were in the lead were interested, and the rest were forced by the choice between their passion and their career.

So, say NO to politicians.

Live your passion.

Just to prove that I am not a hothead who writes hot air, I say what I did. My first paper on arXiv was in 2001. Since then I put almost everything there and I refuse, if possible, to publish elsewhere, because I don’t want to support the system. Of course I am not crazy enough to impose my beliefs on co-authors, but still in these cases I try to use arXiv as well.

I told a bit about the effects of this choice on my career.

What I think now.

That OA is already old thing.

That discussions about who has the copyright are sterile at best (and possibly self-interested), because it is clear that DRM trumps the licence; see Use DRM for academics, enjoy watching them fight for the copyright.

So, please tell, what are you discussing about?

Now I think that the article format will change, and this is part of an ongoing revolution which went unnoticed by the politicians who live around OA. It’s Github, already 20 times bigger than arXiv (I give the example of arXiv because it is the greatest in OA, in my opinion, and because I’m familiar with it; however, look at this wonder of Github).

That is why I support and will use PeerJ/paper-now, read this.

As concerns publishers, I don’t wish they disappear: first because it’s not my problem, and second because it’s obvious we can reuse their infrastructure.

It is not good to wish bad things on people (except politicians, maybe).

But it is obvious that publishing is not the service which has value. Peer review is needed, pre- and post-publication. What they could do is to offer the service of organizing this peer review.

Another, related and perhaps bigger opportunity is the management of scientific data, be they articles, experimental data, or programs. This is related to the idea of running the article in the browser, sometime soon. This needs an infrastructure (no! publisher, don’t try again) which is not based on artificial scarcity, but on overwhelming abundance.

Otherwise I’m good, thank you and I am still looking for people with enough guts and funds to make big things, like molecular computers, changing the IoT, understanding life, i.e. the chemlambda project.

_____________________________________________

Filed under: Uncategorized Tagged: cost of knowledge ]]>

If you are interested please read and play with both demos.

The purpose of the demo is simple: if anybody would identify real chemical reactions between small molecules and (invisible in the demo) other molecules, one per move (I call them “moves enzymes”), then it would be possible, in principle, to design molecules which compute, by themselves, without any laboratory management, by these chemical reactions. Any such molecule should be regarded as a program which executes itself. The result of the computation is encoded in the shape of the molecule, not in the number of (more or less arbitrary) chosen molecules.

See also the post Molecular computers.

_________________________________________________________________

Filed under: Uncategorized Tagged: chemlambda, demos, molecular computer, molecular computer demo ]]>

We may understand a cell as a chemical and physics based program which runs itself. The cell is a computational device in the sense that the outcomes of its activity are computable functions of its inputs (a reasonable hypothesis), but more than that: at the cell level there is no distinction between the computer, the program which runs on the computer, input and output data AND the execution of the program. All is at the same level, i.e. every abstraction is embodied in a concrete chemical or physical thing.

This concreteness adds, I believe, to the difficulty, because the usual thinking in computer science is all about structuring abstractions, while in biology everything is ultimately at only one level: real, embodied in physics and chemistry.

This is a claim which needs strong supporting evidence. Being a matter of principle, it cannot be proved rigorously, but it might be given a rigorous support by constructing simple proofs of principle models.

There are many computing models which are inspired by chemistry, therefore in return they can be seen as such proof of principles.

There are Chemical Reaction Network and Petri Net models, which are more like structuring tools than really embodied models of computation, because they consider neither the structure of molecules (they are just nodes in a graph), nor the way chemical reactions happen (they are edges in a graph). They are very useful tools though, and the description given here is very much simplified.

There is the CHAM (chemical abstract machine), G. Berry and G. Boudol. The chemical abstract machine. Theoretical Computer Science, 96(1):217–248, 1992. In this model states of the machine (imagine: a cell) “are chemical solutions where floating molecules can interact according to reaction rules” (cite from the abstract). In this model “solution” means a multiset of molecules, reaction rules are between molecules and they do not apply inside molecules. This is a limitation of the model because the structure of the molecule is not as important as the number of molecules in a species.

Another very interesting model is the Algorithmic Chemistry of Fontana and Buss. The main idea is that chemistry and computation are basically the same. The reasoning goes as follows. There are two pillars of the rigorous notion of computation: the Turing machine (well known) and Church’s lambda calculus. Lambda calculus is less known outside computer science, but it is a formalism which may be more helpful to chemists, or even biologists, than the Turing machine. Fontana and Buss propose that lambda calculus is a kind of chemistry, in the sense that its basic operations, namely abstraction and application, can be given chemical analogies. Molecules are like mathematical functions, abstractions are like reaction sites and applications are like chemical reactions.

The Algorithmic Chemistry is almost as close as possible to being a (proof of principle) answer to the question.

Finally I mention chemlambda, or the Chemical concrete machine, which is like Algorithmic Chemistry, but it is far more concrete. Molecules are graphs, applications and abstractions are molecules, chemical reactions are graph rewrites.
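The “chemical reactions are graph rewrites” idea can be sketched in a few lines. The port conventions below are my assumption for illustration only, not the official chemlambda convention: L[a, x, b] with a = body in, x = variable out, b = term out; A[b, c, d] with b, c = in ports, d = out port.

```python
# Sketch: a molecule as a list of (node, ports); one rewrite in the
# spirit of the BETA (L-A) move, under assumed port conventions.
def beta_step(mol):
    for i, (n1, p1) in enumerate(mol):
        if n1 != "L":
            continue
        for j, (n2, p2) in enumerate(mol):
            # pattern: the L node's term-out port feeds the A node's first in port
            if n2 == "A" and p2[0] == p1[2]:
                rest = [m for k, m in enumerate(mol) if k not in (i, j)]
                # delete both nodes; reconnect the body to the application's
                # out port, and the argument to the variable
                return rest + [("Arrow", [p1[0], p2[2]]),
                               ("Arrow", [p2[1], p1[1]])]
    return mol  # no redex found: nothing changes
```

The reaction is purely local: it looks only at one edge and the two nodes at its ends, which is exactly what makes a chemical reading plausible.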

What is very interesting in all these models, in my opinion, is that they suggest that answering to the question “Can we understand a cell as an organic computational device?” is somehow relevant to the Computer Science question “How to design an asynchronous, decentralized Internet?”.

Filed under: Uncategorized Tagged: chemical reactions, models of computation ]]>

This may be a huge step forward from the discussions about OA because:

- it offers a clear improvement of the article format, hopefully allowing it to merge with formats like animations, databases, and programs which one can execute in the browser.
- it exports the format of the paper (this is as if latex were a publisher which decided to export the latex programs so that everybody could write a latex article)
- which has the obvious advantage that one can host an article on one’s own page in a uniform format, an idea which solves two things at once: (1) how to make an article friendly for future semantic queries, (2) where to put the article on the web
- Github is already the answer and the perpetrator of a silent revolution (it is already more than 10 times bigger than arXiv, and git is a model of collaboration tool which is not based on choke points and centralized thinking), so exporting the PeerJ/paper-now format to Github is natural and brilliant.

See also The shortest Open Access and New Forms of Publication question

____________________________________________________________

Filed under: Uncategorized Tagged: cost of knowledge, github, paper-now, PeerJ ]]>

Yes, it is possible. Here is why.

What you see in this video is a virtual molecule, which enters in random chemical reactions in a soup of invisible enzymes. All chemical reactions consist in the enzymes interacting with a small number of atoms (color coded in the video), in a random order, and facilitating certain, well chosen reactions.

Everything is purely local, there is nothing else behind the scenes.

And still it works.

This is not real chemistry (but who knows? I believe it is, with the condition to identify real chemical reactions like those virtual ones).

The chemical reactions needed are listed at this page.

http://chorasimilarity.github.io/chemlambda-gui/dynamic/moves.html

If you want to play with this (made-up) chemistry, called “chemlambda”, then go to the github repository

https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic

clone it and just type:

bash moving_random_metabo.sh

and then choose one of the molecules, encoded as .mol files, from the list.

There are a bunch of demos, which show animations for different molecules, made in d3.js, at the link

http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html

What is different in this video, which is a screen recording at 4X speed of an animation made with the script, is that it is now possible to see the virtual molecules as made of “atoms” (they may be smaller, well chosen molecules themselves).

This is possible because of a modification of the script called by the sh script, i.e. the awk script check_1_mov2_rand_metabo.awk.

The visualization of the “true” virtual atoms is realized by representing the chemlambda nodes and their ports according to a color scheme which uses the “all_nodes_atom” field of the nodes and ports.

**UPDATE:** here is d3.js demo page for that. http://chorasimilarity.github.io/chemlambda-gui/dynamic/molecular_comp.html

_________________________________________________

Filed under: Uncategorized Tagged: chemlambda, molecular computer, virtual molecules ]]>

The problem is then how to dispense with the Y combinator once we don’t need it any more.

The mentioned solution is to lock it into a quine. Here is how.

First, we put the combinator (in its purest, two nodes form) into a neuron architecture.

What we see: the pale blue dot from the upper right part of the picture is a FROUT (free out) port.

- NEURON AXON: connected to the free out is a pair of nodes which IS the Y combinator as seen in chemlambda (see this old demo of the reduction of the Y combinator to this pair of nodes, which afterwards starts to shoot; I’ll probably add a page for this reduction at the demos site, this time with the new color convention and for the chemlambda version with FOE nodes, but the old demo explains the phenomenon sufficiently well).
- NEURON SOMA: in this example the soma is made of one green node (an application A node), but more sophisticated graphs are acceptable, with the condition that they are propagators (i.e. that they “propagate” through FO or FOE nodes after a sequence of chemlambda moves; one can transform any lambda term expressed in chemlambda into a propagator)
- NEURON DENDRITES: in this example there are two dendrites, because the soma (A node) has two in ports. The dendrites have FRIN (free in) ports which can be connected to other graphs. In the picture the FRIN nodes are color coded.
- DENDRITES ENDS: at the end of each dendrite there is a small molecule, well chosen.

With the random reduction algorithm the following happens (see the demo in d3.js):

- the NEURON AXON grows and becomes a string of copies of the SOMA with the free in ports attached to it (check out that the colours match!) This is the way we use the Y combinator!
- when the dendrites are consumed, the graph built at the axon detaches from the rest of the neuron
- there are now two graphs: one built at the axon and one which still contains the Y combinator gun,
- the DENDRITES ENDS, together with the SOMA, lock the Y combinator gun (i.e. the NEURON AXON) into a quine: together they form a quine
- the quine is chosen to be one which does not live long (from a probabilistic point of view), so it dies after producing some bubbles (i.e. some Arrow elements with the in port connected to the out port or to other Arrow elements).

Nice!

____________________________________________________________

Filed under: Uncategorized Tagged: chemlambda, neuron, quine, Y combinator

This is the end of the computation of the monus function monus(a,b)=a-b if a>=b, else 0, for a=3 and b=20.

It is the last part of the computation, as recorded at 8X speed from my screen.

If you want to see all of it then go to https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic , clone it and then type: bash moving_alt.sh and choose monus.mol from the list of mol files.

This time the deterministic algorithm is used, which makes all possible moves at every step.

What is interesting about this computation is the quantity of trash it produces.

Indeed, I took a lambda term for the monus, applied it to 3 and 20 (in Church encoding), then applied it to successor then applied it to 0.

In the video you see that after the result appears (the small molecule connected to the blue dot; it is the chemlambda version of 0 in the Church encoding), there is still a lot of activity destroying the rest of the molecule. It looks nice though.
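The Church-encoded computation described above can be sketched in plain Python (this is of course ordinary lambda calculus, not the chemlambda graph reduction; the particular monus term used in the mol file may differ, here monus is taken as b-fold predecessor):

```python
def church(n):
    # the Church numeral n: \f.\x. f(f(...f(x)))
    def num(f):
        def go(x):
            for _ in range(n):
                x = f(x)
            return x
        return go
    return num

# Kleene's predecessor: pred = \n.\f.\x. n (\g.\h. h (g f)) (\u. x) (\u. u)
pred = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

# monus(a, b) = a - b if a >= b, else 0, as b-fold application of pred to a
monus = lambda a: lambda b: b(pred)(a)

# decode a Church numeral by applying it to a successor, then to 0
to_int = lambda n: n(lambda k: k + 1)(0)

print(to_int(monus(church(3))(church(20))))   # -> 0
print(to_int(monus(church(20))(church(3))))   # -> 17
```

Since pred(0) stays 0, applying pred twenty times to 3 bottoms out at 0, which is why the result molecule is the Church encoding of 0.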

Something strange happened when I tried to publish the video (there are so many strange things related to the dissemination of chemlambda that they become a family joke, some of the readers of this blog are aware of some of those).

After I completed the description I got the warning shown in the following video: “Brackets aren’t allowed in your description”

For example this previous video has brackets in the description and in the title as well

and all worked well.

Anyway, I eliminated the brackets but the warning remained, so eventually I logged out of my Google account and did something else.

Some time later I tried again, an experience described in this video (which took a LOT of time to be processed).

At the beginning the fields for title and description were empty and no error was reported.

After I filled them in, what you see in the video happened.

I understand that Google uses a distributed system (which probably needs lots of syncs, because you know, that is how intelligent people design programs today), but:

- the “brackets are not allowed” warning is bogus, because a previous video worked perfectly with brackets in the description,
- the “unknown error” simply means that some trace was left on my computer saying there was an imaginary error in the previous try; so instead of checking whether there is an error this time, I suppose the browser was instructed to read from the ton of extremely useful cookies Google puts on the user’s machine.

_________________________________________________________________

Filed under: Uncategorized Tagged: bug, chemlambda, Google, lambda calculus, video, Youtube

It is this one:

so that the middle terms b and c switch their positions.

Of course, by duality, that means that there is an axiom which is equivalent with both co-associativity and co-commutativity, in an algebra with co-unit.
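Written out on the algebra side (the dual picture), the shuffle is the middle-four exchange law, and a short check, assuming a unit $e$, shows it is indeed equivalent to associativity plus commutativity:

$$(a\,b)(c\,d) = (a\,c)(b\,d).$$

Setting $b = e$ gives $a(cd) = (ac)d$, i.e. associativity; setting $a = d = e$ gives $bc = cb$, i.e. commutativity. Conversely, associativity and commutativity together clearly let one swap the two middle factors. Dualizing, a co-unital coalgebra satisfying the co-shuffle axiom is exactly one which is co-associative and co-commutative.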

This is a justification for the existence of TWO fanout nodes in chemlambda, called FO and FOE. The graphical representation of the shuffle, called now the shuffle trick, is simply a combination of two graph rewrites, as shown in the demo on the shuffle trick.

On this blog the shuffle trick is mentioned several times, and the “shuffle move” is proposed as various compositions of moves from graphic lambda calculus.

But the CO-COMM and CO-ASSOC moves from graphic lambda calculus are no longer needed, being replaced by FO-FOE and FI-FOE (aka fan-in) moves. This is good because the graph rewrites which express the associativity and commutativity are too symmetric, they don’t have a natural direction of application, therefore any choice of a preferred direction would be artificial.

The shuffle trick is essential for self-multiplication, where there has to be a way to multiply trees made of FO nodes, in such a way that the leaves of the copies of the tree are not entangled, see this demo.

________________________________________________

Filed under: Uncategorized Tagged: associativity, chemlambda, commutativity, shuffle

**UPDATE:**

**0.(for chemists)** With chemlambda you can build molecular computers. For this you (in case you’re a chemist) could identify the real chemical reactions which embody the moves of this made-up chemistry. They are simple enough to be possible in the real world. See All chemical reactions needed for a molecular computer.

___________________

**1. Is chemlambda another visualization tool for lambda calculus?**

NO. Visualization is an independent part, for the moment I use d3.js. Moreover chemlambda is only tangentially related to lambda calculus.

**2. What is new in chemlambda then?**

You have an example of a computation model which does not use values, calls, variables, variable passing.

Have you seen any other model with these properties?

**3. Why should I be excited about a computation with no values, no evaluations, no calls, etc?**

For several reasons:

- nobody thinks about that (except perhaps biology-related researchers, who don’t see, by looking through the microscope, the structures loved by CS people)
- this is a blind spot, an accident of the history, because computers appeared after the telephone, during and after the WW2 when the problem was to encrypt and decrypt a message sent from A to B, and because physically the computers we know how to build are based on electronics
- **think: if you don’t have the problem of how to pass a value, or how to call a function over the internet, then what could be possible which now is not?**
- **think about building real or virtual computers which are life-like**, in the sense that their work is complex to the point of seeming incomprehensible, but their complexity is based on very simple mechanisms, used again and again, without any director to channel them and without any hierarchy to maintain or enforce.

**4. So should I pack my bags and look for other research subjects than what is popular now, or maybe should I just ignore chemlambda and go with business as usual? After all, you know, cloud computing is great. Process calculi, types, all this.**

It’s your choice, but now you know that there is something which is closer to real, life-like computation, which may hold the promise of a free sky, a cloudless internet.

**5. Suppose I want to play a bit with these ideas, but not to the point of getting out of my comfort zone. Which questions would be interesting to ask?**

Right, this is a good strategy. Here are some questions which are related to lambda calculus, say.

- (a) there is an algorithm which transforms an (untyped lambda beta calculus) term into a chemlambda molecule, but even if one can prove that the beta move translates into the BETA move (as in Wadsworth-Lamping) and that eventually the translations of the SKI or BCKW combinators reduce in chemlambda as they should, there remains the question: how does chemlambda achieve, by local rewrites, the thing which replaces an evaluation strategy? A study by examples may give interesting insights.
- (b) how to program with chemlambda? Indeed, it can reproduce some behaviours of untyped lambda beta, but even so it does it in ways which are not as usually expected, as the demos prove. To take an older example, the Y combinator simplifies to a 2-node molecule, a story told in this article http://mitpress.mit.edu/sites/default/files/titles/content/alife14/ch079.html . The fact is that because one does not have eta (which looks like a global move in chemlambda, i.e. there is no a priori bound on the number of nodes and links to check for an application of eta), there are no functions. It’s a kind of extremal functional programming where there are no functions.
- (c) what are the right replacements for lists, currying, booleans, which could take advantage from chemlambda?
- (d) by a try-and-explore procedure, which are the less stupid reduction algorithms and in which sense exactly do they work? This has to do with locality, which constrains the design of those algorithms a lot.
- (e) finally, what can chemlambda do outside the sector of untyped lambda beta, which is only a small part of the possible chemlambda molecules?
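For point (a), the translation can be sketched as follows. This is a hypothetical reconstruction, not the algorithm from the linked page: the node names (L, A, FO, T) match chemlambda, but the port ordering and the fanout-tree construction here are my own illustrative choices, and it handles closed terms only.

```python
from dataclasses import dataclass

# a minimal lambda-term AST
@dataclass
class Var: name: str
@dataclass
class Lam: var: str; body: object
@dataclass
class App: fun: object; arg: object

def translate(term):
    """Turn a closed lambda term into a list of nodes with named ports."""
    nodes, counter = [], [0]
    def fresh():
        counter[0] += 1
        return f"p{counter[0]}"
    def go(t, uses):
        # 'uses' maps each bound variable to the ports where it is consumed
        if isinstance(t, Var):
            p = fresh()
            uses[t.name].append(p)
            return p
        if isinstance(t, Lam):
            uses = {**uses, t.var: []}       # new scope for t.var
            body = go(t.body, uses)
            var_port, out = fresh(), fresh()
            occ = uses[t.var]
            if not occ:
                nodes.append(("T", var_port))        # unused variable: terminate
            elif len(occ) == 1:
                var_port = occ[0]                    # single use: wire directly
            else:                                    # many uses: FO fanout tree
                src = var_port
                while len(occ) > 1:
                    a = occ.pop()
                    if len(occ) == 1:
                        nodes.append(("FO", src, occ.pop(), a))
                    else:
                        nxt = fresh()
                        nodes.append(("FO", src, nxt, a))
                        src = nxt
            nodes.append(("L", var_port, body, out))
            return out
        if isinstance(t, App):
            f, a, out = go(t.fun, uses), go(t.arg, uses), fresh()
            nodes.append(("A", f, a, out))
            return out
    root = go(term, {})
    return nodes, root

# identity \x.x -> a single L node; \x.x x -> L, A and one FO node
print([n[0] for n in translate(Lam("x", Var("x")))[0]])                      # -> ['L']
print(sorted(n[0] for n in translate(Lam("x", App(Var("x"), Var("x"))))[0])) # -> ['A', 'FO', 'L']
```

The point of the sketch is visible in the second example: the duplicated variable does not become two occurrences of a name, it becomes an explicit FO node in the graph.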

_____________________________________________________________________

Filed under: Uncategorized Tagged: chemlambda, computation model, lambda calculus, virtual worlds

Now, I learned how to do it and it works all the time (compared with 20% success last time).

Last time I took a lambda term for the factorial from the lambda calculus tutorial by Mayer Goldberg from little-lisper.org, par. 57, page 14. Then I modified it and got a molecule which computes the factorial in about 20% of the cases. Now, in this working factorial example, I made two supplementary modifications. The first consists in starting from a lambda term which uses the multiplication in the form L mnf.m(nf) instead of the one used in the tutorial. Secondly, the result of the computation (i.e. the “value” of the factorial) is applied to a SUCC (successor), which is then applied to c0, which results in the generation of the correct result.
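Both modifications are easy to check on Church numerals in plain Python (the factorial term itself is in the tutorial; here only the multiplication form and the SUCC-then-c0 decoding step are illustrated):

```python
def church(n):
    # the Church numeral n: \f.\x. f(f(...f(x)))
    def num(f):
        def go(x):
            for _ in range(n):
                x = f(x)
            return x
        return go
    return num

mult = lambda m: lambda n: lambda f: m(n(f))   # the form L mnf.m(nf)
to_int = lambda n: n(lambda k: k + 1)(0)       # apply to SUCC, then to 0

print(to_int(mult(church(4))(church(6))))      # -> 24
```

Applying the resulting numeral to a successor and then to 0 is exactly the “SUCC applied to c0” decoding described above: it forces the Church numeral to unfold into a concrete count.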

Link to the demo with factorial(4)=24.

Here is the video, recorded as seen in Safari, at 2X speed (Firefox behaves badly with the d3.js demos I make, I have no precise idea why; that is why I recorded my experience with the demo, then re-recorded the video at 2X speed, all this done with QuickTime).

It also works very well with factorial(5)=120, but because the visualization of that computation takes some time (which may challenge people with a short attention span), here is a video with the last part of the computation at 8X speed.

____________________________________________________

Filed under: Uncategorized Tagged: chemlambda, factorial, Mayer Goldberg

This is the first part; there will be a second one.

____________________________________________________________

See also: FAQ: chemlambda in real and virtual worlds

____________________________________________________________

**1.** **Can I play with chemlambda like in the demos?**

YES. If you go to https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic , you’ll find everything needed to play with chemlambda.

What you need:

- scripts like moving_random_metabo.sh, which calls
- the main program, which is check_1_mov2_rand_metabo.awk
- initial molecules, which are in the files with the extension .mol. Read here about the mol format (1), (2). It’s just a list of nodes and their ports.
- you will also need the file firstpart.txt.
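The mol format is documented at the links above; as a rough sketch of the idea (hypothetical reader, assuming only that each non-empty line is a node type followed by its port names, whitespace-separated):

```python
def parse_mol(text):
    """Read a mol-style file: one node per line, node type then port names."""
    nodes = []
    for line in text.splitlines():
        parts = line.split("#", 1)[0].split()   # tolerate comments and blanks
        if parts:
            nodes.append((parts[0], parts[1:]))
    return nodes

example = """
L 1 2 3
FO 3 4 5
A 4 5 6
"""
print(parse_mol(example))
# -> [('L', ['1', '2', '3']), ('FO', ['3', '4', '5']), ('A', ['4', '5', '6'])]
```

Two nodes are linked exactly when they share a port name, which is all the graph structure the scripts need to recover.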

Then figure it out:

- bash moving_random_metabo.sh asks you to type the name of a mol file, say lalala.mol
- it produces lalala.html, which you can view in the browser as a d3.js animation.

In the awk script, at the beginning, you have the variables metabo and cycount. The variable metabo decides the color of the new nodes: periodically, with period metabo, the color of the new nodes is turned to #222, or it is left unchanged, in order to help visualize the flow and replenishment of new nodes in the molecule, i.e. the metabolism of the molecule. The variable cycount is the maximum number of steps in the main cycle (if you expect the computation to stop quickly, or if on the contrary you suspect it will never stop, then take cycount=100; if you are sure the computation will stop, but perhaps after more than 100 steps, then take cycount=1000, for example).

The main cycle starts at line 931.

At about line 900 there is a function nextval()… with a piece “return 3000 + (5*step)“. You can modify 3000 to something else to decide the initial delay before the animation starts, and 5 to something like 30 or bigger if you want to look at the result in Firefox, or if the browser generally has problems with rendering.

For the list of moves and more details see the page of moves. Go down to that page to see a list of articles.

**2. Is chemlambda something fixed or does it change? Can I change it? What can I do if I want to tinker with it, just for fun?**

It changes. First there was “graphic lambda calculus”, then there was chemlambda without the FOE node and its moves; now, after many experiments, there is THIS chemlambda.

You can change it any way you want (and be polite: cite my version and notify me somehow, preferably by posting something in a public place).

If you have real programming skills, not like me (I’m just a mathematician) then you can take the awk script and:

- transform it into a js script, instead of the current system
- make it quicker (presently there is a preset of 5ms which decides the speed of the animation, but this may give problems to many browsers; why? I have no idea, but there may be a trivial or a clever fix for that)
- add a molecule builder UI
- make a game out of it
- make an app
- make a script which converts lambda terms to chemlambda molecules, according to the algorithm from here.

These are immediately useful things to do. There are others too, see next question.

**3. What does the script do with chemlambda?**

The script implements a model of computation based on chemlambda.

So, chemlambda is a purely local graph rewrite system (it is a list of patterns with up to 4 nodes which are replaced by other patterns, perhaps according to rules which involve a condition which can be verified by checking maybe yet another N (a priori fixed, small) links and nodes).

It does not compute by itself; you have to add a reduction algorithm, which says how to use the graph rewrites, aka the moves.

The script uses the following algorithm, called in a previous post the “stupid” one, because it is really the simplest: after the mol file is converted into a graph (given as nodes and links arrays), and after the head of the html file is written, the algorithm enters a cycle. The structure of this cycle is the following:

- look for patterns to change, in the order of the graph rewrites priority. (What is this priority? It may happen that there are nodes or links which are part of two distinct, but overlapping, patterns for graph rewrites. This results in a conflict: which move to apply? By close examination of the moves, there is an order of looking for the moves which eliminates the conflicts: look first for the FO-FOE move, then for the other DIST moves, then for the BETA (i.e. A-L) move and FAN-IN (i.e. FI-FOE), then for the PRUNING moves; each time we find a pattern we put the respective move in a list of proposed moves and we block its nodes so they are not available as parts of other patterns during the search.)
- do the proposed moves (in the awk script this means to do the moves in the sense of the graph data variables of the script and, at the same time, to write in the html file what was just done, in a format intelligible to d3.js)
- lots of Arrow nodes may appear, and in order to eliminate as many as possible there is an Arrow cycle (see the moves page) which eliminates all Arrow nodes that can be eliminated by COMB moves. (The same applies here: you have to write in the html file what was done with the graph during that internal cycle.)
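The search-then-apply step with priorities and node blocking can be sketched like this (a minimal Python sketch, not the awk code; the `find_patterns` and `apply_move` helpers are placeholders, and the toy demo at the end invents a fake "graph" just to exercise the skeleton):

```python
PRIORITY = ["FO-FOE", "DIST", "BETA", "FAN-IN", "PRUNING"]

def reduction_step(graph, find_patterns, apply_move):
    proposed, blocked = [], set()
    for move in PRIORITY:                        # search in priority order
        for match in find_patterns(graph, move):
            if not (set(match) & blocked):       # overlapping patterns conflict
                proposed.append((move, match))
                blocked |= set(match)            # block nodes already claimed
    for move, match in proposed:                 # then apply every proposed move
        apply_move(graph, move, match)
    return len(proposed)                         # 0 means nothing left to reduce

# toy illustration: "nodes" are letters, and the only pattern is a T node,
# which the PRUNING "move" deletes
g = ["L", "T", "A", "T"]
finder = lambda graph, move: ([[i] for i, n in enumerate(graph) if n == "T"]
                              if move == "PRUNING" else [])
applier = lambda graph, move, match: graph.__setitem__(match[0], None)
reduction_step(g, finder, applier)   # two PRUNING moves fire
print(g)                             # -> ['L', None, 'A', None]
```

The essential point the skeleton captures is that matching and rewriting are separated: all patterns are collected (with blocking) before any rewrite fires, so overlapping matches never tear the same node apart twice.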

This is the stupid **deterministic** algorithm.

There is a **random** variant of it, which is exactly what the awk script does.

Whenever a pattern is identified, a coin is flipped, and with a probability of about 50% (see further a small detail) the move either goes to the list of proposed moves and the nodes are blocked, or not.

In this way the priority of moves is made much less important.

Small detail: there are two kinds of moves, those which increase the number of nodes (DIST) and the others, which decrease the number of nodes (Arrow not taken into consideration). The small detail is that, after each step of the cycle, the probabilities of the moves are slightly modified: if during the cycle there have been more DIST moves than others, then the probability of DIST moves decreases, and if there have been fewer DIST moves than others, then the probability of DIST moves increases in the next cycle.
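The coin flip and the probability adjustment can be sketched as follows (the 0.05 step and the 0.1–0.9 clamp are arbitrary choices of mine, not the awk script's actual increments):

```python
import random

def flip(p):
    """Coin flip: propose the matched move with probability p, or skip it."""
    return random.random() < p

def adjust(p_dist, dist_moves, other_moves, step=0.05):
    """After a cycle, nudge the DIST probability toward balance."""
    if dist_moves > other_moves:
        p_dist = max(0.1, p_dist - step)   # too many DIST moves: cool down growth
    elif dist_moves < other_moves:
        p_dist = min(0.9, p_dist + step)   # too few: encourage growth
    return p_dist

p = 0.5
p = adjust(p, dist_moves=7, other_moves=3)
print(round(p, 2))   # -> 0.45
```

This negative feedback is what keeps the node-creating and node-destroying moves in rough balance over a long run.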

Why randomness? It is a cheap substitute, in this stupid reduction version, for asynchrony.

**4. How can I seriously change the chemlambda reduction algorithm? **

By changing it from “stupid” to something more interesting; it’s up to you. What about bringing Node.js into the game? Know why? Because if you look at a mol file, you notice that if you split the file into ten pieces, each of them is again a mol file.

_______________________________________________________________________

Filed under: Uncategorized Tagged: chemlambda

Welcome to the thing.

_________________________________________________

Filed under: Uncategorized Tagged: Artificial Agora

**UPDATE:** See also the post The working factorial and the video

_______________

There are five of them.

The mentioned tutorial is the source for the lambda term which I translated into chemlambda and then used in the Ackermann function demos. The most outrageously successful is the computation of Ack(2,2) while self-multiplying. Moreover, the daughter molecules do not finish at the same time.
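For reference, this is the standard Ackermann function computed in those demos; Ack(2,2) is tiny, but the function grows explosively in its first argument:

```python
def ack(m, n):
    """The Ackermann function (standard Ackermann-Peter definition)."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 2))   # -> 7
```

Since ack is not primitive recursive, reducing its lambda term by purely local, random graph rewrites is a genuinely non-trivial test for chemlambda.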

Here is a video of a screen recording at 2X speed.

Now, there is a very puzzling thing here. In chemlambda with a random, purely local graph rewriting algorithm I can compute the Ackermann function. But what about simpler functions, like the factorial?

I tried the easy way, consisting in the translation of lambda terms for the factorial into chemlambda, and the results are ugly. As a geometer who wants to decipher computation without using variables, values, variable passing or evaluations, I know that chemlambda is different because of this feature: it uses something like signal transduction instead of the gates-and-wires general model from IT. So there have to be consequences of that.

Let’s see what happens in the case of the factorial. In this demo I took the lambda term from the same tutorial, par. 57, page 14. I hoped it would work better than other proposals, based on the experience with the Ackermann function. I noticed in other experiments with terms for the factorial that the reduction in chemlambda is extremely wasteful, producing thousands of nodes and lots of trash. Moreover, there is a problem with the fix point combinator, explained in the article with Louis Kauffman Chemlambda, universality and self multiplication, and described in older demos like this one. In chemlambda, the molecule which corresponds to the Y combinator reduces to a very simple one (which does not have an equivalent as a lambda term), made of only two nodes, fanout and application. Then the molecule becomes a gun which shoots pairs of fanout and application nodes. There is no mystery about the Y combinator in chemlambda; its real mechanism consists in the self-multiplication by the fanout node. [Note: in the old demo and in the article linked there is no FOE fanout. The FOE and the moves associated to it are a recent addition to the formalism, see the page of moves.]

The problem of using the Y combinator is that it never stops generating pairs of A and FOE nodes. In a computation which implements recurrence via the Y combinator, at some point, according to an IFTHENELSE or ISZERO, the cycle of Y is broken. But in chemlambda the Y gun molecule continues to exist, and this leads to a never ending computation. On one side this is OK, see for example the life-like computations with chemlambda quines. Actually there is a stronger relation between chemlambda quines and the Y combinator molecule. One can design the computation such that when the Y molecule is no longer needed, it is encapsulated into a quine, but this is for another time to explain in detail.

I come back to the factorial example. In the demo I linked you can see that the computation of the factorial is wasteful (and paradoxically leads to a Y molecule), even though it does not use a fix point combinator.

Why?

First I thought it was because of currying and uncurrying. In chemlambda, because it is a graph rewrite system, there is no particular need to use currying all the time.

Then, to check this, I modified the molecule from the little lisper tutorial in order to geometrically compute the repeated application of a function f(a,b)=(succ(a), succ(a)b). The function is a piece of a graph with two in and two out links which is self-multiplying under the action of a number in the Church encoding.
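The iteration is easy to verify numerically: starting from (0, 1) and applying f(a,b) = (succ(a), succ(a)·b) four times yields b = 4! (plain Python check of the arithmetic, not the graph reduction):

```python
def f(a, b):
    """One step of the iterated function: (a, b) -> (succ(a), succ(a)*b)."""
    return a + 1, (a + 1) * b

a, b = 0, 1
for _ in range(4):
    a, b = f(a, b)
print(b)   # -> 24, i.e. factorial(4)
```

Each step multiplies the accumulator by the next integer, so n applications produce n! without ever needing a fix point combinator: the number in Church encoding drives the repetition itself.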

Here is a successful computation with this molecule. But does it work all the time or have I been lucky? The reduction algorithm is random and different runs may give different results. It is the case with the Ackermann function computation as well, but that one was successful all the time.

Oh, it turns out that the computation with that molecule works well in about 20% of the runs. Here is an unsuccessful run.

So there is still a problem, but which one?

Under close examination the computation is still wasteful, because of the term (piece of molecule) c0, for the 0 in the Church encoding. In chemlambda this term corresponds to a small molecule which has a termination node inside.

When we want to trash something useless, like programmers do, in chemlambda the useless piece does not disappear. It is like in nature: the thing continues to exist, and continues to interact with the rest of the environment while in the process of degradation.

The termination node, via the PRUNING moves, slowly destroys the useless part, but other pieces from the useless part continue to interact with the rest of the graph.

Is this the problem?

In order to check this I further modified the molecule which was successful 20% of the time. I just replaced c0 by c1, which is the molecule for the lambda term for 1 in the Church encoding. Now, c1 does not have any termination node inside.

The price is that I no longer compute the factorial, but instead I compute the repeatedly applied function

F(a,b)=(succ(a), tms(a,b))

where tms(a,b)=ab+1. Here is a demo for the computation of tms(3,3) in chemlambda, and further is a video for tms(5,5)=26, where you can see a new creature, more complex than the walker discovered in the predecessor.

I checked to see what is the closed expression, if any, for the function I compute, namely

f(0)=1, f(n+1) = (n+1)f(n) + 1

and the Wolfram Alpha engine has an interesting answer.
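The recurrence can be checked against the iterated map directly (plain Python; the seed (a, b) = (1, 1) is my reading of the c1 replacement described above):

```python
def F(a, b):
    """One step: (a, b) -> (succ(a), tms(a, b)), with tms(a, b) = a*b + 1."""
    return a + 1, a * b + 1

def f(n):
    """The recurrence f(0) = 1, f(n+1) = (n+1) * f(n) + 1."""
    return 1 if n == 0 else n * f(n - 1) + 1

a, b = 1, 1                       # seed corresponding to c1, with b = f(0) = 1
vals = []
for n in range(1, 6):
    a, b = F(a, b)
    vals.append(b)
print(vals)                       # -> [2, 5, 16, 65, 326]
print([f(n) for n in range(1, 6)])  # the same sequence
```

So the repeatedly applied F really does trace out the recurrence, with tms(5,5)=26 showing up along the way as one of the intermediate products.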

Well, this time I hit the right spot. The computation works, again and again and again.

So we have to learn to program ecologically with chemlambda.

____________________________________________________________________

Filed under: Uncategorized Tagged: Ackermann function, chemlambda, computation, demos, factorial, lambda calculus, Little Lisper, Mayer Goldberg

I want to advance the following hypothesis about the origin of life.

Life is a manifestation of the computational universality of a collection of chemical reactions.

Indeed, there probably are many small collections of chemical reactions which, coupled with a random chemical reduction algorithm, form a universal computing model.

A proof of principle for this is chemlambda. Real chemical reactions which implement the moves shown at the chemlambda moves page (invisible in the chemlambda formalism, for the moment) are still to be discovered.

But they are so simple that there have to be many, many such chemical reactions.

In a system, in a chemical soup, if these chemical reactions happen to appear, then what follows is a game of computation and self-multiplication.

Because universality means, in this particular case, that with non-negligible probability, anything can be achieved.

The role of randomness is tricky. On one side, randomness selects resilient creatures. That’s a funny thing: for example, in chemlambda good candidates for living creatures are quines.

A quine in chemlambda is a molecule which stays the same in a deterministic world. This gives the quine molecule a certain resilience when faced with randomness, which makes it have a life: it may grow, it may shrink, for a time it may play around the deterministic state, and it may eventually die.

This is illustrated in the first battle of microbes demo, where several quines are put together and “fed” with enzymes, which appear randomly, but such that if at a given moment there are more enzymes for the moves which increase the number of nodes, then the next time the probability of appearance of such enzymes decreases in favour of those for the moves which decrease the number of nodes.

So globally it appears as if the quines compete for the moves, and the quines having a greater diversity of possible moves thrive, while the others die.

The 9_quine is the most fragile quine, as you see in the demo many of them die (i.e. they transform into a small molecule which is inert to any reduction).

There is a lot to add about this; for example, there are other quines which behave as if they adopt the strategy of spores, i.e. they regress to an “egg” state and flourish later, from time to time, when they have to “compete” with bigger quines.

Of course, all this is in the eye of the observer, it is an emergent behaviour, like the intelligence of a Braitenberg vehicle.

But what if quines are a bit too fragile for life? Maybe there are molecules which grow to an approximately stable configuration, in random conditions, for a time, at least until they self-multiply.

[Have you seen the story of the 16 bubble quine, from egg to motherhood?]

Suppose, just suppose that in deterministic conditions such a molecule would grow slowly, instead of being a quine.

This is consistent with the requirement to be resilient in random conditions, there will be a second part of this post when the demos are prepared.

But it has a curious corollary, namely that such a living creature will blow out, like a cancer, in too calm, too deterministic conditions.

The example I have and play with is a molecule made of two 9_quines and a 5-atom molecule which, if left alone, grows in a regular pattern, but which, in the deterministic algorithm, when coupled by some bonds with the two quines, grows very, very slowly.

This molecule, under random conditions, displays an amazing variety of shapes. But all the runs show more or less the same thing: the two 9_quines have the role of taming the growth of the molecule, keeping it controlled; but at some moment the 9_quines die, somewhere in the molecule, in some unrecognizable shape, and after that the molecule slowly reverts to the regular growth pattern (which makes it unsustainable if there are physical limits to growth).

So not only does randomness select creatures which can survive (and self-multiply) in random conditions; it may select creatures which live in random conditions but die in deterministic conditions.

Maybe that is why life hates when everything is predictable.

I close this post with the comment that however, there are molecules which arrive at a determined state in random conditions.

This may be very useful for computer-like computations. The example I have is again the remarkable molecule for the Ackermann function.

See it in this video self-reproducing while it computes.

Apparently some molecules display a huge resilience to randomness. The Ackermann function molecule daughters finish the computation at different times, but they do finish it.

_______________________________________________________________________

Filed under: Uncategorized Tagged: artificial life, body as computation, chemical reactions, chemlambda, life, living creatures, random conditions

It will be hopefully something clear with the help of some visual demos.

[Don’t believe what I write and proceed by the scientific method by checking my experiments yourself. You can easily do this by using the links in the demo. They point to a github repo. You have to switch to the gh-pages branch and there you find the scripts and the mol files which are needed to reproduce the experiments. You can of course make new experiments of your own! By looking in the scripts you shall see how things work. It is much more rigorous and easier to check than the written proof version. On the other side it is of course less authority-friendly and demands a bit larger attention span, but with a small attention span nobody can understand anything, right? For more on this “publishing philosophy” see the post The Shortest Open Access and New forms of Publication Question.]

I start.

Chemlambda is interesting because it is good at doing two different things at once:

- it can compute as computers do it (i.e. is Turing universal)
- it can also simulate chemical like, or even biological like phenomena.

This is great because there is no other system (excepting Nature) which can do this with such a small effort, at once (i.e. with the same tools).

Indeed, you have artificial life proposals, like for example swarm chemistry, which can simulate some simple life-like phenomena but which can’t compute something as sophisticated as the Ackermann function (my favorite catch demo for CS people).

There is the amazing Game of Life, which can do both, but for a Turing-like computation one needs hundreds of thousands of nodes, on a fixed predefined grid, synchronously updated.

What enables chemlambda to do that?

In the development of chemlambda I followed some principles as thought constraints. These principles shaped, and will shape further, this project. They are:

- (locality) every piece of the formalism or implementation has to be local in space, time or in control terms.
- (no meaning) global meaning is just an illusion, which is moreover hard to maintain or enforce; Nature does not work by high-level meaning
- (topology does compute) signal transduction, not signal transmission.

While locality seems a natural and desirable feature to have in many formalisms, it is unsurprisingly difficult to achieve. The reason for this, in my opinion, is cultural: we are the children of the Industrial Revolution, and so we are trained, and our minds are shaped, to think in terms of a global, god-like point of view, which gives us total powers over space, time and matter, and total control over all the universe at once. This is visible in scientific explanations in particular, where, just because we want to explain a simple idea, we then have to build a formal notational frame around it, like external coordinates, names for the variables (coming with their complex bookkeeping formalism), and to appeal to reification. While all these ingredients are useful for the transmission of a scientific explanation, they are not, by themselves, needed for the understanding. Example: suppose I’m explaining to you the plot of a film. At some point you get lost with the names of the characters and then I react like this: “Look, it’s simple: A wants to kill B and for that he hires the hitman C. But C has a love affair with B and together they trick A into believing that …” Now, “A”, “B” and “C” may be useful to transmit my understanding of the movie plot to you, but they are not needed for my understanding. In the frame of mind of the Industrial Revolution, the world is a system which can be globally mapped into a hierarchical flow of processes, where everything falls neatly into this or that case of study. You have the system, you control the world, as if it were an industrial process.

The latest installment of this way of thinking (that I’m aware of) is category theory.

The downside of this is the loss of locality and the profusion of higher and higher levels of abstraction, which is the quick fix for the cracks in the globality illusion.

Maybe now the second principle becomes clear: no meaning. Many, if not all, natural things don’t have a global, fixed or indisputable meaning. Still, Nature works beautifully. The logical conclusion is that meaning is something we humans use and seek (sometimes), but not a necessary ingredient in Nature’s workings.

The concrete use of the “no meaning” principle consists in the elimination of any constraint which may introduce globality by the back door: there are no “correct” graphs in chemlambda, nor does there exist any local decoration of the edges of chemlambda graphs which gives a global “meaning” to them.

Elimination of names, elimination of evaluations.

The third principle is called “topology does compute” as an allusion to the NTC vs TC discussion. The idea is very simple: instead of thinking in terms of wires which transmit signals, which are then processed by gates, think about signal transduction as an emergent phenomenon from the locality of the graph rewrites.

Signal transduction is a notion from biology: a molecule binds to a receptor (molecule), which triggers a chain, a cascade of other chemical reactions. Each chemical reaction is, of course, local, but the emergent phenomenon is the moving of a “signal” (as seen by our minds, but nonexistent as a well-defined entity in reality). We identify the “signal”, but things happen without needing the “signal” entity as a prerequisite.

Chemlambda works like that.

In order to explain this I shall use the “walker” example.

It is instructive to see how the computational and the biology-like features of chemlambda led to the discovery of the walker.

The initial goal was to see how various lambda calculus terms work in chemlambda. I knew that there is an algorithm which associates a chemlambda molecule to any lambda term, so just by picking interesting lambda terms I could see how they reduce in chemlambda, exclusively by local graph rewrites.

One of these terms is the predecessor, see Arithmetic in lambda calculus. Numbers appear (in chemlambda) as ladders of pairs of nodes, with the beginning and the end of the ladder connected by abstraction nodes. Roughly, the whole topology is that of a circular ladder.

One can also translate the predecessor lambda term and apply it to the number. In lambda calculus the predecessor applied to the number N gives N-1 if N>0; the predecessor of 0 is 0.
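
The behaviour of the predecessor on Church numerals can be checked with a small Python sketch. This is a hypothetical illustration of the lambda term only, not of the chemlambda ladder encoding:

```python
# Church numerals and the predecessor, mirroring the lambda-calculus
# behaviour described above (not the chemlambda molecule itself).

def church(n):
    # n  ->  \f.\x. f^n(x)
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def unchurch(c):
    # read a Church numeral back as a Python int
    return c(lambda k: k + 1)(0)

# PRED = \n.\f.\x. n (\g.\h. h (g f)) (\u. x) (\u. u)
PRED = (lambda n: lambda f: lambda x:
        n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u))

assert unchurch(PRED(church(5))) == 4
assert unchurch(PRED(church(0))) == 0   # predecessor of 0 is 0
```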

In chemlambda the predecessor is another molecule, which looks like a small bag. Applying the predecessor to a number translates in chemlambda into adding a supplementary A (application) node and connecting some ports of the A node to the circular ladder (the number) and to the bag (the predecessor).

The first few reductions are trivial and they transform the molecule I described into a circular one, where on top of the ladder there is a molecule and at the end of the ladder there is another, smaller one.

All in all it looks like a circular train track with something on the tracks.

Now it gets interesting: the reduction process (in chemlambda) looks like there is a “creature” which travels along the train tracks.

This is the walker. You can see it in this first demo. Or you may look at this video

(however I do suggest the demo, the live version is far more impressive).

It is a beautiful example of signal transduction.

In the deterministic chemlambda reduction algorithm (all possible graph rewrites are applied) the walker keeps its shape, i.e. periodically we find the same graph (the walker graph) at different places on the train track (the number).

The random reduction algorithm breaks this regularity (and synchronicity as well) because in this algorithm a coin is flipped before application of any move, and the move is applied or not with 50% probability.
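
The two strategies can be sketched in a few lines of Python. The representation of "moves" here is a placeholder (the real programs work on mol files); only the coin-flip logic is the point:

```python
import random

def deterministic_strategy(candidates):
    # deterministic algorithm: every possible rewrite is applied
    return list(candidates)

def random_strategy(candidates, p=0.5, rng=random):
    # random algorithm: a coin is flipped before each move and the
    # move is applied with probability p (50% in the demos)
    return [m for m in candidates if rng.random() < p]

random.seed(0)
moves = list(range(10))          # placeholder "possible moves"
assert deterministic_strategy(moves) == moves
assert set(random_strategy(moves)) <= set(moves)
```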

That is what you see in the demo: the walker looks like a kind of wave which propagates along the train tracks, until it hits the end of the track (i.e. the small molecule I mentioned previously) and then it is destroyed progressively. It leaves behind a train track with one pair of nodes fewer than before the reduction.

So, inside the reduction mechanism of a lambda term (pure computation side) there is a self-maintaining, propagating entity, the walker, which travels in a biological fashion through the representation of a number!

This led me to the notion of a chemlambda quine in a rather obvious way:

- eliminate the beginning and the end of the ladder and replace them by a circular ladder with no start or end parts; then the walker entity goes round in circles, endlessly: this is the “ouroboros”, an older explanation,
- remark that, in the deterministic reduction algorithm, after each reduction step the “ouroboros” is unchanged as a graph,
- go to the extreme and use an ouroboros on a single train track pair; this is the 28-quine, see it in the second demo.

Let’s study the signal transduction aspect further. What can happen if two walkers interfere?

The experiment for this can be seen in the third demo.

It works like this. Take two predecessor reduction graphs, two copies of the graph from the first demo (each has 20 pairs of nodes in its ladder).

Now, cross the ladders. How?

By a Holliday junction, in biochemical terms. It looks like this

[source]

Mathematically this is a product of two ladder graphs, where two links are cut, crossed and glued back.

The amazing thing is that the two walkers go their way, they mix and then they continue, each on the other’s track, until the end points.

They behave as if they are waves passing one through the other.

**UPDATE:** This is a screen recording of the experiment

__________________________________________________________________________

Filed under: Uncategorized Tagged: chemlambda, Holliday junction, lambda calculus, signal transduction ]]>

- program is stronger than proof
- I show the program
- I show the demo

then wtf is the article good for?

**UPDATE:** at figshare, they think about that. Great!

**UPDATE 2:** for no particular reason, here is an accompanying short video done with the program

**UPDATE 3:** See “Publish your computer code: it is good enough” by Nick Barnes, Nature 467, 753 (2010) | doi:10.1038/467753a

“I accept that the necessary and inevitable change I call for cannot be made by scientists alone. Governments, agencies and funding bodies have all called for transparency. To make it happen, they have to be prepared to make the necessary policy changes, and to pay for training, workshops and initiatives. But the most important change must come in the attitude of scientists. If you are still hesitant about releasing your code, then ask yourself this question: does it perform the algorithm you describe in your paper? If it does, your audience will accept it, and maybe feel happier with its own efforts to write programs. If not, well, you should fix that anyway.”

____________________________________________________________

Filed under: Uncategorized Tagged: future of publishing, github, open access ]]>

Serious name: “Birth and metabolism of a chemlambda quine”. But it’s a chick, ‘cos it makes eggs. An artificial chick.

More seriously, this artificial life proposal satisfies the full definition of life, something no other proposal does.

_______________________________________________________

Filed under: Uncategorized Tagged: artificial life, chemlambda, visualizations ]]>

Before that, or after that you may want to go check the new chemlambda demos, or to look at the older ones down the page, in case you have not noticed them. There are also additions in the moves page. Most related to this post is the vision page.

“First, I have lots of examples which are hand drawn. Some of them appeared for theoretical reasons, but the bad thing used to be that it was too complex to follow on paper what’s happening (or to be sure of it). From the moment I started to use the mol files notation (aka g-patterns) it was as if I suddenly became able to follow in much more depth. For example, that’s how I saw the “walker”, “ouroboros” and finally the first quine, by reducing the graph associated to the predecessor (as written in lambda calculus). Finally, I am able now to see what happens dynamically.

However, all these examples are far from where I would like to be now; they correspond only to an introduction to the matter. But with every “technical” improvement new things appear. For example, I was able to write a program which generates, one by one, each of about 1000 initial graphs made of 2 nodes, and to follow their reduction behaviour for a while, then study the interesting exemplars. That is how I found all kinds of strange guys, like left or right propagators, switchers, back propagators, all kinds of periodic (with period > 1) graphs.

So I use mainly as input mol files of graphs which I draw by hand or which I identify in bigger mol files as interesting. This is a limitation.

Another input would be a program which turns a lambda term into a mol file. The algorithm is here.
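
[As an editorial illustration of such a translation, here is a toy version restricted to linear lambda terms (each bound variable used exactly once, so no FO fanout nodes are needed). The node names L and A are chemlambda’s, but the port order is an assumption for illustration, not the convention of the actual algorithm:]

```python
import itertools

_counter = itertools.count()
def fresh():
    # generate a fresh edge name
    return f"e{next(_counter)}"

def to_mol(term, env=None, out=None):
    """term is ('var', name), ('lam', var, body) or ('app', f, g);
    returns mol lines, with `out` the edge carrying the whole term."""
    env = {} if env is None else env
    out = out or fresh()
    tag = term[0]
    if tag == 'var':
        env[term[1]] = out          # a variable occurrence is just a wire
        return []
    if tag == 'lam':
        body = fresh()
        lines = to_mol(term[2], env, body)
        # L node: bound-variable port, body port, term port (assumed order)
        return lines + [f"L {env.get(term[1], fresh())} {body} {out}"]
    if tag == 'app':
        f_edge, g_edge = fresh(), fresh()
        lines = to_mol(term[1], env, f_edge) + to_mol(term[2], env, g_edge)
        # A node: function port, argument port, result port (assumed order)
        return lines + [f"A {f_edge} {g_edge} {out}"]

# identity \x.x -> a single L node whose variable port equals its body port
print(to_mol(('lam', 'x', ('var', 'x'))))
```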

Open problem: pick a simplified programming language based on lambda calculus and then write a “compiler”, better said a parser, which turns it into a mol file. Then run the reduction with the favourite algorithm, for the moment the “stupid” determinist and the “stupid” random ones.

While this seems feasible, this is a hard project for my programming capabilities (I’d say more because of my bad character: if I don’t succeed fast then I procrastinate in that direction).

It is tricky to see how a program written in that hypothetical simple language reduces as a graph, for two reasons:

1. it has to be pure lambda beta, but not eta (so the functions do not behave properly, because of the lack of eta)
2. the reductions in chemlambda do not parallel any reduction strategy I know of for lambda calculus.

Point (2) makes things interesting to explore as a side project; let me explain. There is an algorithm for turning a lambda term into a mol file. But from there on, the reductions of the graph represented by the mol file give other graphs which typically do not correspond to lambda terms. However, magically, if the initial lambda term can be reduced to a lambda term where no further reductions are possible, then the final graph from the chemlambda reduction does correspond to that term. (I don’t have a proof of that, because even if I can prove that, restricted to lambda terms written only as combinators, chemlambda can parallel the beta reduction and the reduction of the basis combinators, and is able to fan-out a term, it does not mean that what I wrote previously is true for any term, exactly because during chemlambda reductions one gets out of the realm of lambda terms.)

In other cases, like that of the Y combinator, a different phenomenon happens. Recall that in chemlambda there is no variable or term which is transmitted from here to there; there is nothing properly corresponding to evaluation. In the frame of chemlambda the behaviour of the Y combinator is crystal clear, though. The graph obtained from Y as a lambda term, connected to an A node (application node), starts to reduce even if there is nothing else connected to the other leg of the A node. This graph reduces in chemlambda to just a pair of nodes, A and FO, which then starts to shoot pairs of nodes A and FOE (there are two fanout nodes, FO and FOE). The Y is just a gun.

Then, if something else is connected to the other leg of the application node I mentioned, the following phenomenon happens. The other graph starts to self-multiply (due to the FOE node of the pair shot by Y) and, simultaneously, the part of it which has already self-multiplied starts to react with the application node A of the pair.

This makes the use of Y spectacular but also problematic, due to its exuberance. That is why, for example, even if the lambda term for the Ackermann function reduces correctly in chemlambda (or see this 90 min film), I have not yet found a lambda term for the factorial which does the same (the behaviour of Y has to be tamed: it produces too many opportunities to self-multiply and it does not stop, although there are solutions for that, by using quines).

So it is not straightforward that mol files obtained from lambda terms reduce as expected: because of the mystery of the reduction strategy, because intermediary steps go outside lambda calculus, and because the graphs which correspond to terms that are simply trashed in lambda calculus continue to reduce in chemlambda.

On the other side, one can always modify the mol file, or pre-reduce it and use it as such, or even change the strategy of the algorithm represented by the lambda term. For example, recall that because of the lack of eta, the strategy based on defining functions and then calling them, or in general just currying stuff (for any reason other than future easy self-multiplication), is a limitation (of lambda calculus).

These lambda terms are written by smart humans who streamline everything according to the principle: turn everything into a function, then use the function when needed. Looking at what happens in chemlambda (and presumably in a processor), most of this functional bookkeeping is beautiful for the programmer, but a limitation from the point of view of the possible alternative graphs which do the same as the functionally designed one, but more easily.

This is of course connected to chemistry. We may stare at biochemical reactions, well known in detail, without being able to make sense of them, because things happen by shortcuts discovered by evolution.

Finally, the main advantage of a mol file is that any collection of lines from the mol file is also a mol file. The free edges (those which are capped by the script with FRIN (i.e. free in) and FROUT (i.e. free out) nodes) are free as a property of the mol file where they reside, so if you split a mol file into several pieces and send them to several users then they are still able to communicate by knowing that this FRIN of user A is connected (by a channel? by an ID?) to that FROUT of user B. But this is for a future step which would take things closer to real distributed computing.
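
[The “any collection of lines is a mol file” property can be sketched as follows. One simplification, flagged in the comments, is an assumption of this illustration: every free edge is capped by FRIN, without the FRIN/FROUT distinction, which in the real script depends on port orientation:]

```python
from collections import Counter

def free_edges(mol_lines):
    # in a mol file each edge name occurs exactly twice (two ports);
    # an edge occurring only once in this collection of lines is free
    counts = Counter(tok for line in mol_lines for tok in line.split()[1:])
    return sorted(e for e, c in counts.items() if c == 1)

def cap(mol_lines):
    # cap the free edges; simplification: everything gets FRIN, while
    # the real script distinguishes FRIN (free in) from FROUT (free out)
    return mol_lines + [f"FRIN {e}" for e in free_edges(mol_lines)]

whole = ["A a b c", "L b d e"]     # a piece cut out of a bigger mol file
assert free_edges(whole) == ["a", "c", "d", "e"]
assert free_edges(cap(whole)) == []   # after capping, no free edges remain
```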

And to really close: if we put a molecule on every computer linked to the internet, then the whole net would be as complex (counting the number of molecules) as a mouse. But that’s not fair, because the net is not as structured as a mouse. Say, as a handful of slime mold. That’s how smart the whole net is. Now, if we could make it smarter than that, by something like a very small API and a mol file as big as a cookie… ”

_________________________________________________________________________

Filed under: distributed GLC Tagged: artificial chemistry, artificial life, chemlambda, distributed computing, lambda calculus ]]>

or the live version, funnier.

Of course, an apology of the no semantics idea.

Filed under: distributed GLC ]]>

The form of the article as a means of disseminating research is more and more questioned. As an example, I liked Idiot things that we do in our papers out of sheer habit by Mike Taylor.

An article is only the tip of an iceberg of results, proofs, experiments, software and hardware. There are more and more platforms of publication, or better said dissemination, where articles come together with auxiliary data.

In math there is the HoTT book example, the result of a wonderful collaboration on github, which gives not only data, but also programs.

Let’s think about a hypothetical article about the smell of the rose. In reality that smell is a manifestation of a host of chemical reactions in the rose, in the nose, in the brain, etc. Taking the HoTT book as an example, in the hypothetical article we would write about these reactions and other phenomena, we would add data, methodology explanations, and … why not the “smell program” itself: that is, not only a static description of the chemical reaction networks involved in the smell process, but a simulation of it as well.

That would be great: instead of talking about it, we could experience it, tweak it, comment on it!

It is technically possible, but is there somebody who does it?

I am motivated to ask this question because of a concrete need I have.

I’m preparing a web document which is something in between an article and a (say) remark.js slide show, which uses the demos from here http://chorasimilarity.github.io/chemlambda-gui/dynamic/demos.html Now, how could I submit something like this for peer review? That’s the question. Just the text, without the dynamic explanations, is too bland. Just the demos, with as many text explanations as possible, are not in the article ballpark. Just the programs from the github repository: that’s not inviting.

But if it is possible to make it, why not try it?

_________________________________________________________________________

Filed under: Uncategorized ]]>

I came up with this. (See more at chemlambda vision page.)

They are artificial microbes that can be created to make a computer do anything, without knowing what they are doing and without needing supervision while doing it.

They are not viruses, because viruses need a host. Computer viruses have the OS as the host.

They are the host. Together they form a Microbiome OS, which is as unique as your own biological microbiome, shaped by the interactions you had with the world.

Because they don’t know what they are doing, it is as hard for an external viewer to understand the meaning of their activity as it is for a researcher looking through a microscope to understand the inner workings of real microbes.

Because they don’t need supervision to do it, they are ideal tools for the truly decentralized Internet of Things.

They are the means towards a cloudless future.

_____________________________________________________________

Filed under: Uncategorized Tagged: chemlambda, microbes, Microbiome OS ]]>

Please be as harsh as possible. Thank you!

I am waiting for your comments. If you want to make a private comment then add in your message the following string

pe-240v

and the comment will go to the moderation queue.

If you have not made any comments here, until now, then by default the comment goes to moderation.

So, please mention in the comment if you want to keep it private.

Assessment for what?

or anything about chemlambda.

This is a big project. I see people are interested in more advanced stuff, like distributed computing, but they usually fail to understand the basics.

On the other side, I am a mathematician learning to program. So I’m lousy at that (for the moment), but I hope I make my point about the basics with these demos and help pages.

_________________________________________________________________________

Filed under: Uncategorized Tagged: chemlambda, open peer review ]]>

Bookmark these pages if you are interested, because there you shall find new stuff on a day-by-day basis.

______________________________________________________

Filed under: Uncategorized Tagged: artificial chemistry, artificial chemistry visualizations, artificial life, chemlambda, d3.js, demos, gui, visual tutorial, visualizations ]]>

I just started new pages where you can see the latest live computations with chemlambda:

- a 20-node creature, which I previously qualified as a quine but is not, struggles to survive in a random environment (random reduction method) here
- the reduction of the predecessor function from lambda calculus turned into a chemlambda reduction (random too) here
- the self-multiplication of the S combinator in random conditions here
- the reduction of Ackermann(2,2) in the random model here (this is the one used for the video from the last post).
- a complex reduction in chemlambda. Here is the recipe:

– you can write the Y combinator as an expression in the S,K,I, combinators: Y = S (K (S I I)) (S (S (K S) K) (K (S I I)))

– so take this expression and apply it to the identity I. In combinatory logic this should reduce to something equivalent to YI, which then reduces forever, because it does not have a normal form

– but we do something funnier: all this long string of combinators is transformed into a chemlambda molecule, and we add on top of it a FO node which makes the whole big thing self-reproduce.

So, we have a bunch of reductions (from the long expression to YI) in parallel with the self-reproduction of the whole thing.

Now, what you see is this, in a model of computation which uses a random reduction strategy!

See it live here.

The sources are in this github repository.

_______________________________________________________________________

Filed under: Uncategorized Tagged: artificial life, chemlambda, gui, lambda calculus ]]>

This video contains two examples of the computation of the Ackermann function by using artificial chemistry. The (graph rewriting) rules are those of chemlambda and there are two models of computation using them.

In the first one the rules are applied deterministically, in the order of their importance. The rules which increase the number of nodes are considered more important than those which decrease the number of nodes. The rules are applied in parallel, as long as there is no conflict (i.e. as long as they don’t apply to the same node). When there is conflict the rule with higher importance takes the priority.
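
The conflict rule described above can be sketched like this. The move and priority representations are placeholders; only the selection logic is the point:

```python
def deterministic_step(moves, priority, nodes_of):
    # apply all non-conflicting moves in parallel; on a conflict (two
    # moves touching a common node) the more important move wins
    chosen, used = [], set()
    for m in sorted(moves, key=priority):      # most important first
        if used.isdisjoint(nodes_of(m)):
            chosen.append(m)
            used.update(nodes_of(m))
    return chosen

# toy example: a node-increasing move beats a node-decreasing one on node 2
moves = [("shrink", {2, 3}), ("grow", {1, 2})]
prio  = {"grow": 0, "shrink": 1}               # lower rank = more important
picked = deterministic_step(moves, lambda m: prio[m[0]], lambda m: m[1])
assert picked == [("grow", {1, 2})]
```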

In the first part of the video you see what happens if this model is applied to a graph which represents (according to the rules of chemlambda) the Ackermann function applied to (2,2). The expected result is 7 (as it appears in the Church encoding, which is then transformed into the chemlambda convention). There are no tricks, like pre-computing the expression of the function; everything goes at a very basic level. The application of the rules does not parallel the application of lambda calculus reduction rules to the lambda term which represents Ack(2,2), under any reduction strategy. However, the result is obtained correctly, even if many of the intermediary steps are not graphs which represent a lambda term.

Moreover, the model does not use any variable passing, nor any evaluation strategy!

The second example uses a different model of computation. The rules are applied randomly. That means the following. For any configuration of the graph which may be subject to a rule, a coin is flipped and the rule is applied with probability 50%. The rules are equally important. The rules are still applied in parallel, in the sense that an update of the graph is done after all edges are visited.

As you see in the second part of the video, the process takes longer, because essentially at each step fewer rules are applied to the whole graph. The comparison is not very accurate, because the reduction process may depend on the particular run of the program. Even if lambda beta calculus (with some model of reduction) is confluent, chemlambda is surely not. It is an open problem whether, starting in chemlambda from a graph which represents a lambda term that has a normal form in lambda calculus, the random model of computation always eventually arrives at the graph which represents that normal form.

At least for the term I use here for Ack(2,2), it looks like it does. This is of course not a proof.

**UPDATE:** A quine is reduced in chemlambda, first using a deterministic model of computation then using a model which has a random ingredient.

These two models are the same as the ones used for the Ackermann function video.

The quine is called the 9_quine; it has been introduced in https://chorasimilarity.wordpress.com/…

In the deterministic reduction you see that at each step the graph reduces and reconstructs itself. It goes on forever like that.

In the random reduction the process is different. In fact, if you look at the list of reductions suffered by the 9_quine, you see that after each cycle of reduction (in the deterministic version) the graph is isomorphic to the one before, because there is an equilibrium between the rules which add nodes and the rules which destroy nodes.

In the random version this equilibrium is broken, therefore you see how the graph grows either by having more and more red nodes, or by having more and more green nodes.

However, because each rule is applied with equal probability, in the long term the graph veers towards a dominant green or towards dominant red states, from one to the other, endlessly.

This is proof that the reductions in chemlambda vary according to the order of application of moves. On the other side, this is evidence (but not proof) that there is a sort of fair effort towards eventual confluence. I use “confluence” in a vague manner, not related to lambda calculus (because the 9_quine does not come from a lambda term), but more related to the graph rewriting world.

_________________________________________________________________________

Filed under: Uncategorized Tagged: Ackermann function, artificial life, chemlambda, lambda calculus, models of computation, random ]]>

UPDATE: and a nice video about the Omega combinator.

… and there is more.

I took two different reduction strategies for the same artificial chemistry (#chemlambda) and looked what they give in two cases:

– the Ackermann function

– the self-multiplication and reduction of (the ensuing two copies of) the Omega combinator.

The visualization is done in d3.js.

The first reduction strategy is the one which I used previously in several demonstrations, the one which I call “stupid”, also because it is the simplest one can imagine.

The second reduction strategy is a random variant of the stupid one, namely a coin is flipped for each edge of the graph, before any consideration of performing a graph rewrite there.

The results can be seen in the following pages:

– Ackermann classic http://imar.ro/~mbuliga/ackermann_2_2.html

– Ackermann random http://imar.ro/~mbuliga/random_ackermann_2_2.html

– Omega classic http://imar.ro/~mbuliga/omegafo.html

– Omega random http://imar.ro/~mbuliga/random_omegafo.html

I’ve always said that one can take any reduction strategy with chemlambda and that all of them are interesting.


___________________________________

Filed under: distributed GLC Tagged: artificial chemistry, chemlambda, universal constructor ]]>

How to simulate a move with a quine. Let’s think.

The starting point is a mol file which describes the graph. Then there is a program which does the reductions. We should see the program (or the specific part of it which does a move) as the enzyme of that move. OK, but a program is something and a mol file is a different thing.

Recall though that chemlambda is Turing universal. That means in particular that for any particular mol file A.mol and any particular program P which reduces mol files, there is a chemlambda molecule and a reduction algorithm which simulate how the program reduces the mol file. Kind of bootstrapping, right?

Yes, but let’s say it again: given A.mol and P, there exists B.mol such that (P reduces B.mol) simulates the computation (P reduces A.mol).

In particular there is a part E.mol of the B.mol file (or more abstractly a subgraph of the molecule B) whose reductions in the “context” B simulate the part of the program P acting on A and doing a reduction.

That part is actually a molecule, just like A, which can be thought of as the enzyme of the move (which is implemented in P).

Then, instead of making a (theoretically possible) algorithm which generates the file B.mol from A.mol and P, we may do something simpler.

We would like to use a quine as E.mol.

That is because it respects the chemical analogy in the sense that the reduction of E.mol preserves the number of nodes (atoms).

For this we look again at what is a move, and also at what new primitive we need in order to make all this to function.

A move is described by a pair of patterns (i.e. molecules), called the LEFT and RIGHT patterns, with the property that there is a pairing between the free ports of the LEFT pattern and those of the RIGHT pattern.

The move then is implemented by an algorithm which:

- does a pattern matching for the LEFT pattern
- then replaces the LEFT pattern by the RIGHT pattern.

(I should add that not every instance of the LEFT pattern in A.mol is replaced by a RIGHT pattern, because there may be conflicts, created by the fact that a node of A.mol appears in two different LEFT patterns, so there is a need for a criterion which decides which move to do. I neglect this (you’ll see later why); roughly, I concentrate on the move, because the criterion (or priority choice) is a different problem.)
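
The two steps can be sketched on mol lines as a generic rewrite. The beta-like patterns and their port order below are illustrative assumptions, not the official chemlambda convention:

```python
def apply_move(mol, left, right, match):
    # `left`/`right` are the LEFT and RIGHT patterns (mol lines over
    # pattern variables); `match` is the binding found by pattern matching.
    # The LEFT instance is removed and the RIGHT instance, instantiated
    # through the same binding (the pairing of free ports), is added.
    inst = lambda pat: [" ".join(match.get(t, t) for t in line.split())
                        for line in pat]
    removed = inst(left)
    return [l for l in mol if l not in removed] + inst(right)

# toy beta-like move: an L/A pair is replaced by two Arrow rewirings
LEFT  = ["L x y u", "A u v w"]
RIGHT = ["Arrow y w", "Arrow v x"]
mol   = ["L a b c", "A c d e", "FO e f g"]
new   = apply_move(mol, LEFT, RIGHT,
                   {"x": "a", "y": "b", "u": "c", "v": "d", "w": "e"})
assert new == ["FO e f g", "Arrow b e", "Arrow d a"]
```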

Aha! We need the possibility to make pairings of edges, or even better (given the mol format) we need a way to make pairings of ports.

I shall use the notation a:b for such a pairing. Notice that:

- we see such pairings when we say that x is of type a, i.e. x:a (recall that one can see types as pairings between edges of molecules and edges of other molecules which represent types from the Calculus of Constructions)
- the pairing a:A, saying that the edge a belongs to the actor A, is a pairing (induces a pairing) between the edges of a molecule and the edges of an actor diagram in distributed GLC
- the pairing a:b, where a is an edge of a molecule and b is an edge of another molecule which represents an emergent algebra term (see the thread on projective conical spaces), says that edge “a” is in place “b”
- the pairing a:b can be the consequence of a communication; non-exclusively, one may think of a:b as a consequence of c?(b) | c!(a)
- in conclusion, moves are particular pairings coupled with an algorithm for associating a LEFT pattern to a RIGHT pattern, pretty much as typing is a pairing with another algorithm, or as places (i.e. emergent algebras and their reductions) are of the same kind; the algorithm which does the pairing is a matter of communication procedure and it can be implemented by process calculi, actor models or whatever you like best.

So, let’s pick a small quine as E.mol, like a 9- or 10-quine which has one LEFT pattern for (say) the beta move, and let’s do the following:

- pair a LEFT pattern from A.mol with the same (pattern matching) from E.mol
- reduce E.mol (bootstrapping) and get E’.mol, which is identical to E.mol up to renaming some of the port (i.e. edge) names
- update the pairing (which is akin to the intention of replacing the LEFT pattern by the RIGHT pattern)
- let the program in charge of A.mol to rewire (apply the update)

This is perhaps too short an explanation; I will come with details from A to Z next year.

Wish you the best for 2015!

__________________________________________________________________________

Filed under: distributed GLC Tagged: chemlambda, enzyme, graph rewriting systems, pattern matching ]]>