Answer from ANR concerning the ANR Bigben project

For context see Problems with the ANR Bigben project.

The presidency of the French Agence Nationale de la Recherche kindly answered my demand for a reexamination of the awarded ANR Bigben project. [added: … after several previous interesting mail exchanges with higher and higher ANR representatives; one argument impressed me a lot, and perhaps it deserves a full discussion, because after all we want Open Science to win.]

Here are the two parts of the answer: avis (pdf) and letter (pdf).

Here is my reply to the ANR answer (links added and [text added between brackets here]):

Thank you for the precise response and for the time spent by ANR on this subject. For the scientific part, there is an article in preparation.

Here are some short remarks.

  1. There are no “generalized bipotentials”. The name was invented by the project leader to fit his competences. The source of the theoretical foundation which gives the name to the project is arXiv:1902.04598, [On the information content of the difference from hamiltonian evolution], where Proposition 1.3 explains the appearance of what the project leader now calls “generalized bipotentials”. This is not referenced in the project.
  2. For the experts: a bipotential is always relative to the duality chosen. What matters is the difference between the bipotential and the duality. Change the duality and you obtain a change of bipotential, by a subtraction of the old duality and an addition of the new one. Is this a theoretical advance towards “generalized bipotentials”?
  3. The subject of my hamiltonian inclusions is old (2008) [Hamiltonian inclusions with convex dissipation]. It was turned into “symplectic BEN” during a collaboration with the project leader, where the connection with the Brezis-Ekeland and Nayroles principles was made by neglecting the inertial terms. But the reduction of hamiltonian inclusions, or SBEN if you like, to BEN is misleading. Back in 2014, when Djimedo Kondo, a member of the bigben team, was introduced to the subject, [slides of the d’Alembert seminar from 2014], [figshare], he immediately remarked that the reduction of SBEN to BEN has problems. Indeed, what the leader de Saxce claims is that everything reduces to the minimization of a cost functional over all evolution curves. But the cost, as easily remarked by Kondo, is infinite for most of the trajectories, unless one already satisfies the dynamic equations. Neglecting the inertial terms does not solve the problem, because the same kind of infinities forces one to consider the satisfiability of the trajectory (as in contact or some plasticity or damage problems). Ever since 2014 it was clear that the “SBEN is BEN” idea of de Saxce cannot work in practice, except for some trivial examples. In this project the leader de Saxce wants to pursue the same. I claim that other ideas are needed (some of them I have, but am I willing to make them public again, without attribution?).
  4. I am of course willing to see my work developed and, moreover, enhanced by meaningful collaboration. That has not been the case so far. There is also the ethical aspect, which I refrained from mentioning because the scientific case is almost enough. That is why I recently refused the “visio” meetings: the previous ones, where I protested against “SBEN is BEN” or where I was given assurances about the details of collaboration, amounted to nothing. I am now accused, in messages leaked by inadvertence by de Saxce, of having strange financial demands, when in reality I asked for a schedule and details just like the other members of the project. This is a lack of collegiality which I take very seriously.
  5. Therefore, I take as very positive the interest in hamiltonian inclusions, whether under the name SBEN or any other. It is also positive that some work on arXiv (not any work, only the useful one) is appropriated, thus recognizing its value.
  6. I take as negative the lack of attribution. I staked my career on these Open Science ideas; do I need to see my work being used without proper attribution? I admit, though, that from the point of view of ANR, or any other management organization, it would be risky to accept as valuable any arXiv, say, “preprint” (and probably it would mean the death of arXiv itself, due to low quality submissions). But in this case, clearly, this work is valuable.

With best regards to all the members of this discussion,

Marius Buliga

_____

After the response, a final comment: it would be nice if the reputed ANR took a step towards acknowledging more Open Science, just as, more than 100 years ago, French society accepted impressionism in art.

The history of that art movement has been an inspiration for a long time, see Boring mathematics, artistes pompiers and impressionists.

You can’t watch chemlambda animations here

Because WordPress now cuts them and keeps only a very few frames. I don’t know when they started doing this. As the posts of this blog are viewed no matter how old they are, this is not constructive.

You can watch them and play with them if you go to

http://imar.ro/~mbuliga/collection.html

or, with smaller-sized animations but over https if you need babysitting, to

https://chemlambda.github.io/collection.html

UPDATE (11.01.2023): Today is the 3-year anniversary of the salvaged collection. See this post and then the post about the long DDoS attack on the collection in Jan-Feb 2020.

Five questions for 2023

We are coming out, slowly, from an age of confusion.

We are not naive dreamers when we support Open Science.

Here are five personal questions for the next year.

  1. Can we steal from Open Science? This is the question which has to get an answer for the ANR BIGBEN project. UPDATE: so far the Agence Nationale de la Recherche has answered me that the purpose of Open Science is to be appropriated by the community… huh?
  2. Will the private sector engage in supporting Open Science? For them OS is both the archenemy (how do I make money if my work is not protected by IP?) and the main source of ideas. When they say that “ideas are cheap”, they really mean “my work is to scale an idea until it becomes successful, not to create new ideas”. Respect for scaling, but new ideas are the secret of future success. So far the private sector seems bent on destroying all the sources of new ideas.
  3. Who will make the software infrastructure to share, collect and study microbiome data in real time, globally? Molecular Reality is the nanopore sensor I dreamed about, along with a dna printer, as the I/O for the molecular computers. At some point somebody will put something like this on phones. More importantly, those who will win the competition of apps and software infrastructure will face not only extreme wealth, but also Open Science questions.
  4. Will AI chatbots replace search engines? Recall that all those chatbots are trained on public data. While it makes perfect commercial sense to give the masses mediocre answers to any question, it will also increase the general dumbness and conformity. So, the last question is:
  5. Will Sci-Hub, LibGen or other emerging hackers produce a science-friendly alternative? Science always grows and thrives on the fringe. Where people are OK with conformist answers, researchers can’t help but look for a crack to open the shell and go into unknown territory.

Matematica foris

Randomness is a theory of the rest of the world. Fortuna is a function which converts the outside (foris) into a number.

[source] Enough with old stories. What about pseudorandom number generators (PRNG)?

Randomness is everywhere. There is a clear advantage of random algorithms over deterministic algorithms. We can turn a random algorithm into a deterministic one by using a PRNG as the source of randomness.

There is another related thread, namely decentralized computing. Indeed, we take for granted that we can model decentralized computing via asynchronous automata. But an async automaton simulates decentralized computing only if we accept the hypothesis that, from the point of view of one user, actor, etc., the rest of the world which participates in the computation behaves randomly.

If we put together the two ideas:

  • that we can turn a random algorithm into a deterministic one via a PRNG,
  • that we can model the rest of the world as random,

then we arrive at the following: a PRNG is a model of the rest of the world.

Usually we make a model of something of interest, a phenomenon or process, by a simplification and a formalization of said phenomenon or process. Now, for some models we also need to place them in the world, and therefore a PRNG seems like an interesting choice for doing that.

Of course, a PRNG is used as a source of randomness, but the output or state of the algorithm also has an influence on the rest of the world, therefore it should be fed back as a salt for the PRNG.

All in all, it may happen that most of the computation spent in the simulation of the model goes to the PRNG.
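The two ideas above can be sketched in a few lines of js. This is only an illustrative toy, not code from the blog: mulberry32 is one common small PRNG, and the names `randomizedPick` and `step` are my own, with the "salting" modeled by mixing the algorithm's output back into the seed.

```javascript
// mulberry32: a small, well-known 32-bit PRNG. The seed stands for
// "the rest of the world" as seen by the algorithm.
function mulberry32(a) {
  return function () {
    let t = (a += 0x6D2B79F5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// A randomized algorithm (here, a random pick from an array) becomes
// deterministic once its source of randomness is a seeded PRNG.
function randomizedPick(arr, rng) {
  return arr[Math.floor(rng() * arr.length)];
}

// "Salting": the output produced by the algorithm is mixed back into
// the seed, so the model and its "world" influence each other.
function step(state, seed) {
  const rng = mulberry32(seed);
  const picked = randomizedPick(state, rng);
  const newSeed = (seed ^ picked) >>> 0; // the state salts the PRNG
  return { picked, newSeed };
}
```

Run `step` with the same state and seed twice and you get the same result: the random algorithm plus the PRNG "world" is fully deterministic, yet most of the arithmetic above is spent inside the PRNG.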

Problems with the ANR Bigben project

Due to the unethical behavior of de Saxce, the principal investigator of the project Bigben, recently funded by the French Agence Nationale de la Recherche (ANR), I asked the ANR for a reevaluation of the project and a public response.

My work on hamiltonian inclusions, aka SBEN, is central to this project and its main novelty. After winning the ANR competition, the principal investigator misrepresented my work and engaged in unethical behavior. I keep the correspondence which proves this, for the interested colleagues, although I would rather hope that ANR takes steps to self-regulate in this matter.

I shall update with the ANR response or reaction, if any.

UPDATE: ANR kindly answered, see this post, but not exactly to my questions, so I replied.

As for the scientific part, a detailed explanation will be available. I am sad that a beautiful principle of dissipation as minimal disclosed information is dumbed down to an old idea. The Brezis-Ekeland-Nayroles (BEN) principle in quasistatic plasticity is just a particular example of my general theory (and the only new contribution of de Saxce) and, sadly, not a feasible way to exploit the hamiltonian inclusions, except in the most trivial situations.

To transform the hamiltonian inclusions into symplectic BEN, then into generalized bipotentials (BIG) BEN, is only a game where, by slight name changes, de Saxce tries to appropriate my ideas. There is no scientific content in these name changes or particular examples.

Even the names are misleading; for example, there are no generalized bipotentials, they are the same ones with respect to the symplectic duality (my work, not de Saxce’s). The point is not about bipotentials!

One needs to show how this principle can be used for simulations and for this there exist other, new ways.

Here you can see slides from 2014 and all the actors of the present project.

For my work on this subject see:

[1] M. Buliga, Hamiltonian inclusions with convex dissipation with a view towards applications, Mathematics and its Applications 1, 2 (2009), 228-251, arXiv:0810.1419

[2] M. Buliga, G. de Saxce, A symplectic Brezis-Ekeland-Nayroles principle, Mathematics and Mechanics of Solids 22, 6, (2017), arXiv:1408.3102

[3] M. Buliga, A stochastic version and a Liouville theorem for hamiltonian inclusions with convex dissipation (2018), arXiv:1807.10480

[4] M. Buliga, On the information content of the difference from hamiltonian evolution (2019), arXiv:1902.04598

Molecular computer repository contains scripts and text on asemantic computing

I come back to the release of the repository “molecular”. From the readme:

Molecular computers with interaction combinators like graph rewriting systems

Marius Buliga homepage 1, homepage 2, arXiv, figshare

DOI

Problem

How to use real chemistry to build molecular computers which are based on graph rewriting systems like chemlambda, chemSKI or Lafont Interaction Combinators.

This was first suggested in the article Molecular computers, also (arXiv) (figshare), where ackermann(2,2) is computed as an example.

Why

In computer science and logic we use graph rewriting systems in relation to computation, lambda calculus, functional programming, etc. We now have a small list of interesting graph rewriting systems which allow universal computation. It is remarkable that some graph rewriting rules appear to be everywhere.

Graph rewriting systems are a very promising direction for building decentralized computing systems, as well as for other programming, logical or mathematical subjects.

____________

Some interesting details:

Preying on feeble minds of rich crypto people

Computational bureaucracy eats the world. Being computational, it can also be hacked. Clever people find hacks and become incredibly rich.

Now they have to believe in something. They saw the ugly nothingness which is behind the [input name of hacked society mechanism] but it is hard to accept that there is no meaning in worldly success. So they look for such a meaning.

There are people who prey on the feeble minds of rich crypto guys. They show them diagrams which infect them with categoricitis. They bureaucratize morals and give them business names like effective altruism. What else? Ah, there are some rich but not young people who behave like they believe their own shit (but they don’t really like mathematics). That’s a category in itself, more interesting.

Listen. Don’t fall for speedy explanations: ether, applied category theory, cellular automata, synchrony in the brain, “open” instead of “free”. Don’t you see? Look at the people who advance that… the same ones.

Take time, crypto guys. Think… ponder… learn… Or be a prey, your choice.

JD Replicator

The JD replicator page is a response to the kind request of Joel Dietz, the creator of metalambda.org, for his metaseminar 0 (link to be updated if transcript available).

Take a look at this replicator molecule. More precisely, it behaves like a kind of polymerase.

It has the property that it produces a random number of pairs of copies from any template molecule coming from a lambda term.

For example, if we take as template the Omega combinator molecule, glued to the JD replicator it looks like this

(to the left the JD replicator, to the right and down the Omega combinator)

The Omega combinator molecule reduces forever (it is a chemlambda quine), but still! With the JD replicator we get a random number of pairs of Omega combinators which are active! Example:

They don’t look the same? Yes, you are right: the snapshot shows various stages of the reduction of the Omega combinator. They are not synchronized; they are, each, active, in a random state of its own evolution.

So they are, indeed, each of them, copies of the same living creature.

Go to the page for more detailed explanations, or to play with the simulations available down the page.

Released: molecular repository, asemantic computing

I just released on Github and zenodo.org the repository

https://github.com/chemlambda/molecular

The repository contains awk and js scripts as a proof of concept for molecular computers based on certain special graph-rewriting systems, like chemlambda, and text which explains why this type of interaction-combinators graph rewrite is relevant for molecular computing.

Also released are a challenge to use real chemistry to compute some values of the Ackermann function, and the ideological background for such efforts, which I call “asemantic computing”.

The scripts are usable, the texts are readable, and now they can also be cited with a DOI.

Happy and working

working, working, working with pen and paper, very refreshing, very happy….

Do you know who’s happier than a mathematician working with pen and paper? A mathematician working in front of a big blackboard, with a good provision of chalk 🙂

After so much time in front of screens, I thought I lost this pen and paper pleasure. But no!

There is internet as well and I’m reasonably chatty and easy to find. Very important channel, I learn a lot lately.

IRL these first generation replicants will have blue blood (molecular computers 5)

If click chemistry exists and it got a Nobel prize (explanation), then we could build molecular computers today!

(image from page 3 of the explanation)

Recall that, according to this proposal, first version here (2015), we could try to make individual molecules which chemically react in a way analogous to graph rewrites in interaction combinators, chemlambda or the graph rewriting of your choice.

With all this copper, the first generation replicants will have blue blood. Recall the replicants were discovered in 2016.

This post continues after molecular computers reading list 4.

Space as a chemical computation (talk)

I’ll give the following talk on sept 8th 2022, in the Wolfram Physics Project colloquium on chemical computing. Mail me if interested.

[Thanks to Xerxes Arsiwalla for the invitation and for the preparation of the following text starting from my input]

Update (14.09.2022): First thoughts after the talk here.

Update (26.09.2022): Link to the download of the talk here.

________

Speaker: Marius Buliga

Title: Space as a Chemical Computation

Abstract: 

Some years ago I made a splash in social media with hundreds of artificial life animations, obtained from simulations in chemlambda, an artificial chemistry [1]. Part of them are saved here [2] [3]. They were interesting because they were about lambda calculus, quines, birth and death, duplication and metabolism… They suggested it is possible to try to make molecular computers with Interaction Combinators-like rewrites.

But in the background this artificial chemistry (or simply asynchronous graph rewriting automata) comes from the effort to understand space. This is what I want to explain in this talk.

Indeed, I want to pass from sub-riemannian geometry, to dilation structures, to their computational content, in order to explain how we can understand space itself as a chemistry.

More precisely, I argue here that space may not be a huge graph which dynamically evolves by graph rewriting, instead what we perceive as space is a semantics, or an algorithmic edge decoration of small, universal graphs which interact as molecules do.

[1] https://chemlambda.github.io

[2] https://chemlambda.github.io/collection.html (needs javascript enabled)

[3] http://imar.ro/~mbuliga/collection.html (needs javascript enabled)

Speaker Bio:

Marius is a researcher at the Institute of Mathematics of the Romanian Academy. His PhD was on the use of the Mumford-Shah functional to model the appearance and propagation of brittle fractures. He also studied quasi-convexity in nonlinear elasticity (think about big deformations, like in rubber), which is related to minimizing energy functionals over infinite dimensional groups. With other French colleagues he mixed convex analysis and hamiltonian mechanics into the new symplectic Brezis-Ekeland-Nayroles principle for non-smooth, dissipative systems.

Marius worked on intrinsic characterizations of sub-Riemannian geometry, which is actually a model for a non-commutative differential calculus, described by dilation structures. Trying to understand the computational content of dilation structures, he got interested in distributed, decentralized computing, biology and artificial life. These are related because, if you try to put geometry back into computation, you get models which strongly resemble both molecular computing and decentralized networks. The artificial chemistries chemlambda, chemSKI, zip-slip-smash are his latest toys.

3d duplication mechanism

Joel Sjogren proposes this 3d form of the duplication (or DIST) rewrite. The page uses three.js.

Look at the LHS and at the RHS. Due to three.js you can scale and rotate the models.

What you see in the models:

  • The four boundary circles are the external half-edges.

In the LHS there are two pairs of pants, each corresponding to a trivalent node.

In the RHS there are four pairs of pants, so four trivalent nodes.

HAVE YOU SEEN THIS BEFORE?

In my opinion, but I may be wrong, this represents a distributive identity related to surgery.

UPDATE: Kauffman points out that this is a Frobenius relation, see it in the middle of

https://math.ucr.edu/home/baez/week268.html

UPDATE 2: Two things: (1) it is not exactly a Frobenius equality, but close, and (2) from the computation point of view it does correctly point out where duplication comes from in that particular theory. So it is not an explanation of duplication in general; instead it is an instance of duplication in a particular equational theory. Moreover, equational theories are pretty bad for explaining computation (because one “=” corresponds to the existence of an unspecified, arbitrarily long computation), and this theory introduces many other specific relations. Also, the algorithm of rewrite application is missing in this theory. In conclusion, the algebraic Frobenius structure contains an instance of the more general duplication phenomenon, here the one of pairs of pants.

More generally, the duplication seen in this Frobenius relation (or whatever equation that is) is just an instance of the FO duplication, as explained in the duplication confusion posts.

It would be more interesting to exploit Joel’s cobordism rewrite directly for computation, i.e. to skip the algebraic path. What would be the beta rewrite? How do you color the pants? With metrics? With complex structures? Can you duplicate any surface? What are surface quines? What is a busy beaver surface? What is a halting surface, what are the normal forms of surfaces and what sufficient conditions are there for their existence? Etc…

Good work and return to the normal

This is just to tell that I’m good, working on new as well as old stuff. The pandemic time wears off. It is not a good time for open science, nor for the net, but who knows? Later, maybe? I’m optimistic though; it is time for that kind of new which looks classic when you look at it.

Then we, or they will speak, those who can still read, focus, think, study, those who have passion, who have talent. Peace.

On reality again

In dialogue form [source of the start]

Remember the question if the moon is still there if nobody looks at it?

It is not this, but rather: is a map of the moon useful in any way if nobody uses it? Trivially no.

Is there reproducible evidence about the moon? Yes. So then we can go to the moon and leave footprints on it.

But suppose that nobody ever looked at the moon, then what? Then we don’t have evidence about the moon, therefore there is no discussion about it. The moon is not real in this situation.

Ok, let’s continue. Nobody ever looked at the moon therefore the moon is not real. But then it falls on your head and squashes you, how can it do that if not real?

Something big falls on my head and squashes me. The survivors discuss that, and the felt moon is now real.

Ok, but we know that the moon creates the periodic sea rising. If nobody looking at the moon means that the moon is not real, then how can the moon create the tides?

Then it is real, but we don’t know about the moon. For us the relation with the moon is unreal.

Ok, on some planet a rock falls in a pond of methane, real or not?

Real from the moment we discuss about it.

But can we objectively say that the rock fell in the pond?

Yes, if we have reproducible evidence about that. For example from lab rocks consistently falling into lab ponds. But unless we want to meddle with that rock on that pond, the scientific prediction produces no real effect.

It is so simple. We don’t have to suppose that we control reality in order to reason about it. That is just an imperial, industrial-revolution, war-induced habit of mind.

Ok, so you just say that we reason better if we think about reality as a live discussion or live performance, as in a theater, and about objects and the objective as recordings or movies.

Yes.

But this is against the power of bureaucracy, desired by anyone. I’m going to erase you and any evidence about you.

My hope is that this is a reproducible finding, therefore it does not matter what you do.

Let’s be less dramatic, what’s so important about reproducible evidence?

It is important because the scientific method defines objective any evidence which can be reproduced independently.

Suppose a physicist produces evidence. If another person just copies the pdf of the article, this is not reproducible evidence.

The other person has to make this reproduction real. After the other person takes the tiger by the tail to see if it happens as the first person claims, we got reproducible evidence.

Now it is clear that reproducible evidence implies that we get the expected prediction. That is why it is important.

But reality can be modified by unscientific evidence, in a predictable way. For example by conspiracy theories.

This is real. If we insist on finding out why that is so, then we have to get reproducible evidence about this phenomenon. In this particular case it means that we have to experiment on people, many times, independently, in order to get scientific evidence. Then we can use this to make predictions about other people or populations, which will modify them in the direction desired by us.

What a great power, how attractive that sounds.

But the experimenter may then find that the effect on the population is not as expected in the long term.

The long term is a bet or a hope; it is as real as a conspiracy theory is, but it is as much based on reproducible evidence as the conspiracy theory.

Reality is so rich that it cannot be cloned.

But we’ll use this power just this time.

No, we’ll use it now and then again later, until we arrive at no power, and later it is like “here lies Ozymandias”… This is evidence reproduced independently many times.

So we shouldn’t use it at all?

I don’t know, I suppose that if we have the power then we may use it, but always adapt and bow to reality. Don’t just inhibit it.

I don’t worry about this, there will always be some body who will try some thing.

This is real.

Molecular computers reading list 4

Continues after reading list 3.

Benenson, Y. Biomolecular computing systems: principles, progress and potential. Nat Rev Genet 13, 455–468 (2012). https://doi.org/10.1038/nrg3197

Abstract: The task of information processing, or computation, can be performed by natural and man-made ‘devices’. Man-made computers are made from silicon chips, whereas natural ‘computers’, such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing.

John R. Bracht, Wenwen Fang, Aaron David Goldman, Egor Dolzhenko, Elizabeth M. Stein, Laura F. Landweber, Genomes on the Edge: Programmed Genome Instability in Ciliates, Cell, Volume 152, Issue 3 (2013), Pages 406-416, ISSN 0092-8674, https://doi.org/10.1016/j.cell.2013.01.005 (https://www.sciencedirect.com/science/article/pii/S0092867413000068)
Abstract: Ciliates are an ancient and diverse group of microbial eukaryotes that have emerged as powerful models for RNA-mediated epigenetic inheritance. They possess extensive sets of both tiny and long noncoding RNAs that, together with a suite of proteins that includes transposases, orchestrate a broad cascade of genome rearrangements during somatic nuclear development. This Review emphasizes three important themes: the remarkable role of RNA in shaping genome structure, recent discoveries that unify many deeply diverged ciliate genetic systems, and a surprising evolutionary “sign change” in the role of small RNAs between major species groups.

Molecular computers reading list 3

Continues after the reading list 2.

Keywords search for: RNA self-cleaving, self-cleavage, self-ligation, scissile phosphate, ribozyme, also for: viroid, rolling-circle mechanism.

Also significant: “hammerhead ribozymes are located immediately downstream from the stop codon”. This suggests a mechanism where the left pattern of the rewrite and the necessary rewrite enzyme are encoded in an RNA string, which is copied and then cleaved to produce the left pattern and the enzyme which performs the rewrite to the right pattern. This is kind of similar to the knot notation, discussed here (and the linked posts about the duplication confusion).

To implement any of the small graph rewrite systems, one needs two self-cleaving events for one beta-like rewrite, and two self-cleaving and two self-ligation events for a dist-like rewrite.

But there are plenty of mechanisms, here are some small steps in the jungle of those, read by a baffled mathematician:

This search looks more and more analogous to the previous situation, when I asked for help from programmers and got lots of confused “impossible” or “how’s that different from _x_” reactions; then I had to do it on my own and it worked.

I am starting to believe that now it’s the same, but instead of simple programs there is a huge pile of chemistry to learn, only to learn later that it can be organized into a small, coherent formalism of recursive chemical reactions. But I don’t want to do it alone, help! I may not be able to do it, of course, despite my claim that I can do everything (time permitting). Or I may be able to do it, only to find later no interest from the chemists, because I am not, definitely, a chemist. So again: help! Run with the ideas, but don’t forget to mention the source!

Molecular computers reading list 2

Continues after reading list 1.

This time I’m interested in tRNA and, more generally, cloverleaf structures.

The source for this figure is this; read it and also explore the bibliography.

Natural or synthetic cloverleaf RNA structures are a simple candidate for 3-valent nodes with decorations.

Thank you for any notice about existing molecular computing which uses cloverleaf structures.

This and this are added to the reading list.

Molecular computers reading list 1

This is the first in a stream of posts about making a new kind of molecular computers. It is time to learn; you are welcome to join me.

[Update: go to the repository molecular.]

Here is the formulation of the problem and what is the concrete goal.

Problem: Create a graph rewrite system of the type of Interaction combinators with real chemistry.

Build a molecular computer in the sense of “one molecule which transforms, by random chemical reactions mediated by a collection of enzymes, into a predictable other molecule, such that the output molecule can be conceived as the result of a computation encoded in the initial molecule.”

Goal: Compute ackermann(2,2) or ackermann(2,3) with this system.

Or why not ackermann(3,2)? Or ackermann(4,4), to see what an ackermann goo looks like, macroscopically.

Once you can build one, you can build them all.
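For scale, here is a minimal js sketch of the Ackermann function named above (the function itself is the standard Ackermann-Péter definition; the code is only my illustration):

```javascript
// Ackermann-Péter function: total and computable, but not primitive
// recursive, which is what makes it a good benchmark for computation.
//   A(0, n) = n + 1
//   A(m, 0) = A(m - 1, 1)
//   A(m, n) = A(m - 1, A(m, n - 1))
function ackermann(m, n) {
  if (m === 0) return n + 1;
  if (n === 0) return ackermann(m - 1, 1);
  return ackermann(m - 1, ackermann(m, n - 1));
}
```

ackermann(2,2) = 7 and ackermann(2,3) = 9 are tiny, ackermann(3,2) = 29 is still small, but ackermann(4,4) already dwarfs the number of atoms in the observable universe, hence the macroscopic goo.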

Details: Graph rewrite systems such as Interaction Combinators, chemlambda, chemSKI or Zipper Logic are natural candidates for building molecular computers.

The idea is that a graph will be a molecule and a graph rewrite rule will be a chemical reaction

(LEFT PATTERN) + (TOKEN_1) + (ENZYME) = (RIGHT PATTERN) + (TOKEN_2) + (ENZYME)

which transforms the graph into a new graph (a chemical reaction mediated by an enzyme or catalyst, perhaps in the presence of input and output tokens).
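As a toy illustration of the reaction scheme above: the species names LEFT, TOKEN_1, TOKEN_2, RIGHT and ENZYME are the placeholders from the scheme, not real chemistry, and the well-mixed "soup" represented as species counts (and the function name `react`) is my own simplification.

```javascript
// Toy chemical soup: a multiset of species (species name -> count).
// The rule fires only when LEFT, TOKEN_1 and ENZYME are all present;
// the enzyme appears on both sides of the reaction, so it is conserved.
function react(soup) {
  const has = (s) => (soup[s] || 0) > 0;
  if (has("LEFT") && has("TOKEN_1") && has("ENZYME")) {
    soup["LEFT"] -= 1;                          // consume the left pattern
    soup["TOKEN_1"] -= 1;                       // consume the input token
    soup["RIGHT"] = (soup["RIGHT"] || 0) + 1;   // produce the right pattern
    soup["TOKEN_2"] = (soup["TOKEN_2"] || 0) + 1; // produce the output token
    return true; // the reaction fired
  }
  return false;
}
```

Starting from `{ LEFT: 1, TOKEN_1: 1, ENZYME: 1 }`, one call of `react` yields one RIGHT and one TOKEN_2 while leaving the ENZYME count unchanged, and a second call does nothing, since the left pattern is exhausted.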

The goal is to compute a function which is not trivial, like the Ackermann function, which is not primitive recursive. This was suggested in the article Molecular computers [arXiv] [figshare], where Ackermann(2,2) is computed as an example.

The real use case is to explore the limits of the Lafont universality (also here).

Question: are there already molecular computing techniques which can be used for turning a graph-rewrite system as described into reality?

This is the goal of the reading list, which starts now:

Evgeny Katz (editor), DNA- and RNA-Based Computing Systems , Wiley (2020)

Sundus Erbas-Cakmak, David A. Leigh, Charlie T. McTernan, and Alina L. Nussbaumer, Artificial Molecular Machines, Chem. Rev. 2015, 115, 10081−10206

David Yu Zhang and Georg Seelig, Dynamic DNA nanotechnology using strand-displacement reactions, Nature Chemistry, Vol 3, February 2011, doi: 10.1038/nchem.957

computing with space | open notebook