
Alife vs AGI

Artificial general intelligence is, of course, top of mind for some of the best or most interesting researchers. In the post Important research avenues on my mind, Ben Goertzel writes:

1. AGI, obviously … creating robots and virtual-world robots that move toward human-level general intelligence

….

5. Build a massive graph database of all known info regarding all organisms, focused on longevity and associated issues, and set an AI to work mining patterns from it… I.e. what I originally wanted to do with my Biomind initiative, but didn’t have the $ for…

6. Automated language learning — use Google’s or Microsoft’s databases of text to automatically infer a model of human natural languages, to make a search engine that really understands stuff.  This has overlap with AGI but isn’t quite the same thing…

7. I want to say femtotech as I’m thinking about that a fair bit lately but it probably won’t yield fruit in the next few years…

….

9. Nanotech-using-molecular-bio-tools and synthetic biology seem to be going interesting places, but I don’t follow those fields that closely, so I hope you’re pinging someone else who knows more about them…

I believe that 9 is far more likely to be achieved sooner than 1. I will explain why a bit later, after first looking at the frame of mind which, I think, constrains this ordering.

AGI is the queen, the grail, something which almost everybody dreams of seeing. It is an old dream. Recent advances in cognitive science show that, yes, we natural general intelligence beings are a kind of robot, with many, many processes running in parallel in the background, all of them producing the feeling of reality. On top of all these processes sit the ones related to consciousness and to the high-level functioning of the brain. It is admirable to try to model those, but it is naive, and it comes from an old way of seeing things, to believe that the other processes are somehow not as interesting, or not really needed, or simply too mechanical, in any case not something which poses a challenge. The reality is that we now know we do not even have the right frame of mind to understand how to understand the functioning of those neglected, God-given processes.

So that is why I believe that AGI is not realistic, unless we concentrate on language or other really puny, if tradition-laden, aspects of general intelligence.

Btw, have I told you that whatever I write, I am always happy to be contradicted?

Points 5 and 6 do look very probable. They will be done by corporations, that is sure. Behind both is somehow the same thing, namely the essence of the pyramidal way of thinking: with enough means, knowledge will accumulate at the top of the pyramid. (For point 1 intelligence sits at the top; for 5 and 6 it is corporations, of course.)

As regards point 7, that starts to be genuinely new, and therefore less fashionable. The idea of a single-molecule quantum computer springs to mind. It should be better known. [See the comments at this G+ post.]

Several concepts are now under development for making a calculation with a single molecule:
1) forcing a molecule to look like a classical electronic circuit, but integrated inside the molecule;
2) dividing the molecule into “qubits”, in order to exploit the quantum engineering developed over the last several years around quantum computers;
3) using intramolecular dynamical quantum behavior without dividing the molecule into “qubits”, leading to a Hamiltonian quantum computer.

Now, to point 9!

It can clearly be done by a combination of decentralized computing and artificial chemistry.

In a future post I shall describe in detail, also drawing on previous posts from chorasimilarity, what the ingredients are and what the arguments in favour of this idea are.

In this post I want to propose a challenge. What I have in mind is rather vague, but it might be fun: to develop, through exchanges, a “what if” world where, for example, the interesting thing about computers is not AI but artificial biology. Not consciousness but metabolism, not problem solving but survival. This is also related to the IoT, which is a bridge between the two worlds. Now, the virtual world could be as alive as the real one. Alive in the Avida sense, in the sense that it might be like a jungle, with self-reproducing, metabolic artificial beings occupying all virtual niches, beings designed by humans for various purposes. The behaviour of these virtual creatures would not be limited to the virtual, thanks to the IoT bridge. Consider: if I can play a game in a virtual world (i.e. interact both ways with a virtual world), then why can’t a virtual creature interact with the real world? Humans and social manipulations included.

If you start to think about this possibility, then it looks a bit like this. OK, let’s write such autonomous, decentralized, self-sustained computations to achieve a purpose. Maybe any purpose which can be achieved by computation, be it secure communications, money replacements, or low-level AI city management. What stops others from writing their own creatures? One, for example, just for the fun of it, writes the name Justin across half of the world by planting, at the right GPS coordinates, sticks with small mirrors on top, so that from orbit they all shine as the pixels of that name. Recall the IoT bridge and the many effects in the real world which can be achieved by genuinely distributed but cooperative computations and human interactions. Next: why not write a virus to get rid of all these distributed jokes of programs which run at low level in all phones, antennas and fridges? A virus to kill those viruses. A super-quick self-reproducer to occupy as much as possible of the cheap computing capabilities. A killer for it. And so on. A seed, like in Neal Stephenson’s The Diamond Age, only that the seed is not real but virtual, and it does not work on nanotechnology, but on any technology connected to the net via the IoT.

Stories? Comics? Fake news? Jokes? Should be fun!
_______________________________________________

What is new in distributed GLC?

We have seen that several parts or principles of distributed GLC are well anchored in previous, classical research; there are three such ingredients, discussed in earlier posts.

There are also several new things, which I shall try to list.

1. It is a clear, mathematically well formulated model of computation. There is a preparation stage and a computation stage. In the preparation stage we define the “GLC actors”; in the computation stage we let them interact. Each GLC actor interacts with others, or with itself, according to 5 behaviours. (The choice among behaviours, if several are possible at the same moment, is not part of the model. The default is to require the actors first to interact with others (i.e. behaviours 1 and 2, in this order) and, if no interaction is possible, then to proceed with the internal behaviours 3 and 4, in this order. As for behaviour 5, the interaction with external constructs, this is left to particular implementations. A minimal sketch of this default scheduling is given after this list.)

2. It is compatible with the Church-Turing notion of computation. Indeed, chemlambda (and GLC) are universal.

3. Evaluation is not needed during the computation (i.e. in stage 2). This is the embodiment of the “no semantics” principle. The “no semantics” principle actually means something precise; it is a positive thing, not a negative one. Moreover, the dissociation between computation and evaluation is new in many ways.

4. It can be used for doing functional programming without the eta reduction (the eta rule is written out after this list). This is a more general form of functional programming, in fact so general that it does not use functions. That is because the notion of a function makes sense only in the presence of eta reduction.

5. It has no problem going, at least apparently, outside the Church-Turing notion of computation. This is not a vague statement, it is a fact: GLC and chemlambda have sectors (i.e. parts) which are used to represent lambda terms, but also sectors which represent other formalisms, like tangle diagrams, or, in the case of GLC, also emergent algebras (which are the most general embodiment of a space carrying a very basic notion of differential calculus).
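
Since the five behaviours themselves are not spelled out in this post, here is only a minimal sketch, in Python, of the default scheduling from point 1. The class and method names are hypothetical, invented for illustration; in the real formalism the behaviours are graph rewrites on the actor’s part of the GLC graph.

    # Hypothetical sketch of the default scheduling from point 1.
    # The real GLC behaviours are graph rewrites; here they are stubs.

    class GLCActor:
        # Enabling tests: is the corresponding behaviour possible right now?
        def can_interact_1(self): return False
        def can_interact_2(self): return False
        def can_internal_3(self): return False
        def can_internal_4(self): return False

        # Actions: stubs standing in for the actual rewrites.
        def do_interact_1(self): pass  # interaction with another actor
        def do_interact_2(self): pass  # interaction with another actor
        def do_internal_3(self): pass  # internal rewrite
        def do_internal_4(self): pass  # internal rewrite
        # Behaviour 5 (external constructs) is left to implementations.

        def step(self):
            """Try behaviours 1, 2 (interactions with others) first, then
            3, 4 (internal), in this order; fire the first enabled one."""
            schedule = [(self.can_interact_1, self.do_interact_1),
                        (self.can_interact_2, self.do_interact_2),
                        (self.can_internal_3, self.do_internal_3),
                        (self.can_internal_4, self.do_internal_4)]
            for enabled, fire in schedule:
                if enabled():
                    fire()
                    return True
            return False  # quiescent: no behaviour applies at this moment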
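
And, as a reminder for point 4, here is the eta rule that is being dropped (this is standard lambda calculus, nothing specific to GLC):

    \lambda x.\,(f\,x) \;\to_{\eta}\; f \qquad \text{provided } x \text{ does not occur free in } f

Without eta, the term \lambda x.\,(f\,x) and the term f remain distinct objects; a formalism that omits the rule can therefore manipulate function-like graphs without ever committing to the notion of a function.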

__________________________________________

All these new things are also weaknesses of distributed GLC, because each of them goes, apparently at least, against some ideology.

But the very concrete formalism of distributed GLC should counter this.

I shall use the same numbering to enumerate the ideologies.

1. Actors a la Hewitt vs. process calculi. In this respect the GLC actors are like Hewitt actors. But they are not as general as Hewitt actors, because they cannot behave in arbitrary ways. On the other hand, it is not entirely clear that they are Hewitt actors, because there is no precise correspondence between what a Hewitt actor can do and what a GLC actor can do.

This is an evolving discussion. It seems that people have great difficulty coping with distributed, purely local computing without jumping to global notions of space and time. On the other hand, biologists may have an intuitive grasp of this (unfortunately, they are not much in love with mathematics, but that is changing fast).

2. Distributed GLC as a programming language vs. as a machine. Is it a computer architecture or a software architecture? Neither; both. Here biologists are almost surely lost, because many of them (excepting those who believe that chemistry can be used for lambda calculus computation) think in terms of logic gates when they consider computation.

The preparation stage, when the actors are defined, is essential. It resembles choosing the right initial condition in a computation using automata. But it is not the same, because there is no lattice, grid, or preferred topology of cells on which the automaton runs.

The computation stage does not involve any molecule-collision mechanism, be it stochastic or deterministic. That is because the computation is purely local, which means in particular that (if well designed in the first stage) it evolves without needing stochastic or lattice support. During the computation the states of the actors change and the graph of their interactions changes, in a way which is compatible with being asynchronous and distributed.
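
To make “purely local” concrete, here is a toy sketch in Python, with hypothetical states and names (these are not the actual chemlambda rewrites): a reduction step reads and writes only the two actors at the ends of one edge, so no global clock, lattice, or collision scheduler is ever consulted.

    # Toy illustration of locality (hypothetical states, not chemlambda):
    # a step fires on a single edge and updates only the two actors that
    # own its endpoints; no global state, clock, or lattice is involved.

    actors = {
        "a": {"state": "L", "links": ["b"]},   # each actor holds its local
        "b": {"state": "A", "links": ["a"]},   # state and its neighbours only
    }

    def local_step(x, y):
        """Reduce the edge (x, y), touching nothing but its two endpoints.
        Steps on disjoint edges therefore commute, which is what makes the
        asynchronous, distributed execution well behaved."""
        ax, ay = actors[x], actors[y]
        if ax["state"] == "L" and ay["state"] == "A":   # toy enabling condition
            ax["state"], ay["state"] = "done", "done"   # toy rewrite
            ax["links"].remove(y)                       # rewiring is local too
            ay["links"].remove(x)

    local_step("a", "b")
    print(actors)   # both actors updated; nothing else was read or written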

That is why those working in artificial chemistry may feel lost here: the model is not stochastic.

There is no chemical reaction network which orchestrates the computation, simply because a CRN is a GLOBAL notion, and so not really needed. This computation is concurrent, not parallel (because parallelism needs a global simultaneity relation to make sense).

In fact there is only one molecule which is reduced; therefore distributed GLC looks more like an artificial one-molecule computer (see C. Joachim, Bonding more atoms together for a single molecule computer). Only it is not a computer, but a program which reduces itself.

3. The “no semantics” principle goes against a strong ideology, of course. The fact that evaluation may not be needed for computation is outrageous (although it might cure the cognitive dissonance in functional programming concerning “side effects”; see Another discussion about math, artificial chemistry and computation).

4. Here we apparently clash with functional programming. But I hope only superficially, because functional programming is actually the best ally; see Extreme functional programming done with biological computers.

5. Claims about going outside the Church-Turing notion of computation are very badly received. But when it comes to distributed, asynchronous computation, things are much less clear. My position here is simply that there are very concrete ways to do geometric or differential-like “operations” without first converting them into a classical computational frame (and the onus is on the classical computation people to prove that they can do it, which, as a geometer, I highly doubt, because they misunderstand or neglect space, and then the distributed, asynchronous aspect comes and hits them when they least expect it).

______________________________________________

Conclusion: distributed GLC is great and has big potential; come and use it. Everybody interested knows where to find us. Internet of Things? Decentralized computing? Maybe cyber-security? You name it.

Moreover, there is a distinct possibility of using it not on the Internet, but in the real physical world.

______________________________________________