Digital materialization and synthetic life

Digital materialization (DM) is not the name of a technology from Star Trek. According to the Wikipedia page:

  DM can loosely be defined as two-way direct communication or conversion between matter and information that enables people to exactly describe, monitor, manipulate and create any arbitrary real object.

I linked to the Digital Materialization Group; here is a quote from their page.

DM systems possess the following attributes:

  • realistic – correct spatial mapping of matter to information
  • exact – exact language and/or methods for input from and output to matter
  • infinite – ability to operate at any scale and define infinite detail
  • symbolic – accessible to individuals for design, creation and modification

Such an approach can be applied not only to tangible objects but can include the conversion of things such as light and sound to/from information and matter. Systems to digitally materialize light and sound already largely exist now (e.g. photo editing, audio mixing, etc.) and have been quite effective – but the representation, control and creation of tangible matter is poorly supported by computational and digital systems.

My initial interest in DM came from possible interactions with the Unlimited Detail idea; see this post written some time ago.

Well, there is much more to this idea, if we think about life forms.
In the discussion section of this article by Craig Venter et al. we read:

This work provides a proof of principle for producing cells based on computer-designed genome sequences. DNA sequencing of a cellular genome allows storage of the genetic instructions for life as a digital file.

In his book Life at the Speed of Light, Craig Venter writes (p. 6):

All living cells run on DNA software, which directs hundreds to thousands of protein robots. We have been digitizing life for decades, since we first figured out how to read the software of life by sequencing DNA. Now we can go in the other direction by starting with computerized digital code, designing a new form of life, chemically synthesizing its DNA, and then booting it up to produce the actual organism.

That is clearly a form of Digital Materialization.

Now, we have these two realms, virtual and real, and the two way bridge between them called DM.

It would be really nice to have the same chemistry-ruled world on both sides:

  • an artificial version for the virtual realm, in direct correspondence with
  • those parts of the real version (from the real world) which are relevant for the DM translation process.

This looks like a horribly complex goal to reach, because of the myriad concrete, real stumbling blocks, but hey, this is what math is for, right? To simplify, to abstract, to define, to understand.

[posted also here]

__________________________________________


How to use chemlambda for understanding DNA manipulations

UPDATE: Probably RNA is better than DNA. The best entry point is that document. More in the references at the end of that doc. Or, if you just want to see animations, there are plenty (more than 350) in this collection.

________________

… or the converse: how to use DNA manipulations to understand chemlambda; this is a new thread starting with this post.

This is a very concrete, nice project. I already have some things written, but it is still in a very fluid form.

Everybody is invited to work with me on this. It would be very useful to collaborate with people who have knowledge about DNA and the enzymes involved in the processes around DNA.

So, if you want to contribute, then you can do it in several ways:

  • by dedicating a bit of your brain power to concrete parts of this
  • by sending me links to articles which you have previously  read and understood
  • by asking questions about concrete aspects of the project
  • by proposing alternative ideas, in a clear form
  • by criticizing the ideas from here.

I am not interested in just discussing it; I want to do it.

Therefore, if you think that there is this other project which does this and that with DNA and computation, please don’t mention it here unless you have clear explanations about the connections with this project.

Don’t use authority arguments and name dropping, please.

____________________________________

Now, if anybody is still interested in learning what this is about, after the frightening introduction, here is what I am thinking.

There is a full load of enzymes, like this and that, which cut, link, copy, etc. strings of DNA. I want to develop a DNA-to-chemlambda dictionary which translates what happens in one world into the other.

This is rather easy to do. We need a translation of arrows and the four nodes from chemlambda into some DNA form.

Like this one, for example:

[Figure dna_1: an example translation of chemlambda arrows and nodes into a DNA form]

Then we need a translation of the chemlambda moves (or some version of those, see later) into processes involving DNA.

There is plenty of stuff in the DNA world to do the simple things from chemlambda. In turn, because chemlambda is universal, we get a very cheap way of defining DNA processes as computations.

Not as boolean logic computations. Forget about TRUE, FALSE and AND gates.  Think about translating DNA processes into something like lambda calculus.
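
To fix ideas, here is a minimal sketch in Python of what such a dictionary could look like. It is only an illustration of the shape of the translation: the node names L, A, FI, FO follow the chemlambda convention of four nodes, but every DNA motif, port name and enzyme-style operation below is an invented placeholder, not a real sequence, a real enzyme, or a commitment of the project.

# Illustrative only: a toy chemlambda-to-DNA dictionary.
# Nodes L, A, FI, FO are the four chemlambda nodes; the "motifs" and
# "processes" are invented placeholders, NOT real sequences or enzymes.
NODE_TO_DNA = {
    "L":  {"role": "lambda abstraction", "motif": "3-arm junction with ports in, left-out, right-out"},
    "A":  {"role": "application",        "motif": "3-arm junction with ports left-in, right-in, out"},
    "FI": {"role": "fan-in",             "motif": "3-arm junction with ports left-in, right-in, out"},
    "FO": {"role": "fan-out",            "motif": "3-arm junction with ports in, left-out, right-out"},
}
ARROW_TO_DNA = "single-stranded linker hybridizing two sticky ends"

# Hypothetical translation of (some) chemlambda moves into DNA-world processes.
MOVE_TO_DNA_PROCESS = {
    "BETA":   "cut the L-A pair at their principal ports, religate the freed ends crosswise",
    "FAN-IN": "cut the FI-FO pair, religate the freed ends crosswise",
    "DIST":   "duplicate a node next to a FO node and rewire the copies",
}

def translate(graph_nodes, graph_arrows):
    """Turn a chemlambda graph (lists of nodes and arrows) into a bill of
    materials for its hypothetical DNA realization."""
    materials = [NODE_TO_DNA[node]["motif"] for node in graph_nodes]
    materials += [ARROW_TO_DNA] * len(graph_arrows)
    return materials

if __name__ == "__main__":
    # A toy graph: one L node with one arrow joining two of its ports.
    print(translate(["L"], [("L.left-out", "L.right-out")]))

The real content of the dictionary would of course be in the right-hand sides: which strands, junctions and enzymatic operations actually realize the nodes and the moves.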

I know that there is plenty of research about using DNA for computation, and there is also plenty of research about relations between lambda calculus and chemistry.

But I am not after some overarching theory which comprises everything about DNA, chemistry and lambda calculus.

Instead, I am after a very concrete look at tiny parts of the whole huge field, based on a specific formalism of chemlambda.

It will of course turn out that there are many articles relevant to what will be found here, and there will be a lot of overlap with research already done.

Partially, this is one of the reasons I am seeking collaborations around this: in order not to reinvent the wheel all the time, due to my ignorance.

__________________________________________

Who wins from failed peer reviews?

The recent retraction of 120 articles from non-OA journals, coming after the attack on OA by the John Bohannon experiment, is the subject of Predatory Publishers: Not Just OA (and who loses out?). The article asks:

Who Loses Out Under Different “Predator” Models?

and an answer is proposed. In the following I want to comment on this.

First, I remark that the results of the Bohannon experiment (which is biased because it is done only on a selected list of OA journals) show that the peer review process may be deeply flawed for some journals (i.e. those OA journals which accepted the articles sent by Bohannon) and for some articles at least (i.e. those articles sent by Bohannon which were accepted by the OA journals).

The implication of that experiment is that maybe there are other articles which were published by OA journals after a flawed peer review process.

On the other side, Cyril Labbé discovered 120 articles in some non-OA journals which were nonsense automatically generated by SCIgen. It is clear that the publication of these 120 articles shows that the peer review process (for those articles and for those journals) was flawed.

The author of the linked article suggests that the one who loses from the publication of flawed articles, in OA or non-OA journals, is the one who pays! In the case of legacy publishers this is the reader. In the case of Gold OA publishers this is the author.

This is correct. The reason why the one who pays loses is that the one who pays is cheated by the flawed peer review. The author explains this very well.

But it is an incomplete view. Indeed, the author recognizes that the main service offered by the publishers is the  well done peer review. Before discussing who loses from publication of flawed articles, let’s recognize that this is what the publisher really sells.

At least in a perfect world, because the other thing a publisher sells is vanity soothing. Indeed, let's return to the pair of discoveries made by Bohannon and Labbé and see that, while in the case of the Bohannon experiment the flawed articles were made up for the purpose of the experiment, Labbé discovered articles written by researchers who tried to publish something for the sake of publishing.

So, maybe before asking who loses from flaws in the peer review, let’s ask who wins?

Obviously, unless there is a conspiracy that has been going on for some years, the researchers who submitted automatically generated articles to prestigious non-OA publishers did not want their papers to be well peer reviewed. They hoped their papers would pass this filter.

My conclusion is:

  • there are two things a publisher sells: peer review as a service and vanity
  • some Gold OA journals and some legacy journals turned out to have a flawed peer review service
  • indeed, the one who pays and does not receive the service loses
  • but also, the one who exploits the flaws of the badly done peer review service wins.

Obviously green OA will lead to fewer losses and open peer review will lead to fewer wins.

The true Internet of Things, decentralized computing and artificial chemistry

A thing is a discussion between several participants. From the point of view of each participant, the discussion manifests as an interaction of the participant with the other participants, or with itself.

There is no need for a global timing of the interactions between the participants involved in the discussion; therefore we talk about an asynchronous discussion.

Each participant is an autonomous entity. Therefore we talk about a decentralized discussion.

The thing is the discussion and the discussion is the thing.

When the discussion reaches an agreement, the agreement is an object. Objects are frozen discussions, frozen things.

In the true Internet of Things, the participants can be humans or virtual entities. The true internet of Things is the thing of all things, the discussion of all discussions. Therefore the true Internet of Things has to be asynchronous and decentralized.

The objects of the true Internet of Things are the objects of discussions. For example a cat.

Concentrating exclusively on objects is only a manifestation of the modern aversion to having a conversation. This aversion manifests in many ways (some of them extremely useful):

  • as a preference towards analysis, one of the tools of the scientific method
  • as the belief in semantics, as if there is a meaning which can be attached to an object, excluding any discussion about it
  • as externalization of discussions, like property rights which are protected by laws, like the use of the commons
  • as the belief in objective reality, which claims that the world is made by objects, thus neglecting the nature of objects as agreements reached (by humans) about some selected aspects of reality
  • as the preference towards using bottlenecks and pyramidal organization as a means to avoid discussions
  • as various philosophical currents, like pragmatism, which subordinates things (discussions) to their objects (although it recognizes the importance of the discussion itself,  as long as it is carefully crippled in order that it does not overthrow the object’s importance).

Though we need agreements, and we need to rely on objects (as evidence), there is no need to limit the future true Internet of Things to an Internet of Objects.

______________________________________

We already have something  called Internet of Things, or at least something which will become an Internet of Things, but it seems to be designed as an Internet of Objects. What is the difference? Read Notes for “Internet of things not Internet of objects”.

Besides humans, there will be other participants in the IoT, in fact the underlying connective mesh which should support the true Internet of Things. My proposal is to use an artificial chemistry model mixed with the actor model, in order to keep only the strengths of both models (a rough sketch follows after the two lists below):

  1. it is decentralized,
  2. it does not need an overlooking controller,
  3. it works without needing to have a meaning, a purpose, or in any other way being oriented to problem solving,
  4. it does not need to halt,
  5. inputs, processing and output have the same nature (i.e. just chemical molecules and their proximity-based interactions),

without having the weaknesses:

  1. the global view of Chemical Reaction Networks,
  2. the generality of the behaviours of the actors in the actor model, which forces the model to be seen as a high-level one, organizing the way of thinking about particular computing tasks, instead of being a very low-level, simple and concrete model.
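
Here is the rough sketch promised above, in Python: each actor privately holds a molecule fragment and interacts only with the actors it is directly linked to, asynchronously, with no central controller and no obligation to halt. The actor names, the fragments and the trivial "reaction" are illustrative assumptions, not a specification of the model.

# Minimal, illustrative sketch of mixing an artificial chemistry with the
# actor model: local state (a molecule fragment), local links, asynchronous
# interactions, no global controller, no global clock. The "reaction" is a
# placeholder, not real chemistry.
import threading, queue, random, time

class Actor:
    def __init__(self, name, fragment):
        self.name = name
        self.fragment = fragment        # local state: a molecule fragment
        self.neighbours = []            # only local links, no global view
        self.mailbox = queue.Queue()

    def link(self, other):
        self.neighbours.append(other)

    def run(self, steps=5):
        for _ in range(steps):          # bounded here only so the demo ends
            try:
                # either react with a fragment received from a neighbour...
                incoming = self.mailbox.get(timeout=0.1)
                self.fragment = self.fragment + "+" + incoming   # placeholder "reaction"
            except queue.Empty:
                # ...or send its own fragment to a randomly chosen neighbour
                if self.neighbours:
                    random.choice(self.neighbours).mailbox.put(self.fragment)
            time.sleep(random.random() * 0.05)  # each actor runs at its own pace

# Tiny demo: three actors linked in a line, run concurrently.
a, b, c = Actor("a", "X"), Actor("b", "Y"), Actor("c", "Z")
a.link(b); b.link(a); b.link(c); c.link(b)
threads = [threading.Thread(target=x.run) for x in (a, b, c)]
for t in threads: t.start()
for t in threads: t.join()
print(a.fragment, b.fragment, c.fragment)

Nothing in this sketch has a global view of the "soup": each actor sees only its own fragment and its own links, which is exactly the point.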

______________________________________

With these explanations, please go and read again three older posts and a page, if you are interested in understanding more:

______________________________________

Open peer review as a service

The recent discussions about the creation of a new Gold OA journal (Royal Society Open Science) made me write this post. In the following there is a concentrate of what I think about legacy publishers, Gold OA publishers and open peer review as a service.

Note: the idea is to put in one place the various bits of this analysis, so that it is easy to read. The text is assembled from slightly edited parts of several posts from chorasimilarity.

(Available as a published google drive doc here.)

Open peer review as a service   

Scientific publishers are in some respects like Cinderella. They used to provide an immense service to the scientific world, by disseminating  new results and archiving old results into books. Before the internet era, like Cinderella at the ball, they were everybody’s darling.

Enter the net. At the last moment, Cinderella tries to run from this new, strange world.

Cinderella does not understand what happened so fast. She was used to scarcity (of economic goods), to the point that she believed everything would be like this all her life!

What to do now, Cinderella? Will you sell open access for gold?

But wait! Cinderella forgot something. Her lost shoe, the one she discarded when she ran out from the ball.

In the scientific publishers' world, peer-review is the lost shoe. (As well, we may say that up to now researchers who write peer-reviews are like Cinderella too: their work is completely unrewarded and neglected.)

In the internet era the author of a scientific research paper is free to share his results with the scientific world by archiving a preprint version of her/his paper in free access repositories.  The author, moreover, HAS to do this  because the net offers a much better dissemination of results than any old-time publisher. In order (for the author’s ideas) to survive, making a research paper scarce by constructing pay-walls around it is clearly a very bad idea.  The only thing which the gold open access  does better than green open access is that the authors pay the publisher for doing the peer review (while in the case of arxiv.org, say, the archived articles are not peer-reviewed).

Let's face it: the publisher cannot artificially make the articles scarce; it is a bad idea. What a publisher can do is to let the articles be free and to offer the peer-review service.

Like Cinderella's lost shoe, at this moment the publisher throws away the peer-reviews (made gratis by fellow researchers) and tries to sell the article which has acceptable peer-review reports.

Context. Peer-review is one of the pillars of the current practice of research publication. Yet the whole machine of traditional publication is going to suffer major modifications, most of them triggered by its perceived inadequacy with respect to the needs of researchers in this era of massive, cheap, abundant means of communication and organization. In particular, peer-review is going to suffer transformations of the same magnitude.

We are living in interesting times; we are all aware that the internet is changing our lives at least as much as the invention of the printing press changed the world in the past. With a difference: much faster. We have a unique chance to be part of this change for the better, in particular concerning the practices of research communication.

Faced with such a fast evolution of behaviours, a traditionalist attitude naturally appears, based on the argument that the slower we react, the better the solution we may find. This is, however, in my opinion at least, an attitude better left to institutions, to big, inadequate organizations, than to individuals.

Big institutions need long reaction times because information flows slowly through them, due to their principle of pyramidal organization, which is based on the creation of bottlenecks for information and decision, acting as filters. Individuals are different in the sense that for them, for us, the massive, open, not hierarchically organized access to communication is a plus.

The bottleneck hypothesis. Peer-review is one of those bottlenecks, traditionally. Its purpose is to separate the professional from the unprofessional. The hypothesis that peer-review is a bottleneck explains several facts:

  • peer-review gives a stamp of authority to published research. Indeed, those articles which pass the bottleneck are professional, therefore more suitable for using them without questioning their content, or even without reading them in detail,
  • the unpublished research is assumed to be unprofessional, because it has not yet passed the peer-review bottleneck,
  • peer-reviewed publications give a professional status to their authors. Obviously, if you are the author of a publication which passed the peer-review bottleneck then you are a professional. The more professional publications you have, the more of a professional you are,
  • it is the fault of the author of the article if it does not pass the peer-review bottleneck. As in many other fields of life, recipes for success and lore appear, concerning means to write a professional article, how to enhance your chances to be accepted in the small community of professionals, as well as feelings of guilt caused by rejection,
  • the peer-review is anonymous by default, as a superior instance which extends gifts of authority or punishments of guilt upon the challengers,
  • once an article passes the bottleneck, it becomes much harder to contest its value. In the past it was almost impossible, because any professional communication had to pass through the filter. In the past, the infallibility of the bottleneck was a kind of self-fulfilling prophecy, with very few counterexamples, themselves known only to a small community of enlightened professionals.

This hypothesis explains as well the fact that lately peer-review has been subjected to critical scrutiny by professionals. Indeed, in particular, the wave of detected plagiarism among peer-reviewed articles led to the questioning of the infallibility of the process. This is shattering the trust in the stamp of authority which is traditionally associated with it. It makes us suppose that the steep rise of retractions is a manifestation of an old problem which is now revealed by the increased visibility of the articles.

From a cooler point of view, if we see peer-review as designed to be a bottleneck in a traditionally pyramidal organization, then it is questionable whether peer-review as a bottleneck will survive.

Social role of peer-review. There are two other uses of peer-review which are going to survive and, moreover, they are going to be the main reasons for its existence:

  • as a binder for communities of peers,
  • as a time-saver for the researchers.

I shall take them one-by-one.

On communities of peers. What is strange about the traditional peer-review is that although any professional is a peer, there is no community of peers. Each researcher does peer-reviewing, but the process is organized in such a manner that we are all alone.

To see this, think about the way things work: you receive a request to review an article, from an editor, based usually on your publication history, which qualifies you as a peer. You do your job anonymously, which has the advantage of letting you be openly critical of the work of your peer, the author. All communication flows through the editor, therefore the process is designed to be unfriendly to communication between peers. Hence, no community of peers.

However, most of the researchers who ever lived on Earth are alive today. The main barrier to the spread of ideas is a poor means of communication. If peer-review becomes open, it could then foster the appearance of dynamic communities of peers, dedicated to the same research subject.

As it is today, the traditional peer-review favours the contrary, namely the fragmentation of the community of researchers who are interested in the same subject into small clubs, which compete over scarce resources instead of collaborating. (As an example, think about a very specialized research subject which is taken hostage by one, or a few, such clubs which peer-review favourably only the members of the same club.)

Time-saver role of peer-review. From the sea of old and new articles, I cannot read all of them. I have to filter them somehow in order to narrow the quantity of data which I am going to process for doing my research.

The traditional way was to rely on the peer-review bottleneck, which is a kind of pre-defined, one-size-fits-all solution.

With the advent of communities of peers dedicated to narrow subjects, I can choose the filter which best serves my research interests. That is why, again, an open peer-review has obvious advantages. Moreover, such a peer-review should be perpetual, in the sense that, for example, reasons for questioning an article should be made public, even after the "publication" (whatever such a word will mean in the future). Say another researcher finds that an older article, which once passed the peer-review, is flawed for reasons the researcher presents. I could benefit from this information and use it as a filter, a custom, continually upgrading filter of my own, as a member of one of the communities of peers I belong to.

All the steps of the editorial process used by legacy publishers are obsolete. To see this, it is enough to ask "why?".

  1. The author sends the article to the publisher (i.e. “submits” it). Why? Because in the old days the circulation and availability of research articles was done almost exclusively by the intermediary of the publishers. The author had to “submit” (to) the publisher in order for the article to enter through the processing pipe.
  2. The editor of the journal seeks reviewers based on hunches, friends' advice, basically thin air. Why? Because, in the days when we could pretend we couldn't search for every relevant bit of information, there was no other way to feed our curiosity but from the publishing pipe.
  3. There are 2 reviewers who make reports. (With the author, that makes 3 readers of the article, statistically more than 50% of the readers the article will have once published.) Why? Because the pyramidal way of organization was, before the net era, the most adapted. The editor on top delegates the work to reviewers, who call back the editor to inform him first, and not the author, about their opinion. The author worked, let's say, for a year, and the statistically insignificant number of 2 other people form an opinion on that work in … hours? days? maybe a week of real work? No wonder then that what exits through the publishing pipe is biased towards immediate applications, conformity of ideas and the glorified version of school homework.
  4. The editor, based solely on the opinion of 2 reviewers, decides what to do with the article. He informs the author, in a non-conversational way, about the decision. Why? Because again of the pyramidal organization way of thinking. The editor on top, the author at the bottom. In the old days, this was justified by the fact that the editor had something to give to the author, in exchange of his article: dissemination by the means of industrialized press.
  5. The article is published, i.e. a finite number of physical copies are printed and sent to libraries and individuals, in exchange for money. Why? Nothing more to discuss here, because this is the step most subjected to criticism by the OA movement.
  6. The reader chooses which of the published articles to read based on authority arguments. Why? Because there was no way to search, firsthand, for what the reader needs, i.e. research items of interest in a specific domain. There are two effects of this.

(a) The rise of the importance of the journal over that of the article.

(b) The transformation of research communication into vanity chasing.

Both effects were (again, statistically) reinforced by poor science policy and by the private interests of those favoured by the system, who were not willing to rock the boat which served them so well.

Given that the entire system is obsolete, what to do? It is, frankly, not our business, as researchers, to worry about the fate of legacy publishers, any more than about, say, umbrella repair specialists.

Does Gold OA sell the peer-review service? It is clear that the reader is not willing to pay for research publications, simply because the reader does not need the service which is classically provided by a publisher: dissemination of knowledge. Today the researcher who puts his article in an open repository achieves much better dissemination than legacy publishers do with their old tricks.

Gold OA is the idea that if we can’t force the reader to pay, maybe we can try with the author. Let’s see what exactly is the service which Gold OA publishers offer to the author (in exchange for money).

1.  Is the author a customer of a Gold OA publisher?

I think it is.

2. What is the author paying for, as a customer?

I think the author pays for the peer-review service.

3. What does the Gold OA publisher offer for the money?

I think it offers only the peer-review service, because dissemination can be done by the author by submitting to open repositories, like arxiv.org, for free. There are opinions that the Gold OA publisher offers much more, for example the service of assembling an editorial board, but who wants to buy an editorial board? No, the author pays for the peer-review process, which is managed by the editorial board, true, which is assembled by the publisher. So the end-product is the peer-review, and the author pays for that.

4. Is there any other service sold to the author by the Gold OA publisher?

Almost 100% automated services, like formatting, citation-web services and hosting the article, are very low-value services today.

However, it might be argued that the Gold OA publisher offers also the service of satisfying the author’s vanity, as the legacy publishers do.

Conclusion.  The only service that publishers may provide to the authors of research articles is the open, perpetual peer-review.  There is great potential here, but Gold OA sells this for way too much money.

______________________________________

What is new in distributed GLC?

We have seen that several parts or principles of distributed GLC are well anchored in previous, classical research.  There are three such ingredients:

There are several new things, which I shall try to list.

1.  It is a clear, mathematically well-formulated model of computation. There is a preparation stage and a computation stage. In the preparation stage we define the "GLC actors"; in the computation stage we let them interact. Each GLC actor interacts with others, or with itself, according to 5 behaviours. (The choice among behaviours, if several are possible at the same moment, is not part of the model. The default is to require the actors to first interact with others (i.e. behaviours 1, 2, in this order) and, if no interaction is possible, to then proceed with the internal behaviours 3, 4, in this order. As for behaviour 5, the interaction with external constructs, this is left to particular implementations. A minimal sketch of this default ordering is given after this list.)

2.  It is compatible with the Church-Turing notion of computation. Indeed,  chemlambda (and GLC) are universal.

3. Evaluation is not needed during computation (i.e. in stage 2). This is the embodiment of the "no semantics" principle. The "no semantics" principle actually means something precise; it is a positive thing, not a negative one. Moreover, the dissociation between computation and evaluation is new in many ways.

4. It can be used for doing functional programming without the eta reduction. This is a more general form of functional programming, which in fact is so general that it does not use functions. That is because the notion of a function makes sense only in the presence of eta reduction.

5. It has no problem going outside, at least apparently, the Church-Turing notion of computation. This is not a vague statement, it is a fact, meaning that GLC and chemlambda have sectors (i.e. parts) which are used to represent lambda terms, but also sectors which represent other formalisms, like tangle diagrams, or, in the case of GLC, also emergent algebras (which are the most general embodiment of a space which has a very basic notion of differential calculus).
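
Here is the minimal sketch, in Python, of the default choice rule from point 1. The five behaviours are kept abstract (callables which return True when they were applicable); only the ordering, behaviours 1 and 2 first, then the internal behaviours 3 and 4, with behaviour 5 left to the implementation, follows the text. Everything else is an illustrative assumption.

# A sketch of the default priority rule for one GLC actor, nothing more.
def actor_step(behaviour_1, behaviour_2, behaviour_3, behaviour_4, behaviour_5=None):
    """Try one step for a single GLC actor; return which behaviour fired, or None."""
    # First, interactions with other actors, in this order.
    for name, b in (("1", behaviour_1), ("2", behaviour_2)):
        if b():
            return name
    # If no interaction was possible, proceed with the internal behaviours, in this order.
    for name, b in (("3", behaviour_3), ("4", behaviour_4)):
        if b():
            return name
    # Behaviour 5 (interaction with external constructs) is implementation-specific.
    if behaviour_5 is not None and behaviour_5():
        return "5"
    return None  # nothing applicable at this moment

# Example with stub behaviours: only the internal behaviour 3 is applicable here.
print(actor_step(lambda: False, lambda: False, lambda: True, lambda: False))  # prints "3"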

__________________________________________

All these new things are also weaknesses of distributed GLC because they are, apparently at least, against some ideology.

But the very concrete formalism of distributed GLC should counter this.

I shall use the same numbering for enumerating the ideologies.

1.  Actors à la Hewitt vs. process calculi. The GLC actors are like the Hewitt actors in this respect. But they are not as general as Hewitt actors, because they can't behave in arbitrary ways. On the other side, it is not very clear whether they are Hewitt actors, because there is no clear correspondence between what an actor can do and what a GLC actor can do.

This is an evolving discussion. It seems that people have very big problems coping with distributed, purely local computing without jumping to the use of global notions of space and time. But, on the other side, biologists may have an intuitive grasp of this (unfortunately, they are not very much in love with mathematics, but this is changing very fast).

2.  Distributed GLC as a programming language vs. a machine. Is it a computer architecture or a software architecture? Neither. Both. Here the biologists are almost surely lost, because many of them (excepting those who believe that chemistry can be used for lambda calculus computation) think in terms of logic gates when they consider computation.

The preparation stage, when the actors are defined, is essential. It resembles choosing the right initial condition in a computation using automata. But it is not the same, because there is no lattice, grid, or preferred topology of cells where the automaton performs.

The computation stage does not involve any collision-between-molecules mechanism, be it stochastic or deterministic. That is because the computation is purely local, which means in particular that (if well designed in the first stage) it evolves without needing this stochastic or lattice support. During the computation the states of the actors change, and the graph of their interactions changes, in a way which is compatible with being asynchronous and distributed.

That is why those working in artificial chemistry may feel lost here, because the model is not stochastic.

There is no chemical reaction network which orchestrates the computation, simply because a CRN is a GLOBAL notion, so it is not really needed. This computation is concurrent, not parallel (because parallelism needs a global simultaneity relation to make sense).

In fact there is only one molecule which is reduced, therefore distributed GLC looks more like an artificial one-molecule computer (see C. Joachim, Bonding More Atoms Together for a Single Molecule Computer). Only it is not a computer, but a program which reduces itself.

3.  The no semantics principle is against a strong ideology, of course. The fact that evaluation may not be needed for computation is outrageous (although it might cure the cognitive dissonance from functional programming concerning "side effects", see Another discussion about math, artificial chemistry and computation).

4.  Here we clash with functional programming, apparently. But I hope that just superficially, because actually functional programming is the best ally, see Extreme functional programming done with biological computers.

5.  Claims about going outside the Church-Turing notion of computation are very badly received. But when it comes to distributed, asynchronous computation, it's much less clear. My position here is simply that there are very concrete ways to do geometric or differential-like "operations" without having to convert them first into a classical computational frame (and the onus is on the classical computation guys to prove that they can do it, which, as a geometer, I highly doubt, because they don't understand or they neglect space, but then the distributed asynchronous aspect comes and hits them when they expect it the least).

______________________________________________

Conclusion: distributed GLC is great and it has a big potential; come and use it. Everybody interested knows where to find us. Internet of Things? Decentralized computing? Maybe cyber-security? You name it.

Moreover, there is a distinct possibility to use it not on the Internet, but in the real physical world.

______________________________________________

A passage from Rodney Brooks’ “Intelligence without representation” applies to distributed GLC

… almost literally. I am always very glad to discover that some research subject where I contribute is well anchored in the past. Otherwise said, it is always good for a researcher to learn that he is standing on the shoulders of some giant; it gives faith that there is some value in the respective quest.

The following passage resembles a lot some parts and principles of distributed GLC, which is:

  • distributed
  • asynchronous
  • done by processing structure to structure (via graph rewrites)
  • purely local
  • this model of computation does not need or use any  evaluation procedure, nor in particular evaluation strategies. No names of variables, no values are used.
  • the model does not rely on signals passing through gates, nor on the sender-receiver setting of Information Theory.
  • no semantics.

Now, the passage from “Intelligence without representation” by Rodney Brooks.

It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors.  Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.  […]

… we are not claiming that chaos is a necessary ingredient of intelligent behavior.  Indeed, we advocate careful engineering of all the interactions within the system.  […]
We do claim however, that there need be no  explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.
I claim there is more than this, however. Even at a local  level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological  connections between processes somehow encode.

However we are not happy with calling such things a representation. They differ from standard  representations in too many ways.  There are no variables (e.g. see [1] for a more  thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon  [14] noted that the complexity of behavior of a  system was not necessarily inherent in the complexity of the creature, but Perhaps in the complexity of the environment. He made this  analysis in his description of an Ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of even human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.

________________________________________________