A passage from Rodney Brooks’ “Intelligence without representation” applies to distributed GLC

… almost literally. I am always very glad to discover that a research subject I contribute to is well anchored in the past. Put differently, it is always good for a researcher to learn that he stands on the shoulders of some giant; it gives faith that there is value in the quest.

The following passage closely resembles several parts and principles of distributed GLC:

  • distributed
  • asynchronous
  • done by processing structure to structure (via graph rewrites)
  • purely local
  • this model of computation needs no evaluation procedure and, in particular, no evaluation strategy; no variable names and no values are used.
  • the model does not rely on signals passing through gates, nor on the sender-receiver setting of Information Theory.
  • no semantics.
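To make the "purely local, structure-to-structure" point concrete, here is a minimal sketch of a local graph rewrite in the spirit of the beta move of graphic lambda calculus. The encoding (port names, node types, the toy identity term) is my illustrative assumption, not the notation of the GLC papers; the point is only that the move inspects and rewires a bounded neighborhood of two nodes, with no variable names, no values, and no evaluation strategy.

```python
# Sketch of a purely local graph rewrite (a beta-like move).
# Node types and port names below are illustrative assumptions.

nodes = {1: "L", 2: "A", 3: "in", 4: "out"}  # L = lambda node, A = application node

# wire[(node, port)] = (node, port) on the other end; stored symmetrically.
wire = {}
def connect(p, q):
    wire[p] = q
    wire[q] = p

# A toy term: apply the identity (an L node whose var port loops to its
# body port) to an input, and route the result to an output.
connect((1, "var"), (1, "body"))   # identity: the bound variable is the body
connect((1, "out"), (2, "func"))   # the L node feeds the A node's function port
connect((3, "out"), (2, "arg"))    # the input feeds the argument port
connect((2, "out"), (4, "in"))     # the application result goes to the output

def beta_move(a):
    """If application node `a` is wired to an L node, rewire locally.

    Only the six ports of the two nodes are inspected or touched; the
    rest of the graph is never read -- the move is purely local.
    """
    l, port = wire[(a, "func")]
    if nodes.get(l) != "L" or port != "out":
        return False
    result = wire[(a, "out")]   # whoever consumes the application's result
    arg = wire[(a, "arg")]      # whoever supplies the argument
    body = wire[(l, "body")]    # the root of the lambda's body
    var = wire[(l, "var")]      # the occurrence of the bound variable
    if body == (l, "var"):      # identity term: body IS the variable port
        connect(arg, result)
    else:
        connect(body, result)
        connect(arg, var)
    for p in ("func", "arg", "out"):
        wire.pop((a, p), None)
    for p in ("var", "body", "out"):
        wire.pop((l, p), None)
    del nodes[a], nodes[l]
    return True

beta_move(2)
print(wire[(3, "out")])  # the input is now wired straight to the output
```

Because the rule fires on any matching local pattern independently of the rest of the graph, many such moves can be applied asynchronously at different sites, which is the distributed reading of the list above.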

Now, the passage from “Intelligence without representation” by Rodney Brooks:

It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors.  Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.  […]

… we are not claiming that chaos is a necessary ingredient of intelligent behavior.  Indeed, we advocate careful engineering of all the interactions within the system.  […]
We do claim however, that there need be no  explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.
I claim there is more than this, however. Even at a local  level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological  connections between processes somehow encode.

However we are not happy with calling such things a representation. They differ from standard representations in too many ways. There are no variables (e.g. see [1] for a more thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon [14] noted that the complexity of behavior of a system was not necessarily inherent in the complexity of the creature, but perhaps in the complexity of the environment. He made this analysis in his description of an ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of even human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.


