I have to record this ongoing discussion from G+, it’s too interesting. I shall do it from my subjective viewpoint; feel free to comment on this, either here or in the original place.
(I did almost the same before, i.e. I saved here some of my comments from an older discussion, in the post Model of computation vs programming language in molecular computing. That recording was significant, for me at least, because I made those comments while thinking about the work on the GLC actors article, which was then in preparation.)
In what follows I shall only lightly edit the content of the discussion (for example by adding links).
It started from this post:
> […] cited in a 2003 Scientific American article on multiverses by Max Tegmark.
- local not global
- distributed not sequential
- no external controller
- no use of evaluation.
From this hypothesis, I believe that notions like “state”, “information”, “signal”, “bit” are concepts which don’t pass this filter, which is why they are part of an ideology that impedes the understanding of many wonderful things which have been discovered lately, somehow against this ideology. Again, Nature is a bitch, not a bit 🙂
That is why, instead of railing against this ideology and jumping to consciousness (which I think is something that will wait for understanding sometime very far in the future), I prefer to offer first an alternative (that’s GLC, chemlambda) which shows that it is indeed possible to do anything which can be done with these ways of thinking coming from the age of the invention of the telephone. And then more.
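To make the first three constraints above concrete, here is a minimal, generic sketch (Python, nothing GLC-specific; the little network, its values and the rule are invented for illustration): each node repeatedly applies a purely local rule, in an order fixed by no one, and a global answer emerges anyway. The fourth constraint, no use of evaluation, is illustrated separately further down.

```python
import random

# A toy network: each node knows only its neighbours and its own value.
# The graph and the values are made up for the illustration.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
value = {0: 3, 1: 7, 2: 1, 3: 5}

for _ in range(200):
    n = random.choice(list(value))            # no external controller: any node may act
    for m in neighbours[n]:
        value[n] = max(value[n], value[m])    # local rule: look only at the neighbours

print(value)   # after enough local steps, every node holds the network maximum (7)
```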
Would one argue then that the past is not real?
I think we can safely say that the past is a thing, and any reifications of this thing are very real.
+Marius Buliga, I’m still digesting; could you rephrase “- no use of evaluation” for me? But yes, practical is good!
- Computation with GLC actors: pure syntax, no semantics
- Distributed GLC discussion (II)
- No extensionality, no semantics
- The front end visual system performs like a Distributed GLC computation
In distinction from that, in distributed GLC there is no evaluation needed for computation. There are several causes of this. The first is that there are no values in this computation. The second is that everything is local and distributed. The third is that you don’t have eta reduction (thus no functions!). Otherwise, it resembles pure functional programming, if you see the core-mask construction as the equivalent of the input-output monad (only that you don’t have to bend over backwards to keep both functions and no side effects in the model).
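As a hedged illustration of “no values, no evaluation” (this is a toy encoding, not the actual GLC/chemlambda node and port conventions), here is the beta step expressed as a purely local graph rewiring: an application node whose function port meets a lambda node is removed, and the loose wires are reconnected, without anything being evaluated to a value.

```python
# Term graph of (λx. f x) y : a lambda node L1, an application node A1 for the
# outer application, and an application node A2 for (f x). All names are invented.
nodes = {"L1": "lambda", "A1": "apply", "A2": "apply"}
edges = [
    (("A1", "func"), ("L1", "out")),    # A1 applies the lambda L1 ...
    (("A1", "arg"),  ("y",  "wire")),   # ... to the free wire y
    (("A1", "out"),  ("res", "wire")),  # result wire of the whole term
    (("L1", "body"), ("A2", "out")),    # the body of the lambda is (f x)
    (("L1", "var"),  ("A2", "arg")),    # the bound variable x feeds A2
    (("A2", "func"), ("f",  "wire")),   # free wire f
]

def neighbour(edges, end):
    """Endpoint connected to `end`, if any."""
    for a, b in edges:
        if a == end:
            return b
        if b == end:
            return a
    return None

def beta_move(nodes, edges, a, l):
    """Local rewrite at an application node `a` whose 'func' port meets the
    'out' port of a lambda node `l`: delete both nodes and rewire the four
    remaining half-edges. Nothing is evaluated, only wires are reconnected."""
    body = neighbour(edges, (l, "body"))
    var  = neighbour(edges, (l, "var"))
    arg  = neighbour(edges, (a, "arg"))
    out  = neighbour(edges, (a, "out"))
    keep = [e for e in edges if e[0][0] not in (a, l) and e[1][0] not in (a, l)]
    keep += [(body, out), (arg, var)]   # body goes to the output, argument to the variable
    del nodes[a], nodes[l]
    return keep

edges = beta_move(nodes, edges, "A1", "L1")
print(edges)   # the remaining graph is that of (f y)
```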
Among the effects is that it goes outside the lambda calculus (the condition of being a lambda graph is global), which simplifies a lot of things, like for example the elimination of currying and uncurrying. Another effect is that it is also very much like an automaton kind of computation, only that it does not rely on a predefined grid, nor on an extra, heavy handbook of how to use it as a computer.
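For background on what is being eliminated: in the lambda calculus every function takes exactly one argument, so multi-argument functions must be curried. A small, generic Python illustration of curry/uncurry follows (nothing GLC-specific; in the graph setting this bookkeeping simply has no reason to exist).

```python
def curry(f):
    """Turn a two-argument function into nested one-argument functions."""
    return lambda x: lambda y: f(x, y)

def uncurry(g):
    """The inverse bookkeeping step."""
    return lambda x, y: g(x)(y)

add = lambda x, y: x + y
print(curry(add)(2)(3))           # 5
print(uncurry(curry(add))(2, 3))  # 5
```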
On a more philosophical side, it shows that it is possible to do what the lambda calculus and the Turing machine can do, but it can also do things without needing signals, bits and states as primitives. Coming back a bit to the comparison with pure functional programming, it solves the mentioned cognitive dissonance by saying that it takes into account the change of shape (pattern? like in Kauffman’s post) of the term during reduction (program execution), even if its evaluation is an invariant of the computation (the “no side effects” of functional programming). Moreover, it does this without working with functions.
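A hedged way to see “the shape changes, the evaluation is invariant” is an ordinary lambda calculus reduction trace. A minimal sketch follows (terms are plain tuples; substitution is naive, which is only safe because the example is closed and uses distinct bound names): the printed term changes shape at every step, while what it reduces to is fixed from the start.

```python
# Terms: ("var", name), ("lam", name, body), ("app", fun, arg).

def subst(term, name, value):
    """Naive substitution of `value` for the variable `name` (no capture avoidance)."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        return term if term[1] == name else ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """One leftmost-outermost beta step, or None if the term is in normal form."""
    if term[0] == "app":
        fun, arg = term[1], term[2]
        if fun[0] == "lam":
            return subst(fun[2], fun[1], arg)
        reduced = step(fun)
        if reduced is not None:
            return ("app", reduced, arg)
        reduced = step(arg)
        if reduced is not None:
            return ("app", fun, reduced)
    if term[0] == "lam":
        reduced = step(term[2])
        if reduced is not None:
            return ("lam", term[1], reduced)
    return None

# (λx. x x) (λy. y) : the shape changes at every step, the result is λy. y
t = ("app", ("lam", "x", ("app", ("var", "x"), ("var", "x"))), ("lam", "y", ("var", "y")))
while t is not None:
    print(t)
    t = step(t)
```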
I look forward to his comments about this!
The Autoverse is an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. It is deterministic, internally consistent and vaguely resembles real chemistry. Tiny environments, simulated in the Autoverse and filled with populations of a simple, designed lifeform, Autobacterium lamberti, are maintained by a community of enthusiasts obsessed with getting A. lamberti to evolve, something the Autoverse chemistry seems to make extremely difficult.
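Not the Autoverse chemistry, of course, which is fictional and far richer; but as a reminder of the kind of substrate being described, here is a minimal deterministic cellular automaton (Wolfram’s elementary rule 110) in a few lines of Python: purely local update rules, globally deterministic evolution.

```python
RULE = 110          # rule number in Wolfram's convention
WIDTH, STEPS = 64, 20

cells = [0] * WIDTH
cells[WIDTH // 2] = 1   # a single seed cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # each cell is updated from its own state and that of its two neighbours only
    cells = [
        (RULE >> (4 * cells[(i - 1) % WIDTH] + 2 * cells[i] + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```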
Related explorations go on in virtual realities (VR) which make extensive use of patchwork heuristics to crudely simulate immersive and convincing physical environments, albeit at a maximum speed of seventeen times slower than “real” time, limited by the optical crystal computing technology used at the time of the story. Larger VR environments, covering a greater internal volume in greater detail, are cost-prohibitive even though VR worlds are computed selectively for inhabitants, reducing redundancy and extraneous objects and places to the minimum details required to provide a convincing experience to those inhabitants; for example, a mirror not being looked at would be reduced to a reflection value, with details being “filled in” as necessary if its owner were to turn their model-of-a-head towards it.
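The mirror example is essentially lazy, on-demand computation. A minimal sketch, with an invented class and numbers purely for illustration: while nobody looks, only a cheap summary is kept; the detail is filled in at the moment an observer turns toward it.

```python
class Mirror:
    def __init__(self):
        self.reflection_value = 0.8   # cheap summary kept while unobserved
        self._detail = None           # full reflection, not yet computed

    def look_at(self, scene):
        if self._detail is None:                       # fill in only when observed
            self._detail = [x * self.reflection_value for x in scene]
        return self._detail

room = [0.1, 0.5, 0.9]
m = Mirror()
print(m.reflection_value)   # nobody is looking: only the summary exists
print(m.look_at(room))      # an observer turns toward it: detail is computed
```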