A very deep one, maybe.
Now, suppose that we scanned the whole world (or a part of it) and put the data in the cloud. Do we have a mirror of reality now in the cloud? No! Why not? According to mainstream CS ideology, the data is the same: coordinates and tags (color, texture, etc.) in the cloud, the same in reality.
Think about the IoT: we do have the objects, lots of them, in potentially unlimited detail. But there is still this uncanny valley between reality and computation.
We can’t use the data, because:
- there is too much data (for our sequential machines? for our slice-and-dice ideology, a manifestation of the Cartesian disease?)
- there is not enough time (because we ask the impossible: to do, on one very limited PC, the work done by huge parts of reality? or because the data is useful only together with a methodology based on an absolute, God's-eye view of reality and on passive space as a receptacle, and it is the methodology that stops us?)
I think that we can use the data (after reformatting it) and we can cross the uncanny valley between reality and computing. A way to do this supposes that:
- we get rid of absolute, passive space and time, and get rid of global views (not because these don't exist, but because they are a hypothesis we don't need!)
- we go beyond the Turing Machine and the von Neumann architecture, and seriously include a P2P, asynchronous, local, decentralized way of thinking in the model of computation (like CSP, the Actor Model, or why not Distributed GLC?); a minimal sketch of that style follows below
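To make the asynchronous, local, decentralized flavor a bit more concrete, here is a minimal CSP-style sketch in Go (goroutines and channels). It is only an illustration under my own assumptions: the `node` processes, their pipeline wiring, and the toy arithmetic they perform are invented for this example and are not part of CSP, the Actor Model, or Distributed GLC.

```go
// Minimal CSP-style sketch: several "local" processes, each owning only its
// own piece of the work, cooperate by message passing over channels. No
// process has a global view, and there is no shared memory or global clock.
package main

import (
	"fmt"
	"sync"
)

// node works purely locally: it reads values from its input channel,
// applies a local transformation, and forwards the result to its neighbour.
func node(id int, in <-chan int, out chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for v := range in {
		out <- v + id // stand-in for some local computation
	}
	close(out) // propagate shutdown to the next process
}

func main() {
	const stages = 4
	var wg sync.WaitGroup

	// Wire a pipeline of local processes; each is connected only to its neighbours.
	first := make(chan int)
	in := first
	for i := 1; i <= stages; i++ {
		out := make(chan int)
		wg.Add(1)
		go node(i, in, out, &wg)
		in = out
	}

	// Feed a few values in asynchronously.
	go func() {
		for v := 0; v < 3; v++ {
			first <- v
		}
		close(first)
	}()

	// Collect whatever comes out of the last stage.
	for v := range in {
		fmt.Println("result:", v)
	}
	wg.Wait()
}
```

The point of the sketch is the design, not the arithmetic: each process only talks to its neighbours through channels, nothing holds a global state, and the overall result emerges from purely local, asynchronous interactions.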
This is fully compatible with the response given by Neil Gershenfeld to the question
(Thank you Stephen P. King for the G+ post which made me aware of that!)