I am continuing the post Applications of UD with two comments, one concerning Google, the other Kinect.
Google: There are many discussions (on G+ in particular) around A second spring of cleaning at Google, mainly about their decision concerning Google Reader. But have you noticed that they are closing Google Building Maker? The reason is this:
Compare with Aerometrex, which uses UD:
So, are we going to see application 2 from the last post (Google Earth with UD) really soon?
Kinect: (I moved the update from the previous post here and modified it slightly.) Take a look at the video from Kinect + Brain Scan = Augmented Reality for Neurosurgeons.
They propose the following strategy:
- first, use the data collected by the scanner to transform the scan of the patient’s brain into a 3D representation
- then, use Kinect to lay this representation over the real-world reconstruction of the patient’s head (done in real time by Kinect), so that the neurosurgeon gets an augmented reality view of the head which allows him or her to see inside it and decide what to do accordingly (a rough sketch of both steps follows this list).
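The video does not show the underlying code, so here is only a minimal sketch of the two steps in Python, under some assumptions: the scan is available as a NumPy volume, the surface is extracted with marching cubes (scikit-image), and the alignment is done with ICP (Open3D) against the head surface captured by Kinect. The function `capture_kinect_head_cloud` would be whatever the Kinect SDK provides and is purely hypothetical here. In practice one would register the skin surface extracted from the scan (not the brain itself) against the Kinect reconstruction, then apply the same transformation to the brain model.

```python
# A minimal sketch, not the authors' implementation: step 1 extracts a
# surface from the scan volume, step 2 aligns it with the Kinect head
# reconstruction. `capture_kinect_head_cloud` is hypothetical.
import numpy as np
import open3d as o3d
from skimage import measure

def scan_to_cloud(volume: np.ndarray, iso_level: float) -> o3d.geometry.PointCloud:
    """Step 1: extract an iso-surface from the volumetric scan (e.g. the
    skin surface for registration, the brain surface for display)."""
    verts, _faces, normals, _vals = measure.marching_cubes(volume, level=iso_level)
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(verts)
    cloud.normals = o3d.utility.Vector3dVector(normals)
    return cloud

def align_to_head(skin: o3d.geometry.PointCloud,
                  head: o3d.geometry.PointCloud) -> np.ndarray:
    """Step 2: ICP alignment of the scan-derived skin surface with the
    head surface reconstructed by Kinect; the returned 4x4 transformation
    also places the brain model correctly inside the live view."""
    result = o3d.pipelines.registration.registration_icp(
        skin, head,
        max_correspondence_distance=0.01,  # meters; tune for the setup
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

Once that transformation is known, the brain model can simply be rendered over the live Kinect feed from the surgeon’s viewpoint.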
This is very much compatible with the UD way (see application point 3). Suppose you have a detailed brain scan, much more detailed than Kinect alone can handle. Why not use UD for the first step, then Kinect for the second? First put the scan data into the UD format, then use the UD machine to stream only the necessary data to the Kinect system. This way you get the best of both worlds: the neurosurgeon could really see microscopic detail, if needed, correctly mapped inside the patient’s brain. Indeed, what about a microscopic-level reconstruction of the brain, which is the level of detail the neurosurgeon really needs?
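Euclideon has not published the UD format or streaming algorithm, so the sketch below only illustrates the generic idea with a plain octree: store one representative point per cell, and answer a query for a region of the head by descending just deep enough for the requested resolution, so the Kinect system receives only the points it can actually display. All names are illustrative.

```python
# Illustrative only: Euclideon's actual UD format and streaming algorithm
# are not public. A generic octree with one representative point per cell
# captures the idea of sending just enough detail for the current view.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]

@dataclass
class Node:
    center: Point                      # center of this octree cell
    size: float                        # edge length of the cell
    point: Optional[Point] = None      # representative point of the cell
    children: List["Node"] = field(default_factory=list)  # up to 8 subcells

def stream_region(node: Node, region_center: Point, region_radius: float,
                  min_cell: float):
    """Yield only the points needed for one view of a region: descend the
    tree until cells are smaller than the finest detail the display (here,
    the Kinect overlay) can resolve, then stop."""
    # Coarse rejection test: skip cells clearly outside the region.
    if max(abs(c - r) for c, r in zip(node.center, region_center)) \
            > region_radius + node.size / 2:
        return
    # Fine enough for the requested resolution, or a leaf: emit one point.
    if node.size <= min_cell or not node.children:
        if node.point is not None:
            yield node.point
        return
    for child in node.children:
        yield from stream_region(child, region_center, region_radius, min_cell)
```

A microscope-resolution scan would then simply mean a deeper tree; the amount streamed per frame stays bounded by the display resolution (min_cell), not by the size of the data set.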