
## Unlimited detail is a sorting algorithm

This is a continuation of the post “Unlimited detail challenge…”.  I am still waiting for a real solution to emerge from the various intriguing ideas. Here I want to comment a bit on the kind of algorithm UD might be.

Bruce Dell compared it with a search algorithm, more specifically with a search algorithm for words. As everybody knows, words form an ordered set under the lexicographic order.

Moreover, Dell explains that the point of UD is to pick from the huge database (which might sit on a server, while the algorithm runs on a computer elsewhere, so the algorithm has to be, in a sense, an online algorithm) only the 3D atoms which are visible on the screen, and then to render only those atoms.

Therefore, imagine that we have a huge database of 3D atoms with some characteristics (like color) and a separate database made of the coordinates of these atoms. The UD algorithm solves a “sorting” problem in the second database. (I put “sorting” in quotes because there is no total order fully compatible with the solution of the ray-casting problem, in the sense of one that remains the same when the POV is changed.) Once this “sorting” part is done, the algorithm asks for the characteristics of only those points and proceeds to the rendering part, which is almost trivial.
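A minimal sketch of this “sorting” view, assuming a toy orthographic projection and a flat list of atoms (all names here are hypothetical, not Euclideon's actual data structures): the algorithm keeps at most one atom per pixel, the nearest one, which is a selection problem over the coordinate database rather than a full sort.

```python
# Toy illustration of the "sorting" view of UD: out of a set of 3D
# atoms, select at most one visible atom per screen pixel (the nearest
# one along the viewing direction). Orthographic projection for brevity.
def visible_atoms(atoms, screen_w, screen_h):
    """atoms: list of (x, y, z, color) with z = distance from the viewer.
    Pixel = (int(x), int(y)). Returns {pixel: (z, color)} -- a selection,
    not a full sort: atoms outside the screen are never touched."""
    nearest = {}
    for x, y, z, color in atoms:
        px, py = int(x), int(y)
        if not (0 <= px < screen_w and 0 <= py < screen_h):
            continue  # off-screen atom: skipped, never fetched
        key = (px, py)
        if key not in nearest or z < nearest[key][0]:
            nearest[key] = (z, color)  # keep the closer atom
    return nearest

atoms = [(1.2, 0.7, 5.0, "red"), (1.4, 0.9, 2.0, "white"), (9.0, 9.0, 1.0, "blue")]
print(visible_atoms(atoms, 8, 8))  # {(1, 0): (2.0, 'white')}
```

Here two atoms project to the same pixel and only the nearer (white) one survives; the blue atom falls off the 8x8 screen and is discarded. A real engine would of course use a hierarchical index instead of a linear scan, which is where the logarithmic estimates below come from.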

Judging by sorting algorithms, one then expects a time estimate of the kind $O((\log N)^{2})$ (for a given, fixed number of screen pixels), where $N$ is the number of 3D atoms.

There are even asymptotically better constructions, like the AKS sorting network with its $O(\log N)$ depth, but these only pay off in practice for really huge numbers of atoms, like $2^{6000}$.

So, the bottom line is this: think about UD as being a kind of sorting algorithm.

1. March 5, 2013 at 4:41 pm | #1

A new Euclideon interview with an English student:

It’s a fairly nervous interview, but Dell lets some things slip, like here:-

“With point cloud data you only grab the differences from frame to frame”
Hmmm… I’m now more confused than ever about how this works. Did he imply jpeg streaming is similar to their own process?

I asked him to ask the question about how long the island took to convert to Euclideon format at the end. The vague answer was about 20 hours for a terabyte of point clouds on average – not a good sign for game development.

These things lead me back to thinking it’s pre-rendered and sampled somehow.

• March 25, 2013 at 2:16 pm | #2

I noticed a rendering artifact at 30:21. Though, I’m not sure what to make of it.

Regarding selecting differences from frame to frame: wavelet-based image coding (as in JPEG 2000, rather than classic JPEG, which uses the DCT) is basically applying the wavelet transform (http://en.wikipedia.org/wiki/Wavelet_transform) and then using run-length encoding. The wavelet transform basically builds a quadtree from the original image, with the average color in the root node and some difference information that tells how to obtain the average colors of the child nodes. This can be trivially generalized to octrees, allowing one to stream only differences (as in JPEG streaming). One can then select the high-detail differences nearby and the low-detail differences far away and stream these to the client.
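The average-pyramid idea described above can be sketched in a few lines, assuming a square grayscale image whose side is a power of two (the function name is hypothetical): each level stores quadrant averages of the level below, so the root is the average color of the whole image, exactly the structure a streaming client could refine level by level.

```python
# Build the quadtree "average pyramid": the root holds the mean of the
# whole image; each finer level refines it quadrant by quadrant.
def average_pyramid(img):
    """img: 2**k x 2**k grid (list of lists of numbers).
    Returns a list of levels, coarsest (1x1 root) first."""
    levels = [img]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        n = len(cur) // 2
        # Each coarse cell is the mean of a 2x2 block of finer cells.
        coarse = [[(cur[2*i][2*j] + cur[2*i][2*j+1] +
                    cur[2*i+1][2*j] + cur[2*i+1][2*j+1]) / 4.0
                   for j in range(n)] for i in range(n)]
        levels.append(coarse)
    return levels[::-1]  # root first

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [4, 4, 4, 4],
       [4, 4, 4, 4]]
pyr = average_pyramid(img)
print(pyr[0])  # [[4.0]] -- average of the whole image
print(pyr[1])  # [[0.0, 8.0], [4.0, 4.0]] -- quadrant averages
```

A wavelet coder would store the root plus per-level differences instead of the levels themselves; the generalization to octrees replaces the 2x2 block with a 2x2x2 one.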

Further compression can be obtained by smoothing out noise in the difference maps, which improves the compression ratio of the run-length encoding.

One issue with this method comes to mind: color leaking. Consider a wall, white on one side and red on the other. Once you are sufficiently far away, looking at the white side, the front and back colors would be stored merged in a single tree node, effectively coloring the wall pink. However, assuming Euclideon uses this approach, they seem to hide it by reducing saturation at large distances, which will cause the wall to turn light gray.
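The color-leaking effect is plain averaging arithmetic; a minimal sketch with hypothetical RGB values for the two sides of the wall:

```python
# Color leaking: when both sides of a wall merge into one coarse tree
# node, their colors are averaged -- white + red gives pink.
def merge_colors(colors):
    """Componentwise integer average of a list of (r, g, b) tuples."""
    n = len(colors)
    return tuple(sum(c[i] for c in colors) // n for i in range(3))

white = (255, 255, 255)
red = (255, 0, 0)
print(merge_colors([white, red]))  # (255, 127, 127) -- pink
```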

2. March 5, 2013 at 4:43 pm | #3

The second video was supposed to start at about 29:33
I don’t know why the link’s time position didn’t work.

• March 5, 2013 at 7:21 pm | #4

Thanks Dave! Re: “about 20 hours for a terabyte of point clouds on average”, yes, but then you can walk around for hours in that world, wherever you wish. And remember that the Euclideon converted database is smaller (anyway, not bigger) than the input database.

• March 5, 2013 at 7:36 pm | #5

Yeah, I don’t think the size of the final database is the salient point here. From the perspective of computer game developers only, 5 minutes is too long from artwork to running demo.
I wonder if you have to remake the whole world again if you change a small area? If it’s pre-rendered then some items will be seen from many different areas.

3. March 5, 2013 at 7:50 pm | #6

Dave, I think the database conversion is done once, by the producer of the game, say. You get the converted database only, you don’t need to convert it on your computer every time when you load the game.

Much more interesting than games would be, in my opinion, a UD Google Earth. Which will happen, much sooner than expected, anytime now. It would look much more like the “Earth” software from Neal Stephenson’s novel Snow Crash.

• March 5, 2013 at 8:13 pm | #7

I didn’t say that either.
If an artist changes a branch or a dirt patch on a brick wall, he/she will have to export it to the viewer. This export process involves converting the mesh/points or whatever directly into the format required for the Euclideon engine to use.

4. March 12, 2013 at 7:06 am | #8

You’d need to convert the point cloud data for the compression advantages anyway. So it can just be done when the game goes gold.

Also, people need to realize that unlimited detail tech will effectively kill the 3D modelling software industry, as it will become a lot cheaper to just pay an artist to make and paint a real 3D model and then laser-scan it into the engine.

• March 12, 2013 at 1:12 pm | #9

The data format IS the renderer, the point cloud must be converted to be visible in Geoverse. So the process must be done every time.
Here’s what’s happening in one 3D modelling package:-

Do you really think that stationary, unlit, massive data models can compete with that? It’s amazing what’s happening in the GPU world at the moment.

• March 12, 2013 at 1:21 pm | #10

“The data format IS the renderer, the point cloud must be converted to be visible in Geoverse. So the process must be done every time.”
NO, WHY?

• March 12, 2013 at 1:30 pm | #11

chorasimilarity :
“The data format IS the renderer, the point cloud must be converted to be visible in Geoverse. So the process must be done every time.”
NO, WHY?

I’m sorry, I don’t understand. Are you disagreeing with me, or actually asking why?
ALL the discussions on here have been about the data format, and how it’s made.

• March 12, 2013 at 1:53 pm | #12

Yes, I disagree; I am asking why you are saying such a strange thing. There are no shortcuts: you need time to pass from a raw, huge database of 3D atoms to its visualization. What UD does is split this time into two steps: 1. most of the time is consumed in transforming the initial database into one of comparable size (maybe smaller, like 20%); this is done once, taking 20 hrs for 1 TB (not on a laptop, but on a computer farm); 2. the advantage of the UD-formatted database is that it can then be used fast, by a laptop or even by a cellphone, because it allows extracting only a very few significant data points from the database, in real time.
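This two-step split can be illustrated with the simplest possible analogue, on a hypothetical one-dimensional toy “database”: a slow one-time preprocessing pass (sorting, standing in for the 20 hrs/TB conversion), after which each per-frame query is fast (binary search):

```python
import bisect

# Step 1 (slow, done ONCE by the producer): build an ordered index
# of the raw database. Stands in for the 20 hrs/TB conversion.
raw_atoms = [42.0, 3.5, 17.2, 8.8, 99.1]
index = sorted(raw_atoms)  # O(N log N), paid a single time

# Step 2 (fast, done every frame on weak hardware): extract only the
# few atoms that matter, in O(log N) per query via binary search.
def nearest(index, q):
    """Return the indexed value closest to query coordinate q."""
    i = bisect.bisect_left(index, q)
    candidates = index[max(0, i - 1):i + 1]  # at most two neighbors
    return min(candidates, key=lambda v: abs(v - q))

print(nearest(index, 10.0))  # 8.8
```

The point of the analogy: the client never pays the sorting cost again; it only runs the cheap query step, which is why a laptop or a cellphone suffices once the UD-formatted database exists.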

• March 12, 2013 at 2:33 pm | #13

chorasimilarity :
Yes, I disagree; I am asking why you are saying such a strange thing. There are no shortcuts: you need time to pass from a raw, huge database of 3D atoms to its visualization. What UD does is split this time into two steps: 1. most of the time is consumed in transforming the initial database into one of comparable size (maybe smaller, like 20%); this is done once, taking 20 hrs for 1 TB (not on a laptop, but on a computer farm); 2. the advantage of the UD-formatted database is that it can then be used fast, by a laptop or even by a cellphone, because it allows extracting only a very few significant data points from the database, in real time.

What about creating and editing the data? Whenever anything is changed, how much of the conversion has to be done again? It’s what I was saying a few posts before.

You can’t paint or manipulate the scene in the actual Euclideon format, so you have to re-generate the data every time you want to make a change to the original.

If you’re a game designer for example (or an archaeologist, crime scene investigator, or architect etc), you’ll need to go from creating the model to viewing it really quickly, or instantly, like the ‘Unreal’ demo shown above – the guy is manipulating all the graphics in real-time.
That’s all I’m saying. I can’t seem to get across the importance of the bond between CREATION and VIEWING and how imperative that is to the creative business.
Unlimited Detail does not allow this, because of the large conversion time.
