Unlimited detail is a sorting algorithm

This is a continuation of the post “Unlimited detail challenge…“.  I am still waiting for a real solution to emerge from the various intriguing ideas. Here I want to comment a bit on the kind of algorithm UD might be.

Bruce Dell compared it with a search algorithm, more specifically with a search algorithm for words. As everybody knows, words form an ordered set under the lexicographic order.

Moreover, Dell explains that the point of UD is to pick from the huge database (which might sit on a server while the algorithm runs on a computer elsewhere, so the algorithm has to be, in a sense, an online algorithm) only the 3D atoms which are visible on the screen, and then to render only those atoms.

Therefore, imagine that we have a huge database of 3D atoms with some characteristics (like color) and a separate database made of the coordinates of these atoms. The UD algorithm solves a “sorting” problem in the second database. (I put “sorting” in quotes because there is no total order fully compatible with the solution of the ray-casting problem, in the sense of an order that remains the same when the POV is changed.) Once this “sorting” part is done, the algorithm asks for the characteristics of only those points and proceeds to the rendering part, which is almost trivial.

Judging by sorting algorithms, one should then expect a time estimate of the order O((\log N)^{2}) (for a given, fixed number of screen pixels), where N is the number of 3D atoms.

There are even sorting algorithms which are asymptotically more efficient, like O(\log N), but they become more efficient in practice only for really huge numbers of atoms, like 2^{6000}, as in the case of AKS sorting.
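
For concreteness (this is my own illustration, not something from Dell): the O((\log N)^{2}) figure matches the depth of classical sorting networks such as Batcher’s bitonic sorter, which uses k(k+1)/2 comparator stages for N = 2^{k} inputs, while the AKS network has O(\log N) depth but constants so large that it is usually quoted as winning only for astronomically many inputs, of the order of 2^{6000}.

```python
# Illustration only: depth (number of comparator stages) of Batcher's
# bitonic sorting network, which grows like (log N)^2 / 2.
from math import log2

def bitonic_depth(n: int) -> int:
    """Stages of a bitonic sorter for n = 2^k inputs: 1 + 2 + ... + k."""
    k = int(log2(n))
    return k * (k + 1) // 2

for n in (2**10, 2**20, 2**30):
    print(f"N = 2^{int(log2(n)):2d}: {bitonic_depth(n)} stages")
```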

So, the bottom line is this: think about UD as being a kind of sorting algorithm.
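
To make the “sorting, then cheap rendering” picture concrete, here is a minimal sketch of my reading of it (a toy model, not Euclideon’s actual algorithm): coordinates live in one structure and attributes in another; per screen pixel the candidates are kept in front-to-back order, only the nearest atom is touched, and only visible atoms ever cause an attribute lookup.

```python
# Minimal sketch (my own toy, not Euclideon's code): a "coordinate database"
# is preprocessed into per-pixel, front-to-back ordered buckets; a frame is
# rendered by reading only the nearest atom per pixel, and the separate
# "attribute database" is consulted only for those visible atoms.
from collections import defaultdict

coords = {0: (0.20, 0.30, 0.10),   # atom id -> (x, y, z) in [0, 1)^3
          1: (0.20, 0.30, 0.90),
          2: (0.70, 0.55, 0.40)}
colors = {0: "red", 1: "blue", 2: "green"}   # attributes, fetched lazily

W = H = 4            # tiny screen, orthographic camera looking along +z
Z_BITS = 16          # depth quantisation used for the ordering

# "Conversion" step (the expensive, do-once part): bucket atoms by pixel
# and sort each bucket front to back.
index = defaultdict(list)
for aid, (x, y, z) in coords.items():
    index[(int(x * W), int(y * H))].append((int(z * (1 << Z_BITS)), aid))
for bucket in index.values():
    bucket.sort()

def render():
    """Per frame: one atom per covered pixel, attribute lookup only for those."""
    return {pixel: colors[bucket[0][1]] for pixel, bucket in index.items()}

print(render())      # e.g. {(0, 1): 'red', (2, 2): 'green'}
```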

15 thoughts on “Unlimited detail is a sorting algorithm”

  1. New Euclideon interview for an English student:

    It’s a fairly nervous interview, but Dell lets some things slip, like here:


    “With point cloud data you only grab the differences from frame to frame”
    Hmmm… I’m now more confused than ever about how this works. Did he imply that JPEG streaming is similar to their own process?

    I asked him to ask the question, at the end, about how long the island took to convert to Euclideon format. The vague answer was about 20 hours for a terabyte of point clouds on average – not a good sign for game development.

    These things lead me back to thinking it’s pre-rendered and sampled somehow.

    1. I noticed a rendering artifact at 30:21, though I’m not sure what to make of it.

      Regarding selecting the differences from frame to frame: well, lossless JPEG encoding is basically applying the wavelet transform (http://en.wikipedia.org/wiki/Wavelet_transform) and then using run-length encoding. The wavelet transform basically builds a quadtree from the original image, with the average color in the root node and some difference information that tells how to obtain the average colors of the child nodes. This can be trivially generalized to octrees, allowing one to stream only differences (like in JPEG streaming). One can then select the high-detail differences nearby and the low-detail differences far away and stream these to the client.

      Further compression can be obtained by smoothing out noise in the difference maps, which improves the compression ratio of the run-length encoding.

      One issue with this method comes to mind, namely color leaking: consider a wall, white on one side and red on the other. Once you are sufficiently far away, looking at the white side, the front and back colors would be stored merged in a single tree node, effectively coloring the wall pink. However, assuming Euclideon uses this approach, they seem to hide it by reducing saturation at large distances, which will cause the wall to turn light gray.
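
      A toy version of the scheme described above (my own sketch, with scalar “colors” for brevity, not anything Euclideon has published): each quadtree node stores the average of its children plus the differences needed to reconstruct them, so a client can be sent only the differences for the regions it needs; the same toy also shows the color-leaking effect on a two-sided wall.

      ```python
      def build_node(block):
          """block: 2x2 grid of color values -> (parent average, child differences)."""
          avg = sum(sum(row) for row in block) / 4.0
          diffs = [[c - avg for c in row] for row in block]
          return avg, diffs

      def reconstruct(avg, diffs):
          """Client side: parent average + streamed differences -> child colors."""
          return [[avg + d for d in row] for row in diffs]

      # Color leaking: a thin wall, white (255) on one face and darker red (76)
      # on the other, blends into one intermediate value once both faces fall
      # into the same far-away node.
      wall = [[255, 255], [76, 76]]
      avg, diffs = build_node(wall)
      print(avg)                                                  # 165.5 ("pink")
      print(reconstruct(avg, diffs) == [[255.0, 255.0], [76.0, 76.0]])  # True
      ```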

      1. I did indeed think that a mix of polygon-like reference points would be needed to make such an engine feasible for a game, but also to avoid the color leaking you describe, which I also happened to think of, especially in the smaller mipmaps. What we are talking about here is a sort of 3D-optimized mipmapping, used to jump between voxel chunks that get increasingly far apart as the distance increases, which allows us not to even sort points which are not significant or which would fall on the same pixel of the 2D screen.
        As with polygons and NURBS, one can anyway also spare RAM by reorienting the same sets of point clouds or voxels, or even by using relative coordinates, which need less memory for higher-precision locations wherever they are required and for the pointers that build the network of references connecting the masses.
        An obvious objection I can see against animating big chunks of data is that you would need to move millions of points every frame, compared to a few thousand polygons. Not true, because those coordinates are relative to the origin, and only the origin actually needs to be moved; the other points composing the object can be rendered only when on screen, still using the sorting algorithm that supposedly draws only the needed pixels each frame (see the sketch after this comment). When the whole object is off screen (collision calculations aside), the system mostly has to account for the origin only, and only for morph targets and skeletal joints in the case of skeletal animation. I know because such streamlining already happens with polygons as well.
        I can think of one problem (and of solutions to it as well), though. This all works wonders as long as you can parse points which are dense, passing from one to the next, going through each level of detail and distance bias; but what happens if you have to render what is behind the roof of a palace and cannot reach such far-away points by passing through the aforementioned chain of directly connected points (because the chain is off screen or hidden by already-drawn points)? Having to sort so many invisible points would throw away all the advantage of this technology; it would be like overdraw.
        We would all certainly think of accessing such far nodes from parent nodes, which cover much bigger distances between each other. Especially if we are using recursive functions to go through the leaves and branches, going back to parents would be extremely fast and easy.

        A problem with this, if unoptimized, would be isotropy though: something analogous to how blurry texture mipmaps used to be when seen from a tight transversal angle, like a road, a floor or a wall parallel to your view direction.
        Obviously blurriness would be a problem in this situation, as we only need to find the next convenient point behind the fence or parapet of our roof, out of a vast panoramic view with easily close to a million remaining points to draw, if not more.
        The problem with using a parent node to skip into the distance is that the next parent node might be out of sight (or hidden by the parapet and/or by the portion of roof floor already drawn, depending on our position), and the system would have to try drawing another node while ensuring it is on our line of sight, which would be easier once back inside a dense network of nodes.
        This is without taking into account the analogue of the color-bleeding problem, which would be the loose network of parent nodes landing right inside the building, in a perfectly useless position.
        I thought of some ways to overcome this: using wall entities and keeping the parent nodes external, plus a flexible list of pointers and references, updatable if the world gets modified or something moves, plus some arrays that change how the other data set is accessed depending on the angle.
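
        As promised above, a minimal sketch of the relative-coordinates point (my own toy, nothing from Euclideon): the atoms of an object are stored relative to its origin, so per-frame animation of the whole object touches one origin instead of millions of points, and world positions are resolved only for atoms that actually get rendered.

        ```python
        # Toy illustration: atoms keep relative coordinates; animation moves
        # only the origin, and world positions are computed lazily on demand.
        from dataclasses import dataclass

        @dataclass
        class Object3D:
            origin: tuple        # the only thing touched every frame
            local_atoms: list    # relative coordinates, never rewritten

            def world_atom(self, i):
                ox, oy, oz = self.origin
                x, y, z = self.local_atoms[i]
                return (ox + x, oy + y, oz + z)

        tree = Object3D(origin=(10.0, 0.0, 5.0),
                        local_atoms=[(0.1, 2.0, 0.0), (0.2, 1.9, 0.1)])  # ...millions more
        tree.origin = (11.0, 0.0, 5.0)     # per-frame animation: one assignment
        print(tree.world_atom(0))          # (11.1, 2.0, 5.0), computed only when needed
        ```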

  2. The second video was supposed to start at about 29:33
    I don’t know why the link’s time position didn’t work.

    1. Thanks Dave! Re: “about 20 hours for a terabyte of point clouds on average”: yes, but then you can walk around for hours in that world, wherever you wish. And remember that the Euclideon-converted database is smaller (at any rate, not bigger) than the input database.

      1. Yeah, I don’t think the size of the final database is the salient point here. From the perspective of computer game developers only, 5 minutes is too long from artwork to running demo.
        I wonder if you have to remake the whole world again if you change a small area? If it’s pre-rendered then some items will be seen from many different areas.

  3. Dave, I think the database conversion is done once, by the producer of the game, say. You get the converted database only; you don’t need to convert it on your computer every time you load the game.

    Much more interesting than games would be, in my opinion, a UD Google Earth, which will happen much sooner than expected, anytime now. It would look much like Neal Stephenson’s “Earth” software from his novel Snow Crash.

    1. I didn’t say that either.
      If an artist changes a branch or a dirt patch on a brick wall, he/she will have to export it to the viewer. This export process involves converting the mesh/points or whatever directly into the format required for the Euclideon engine to use.

  4. You’d need to convert the point cloud data for the compression advantages anyway. So it can just be done when the game goes gold.

    Also, people need to realize that unlimited detail tech will effectively kill the 3D modeling software industry, as it will become a lot cheaper to just pay an artist to make and paint a real 3D model and then laser-scan it into the engine.

    1. The data format IS the renderer, the point cloud must be converted to be visible in Geoverse. So the process must be done every time.
      Here’s what’s happening in one 3D modelling package:

      Do you really think that stationary, unlit, massive data models can compete with that? It’s amazing what’s happening in the GPU world at the moment.

      1. chorasimilarity :
        “The data format IS the renderer, the point cloud must be converted to be visible in Geoverse. So the process must be done every time.”
        NO, WHY?

        I’m sorry, I don’t understand. Are you disagreeing with me, or actually asking why?
        ALL the discussions on here have been about the data format, and how it’s made.

      2. Yes, I disagree; I am asking why you are saying such a strange thing. There are no shortcuts: you need time to pass from a raw, huge database of 3D atoms to a visualization of it. What UD does is split this time into two steps: 1. most of the time is consumed in transforming the initial database into one of comparable size (maybe smaller, like 20%); this is done once, 20 hrs (not on a laptop, but on a computer farm) for 1 TB; then 2. the advantage of the UD-formatted database is that it can be used fast, by a laptop or even by a cellphone, because it allows one to extract only very few significant data from the database, in real time.
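
        To illustrate the two-step split (a toy of my own, not UD’s actual format): step 1 is the slow, run-once conversion into a spatially bucketed layout, and step 2 is the cheap per-frame query that touches only a few buckets.

        ```python
        # Toy two-phase pipeline: slow offline conversion, fast runtime query.
        from collections import defaultdict

        def convert(raw_atoms, cell=1.0):
            """Step 1 (slow, offline, done once): bucket atoms by coarse grid cell."""
            buckets = defaultdict(list)
            for atom in raw_atoms:                      # one pass over the whole database
                key = tuple(int(c // cell) for c in atom)
                buckets[key].append(atom)
            return dict(buckets)                        # this is what ships to the client

        def query(buckets, visible_cells):
            """Step 2 (fast, per frame): read only the cells the camera can see."""
            return [a for key in visible_cells for a in buckets.get(key, [])]

        raw = [(0.2, 0.1, 0.3), (5.4, 2.2, 0.9), (0.8, 0.4, 0.2)]
        db = convert(raw)                               # done once, e.g. "20 hrs per TB"
        print(query(db, [(0, 0, 0)]))                   # [(0.2, 0.1, 0.3), (0.8, 0.4, 0.2)]
        ```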

      3. chorasimilarity :
        Yes, I disagree; I am asking why you are saying such a strange thing. There are no shortcuts: you need time to pass from a raw, huge database of 3D atoms to a visualization of it. What UD does is split this time into two steps: 1. most of the time is consumed in transforming the initial database into one of comparable size (maybe smaller, like 20%); this is done once, 20 hrs (not on a laptop, but on a computer farm) for 1 TB; then 2. the advantage of the UD-formatted database is that it can be used fast, by a laptop or even by a cellphone, because it allows one to extract only very few significant data from the database, in real time.

        What about creating and editing the data? Whenever anything is changed, how much of the conversion has to be done again? It’s what I was saying a few posts before.

        You can’t paint or manipulate the scene in the actual Euclideon format, so you have to re-generate the data every time you want to make a change to the original.

        If you’re a game designer for example (or an archaeologist, crime scene investigator, or architect etc), you’ll need to go from creating the model to viewing it really quickly, or instantly, like the ‘Unreal’ demo shown above – the guy is manipulating all the graphics in real-time.
        That’s all I’m saying. I can’t seem to get across the importance of the bond between CREATION and VIEWING and how imperative that is to the creative business.
        Unlimited Detail does not allow this, because of the large conversion time.
