New discussion on UD (updated)

This post is for a new discussion on UD, because the older ones “Discussion about how an UD algorithm might work” and “Unlimited detail challenge: the most easy formulation” have lots of comments.

____________

UPDATE: It looks like Oliver Kreylos’ extremely interesting work on LiDAR visualization (go and read his blog as well) is very close to Bruce Dell’s Geoverse. Thanks to Dave H. (see the comments further down) for letting me know about this. It is more and more puzzling why UD was met with so much rejection and so many hate messages when, it turns out, slowly but surely, that this is one of the future hot topics.

____________

Please post any new comments here and I shall update this post with relevant information. We are close to a solution (maybe different from the original one); thanks to several contributors, this project is beginning to look like a truly collaborative one.

Note: please don’t worry if your comment does not appear immediately; this might be because it contains links (which is good!) or because of other quirks of the WordPress comment filtering, which sometimes makes me approve comments even from people whose earlier comments were already approved. (Also, note that your first comment has to be approved; the following ones appear without approval, unless the previous remarks apply.)

Let’s talk!

103 thoughts on “New discussion on UD (updated)”

  1. I was re-reading some of the previous threads and I came to the conclusion that JX nailed it hard.
    My thought is that in none of the UD videos do they seem to have any LOD issue. I mean the characteristic “pixelated” effect that pops up in the absence of mipmapping. If they had used any other method for raytracing/voxel partitioning they should have come across something like that, but their videos have smooth “zooming” transitions.
    In JX’s snapshots the behavior is very similar (possibly a pass of antialiasing would fix it). So yes, UD = cubemaps.
    That’s a very “naive” idea, but meh. Keep it compressed and you’ll be just fine.
    I really think that’s all. Maybe.

    1. Right, I think the same. There is at least one thing still undone: introducing an octree-style description of the point cloud which is not “geographical”, i.e. not relative to some absolute reference system, but centered on the cloud points, something Dave H. is working on. That’s the second very good idea, in chronological order.

      1. Yep that’s also going to be cool!
        Darn, I can hardly imagine Dell right now reading our comments and laughing his ass off like: “Look at what things I’m making people do”.

        As a side note, what about animations? Such a system isn’t really going to go anywhere without animations or physics (as Dell claimed, at least for the videogame industry). Is it about “switching” cubemaps? That’s insane.
        Oh no. I see no full-UD applications. I see hybrids.

  2. Sorry, I meant videogames. Hybrid videogames. I’m out of sleep. lol
    The first thing I’ll check will be something like “my house in UD” on street view. And my street sucks. At least it will suck at unlimited detail.

      1. Examining ‘street view’ gives you a completely (up to resolution) detailed view of the outside of your house as it is! Look at all those fine, fine details on the surfaces. My research is based on a kind of merge between different ‘street view’ style 3D cube maps. 😉
        I’m starting to have my doubts at the moment though, but only because I’ve hit a snag with it concerning holes in the data. Instead of moving the 3D points I need to re-map the screen pixels to the new positions to fill in the holes. It’s a bit hard to explain.

      2. Dave H., take your time, although I think I understand you. What you describe is much cheaper than retrieving the right points from the database every single time; it is better to work as much as possible in the screen-pixel space (or in the space of the selected virtually visible points). Am I right?

  3. Dave H, keep in mind that Euclideon took something like 8 years to achieve what they are getting now. Problems may get THAT hard to solve. Be warned then.

    Also I would like to ask a question about cubemapping.
    I need to understand “when” the holes pop up. If the artifacts get worse the further you move from the center of a cubemap, then I’ll throw out this idea: overlapping cubemaps.
    When the precision is not enough with the current cubemap, you “switch” to the closest overlapping cubemap before the holes come in.
    You may have already discussed that, so in that case just ignore my comment 🙂

    Btw, I retrieved some frames with artifacts. Very likely they have already been linked here, but I can’t seem to find them (a grass ‘tip’ and the scene of the leaf collision).
    http://www.mediafire.com/view/?txh2g012av5py#9lnk863k4m6e5ar

  4. The artefacts could be in the original model. It’s a great idea to link to some images though; I’ll grab some tomorrow when I’m back at the computer (it’s 10:20pm here in the UK ;). There are at least two other places where you can see the ‘fat pixels’ that make up the image. So it’s definitely worth putting them in one place.
    One thing that amazes me about your screenshot is the reminder that those tree fronds are antialiased! Which means, if it’s an octree, that the average colours have to be stored at each level of the octree, or at least some kind of reference pointer.

    1. I got it!
      In the meanwhile I’ll see if I can fill up the MF folder with some more screenshots.

      I saw the atomontage engine doing some supersampling for edge smoothing

      and in some of their videos.
      I may be wrong; possibly it was just a simple DOF effect, not really of interest…

      Could linear interpolation between nearby points be a feasible way to “fill in the blanks”?

    2. They are not “fat pixels”, they are “fat points”, I guess. I think the same is true for the trees; the UD algorithm seems to have some small problems with picking the right scale for the “fat points”. It does it right for the trees, but wrong for the … thing in close view.

    1. Thanks, that’s very interesting to learn about; there is much more information on the homepage. Yes, that’s the world where UD fits.
      UPDATE: If I were anywhere close to the data-visualisation community (games included), which I’m not, I would have known about this. The level of rejection UD got is incomprehensible to me.

  5. The tree structure they seem to use is based on a center-of-mass decomposition (say each point has one unit of mass). The pre-processing time is in the range of seconds, so why does Bruce Dell say in his interview that pre-processing takes about 20 hours?

    1. > Also, I cant tell how serious this is…
      This is serious. ZJ = shabtronic = shabby = Zardoz Jones, a very talented hence un-haughty computer scientist, formerly involved in emulators. He produced, about 3 years ago, the impressive yellow mice demo.

      > It is incomprehensible for me the level of rejection UD got.
      Malevolent design. NVIDIA & the like = GlaxoSmithKline = UNICEF = Human Rights Watch = … = Mammon-worship.

  6. There: http://idav.ucdavis.edu/~okreylos/ResDev/LiDAR/index.html

    Quote: “Implement an out-of-core multiresolution point cloud renderer to visualize large (100 million to 10 billion points) LiDAR scans at interactive frame rates (around 60 frames per second) in immersive or desktop environments.

    LiDAR data is represented in an octree of point sets. The octree is created by recursively splitting the data set’s domain until each octree leaf node contains less than a given number of sample points. Intermediate octree nodes are created by subsampling the union of point sets contained in their children. The subsampling algorithm subsequently merges pairs of closest points, to achieve a uniform point density in each interior node of the octree.

    The in-memory representation of the octree serves as a cache for the on-disk representation. Only the octree nodes relevant for rendering the current view are kept in memory. Cached nodes are exchanged according to their priority, based on the visible size of the node’s domain and their distance from a focus region.

    Cached nodes are subdivided, i.e., their children are loaded from disk, if their projected size exceeds a given threshold. By adapting this threshold, the overall rendering quality can be changed to achieve interactive frame rates independent of the performance of the underlying graphics hardware.

    Currently rendered nodes — a subset of the nodes currently kept in memory — are cached on the graphics card using OpenGL vertex buffer objects. The fixed number of cached nodes is managed based on a most-recently used strategy.”
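
    To make the “subdivide while the projected size exceeds a threshold” part concrete, here is a minimal C++ sketch of that kind of traversal (my own reading of the description above, not Kreylos’ actual code; the node layout and projection formula are simplified assumptions):

    #include <cmath>
    #include <vector>

    struct OctreeNode {
        float center[3];
        float halfSize;                        // half of the node's cube edge
        std::vector<OctreeNode*> children;     // empty for leaf nodes
        // ... the subsampled point set for this LOD would live here
    };

    // Rough projected size in pixels: node diameter over distance, scaled by
    // the screen height and field of view (all assumed values).
    float projectedSize(const OctreeNode& n, const float eye[3],
                        float screenHeight, float tanHalfFov) {
        float dx = n.center[0] - eye[0];
        float dy = n.center[1] - eye[1];
        float dz = n.center[2] - eye[2];
        float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
        if (dist < 1e-6f) return 1e9f;         // viewer is inside the node
        return (2.0f * n.halfSize / dist) * (screenHeight / (2.0f * tanHalfFov));
    }

    // Draw the node's own subsampled points, or descend if they would look too coarse.
    void renderAdaptive(const OctreeNode& n, const float eye[3],
                        float screenHeight, float tanHalfFov, float thresholdPx) {
        bool tooCoarse = projectedSize(n, eye, screenHeight, tanHalfFov) > thresholdPx;
        if (tooCoarse && !n.children.empty()) {
            for (const OctreeNode* c : n.children)
                renderAdaptive(*c, eye, screenHeight, tanHalfFov, thresholdPx);
        } else {
            // drawPoints(n);  // render this node's level of detail (placeholder)
        }
    }

    Raising the threshold lowers quality but keeps the frame rate up, which is the knob the quoted description refers to.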

  7. Here’s some screen-grabs I’ve taken from the ‘Aiden Visits Euclideon’ video on YouTube.
    It shows the stream being loaded progressively and the pixels shrinking as the details are collected. The further away the points are, the fewer details need to be loaded.
    Here is the detail succession:




    And here’s a leaf as it zoomed past the camera in an early video of UD:

    Some parts of the leaf’s edge seem blocky while others don’t, but you can clearly see the fat pixels again. We could guess that the colour information is compressed like a JPEG, and it doesn’t seem to be evenly spaced, but it’s difficult to tell, as always.

    1. Hmmm, actually the colour comes from the individual pixels in those shots, so maybe not jpeg, I was taking it too far. 😉

  8. About UD data format

    Last year I did some tests with octree data and noted a few facts.
    Building an octree over a point cloud increases the memory used (the points plus the inner nodes). Every node has 8 children; one optimization (in size) is to store only a pointer to the first child, but the memory usage is still higher.
    Another way to encode an octree is with the Morton order: in a few words, interleaving X, Y, Z at the bit level gives the Morton order of the points.
    Again, sorting the point cloud in Morton order does not reduce the memory usage (at least it does not increase it), but it leads to an idea: prefix compression. If there are a lot of points in one octant, say for example 000.101.001, then I can store the Morton prefix (3 levels) once for this chunk of space instead of per point, which removes 9 bits from every point in this place.
    This is like grouping the points in space, and it gives me another idea. Suppose I use only 8 bits per “level” (not the octree level); the first level then only has 256x256x256 places. In places with data (points inside) I can add another “level”, and then the precision of those points is 16 bits (8 for “level” 1 and 8 more for the current “level”), and so on: if I need it, I get more precision (unlimited??).
    The downside is that the cuts of space in the octree are fixed; if a group of points sits right in the middle, the prefix compression loses this advantage.
    I also notice in the UD animation videos that some blocks of pixels translate (a spider moving its legs), and the boxes in the Aiden video (the guy working with the joystick) tell us something about the point cloud.
    Perhaps one could search for a box of points of a given density, or a given point count, to cut a “level”, but then these levels would need to translate…

    I don’t know how this fits into the render procedure, and I don’t fully understand 17genr’s octree-to-quadtree conversion.

    I hope to contribute something.

    1. preda, thanks! The core of UD is in the way the “octree”, or just some equivalent tree, is constructed. Kreylos’ algorithm is a step in the right direction (he has some parts that are better than UD, but he still doesn’t have the speed).

  9. Thick & like clay:

    http://web.archive.org/web/20111128001512/http://www.gamingface.com/2011/08/death-of-gpu-as-we-know-it-re-post-from.html

    I’m convinced that UD is analogous to a combination of Yagel’s & Teller’s works (object-order front-to-back octree traversal with an octant rejection method based on frustum casting via advancing sub-viewports). You have to be less precise, more abstract, however: for example, don’t advance/test four corners per viewport square (too much work & too many accesses); instead advance their centers & test by relying on the sides’ sizes (less precise: thick & like clay). (Please read the papers.)

    Euclideon did not block my last posts (which are now gone because I deleted my account), so my conviction was lessened.

    preda, it’s D. Meagher who does the octree-to-quadtree conversion (algorithm number 2). You are right about the prefixes (suffixes too: instancing): “Think little 3D atoms, X, Y, Z and some property coordinates, like colour and texture. However we store it all in a zipped format that recycles any common structures. So it’s all nice and small in the end”. (http://thisismyjoystick.com/feature/interview-bruce-roberts-dell-unlimited-detail-technology/)

    Sorry for the length.

  10. UD’s main feature, described in that link, is that they can display an unlimited amount of detail: they can reflect the whole island upside down to make the water effect and it still renders at the same speed. This ‘fact’ alone rules out many 3D algorithms. So they simply can’t be drawing 3D objects through standard splatting techniques, even though it looks identical to Meagher’s cloud rendering at times.
    I tried Lars Schneider’s octree renderer on my Mac yesterday, and it runs faster than any other voxel code I’ve tried, but it is still limited by the number of points.
    Here’s a video and links to software:

    1. Note that the errors on the side of the phone’s screen are very similar to those in early UD videos. Maybe Euclideon use a combination of 3D splatting and 2D pre-projection?

      1. Another video shows it running on an iPad (2 mins in), he’s pausing to allow data to stream in.

    1. 17genr :
      http://web.archive.org/web/20080509064541/http://www.tkarena.com/Articles/tabid/59/ctl/ArticleView/mid/382/articleId/38/Death-of-the-GPU-as-we-Know-It.aspx#Comments

      It’s interesting that Bruce (and his dad) spoke out on a forum; they haven’t done that again.
      His father is right though. It is another case of “it’s voxels, I know how they work, so I conclude it can’t be done” kind of talk. It’s really just people trying to fit what they know onto something else entirely, but we do have to go through that process to reach conclusions, or as Sherlock Holmes puts it:-
      “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
      🙂

  11. If you have a WebGL compatible browser (Chrome/Firefox) then you may be interested in this:-
    https://www.shadertoy.com/view/Msf3z4
    They’ve built up a 3D ray-tracing environment out of planes, boxes and cylinders that match the cubemap picture, then re-project rays from new camera positions and remap them to the cubemap. The effect is fairly convincing, and reminds me of the localised 3D cubemap idea.

  12. UD was met with rejection because of the wild claims of the author. The PR stunt was just too much to digest. He went from giving a false impression of the “unlimited” part to the stupid idea that somehow this will become the “polygon killer” and the next big thing for games.

    It’s a shame because the tech really is sort of amazing. But we can definitely make out some of the limitations. I can imagine some game assets being visualized with this tech, but that’s it, just a few things here and there.

    There are loads of other more important technologies on the back burner waiting for much, much faster GPUs: hyper-expensive path tracing, new, realistic and physically correct shaders, and so on. Focusing on geometry will just push us back from the real things we have to concentrate on when it comes to a good 3D experience in games.

    1. “Wild” but true. I agree that it would be interesting, from a sociological point of view, to understand why this communication went so badly, but that’s not the point. There are other places for this type of comment; here we try to build something, not to win friends’ approval in a spitting contest. Any constructive contribution is welcome, but this “let’s feel good among the naysayers” attitude is not constructive.

  13. Somebody should ask D. Meagher how he implemented “Vision 0.2 – Win32 (95/98/NT4) Version Built ‘May 4 2001 19:52:48′”. It has the edge artifacts.

      1. Go to octree.com and you can demo their browser plug-in with various data sets. You can see the errors on the right hand side when the model intersects the edge. This only seems to happen on certain models though.

  14. Lately I’ve been wondering how one could do perspective projection (i.e. dividing the horizontal and vertical coordinates by the depth of the point) with only integer additions, subtractions and, I assume, bit shifts. Dell claims he does this.

    I’ve come as far as this:
    – First use floating point computations to determine the 8 corners of the root node of our octree and convert the coordinates of these corners to integers. As this is constant time and not really rendering, I do not count these floating point operations.
    – Now we can compute the corners of the children in the octree using only additions and bit shifts (divisions by 2). (A small sketch of this step is just below.)

    This allows a lot of view-port culling. One can also traverse the nodes in front-to-back order and apply some occlusion culling method. However, the coordinates are still in camera space and need to be projected onto the screen.
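
    For what it is worth, here is a minimal sketch (mine, only illustrating the child-corner step above, not claiming it is how UD does it) of getting a child’s corners from the parent’s corners with only additions and a shift:

    #include <array>
    #include <cstdint>

    struct Vec3i { int64_t x, y, z; };

    // (a + b) / 2 using only adds and a right shift (relies on the usual
    // arithmetic-shift behaviour for negative values, which common compilers provide).
    Vec3i midpoint(const Vec3i& a, const Vec3i& b) {
        return { (a.x + b.x) >> 1, (a.y + b.y) >> 1, (a.z + b.z) >> 1 };
    }

    // The 8 corners of a cube, indexed by the 3 bits (x, y, z) of the corner index.
    using Corners = std::array<Vec3i, 8>;

    // Child 'c' (0..7) is the octant touching the parent's corner 'c'; each of its
    // corners is the midpoint between the parent's corner 'i' and corner 'c'.
    Corners childCorners(const Corners& parent, int c) {
        Corners out;
        for (int i = 0; i < 8; ++i)
            out[i] = midpoint(parent[i], parent[c]);
        return out;
    }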

    One stupid idea for doing the projection has come to my mind so far. For a given point in camera space, start in the middle of the screen (x and y are zero there, assuming z is depth) and then repeatedly add or subtract z, while shifting by a single pixel, until x and y are close enough to the given point. Yes, this is a really slow method of performing division (I said it is stupid). However, maybe there is some clever and fast way of doing this.

    Does anyone have an idea of how to tackle or circumvent the projection problem with only integer additions, subtractions, comparisons and bit shifts? I’m hoping that the solution to this problem gives some insight into how this integer-addition property helps Euclideon improve their rendering speed.

  15. I know that at this point in time it is pretty much pointless, but if you want you can check out my new video for the ray casting thingy on my channel:
    http://www.youtube.com/appc23

    I don’t want to embed it here to save space.

    I repost these links tho, you never know:
    http://tinyurl.com/c7zbqkh
    http://tinyurl.com/br9zynm
    http://www.mediafire.com/?w9oxygazfn3u4hq

    But damn! Dell was all like “sort data in an odd way” I fucking knew it! And then boom cubemaps…
    Oh well that was my attempt.
    Peace

  16. The paper is mostly about how to optimize the rendering process using the GPU. However, they are using a variation on a ray casting algorithm: “recursive DDA (i.e., generalized Bresenham)”. DDA = digital differential analyzer. They use a variation because the algorithm relies on a stack, which has poor performance on GPUs.

    Wikipedia mentions that DDA only uses integer addition and subtraction, so this seems to be the answer (or at least a hint) to my question in #40 (thanks for recommending the paper). Also, it is very close to how I imagined ‘mass connected processing’.
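
    For reference, the classic 2D DDA/Bresenham grid walk below shows what “only integer additions, subtractions and comparisons” means in practice; this is the textbook version, not Euclideon’s recursive variant:

    #include <cstdio>
    #include <cstdlib>

    // Visit every grid cell on the line from (x0, y0) to (x1, y1) using only
    // integer adds, subtracts and comparisons (the doubling is just a shift).
    void traceCells(int x0, int y0, int x1, int y1) {
        int dx = std::abs(x1 - x0), dy = -std::abs(y1 - y0);
        int sx = x0 < x1 ? 1 : -1;
        int sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;                       // running error term, no divisions
        for (;;) {
            std::printf("visit cell (%d, %d)\n", x0, y0);
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
    }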

    1. Yes, that seems to be one ingredient. But why do you say this is close to “mass connected …”? Still, this type of processing seems to treat points (on the screen or in space) independently. (That would be another ingredient, I guess.)

      1. I should’ve read up on DDA before posting. The other ingredient is what I read at http://beyond3d.com/showthread.php?t=62515. Basically it is using a screen-space quadtree for occlusion culling, and then traversing both the quadtree and the octree simultaneously, the octree in front-to-back order. It is like ray tracing, but with pyramids which are recursively subdivided instead of rays.

        You should only implement this for the pyramid max(|x|,|y|)<z as this allows you to apply DDA during traversal. This effectively renders a single face of the cubemap surrounding the viewer and can be repeated for the other 5 pyramids/faces as well.

  17. bcmpinc, yes, but still: are you finding points independently, or do you have some greedy algorithm which exploits the past image and the already-found points in order to speed up the search for new ones?

  18. I select one octree node per pixel. At most, one, because of the occlusion method I use. I do not reuse information of the previous frame. It would be trivial to add, due to the occlusion quadtree, but I do not expect that it would improve performance.

    When ray tracing an octree, neighboring rays often traverse the same nodes for the first part of the ray. I exploit this by traversing the nodes for the first part of both rays simultaneously, which cuts computational cost in half. This can also be done recursively for bundles of rays. I estimate that this optimization reduces the running time by a log factor.

    It is not a simple algorithm, but it is straightforward enough to become the textbook example of how to render point clouds. And the code would fit on 2 pages.

    1. bcmpinc :
      When ray tracing an octree, neighboring rays often traverse the same nodes for the first part of the ray. I exploit this by traversing the nodes for the first part of both rays simultaneously, which cuts computational cost in half. This can also be done recursively for bundles of rays. I estimate that this optimization reduces the running time by a log factor.

      How do you decide when the dual-ray rule is executed? The camera could be on a cube’s side, and the angles of the rays are all very slightly different and hit octree leaves at subtly different times and scales.

      1. It is always executed. The bundle is split whenever necessary. For example when the bundle crosses the boundary of a cube. Though I’m not literally bundling the rays.

        I’ll attempt to describe the algorithm:
        Let F be the view pyramid (like the view frustum, but without a nearplane; it still has a backplane, which is the backside of O and is thus different from the farplane, but this is mostly an implementation detail). Let O be the root node of the octree.
        Then call render(F,O).

        Render(f, o) {
          // occlusion
          If f is fully rendered, then return.
          If o does not intersect f, then return.

          // rendering
          If f is a single pixel, then color it with the color of o and return.

          // recursion
          If the backplane of f is larger than a side of o {
            Split f into 4 pyramids.
            For each smaller pyramid sf, call Render(sf, o).
          } else {
            Traverse the children of o in front to back order.
            For each child c, call Render(f, c).
          }
        }

        This algorithm should be implemented using DDA. It allows you to check whether o and f intersect in a single line of code. Also, with DDA it can be implemented using only additions, subtractions and bitshifts.

        DDA already requires 6 copies of this code, one for each side of the ‘cubemap’ of the current view. With 24 copies (split each side into 4 smaller squares) one can hardcode the octree child traversal order.
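
        As an aside, here is a small sketch (my own guess at what “hardcoding the traversal order” could look like, not something from bcmpinc or Euclideon) of why the front-to-back order of the 8 children is fixed once you know the sign of the view direction along each axis:

        // Convention assumed here: child index bit k is set when the child lies on
        // the positive side of axis k. 'mirror' has one bit per axis (x=1, y=2, z=4),
        // set when the view direction is negative along that axis; the child nearest
        // to the viewer is then 'mirror' itself. XOR-ing a canonical order (sorted by
        // number of set bits) with that mask gives a valid front-to-back order.
        void frontToBackOrder(unsigned mirror, int out[8]) {
            static const int canonical[8] = {0, 1, 2, 4, 3, 5, 6, 7};
            for (int i = 0; i < 8; ++i)
                out[i] = canonical[i] ^ static_cast<int>(mirror);
        }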

    2. I’ve given up on writing my implementation for now. I hit a few issues which I cannot seem to resolve to my liking. Additionally, I should be busy finishing my master’s thesis.

      The problem I’m running into is that when I use ray vectors whose z-coordinate is 1, I can use DDA, but I must use perspective-correct interpolation, which involves two multiplications and a division. If I normalize the vector, I can simply use linear interpolation, with one addition and one bitshift, but then I cannot seem to use DDA, forcing me to do too much branching.

      My code is at https://github.com/bcmpinc/voxel-engine, but currently it is not even worth checking out. Only the camera is working, and if you happen to have a dataset, you can see my previous failed attempt.

    1. Well, it is hard to say. I do not know the BB language and there are too few comments for me to understand what is going on. However, it seems like you do not use any recursion, which is a vital part of my algorithm. It is still quite possible that you are using DDA, but not in the same way as I am planning to use it.

      1. I updated the package with a “GUEasy” file. Try downloading it again from the link 🙂
        It contains a very basic explanation of the algorithm.
        I use BB just for easier understanding, btw.

    1. Anyway, I’m just talking about DDA right now.
      Sincerely, I’m done with all that weird ray-tracing stuff of mine and with the whole UD thing. It’s getting crazy.
      I just wanted to check whether the DDA I have in mind follows the same logic as what you are all talking about (I associate DDA with “line drawing” and that’s all I know about it). That was a quick, stupid example of course, and it can be adapted to do all kinds of stuff.

  19. I found some source code:

    http://www.cse.yorku.ca/~wolfgang/software.html
    FLIRT – Faster than LIght Ray Tracer

    This uses a 5D octree as the acceleration data structure for ray tracing,
    explained in the book “Geometric Data Structures for Computer Graphics” by Elmar Langetepe and Gabriel Zachmann. I do not fully understand it yet.

    Another old idea is “Proximity Clouds”: store, in the empty space of the octree, the distance to the next possible hit.

    I am trying to combine both.

  20. New video in http://www.youtube.com/watch?v=t8rsEJoh6mQ

    Nice paper to read in https://sites.google.com/site/hvclabunist/publications, first paper.

    I found a nice idea for rejecting nodes in an octree: add per-face direction information to the nodes, 1 bit per face; for all rays not coming from one of those directions the node counts as empty. It is derived from the 5D octree (not really applicable to points).

    The first data in the UD file format is available at
    http://euclideon-apac-sydney.s3-ap-southeast-2.amazonaws.com/datasets/Kimoto_8cm.uds

    I am missing the renderer… Marius, if you like, tell us the hint for making a traversal with only adds…

    1. Thanks Pablo, I’ll make a post later; there are interesting snapshots to take between 8:46-8:49, when they go instantly to a location and it downloads.
      Otherwise I am very, very busy; I would LOVE to do stuff like this but I have my own stuff to push. But I’m open to collaboration.

  21. I know I might be late to the game (as far as the discussion goes) but I have the habit of stewing on things until my head explodes. A lot of good ideas and hard work have gone into the thoughts on how UD works or might work, and the good lot of them seem to have gravitated here to these discussions. So I thought I might jump in, and I apologize if anything I touch upon has been talked about already, since I have not read every message or post here or elsewhere.

    I personally like playing the numbers game, that is, sitting back and waiting for all the interviews on the technology to show a pattern (or give enough to figure out a pattern), then piecing together formulas that arrive at the same numbers and studying them. The concept of using a form of cube maps (at least in part, or in a past version of UD) was actually confirmed to me in the March 19th, 2008 interview with Tech Knowledge Arena (I believe the archive.org version of the link has been posted before):

    “This is of a character made of point cloud data. The rather unique creature is made of 1,572,864, 3D atoms. Unlimited Detail admit that they don’t have a graphics artist onboard to create impressive models. That’s a shame since better artwork would help to sell this technology. But whether you like the look of this character or not is not the point of the demo. The character is made of little colored dots. If you look closely at his feet you will see that all the bumps are not flat textures like we normally see in games but rather the bumps are all real geometry. There is not a polygon straight edge to be seen.”

    They were talking about a screenshot of an alien-looking dog (I think it was called a ‘mud puppy’ in other interviews or posts). But what got me was the exact count of “3D atoms” listed, since I knew that number right away… 512x512x6 = 1,572,864… it WAS using a cube map (or relief mapping, or a depth image-based representation, as some part of it). So this hint was out in the open for a long time for someone to see. After that, whenever numbers were mentioned they would be “nice” ROUNDED numbers. But that didn’t stop me from fudging these values to get an idea of what is going on. When you know your powers of two and your data type sizes you can usually figure things out closely enough.

    Let us now fast forward to when Euclideon came out with their island demo (and when we all knew to some degree that octrees were part of the mix). Dell again started spewing numbers in interviews and presentations of this demo: 1 km square, 64 atoms per cubic mm, equivalent to 20 million polygons per square meter, etc. Let’s use what we have here to figure out his world.

    1. 64 atoms per cubic mm: this gives us 4 atoms per linear mm; it is also a nice amount to fit into a single unsigned 64-bit int as on/off flags if we wanted to pack common spatial info for this cluster (as a 2-level pointerless raw octree). (A tiny sketch of this packing is given after this list.)

    2. If we wanted to save memory and not store every atom in the world (on/off), we would want some sort of sparse setup, as others have talked about. The most compact would most likely be a leaf-only linear octree (a single 1-D array).

    3. If we just had a linear array we would need a way of uniquely identifying each atom (or cluster of atoms) within this octree, since we no longer have a direct tree structure. And this would be in the form of a spatial code (preda has already brought this up) like a Morton code. Unlike preda’s form (if I read between the lines correctly), which used 8 bits per level to describe an octant, we only need 3 bits (2^3 describes 8 values) for each interleaved level of the z-order curve. Due to the structure of this coding, we always deal with cubic space even if it is not all used.

    4. So let’s break down the island:
    4a. 1km / (1mm / 4) = 4,000,000 (the 4 is the atoms per linear mm from above)
    4b. 4,000,000^3 = 6.4e+19 (we have to deal with cubic space even if it is not all used)
    4c. log2(6.4e+19) = 65.794705708 (that is log base 2, or you can do log(x)/log(2))
    4d. This is just over what fits in 64 bits, so the quoted values are not quite right, or we need to break things down again.

    5. So, we said before that we need 3 bits per level; the closest we can get within 64 bits is 21 levels, using 63 of the 64 bits.
    5a. (2^63)^(1/3) = 2097152 (1/3 is the cube root, since we are looking for linear values)
    5b. 1km / 2097152 = 0.476837158mm (so this gives us basically 2 atoms per linear mm, not 4, hence 8 atoms per cubic mm)
    5c. We could stop there, slap an 8-bit int here (2x2x2 for the cube), have our 64 atoms per cubic mm (8×8), and call it a day. But this would leave a huge number of unique, non-instanced atoms taking up a lot of memory. I think we need to break it down again.

    6. The next variable size we could use is 32 bits, but with 3 bits per level we can only have 10 levels, thus only using 30 of the 32 bits.
    6a. (2^30)^(1/3) = 1024 (cube root again for the linear distance)
    6b. Well hell, how can we get the detail we need now? We are going to use two different types of octrees, one for the world (like old tile-based games, but in 3D) and one for each 3D tile of the world.
    6c. Now, thinking about this and looking at the island demo, the world as you zoom into it looks like a 1024×1024 image, and as you get closer you can see that it is indeed tiled. So we are on the right track.

    7. So, using the same bit level of 30 for both the world map and the 3D tiles:
    7a. 1km / (1024 * 1024) = 0.953674316mm (well hell again, now we are losing detail and only have about 1 atom per linear and cubic mm)
    7b. Little secret: us programmers like our powers of two, and when we say 1 km we might mean 1024 m, and when we say 1 m we might mean 1024 mm.
    7c. 1024m / (1024 * 1024) = 0.9765625mm (well, that makes it rounder and closer to 1 mm, but we still don’t have the detail we wanted to match).
    7d. But wait, Dell likes to repeat the 64 atoms per cubic mm a lot, so let’s use what I talked about earlier and slap a 64-bit int in here. Now we have the level of detail we wanted, plus we have a mass of atoms in a cluster sharing similar 3D positional info (which is talked about too).

    8. So are there other values we can use to verify this somewhat? What about Dell’s comments on the ground detail? What can the 20 million polygons tell us?
    8a. Since he talks about 1 atom per pixel, I am going to take this to mean that it would take 20 million pixel-sized polygons to render the same thing.
    8b. If this is right, we can figure out the surface atom density of a 1 m tile.
    8c. sqrt(20,000,000) = 4472.135955 (atoms per linear meter; we will work with non-rounded numbers for just a bit)
    8d. 4472.135955 / 4 = 1118.03398875 (the 4 again is the atoms per linear mm from earlier)
    8e. Well, this value is damn close to 1024, the value we calculated not long ago. Let’s work it the other way.
    8f. (1024^2) * 16 = 16,777,216 (the 16 is the square atom density of 4×4)
    8g. Well, this is not that close to 20 million when you look at it this way, so either it was just a nicely rounded number because of other visible items, or maybe that number accounts for multiple LODs as well, like texture mips. Looking at it that way (taking the general rule of roughly 1/3 more per image to store all mip levels) we get around that 20 million, well, more like 22 million, but the atom cluster size could drop to 8 or 1 at lower LODs.
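
    And here is the tiny sketch promised in point 1: a 4x4x4 cluster of atoms packed into one unsigned 64-bit int as on/off flags (my own illustration, not necessarily UD’s layout):

    #include <cstdint>

    // Bit index of the atom at local coordinates (x, y, z), each in 0..3.
    inline int bitIndex(int x, int y, int z) { return x | (y << 2) | (z << 4); }

    inline void setAtom(uint64_t& brick, int x, int y, int z) {
        brick |= uint64_t(1) << bitIndex(x, y, z);
    }

    inline bool hasAtom(uint64_t brick, int x, int y, int z) {
        return (brick >> bitIndex(x, y, z)) & 1u;
    }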

    So here is my first breakdown and a fraction of the thoughts I still have squirreled away in my notebooks, head, and code. Also, hello everyone, my name is Shea. 🙂

    1. Hello Shea, that’s a lot of points! I’m not sure whether you think it uses cube maps or not, as your descriptions go on to talk about octrees and atoms per mil. That wasn’t the idea behind cube mapped projections. Perhaps it’s just me being stuck on a single idea.
      I don’t think that seeing binary-style numbers in the presentations actually tells us much, although the renderer does appear to be a square 1024×1024. I can’t help thinking: so?
      The concept of ‘unlimited detail’ means the original data can be as high a resolution as they want, so maybe they just stuck to 1/64mm as a limit for the island demo because of time and office computational limits, or sanity(!) The values cannot really be construed into any particular algorithm.
      But it would be a shame if you’re correct about the island being a tiled model for any other reason than construction time limits, because that would mean in reality they could never do such a highly detailed arena without repetition. The mapping tech they’ve demonstrated has been described as being far less resolution, hopefully just because it’s only as good as the lidar data fed to it.
      We could certainly do with seeing more tech demos, as it’s taking them far too long to develop.

      1. Thanks! Do you notice anything useful for us between 8:46-8:49 ? How is this compatible (in a sense it is, of course) with “one point per pixel”? What is the explanation of that behaviour?

      2. At 8:46 you can see the resolution increasing as it’s loading, I screen captured those stages in my post on the 12th of April, above.

      3. I didn’t say screen resolution, my screen captures show model resolution increments, please explain if you mean something else.

      4. Dave H, yes, you are right, now tell me why they appear on the screen? (They are not quite “model resolutions”, but close.) At the beginning, you see the octree on the screen, it takes time to put it there, but after that it works smoothly. What does that mean?

  22. The movie confirmed that Euclideon is using file mapping (mmap on Linux). It is a method that allows you to say: I want to access this file as if it were fully loaded into memory, without actually doing any loading. The operating system then handles the IO and caching. If you only access a small part of the file, only those parts are loaded into memory. So this easily allows you to run it from a USB stick.

    It is a small detail and not really a big secret of Euclideon’s. However, I just wanted to make sure you all know about file mapping.
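
    For those who have not used it, a minimal sketch of what file mapping looks like in practice (POSIX mmap, so Linux/Mac; the file name is just an example):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("pointcloud.uds", O_RDONLY);   // hypothetical data file
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        // Map the file read-only; nothing is actually read from disk yet.
        void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        const unsigned char* data = static_cast<const unsigned char*>(p);

        // Touching data[i] makes the OS page in just that part of the file.
        std::printf("first byte: %d, file size: %lld\n",
                    data[0], (long long)st.st_size);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }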

  23. chorasimilarity :
    Dave H, yes, you are right, now tell me why they appear on the screen? (They are not quite “model resolutions”, but close.) At the beginning, you see the octree on the screen, it takes time to put it there, but after that it works smoothly. What does that mean?

    Later on they show a low-data version of the landscape, where the stream has to catch up with the camera. It tells us that the levels of detail are chunked together in the stream. This would help prevent the disc from seeking all over the place for the details – computers like reading data in chunks, not as single bytes scattered everywhere. I seem to remember GIF pictures have a progressive stream option, seen back when the internet was all dial-up, where you see increasing resolution as they load up.

    1. Yes, but for GIFs you see increasing resolution of the screen. With UD you see that there is an initial, highly time-consuming step consisting of rendering the octree on the screen as it is traversed. Nice!

  24. bcmpinc :
    I’ve given up on writing my implementation for now. I hit a few issues which I cannot seem to resolve to my liking. Additionally, I should be busy finishing my master’s thesis.
    The problem I’m running into is that when I use ray vectors whose z-coordinate is 1, I can use DDA, but I must use perspective-correct interpolation, which involves two multiplications and a division. If I normalize the vector, I can simply use linear interpolation, with one addition and one bitshift, but then I cannot seem to use DDA, forcing me to do too much branching.
    My code is at https://github.com/bcmpinc/voxel-engine, but currently it is not even worth checking out. Only the camera is working, and if you happen to have a dataset, you can see my previous failed attempt.

    It’s very easy to be consumed by this, I have to admit. Best to get on with other things and just let ideas bubble up when you’re not expecting them!

      1. chorasimilarity :
        Hi, why don’t you put all your parts in one place and propose a guest post here?

        OK, but how would that work? I’ve never created a WordPress post before, and definitely not a guest post.

      2. Send me an email with your text, links, and pictures (if needed, as attachments, or just links to them) and I shall transform it into a post. At the beginning I shall mention something like “This is a guest post by bcmpinc, … “.
        Another useful thing would be to have a database, for example from a part of a real city. Somebody here mentioned having access to such databases (Tony?). Then test your program on it, or arrange things somehow so that anybody could test it. I also suggest you put the final thing on GitHub. What do you think?

      3. You should have received my email by now. I’m already putting the code on GitHub; however, it is not yet finished. The implementation still needs a lot of work. I have point-cloud data from opentopography.org, but it lacks color and is mostly just a heightmap. The images (http://imageshack.us/a/img708/221/voxel4.png) only show the first 5M points of the 135M points in the point set, due to memory limitations.

      4. Thanks, I got it. You write: “I designed an algorithm that, given an octree and a location within that octree, can compute the cubemap at that location. The algorithm kind of raytraces each face of the cubemap, but does this in such a way that multiple rays are traced simultaneously (aka. mass connected processing).”
        I’m going to sleep now, I’ll do it sometime tomorrow.

  25. OK, I’m having another look at the cube-map thing (yes, again :)
    If you look at the room around you, then move your head two feet to the left, you’ll see that not a lot really changes. If you move one foot instead, you’ll see the view is even more similar.

    Now here’s the trick-

    The cube-maps can be low resolution, because they can share the resolution between one another!

    So the eight corners of a block, say, can share a 1024x1024x6 cube-map between them, as their views are fairly similar. So to get a view from inside that block you combine the eight corner views of 128x128x6 resolution cube-maps, which is much, much smaller. These corners form a basic grid or octree that can be re-used and discarded as you move from block to block through the scene. Only local data is needed and it can be streamed over quite a low network speed or from USB flash memory.

    In other words, there is enough coherence between the snapshots to allow them to make up high resolution as a collective. This may be the loading effect you see, as it loads the surrounding-area cube-maps to build up a high enough resolution. This resolution sharing greatly reduces the size of the cube-maps. I guess the closer together the cube-maps are, the lower their resolution can be, but a balance has to be struck somewhere.

    The views don’t have to be cube-maps of course; it could be a ‘splat’ technique like D. Meagher’s, just as long as you have the data required for an entire wrap-around view at each snapshot.

    So image resolution is moved and converted into positional space. Like a Fourier transform of location data? Not quite, but I can’t think of the technical term for it at the moment.

    I’m probably completely wrong of course, I just thought it was a nice sideways look at the problem and could give someone further ideas.

    1. Thanks, interesting. Don’t forget that it’s a sorting algorithm: what if you search the 3D octree space for the names of the screen pixels? What about the very important point that one should not do the raycasting (or whatever has an equivalent effect) for each individual screen pixel, again and again? What is the order of operations when you do raycasting by traversing an octree? Is this order rigid, or can we change it?

      1. “Don’t forget that it’s a sorting algorithm”
        We don’t know what he’s sorting, or what stage of the renderer uses it.
        I really like the 5D kd-tree idea though; it was mentioned in a comment at preda’s FLIRT link above.
        5D is a seemingly mind-breaking concept, but really all they do is sort the tree by X, Y, Z and the two direction angles from the centre. Much like appc23’s idea I guess, but with a single pre-sort of the whole data. Using this you can find the nearest neighbour to the position and step that distance through the scene, using the equivalent of ‘distance estimation fields’ – http://lmgtfy.com/?q=distance+estimation+fields (a small sketch of that stepping idea is below).
        It’s the fastest way I can think of, but it’s probably still not fast enough.
        I see in the new video they have a section on rendering it in 3D, which means they maybe drawing two full resolution images per frame! (it might be interleaved, but it’s still impressive)
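
        A small sketch of that “step by the distance to the nearest point” idea (plain sphere tracing over a distance field; the nearest-point query is a stand-in for whatever the kd-tree would provide):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Stand-in nearest-point query: here just the distance to one hard-coded
        // point; a real version would query the spatial index discussed above.
        float distanceToNearest(const Vec3& p) {
            float dx = p.x - 5.0f, dy = p.y, dz = p.z;
            return std::sqrt(dx*dx + dy*dy + dz*dz);
        }

        // March a ray from 'origin' along the unit vector 'dir'; each step is as
        // long as the distance to the nearest point, so we never step through geometry.
        bool sphereTrace(Vec3 origin, Vec3 dir, float maxDist, Vec3* hit) {
            float t = 0.0f;
            while (t < maxDist) {
                Vec3 p = { origin.x + t*dir.x, origin.y + t*dir.y, origin.z + t*dir.z };
                float d = distanceToNearest(p);
                if (d < 1e-3f) { *hit = p; return true; }   // close enough: a hit
                t += d;                                     // safe step size
            }
            return false;
        }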

      2. Yes, 5D is the right dimension for the space of directions, but I don’t think that’s what they are doing. That is because the UD format shrinks the database, not making it bigger. (I don’t understand from the film if he says 17% or 70%) [edited out the non-constructive opinion]

  26. chorasimilarity :
    Yes, 5D is the right dimension for the space of directions, but I don’t think that’s what they are doing. That is because the UD format shrinks the database, not making it bigger. (I don’t understand from the film if he says 17% or 70%) [edited out the non-constructive opinion]

    Euclideon’s web site states 5-20% of the original data. If everything is pre-projected then the compression is only relative to the final screen size, not the actual data. So from a pre-projection point of view it’s not particularly relevant, but I haven’t got any figures to back that up, unfortunately.

  27. Wow, conversations break out faster than a combinatorial explosion in here. I realized after the fact that my post might have left out more info on my train of thought and the specific details that tie it together. I hope to clear up what I can once I get home later tonight.

  28. Well, I got home much later than I thought, but I will still give it a shot (sorry if I ramble).

    “I’m not sure whether you think it uses cube maps or not, as your descriptions go on to talk about octrees and atoms per mil. That wasn’t the idea behind cube mapped projections. Perhaps it’s just me being stuck on a single idea.” – Dave H.

    Yes, my walkthrough did look like it split away from what I said about believing it uses cube maps. What I will add now is that reverse engineering sometimes takes part archeology and part genealogy. I believe that at some point (era) UD did use cube maps somewhere in the pipeline, in part or as the only construct. But whether it is still a big part, or just one of many now minor or possibly vestigial features (in the current era of UD as we know it), is an open question. For my part, I bin the progression of Dell’s work into 5 different eras.

    1. Original ray tracer
    2. Became realtime
    3. Appearance to the public
    4. Euclideon island demo
    5. Geoverse

    Most people might really only care about the here and now, but I want to get into the mindset of how it could have possibly gotten here from the start. Sometimes you have to rewind, walk the path, and take in the original sights. So my first post was only pointing out an aspect of era 3 that has been a hot topic here (cube maps), along with a partial breakdown of era 4, both from the available information.

    “The concept of ‘unlimited detail’ means the original data can be as high a resolution as they want” – Dave H.

    Here, I think, is where there can be road blocks, since this is a letter-of-the-law vs. spirit-of-the-law scenario. Personally I love the concept of unlimited levels of zoom in and out of a limitless scene, but there are always going to be limits somewhere in the process. In theory there is never a difference between theory and practice, but in practice there most certainly is. This goes back to Dell’s ramblings about the progression of the color wars: it stopped where it did because our eyes couldn’t tell the difference. If we treat our monitors as a piece of glass moving around the world, you can only get so close to things before they are pressed against it. Say your dot pitch is around 0.25mm; then any equivalent detail smaller than that doesn’t make sense for the general/practical case.

    Back to what I was talking about before. I have a tendency to jump between the eras when working through a mental picture of UD throughout its life, so please feel free to stop and ask me if something does not seem to follow or mesh.

    “But it would be a shame if you’re correct about the island being a tiled model for any other reason than construction time limits, because that would mean in reality they could never do such a highly detailed arena without repetition.” – Dave H.

    There is more than one way to skin a cat, and from what we see of Geoverse, it has moved beyond a few limits (but possibly added others). From a more recent interview with Dell we know the island demo was totally loaded into memory, while Geoverse now does streaming (out-of-core paging). We are still just pushing the problem around: we can now have more free-roaming open areas and detail, but we are going to use up a hell of a lot of drive space.

  29. Shea, I don’t believe there are qualitative differences between eras 2-5 of your classification, and I doubt very much that UD ever had the 1st era. There is no evidence for this; it is only, maybe, an emerging story along the lines of: yes, we naysayers are not going to eat crow, because we were right at the beginning and only Dell progressed afterwards. Dell seems to have one (mathematical, is my guess) formulation, which is not ray-tracing, and which is then enhanced; in particular see the patent submission by the OpenGL guy together with Euclideon (for some reason ipaustralia.com.au does not work from where I am since yesterday, and I don’t remember the name of the guy). The only reason, in my opinion, why Bruce Dell is not making his approach public is that he wants to get something from his idea, which is completely understandable.
    But these are all just opinions, and the idea of this blog is that opinions without evidence are frowned upon. So I am not going to indulge in this in my comments.
    One more thing, though, the last one. It does not matter if the island is tiled or not. It is certainly tiled, because the guy did not have access to huge data, so he had to make it by hand. But that is not the point. He could have used the flag of Japan instead (albeit it would make less convincing evidence that his algorithm works).
    The real problem is: describe a UD algorithm. Authority arguments are not for this blog. Constructive arguments are welcome.

    1. I was not trying to say that what I marked as the 1st era would classify as UD, but rather the point where he left that path to work towards what would become UD. The Aiden visit video actually shows Smacker videos of this era, where even Dell says it was 20 seconds per frame, and talks about octrees and ray tracing. You can also see these beginnings in the John Gatt interview. And I understand your point about the qualitative differences, but I am trying to find what quantitative differences there are. The reason this interests me is that programmers tend to have a pattern to their style and reasoning, even through different iterations and evolutions. I believe in UD and I am trying to go only by what Dell himself has actually offered up video-, text-, and image-wise. And I am not pushing the idea that Dell had nothing, which is why I have era 3: this is where UD was revealed to the world at large, and UD was very much real. Eras 1 and 2 have only been shown and discussed by Dell after the 4th era. I am trying to piece together a timeline and progression. I do apologize if this approach is not in line with this blog. I believe everyone’s ideas here are valid and every path that has been presented is worth exploring. Again, I am sorry if I have come across as overstepping my bounds.

      1. Thanks Shea, maybe my comment looked unwelcoming. There is absolutely no need to apologize for anything. I only wanted to say, again, that here we look for constructive ideas, original if possible, made by people with a very-long-attention-span syndrome.
        Discussions are not fights, even if, as a mathematician, I can be obnoxious; but that’s a tool for better understanding.

  30. I know all too well the long attention span and the math background. Believe me, combinatorics is where I feel comfortable, and I have worked through many enumeration formulas for possible UD-like approaches and structures. I guess I just have to work differently on presenting things; I usually have the habit of starting from the beginning when I finally start talking.

  31. chorasimilarity :
    Dave H, what can he sort? The color of screen pixels (or pixels by color).

    In a complex pre-processing algorithm, which it appears to be, he could be sorting at any stage of the system, whether at the front end, the back, or the middle of the process. It may be a minor part of the algorithm or a fundamental one; it’s impossible to know from his vague description.
    My point is that we can’t start with ‘it’s a sorting algorithm’, as it may be misleading to exaggerate its importance. We don’t really know, which is of course the fun/frustrating part of it!

      1. Yes, potree is my bachelor thesis project which originated from the idea to build a WebGL Viewer based on Scanopy. Scanopy was quite impressive.

  32. New interview! Forget the dodgy marketing speak, watch the camera fly inside the elephant!

    [I replaced with a better link, thanks for message!]
