The weekend round table on UD

The comments from “Is 3D Cube a predecessor of UD?” were very useful concerning the idea of making open-source UD-like programs. There are already two projects, created by Bauke and Jim respectively; see the page UD projects for the relevant links.

We cannot possibly know if either of these two proposals is like Bruce Dell’s UD, but it does not matter much, because Bauke’s and Jim’s programs may well turn out to be as good as Dell’s, or even better. (However, I still want to know exactly how the database of Saj’s 3D Cube is done, because of the claims he made many years ago, which are almost identical to the ones made by Dell; see the link at the beginning of this post.)

Enough chit-chat; the reason for this post is that I expect new discussions to follow, for example about Bauke’s still-pending detailed explanations of his program. Also, any day now Jim might amaze us.

So, please start commenting here instead, if you feel the need to ask or answer questions about the two projects. Or, hey, you are welcome to announce yours!

_________

UPDATE: I add here Bauke’s explanations from this comment:

My algorithm consists of three steps:
1. Preparing the quadtree. (This process is very similar to frustum culling; I’m basically culling all the leaf nodes of the quadtree that are not in the frustum.) This step can be omitted at the cost of higher CPU usage.
2. Computing the cubemap. (Or actually only the visible part, if you did step 1.) It uses the quadtree to do occlusion culling. The quadtree is basically an improved z-buffer, though since the octree is descended in front-to-back order, it is sufficient to store only ‘rendered’ or ‘not rendered’. It furthermore allows constant-time depth checks for larger regions.

3. Rendering the cubemap. This is just the common cubemap rendering method. I do nothing new here.

My description only explains step 2, as this is the core of my algorithm. Step 1 is an optimization and step 3 is so common that I expect the reader to already know how this step works.

Step 2 does not use the display frustum at all. It does do the perspective, but does so by computing the nearest octree leaf (for the sake of simplicity I’m ignoring LOD/mipmapping/anti-aliasing here) in the intersection of a cone and the octree model. This is shown in the following 2D images:

[2D illustration (image source)]

[2D illustration (image source)]

I don’t know what you mean by ‘scaling of each pixel’, but I believe that scaling of pixels only happens in step 3. In steps 1 and 2 I completely ignore that this happens.

I hope this answers your questions. If not, please tell me which of the 3 steps you do not understand.
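
To make step 2 more concrete, here is a minimal sketch of how the quadtree can act as a hierarchical ‘rendered / not rendered’ mask during a front-to-back octree descent. This is only my reading of Bauke’s description, not his code: all names are invented, and the projection onto a cubemap face is replaced here by a precomputed screen rectangle per octree node, purely to keep the example short.

#include <cstdint>
#include <vector>

struct Rect { int x0, y0, x1, y1; };                 // half-open pixel rectangle

struct OctNode {
    Rect screen;                                     // projected bounds on the face (assumed given and clipped)
    bool leaf = false;
    uint32_t color = 0;
    const OctNode* child[8] = {};                    // assumed already sorted front to back
};

// Quadtree over one cubemap face: each node stores only whether its whole region
// has been rendered, which replaces a z-buffer when drawing front to back and
// gives a constant-time occlusion check for an already-covered region.
class CoverageQuadtree {
public:
    explicit CoverageQuadtree(int size)              // face is size x size, size a power of two
        : size_(size), covered_(size_t(4) * size * size, 0) {}

    // True if every pixel of r is already rendered (off-face parts count as covered).
    bool occluded(Rect r) { return visit(1, Rect{0, 0, size_, size_}, r, false); }
    // Mark every pixel of r as rendered.
    void mark(Rect r) { visit(1, Rect{0, 0, size_, size_}, r, true); }

private:
    bool visit(size_t n, Rect nb, Rect r, bool write) {
        if (r.x1 <= nb.x0 || r.x0 >= nb.x1 || r.y1 <= nb.y0 || r.y0 >= nb.y1)
            return true;                             // no overlap with this quad region
        if (covered_[n]) return true;                // whole region already rendered
        const bool inside = r.x0 <= nb.x0 && r.x1 >= nb.x1 && r.y0 <= nb.y0 && r.y1 >= nb.y1;
        if (write && inside) { covered_[n] = 1; return true; }
        if (nb.x1 - nb.x0 == 1) return false;        // single uncovered pixel
        const int mx = (nb.x0 + nb.x1) / 2, my = (nb.y0 + nb.y1) / 2;
        const Rect q[4] = {{nb.x0, nb.y0, mx, my}, {mx, nb.y0, nb.x1, my},
                           {nb.x0, my, mx, nb.y1}, {mx, my, nb.x1, nb.y1}};
        bool all = true;
        for (int i = 0; i < 4; ++i) all = visit(4 * n + i, q[i], r, write) && all;
        if (write && covered_[4 * n] && covered_[4 * n + 1] &&
            covered_[4 * n + 2] && covered_[4 * n + 3])
            covered_[n] = 1;                         // pull full coverage up the tree
        return all;
    }

    int size_;
    std::vector<uint8_t> covered_;
};

// Front-to-back descent: a region that is already fully rendered occludes
// everything behind it, so the whole subtree is skipped.
void renderOctree(const OctNode& node, CoverageQuadtree& cov,
                  std::vector<uint32_t>& pixels, int width) {
    if (cov.occluded(node.screen)) return;
    if (node.leaf) {
        for (int y = node.screen.y0; y < node.screen.y1; ++y)
            for (int x = node.screen.x0; x < node.screen.x1; ++x)
                pixels[size_t(y) * width + x] = node.color;
        cov.mark(node.screen);
        return;
    }
    for (const OctNode* c : node.child)
        if (c) renderOctree(*c, cov, pixels, width);
}

In this sketch, one way to realise step 1 from the list above would be to mark, before rendering, the quadtree regions that fall outside the view frustum as already ‘rendered’, so they are skipped.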

_________

UPDATE: You may like the techniques used here [source]

37 thoughts on “The weekend round table on UD”

    1. Yes, I find this video fascinating. I wonder what “protomatter – type D” means. Also, I am not sure if the videos are comparable, because as far as I understand the video from the post shows how to build what is seen from a POV, given photos from a path of POVs, while here, apparently, laser scans are used instead of photos. Anyway, if the two videos are comparable, then it’s obvious which one is more interesting.

    1. That bottom video is mine (with the yellow mice); the top video was made by dvoidis. We both came to similar techniques (as would anybody else with sufficient low-level rendering experience), although I did mine a year before and didn’t post it because I thought it looked gash. I eventually posted my vid so we could compare tech, and dvoidis released his code a few weeks later; it is a very, very similar technique. Dvoidis and I both exploit the 8 node transformed deltas to get the entire transform for free (well, a shift and add per traversal); dvoidis also uses a z-buffer check of node coverage to reject nodes, whereas I used conic normals to reject back-facing nodes. Using conic normals is much faster for the renderer, at the expense of memory and implementation complexity. (A rough sketch of this delta trick is included below.)

      I’m 100% sure euclideon are doing something very, very similar with octrees and the delta transform trick; I should have something to show you guys real soon (I’ve got 3 days left to be able to work on this).

      As to JX’s comments:

      Scenes can be streamed very easily using this tech, or any other octree tech. You simply split your models up into fixed-size octrees and fix up the end nodes when you have to stream in a new octree; it’s dead simple.

      Octrees are a massive benefit for real-time rendering, since they are in essence a very efficient 3D texture/cubemap, with mipmaps.

      My current tech is looking really good, and I’ve now re-evaluated the memory usage: it’s definitely something that can be used in games. The biggest issue at the moment is animation; if that can be worked out (and I’m sure it can), this tech will be a serious competitor to the current brute-force rasterization trend.
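      (A rough sketch, added for illustration, of the ‘transformed deltas’ trick described in this comment; the names, the 16.16 fixed-point format and the per-frame setup are my assumptions, not Jim’s or dvoidis’ actual code.)

      // Transform the root centre once with the full matrix; push the 8 octant
      // directions (scaled to the root half-size) through the rotation part of
      // the matrix once per frame; after that, each descent step needs only a
      // shift (halving the offset) and an add per child.
      #include <cstdint>

      struct Vec3i { int32_t x, y, z; };   // view-space position, 16.16 fixed point (assumption)

      // Per-frame table: rotated octant offsets for the top level,
      // index = octant (bit 0: x, bit 1: y, bit 2: z).
      Vec3i gDelta[8];

      // One traversal step: child centre = parent centre + (delta >> level).
      inline Vec3i childCentre(const Vec3i& parent, int octant, int level) {
          return { parent.x + (gDelta[octant].x >> level),
                   parent.y + (gDelta[octant].y >> level),
                   parent.z + (gDelta[octant].z >> level) };
      }

      This covers the view-space transform only; the perspective projection is a separate step, which is exactly what JX asks about below.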

      1. Thank you Jim, I know you made the video; that’s why I mentioned “faster than a zillion rabbits”, although that’s a mistake, I should have written “mice” instead. Very interesting remarks; I look forward to your updates.

      2. Hello everyone!

        jim:
        Dvoidis and myself both exploit the 8 node transformed deltas to get the entire transform for free (Well a shift and add per traversal)…

        Were a shift and an add really all the operations needed to perform the entire transformation of a node? From what I read it follows that you were performing the calculations in view space (where a shift and add per traversal are sufficient), but what about the transformation to projection space?

        Also, did you check the speed of your technique with floating-point math instead of fixed point?

  1. heh 🙂 I’ve got all the old models, including the good old mouse; just testing them all now.

    here are some current memory usage stats that I’ve made a note of, to give me an idea of usability:

    // Buddah
    // 7= 128^3 Nodes: 62,983 Mem: 247kb Zip: Rar: DTime: 3.9 RTime: 1.5
    // 8= 256^3 Nodes: 268,179 Mem: 1mb Zip: Rar: DTime: Rtime: 2.3
    // 9= 512^3 Nodes: 1,135,075 Mem: 4.4mb Zip: 1.8mb Rar: 1.6mb RTime: 4.2
    // 10= 1024^3 Nodes: 4,741,833 Mem: 18mb Zip: 6.8mb Rar: 6.3mb RTime: 10.1
    // 11= 2048^3 Nodes: 19,508,955 Mem: 76mb Zip: 25mb Rar: 24mb RTime: 30.4
    // 12= 4096^3 Nodes: 79,322,452 Mem: 309mb Zip: 98mb Rar: 92mb RTime: 104.7
    // 13= 8192^3 Nodes: Mem: Zip: Rar: DTime: RTime:
    // 14= 16384^3 Nodes: Mem: Zip: Rar: DTime: RTime:

    Basically you’ve got to view the models as if they are 3D textures and decide the detail level you want to view them at. So 10 levels deep is good for a screen-sized object, since it means the resolution of the octree texture is 1024x1024x1024. In that case it currently takes up 18.7mb of raw uncompressed memory, 9mb LZSS’d, 6.8mb zipped and 6.3mb rar’d. I haven’t looked at custom compression yet, but it’s currently very inefficient and I wouldn’t be surprised if I could make it a third of the size when I get to work on it. I suspect the “Elephant” in the euclideon Island demo is either 11 or 12 levels deep, going by what I see in the vid.
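
    (A back-of-envelope check on the figures above, added for illustration; the arithmetic is mine, not Jim’s. The node counts grow roughly 4x per level, as expected for a surface stored in an octree, and the level-10 figures come out at roughly 4 bytes per node raw.)

    // Rough check on the stats listed above (assumes mb means mebibytes).
    #include <cstdio>

    int main() {
        const double nodesL10 = 4741833.0;    // 1024^3 model, node count from the list
        const double nodesL11 = 19508955.0;   // 2048^3 model, node count from the list
        const double rawMB    = 18.7;         // raw size quoted for the 1024^3 model

        std::printf("growth per level ~ %.2fx\n", nodesL11 / nodesL10);           // ~4.11x
        std::printf("bytes per node   ~ %.1f\n", rawMB * 1024 * 1024 / nodesL10); // ~4.1
        return 0;
    }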

  2. Thinking about uses in games (probably Dave H could say something here too), I have an idea to share with all the people looking at this post but remaining quiet: it looks to me that a functional UD program, like Bauke’s and hopefully Jim’s (very excited to see it in action!), opens the possibility TO PLAY IN REAL ENVIRONMENTS. Instead of playing in some hard-worked artificial environment, why not use these UD programs combined with freely available laser scan data to play in real places on Earth? Moreover, that could have social aspects, with people discussing meeting in this or that real place and doing whatever they want to. By “real” I don’t mean the actual physical place, as is the case with games using augmented reality; I mean using real-world laser scan data to play in.

  3. This would be awesome for certain types of games. The games industry would have big issues with using this kind of tech. It’s slowly switching over from polygon models to voxels/3D textures, with AAA-title artists starting to use sculpting-based generation (Mudbox, ZBrush, etc.), but it would still be a huge leap for any single company, since most artists are experienced in polygons and the previous 15 years of knowledge built on that. Disk size is also an issue for real-world or non-tiled/instanced usage, hence the gigs of data used in geoverse. There are also the coding issues: everyone is currently focused on rasterization (3D math, rendering issues, overdraw, shaders, thread scheduling, etc.). This tech forces a new way to think about coding, and going from a known solution to unknown tech is always a huge risk.

    1. Yes, as you say about geoverse, the idea would be to have servers with the heavy data, which stream over the net to the players only what’s needed. If it is developed as open source, such social games could appear, like Second Life on steroids.

      1. Nice idea, I like it. I’m really looking forward to euclideon releasing some kind of .exe demo, so we can see the disk space usage.

      2. We’re a treasure of ideas here! Like Janis Joplin says: take it! 🙂 [EDIT: “we” means all the contributors here, “take it” is addressed to everybody looking on. So, take these little pieces of our hearts and make good use of them, by making your creations open if they are built from openly shared knowledge.]

  4. Geoverse is a geospatial product; I’m not hearing too much discussion on development within geospatial software and how slowly it handles point cloud data. Anyway, here is an open-source point cloud program called CloudCompare: http://www.danielgm.net/cc/

    1. Is that .laz / .las a hint? (In no way am I convinced, so far, that the story of UD is fully recovered, with all due respect, exactly because of the secret of the database format.)
      Beautiful video! There are things happening when the camera starts to move quickly; my eyes tell me something is not smooth, though.

  5. I think that last video is a 3D (.obj) model from photogrammetry put through agisoft.ru PhotoScan software. You know that 2D photos can be turned into 3D models. The process of converting 2D photos to 3D models also produces point cloud data, so 2D photos can be converted to 3D .las or .laz, with or without the georeferenced coords. I have a bunch of mining sites in 3D .obj or .las. I am not with Aerometrex.

    1. Patents like that get my goat, especially since several associates and I have used this technique for some time, and I know we were not the only ones out there. I have used this for things like star fields and nebulae in space rendering, along with variations for doing rain when animated.

    2. Well, I should say we used a variation. Thinking about it, it looks like this (even though it is equivalent) defines three planes per voxel volume, where we used 3n+3 planes per n^3 total volume.

  6. This comment does not add to the discussion, my bad, but I have to say this is the most lurked post lately. What’s this, some combination of the holiday season and waiting for Jim’s halo3d? What else?
