UD patent

from Kais:

You are welcome to discuss, if you feel the need!
Recall that the first comment needs moderation. Other than that, please be constructive, as usual.
Thanks Kais!

28 thoughts on “UD patent”

  1. Nice, thanks!
    I read it. It describes an octree splatter which switches to an “orthogonal mode” for nodes that are sufficiently far away and small, so that their perspective projection can be replaced with an orthogonal one. The rasterized image remains the same. In essence, this avoids the division when projecting the nodes for the majority(?) of them (a rough sketch of the idea appears after this thread). There seems to be some overhead for checking the orthogonal-mode condition, but it’s probably not a big one.

    I’ll try to re-implement this in my renderer and see how much it boosts the speed.

    1. Any news on this? It looks to me that this orthogonal mode is only one necessary step in a much bigger representation algorithm…

      1. No news as far as I know. I believe the same; the question is HOW concretely this helps. I’m surprised that it takes so long for an open source version of UD to appear. Bauke’s proposal is good, maybe better than what the original UD uses AFTER the step of selection from the huge database.
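A rough C++ sketch of how such an “orthogonal mode” switch might look, as read from the comment above; the projection model, the threshold and all names are illustrative assumptions, not taken from the patent text:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

const float kFocal = 500.0f;          // assumed focal length, in pixels
const float kOrthoThreshold = 0.001f; // assumed: max node extent / distance

// Full perspective projection: one division per projected point.
Vec2 projectPerspective(Vec3 p) {
    return { kFocal * p.x / p.z, kFocal * p.y / p.z };
}

// For a node that is small relative to its distance, the scale factor
// kFocal / z is nearly constant across the node, so its subtree can be
// projected by scaling offsets instead of dividing again for each node.
bool orthoModeOk(Vec3 center, float halfSize) {
    return halfSize < kOrthoThreshold * center.z;
}

// "Orthogonal mode": child position = parent screen position + scaled offset.
Vec2 projectChildOrtho(Vec2 parentScreen, Vec3 childOffset, float scale) {
    return { parentScreen.x + scale * childOffset.x,
             parentScreen.y + scale * childOffset.y };
}

int main() {
    Vec3 center = { 1.0f, 2.0f, 4000.0f };     // far-away node center, camera space
    float halfSize = 0.5f;
    Vec2 parent = projectPerspective(center);  // the last division for this subtree
    float scale = kFocal / center.z;           // computed once, reused below
    if (orthoModeOk(center, halfSize)) {
        Vec3 offset = { 0.25f, -0.25f, 0.25f };  // a child node's offset from center
        Vec2 child = projectChildOrtho(parent, offset, scale);
        std::printf("child projects to (%.3f, %.3f)\n", child.x, child.y);
    }
    return 0;
}
```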

  2. That sounds like D.J. Meagher’s patent from the 80s. I remember someone saying that the whole screen wobbles slightly as the camera moves in UD – which now makes sense, as you are seeing the accumulated inaccuracies between the 3D projection and the 2D scaling as you fly into the scene.
    It’s a fairly vague patent though.

  3. It was the 2D projection of the 3D cube. Its eight points are subdivided to quickly find the 2D positions of locations inside the box. So the smaller cube locations are simply interpolations of the projected parent cube; you just use bit-shifts and additions instead of having to project every point inside it (sketched in code after this comment).
    This explains the edge artifacts he was getting originally – possibly.

    Here are his early patents, which I mentioned here a couple of years ago. 🙂
    It’s from 1981!…
    http://fab.cba.mit.edu/classes/S62.12/docs/Meagher_octree.pdf
    And from 1984…
    http://www.google.com/patents/EP0152741A2?cl=en
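A small C++ illustration of the midpoint interpolation described above (my reading of the comment, not code from the patents): in 3D, corner c of child octant o is exactly the midpoint of parent corners o and c, so once the parent corners are projected, each child corner can be produced with an addition and a shift. Under a true perspective projection the midpoint is only an approximation, which would fit the edge artifacts mentioned above.

```cpp
#include <cstdint>
#include <cstdio>

struct P2 { int32_t x, y; };  // projected screen position in 16.16 fixed point

// Midpoint of two projected corners: one addition and one shift per axis.
P2 mid(P2 a, P2 b) { return { (a.x + b.x) >> 1, (a.y + b.y) >> 1 }; }

// Corners are indexed by three bits (bit 0 = x side, bit 1 = y side,
// bit 2 = z side; an assumed convention). Corner c of child octant 'oct' is
// the 3D midpoint of parent corners 'oct' and 'c', so its projection is
// approximated by the midpoint of the projected parent corners.
void childCorners(const P2 parent[8], int oct, P2 out[8]) {
    for (int c = 0; c < 8; ++c)
        out[c] = mid(parent[oct], parent[c]);
}

int main() {
    // A toy parent cube projected onto a 100x100 pixel square.
    P2 parent[8];
    for (int c = 0; c < 8; ++c)
        parent[c] = { (c & 1) ? (100 << 16) : 0, (c & 2) ? (100 << 16) : 0 };
    P2 child[8];
    childCorners(parent, 7, child);  // the octant adjacent to parent corner 7
    for (int c = 0; c < 8; ++c)
        std::printf("child corner %d: (%d, %d)\n",
                    c, child[c].x >> 16, child[c].y >> 16);
    return 0;
}
```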

  4. It’s fairly nice, but the detail seems to be, um, limited!
    I’m a little confused as to why the detail is lost when you look away from an area. Surely it should cache it for a few seconds in case the user looks back. Instead we get this continuous fat-to-thin pixel effect.
    There are ‘interesting’ diagonal artifacts when you get really close to the atom things as well.

  5. First: install Chrome, you’ll like it 😉 it’s the best.

    Second: I doubt this is about security. They didn’t use WebGL because the algorithm is CPU-only; that’s one of the things they were bragging about, remember? Native Client technology makes it possible to run C code in the browser, so it is very hard (if not impossible) to reverse-engineer their code.
    If you use OpenCL or shaders, the source code will always be in plain text (as far as I know); everyone could view your “secret” algorithm. I guess that’s one of the reasons they didn’t port it to the GPU yet.

    By the way, you should also take a look at Atomontage; the guy there recently released new videos and images which look amazing and contain simple animation (no deformation) and not-so-simple physics.

    1. You’re definitely correct; I didn’t know this Native Client thing existed. It will be the fastest code to use. I wonder how they protect your computer’s security?

      1. The Atomontage animated voxel tank is very surprising, especially when he drills holes into it and flies the camera through them.

    2. I installed Chrome on my mother’s computer and used it from time to time, during visits. That’s why, for some time, Google thought I was a woman with her age and interests. Re: UD, good luck to them; I kind of lost interest because I don’t appreciate the old ways of artificial scarcity. I am still interested in an open source UD, by the same or another algorithm. Bauke has, up to now, the only such proposal, which is clearly different from the original UD, because there is still nothing known about how the visualising part communicates with the database.

      1. I’m still working on it occasionally. I’m moving away from the UD approach though, because I don’t think a CPU-based renderer will ever be fast enough for games, and I found a way to allow porting the algorithm to the GPU. Currently I’m busy figuring out the details. Assuming that porting to the GPU comes with a huge increase in FPS, I can drop the sparse voxel octree and switch to a more generic structure: the leafless scene graph. It would allow for all kinds of fancy animation/rotation methods that just aren’t possible with triangle-based methods. I have to choose what I’ll allow in the scene graph and what not. Progress is slow though, as I work on it in my spare time, while also doing a PhD on a different topic. Someone else is also trying to implement my algorithm, though I have my doubts whether they’ll succeed.

        About the database: Euclideon’s compression technique is kind of impressive. I still have some compression methods in mind for my implementation, but lack the motivation to actually implement them. Their streaming stuff probably isn’t that interesting. I’ll have to implement something like that to get the model data into GPU memory, and basically it boils down to good data locality (which I have through Hilbert-curve sorting; a locality sketch follows this comment) and some fancy memory-paging algorithm.

        What I think is interesting, though, is that both Euclideon’s algorithm and mine are unlimited-detail algorithms, in the same sense that computers are Turing machines: both fall short only because we cannot really have unlimited storage. The time complexity of the rendering algorithm does not depend on (the size of) the scenery data. In fact, I think I managed to get an O(n*log(n)) algorithm, with n being the number of pixels in your viewport. I don’t yet know about the constant factor involved, but I doubt there is any room for improvement, considering that O(n*log(n)) is the lower bound for comparison-based sorting algorithms.
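As an illustration of the data-locality point above, here is a sketch that sorts nodes by Morton (Z-order) index, a simpler stand-in for the Hilbert-curve sorting mentioned in the comment; the Node type and field names are assumptions, not Bauke’s actual code.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Spread the low 21 bits of v so that two zero bits separate consecutive bits.
uint64_t spread3(uint64_t v) {
    v &= 0x1fffff;
    v = (v | v << 32) & 0x001f00000000ffffULL;
    v = (v | v << 16) & 0x001f0000ff0000ffULL;
    v = (v | v << 8)  & 0x100f00f00f00f00fULL;
    v = (v | v << 4)  & 0x10c30c30c30c30c3ULL;
    v = (v | v << 2)  & 0x1249249249249249ULL;
    return v;
}

// Interleave x, y, z into a single 63-bit Morton (Z-order) code.
uint64_t mortonCode(uint32_t x, uint32_t y, uint32_t z) {
    return spread3(x) | (spread3(y) << 1) | (spread3(z) << 2);
}

struct Node { uint32_t x, y, z; /* plus whatever payload a node carries */ };

// Sorting by the interleaved code places nodes that are close in 3D close
// together in the array, which is the locality the streaming/paging relies on.
void sortByLocality(std::vector<Node>& nodes) {
    std::sort(nodes.begin(), nodes.end(), [](const Node& a, const Node& b) {
        return mortonCode(a.x, a.y, a.z) < mortonCode(b.x, b.y, b.z);
    });
}

int main() {
    std::vector<Node> nodes = { {5, 1, 9}, {4, 0, 8}, {100, 200, 300} };
    sortByLocality(nodes);
    for (const Node& n : nodes) std::printf("(%u, %u, %u)\n", n.x, n.y, n.z);
    return 0;
}
```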

      2. Thank you for the post about the leafless scene graph, I missed that one. It’s excellent. Have you thought about the situation where you have a huge scene in a database and you need to extract the relevant pieces, by (a part of) your algorithm, very fast? I believe that this part alone could be very useful, regardless of whether the extracted part is then put on screen fast. Can you write a program which does only this?

      3. First chop the data into chunks of, say, 256 KiB, then for each chunk compute a list of the chunks that it refers to. Now, when rendering, if we know the octnodes in the root node of the quadtree, we know in which chunks these nodes are and also how deep the traversal can get (10-13 iterations), so we can walk the chunk graph and load all referenced chunks into memory (see the sketch after this comment). Because of data locality the number of chunks shouldn’t grow too fast with each iteration, so this might work.

        It doesn’t take into account that far-away blocks need less resolution though; maybe that optimization is necessary as well.
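A rough sketch of the chunk-graph walk described in the comment above: chunks record which other chunks they refer to, and at render time a breadth-first walk from the root chunks loads everything reachable within the estimated traversal depth. All names (Chunk, chunkStore, kMaxDepth) are assumptions for illustration only.

```cpp
#include <cstdint>
#include <cstdio>
#include <queue>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Chunk {
    std::vector<uint32_t> refs;  // ids of chunks this chunk points into
    // ... ~256 KiB of octree node data would live here ...
};

// Assumed stand-in for real chunk I/O; an actual implementation would read
// the chunk from disk or the network.
std::unordered_map<uint32_t, Chunk> chunkStore;
const Chunk& loadChunk(uint32_t id) { return chunkStore.at(id); }

const int kMaxDepth = 13;  // the comment estimates 10-13 traversal iterations

// Breadth-first walk over the chunk reference graph: collect every chunk
// reachable from the root chunks within kMaxDepth steps.
std::unordered_set<uint32_t> collectChunks(const std::vector<uint32_t>& roots) {
    std::unordered_set<uint32_t> seen(roots.begin(), roots.end());
    std::queue<std::pair<uint32_t, int>> frontier;  // (chunk id, depth)
    for (uint32_t id : roots) frontier.push({id, 0});
    while (!frontier.empty()) {
        auto [id, depth] = frontier.front();
        frontier.pop();
        if (depth >= kMaxDepth) continue;        // traversal can't reach further
        for (uint32_t ref : loadChunk(id).refs)
            if (seen.insert(ref).second)         // not visited yet
                frontier.push({ref, depth + 1});
    }
    return seen;                                 // everything to keep in memory
}

int main() {
    // Toy graph: chunk 0 -> {1, 2}, chunk 1 -> {2}.
    chunkStore[0] = { {1, 2} };
    chunkStore[1] = { {2} };
    chunkStore[2] = { {} };
    auto loaded = collectChunks({0});
    std::printf("chunks to load: %zu\n", loaded.size());
    return 0;
}
```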
