Unlimited detail and 3D portal engines, or else real-time path tracing

Here are two new small pieces which may or may not add to the understanding of how the Unlimited Detail – Euclideon algorithm might work. (The last post on this subject is Unlimited detail, software point cloud renderers; you may want to read it.)

3D portal engines: From this 1999 page, “Building a 3D portal engine”, several quotes (boldfaced by me):

Basically, a portal based engine is a way to overcome the problem of the incredible big datasets that usually make up a world. A good 3D engine should run at a decent speed, no matter what the size of the full world is; speed should be relative to the amount of detail that is actually visible. It would of course be even better if the speed would only depend on the number of pixels you want to draw, but since apparently no one has found an algorithm that does that, we’ll go for the next best thing.

A basic portal engine relies on a data set that represents the world. The ‘world’ is subdivided in areas, that I call ‘sectors’. Sectors are connected through ‘portals’, hence the name ‘Portal Engine’. The rendering process starts in the sector that the camera is in. It draws the polygons in the current sector, and when a portal is encountered, the adjacent sector is entered, and the polygons in that sector are processed. This would of course still draw every polygon in the world, assuming that all sectors are somehow connected. But, not every portal is visible. And if a portal is not visible, the sector that it links to doesn’t have to be drawn. That’s logical: A room is only visible if there’s a line of sight from the camera to that room, that is not obscured by a wall.

So now we have what we want: If a portal is invisible, tracing stops right there. If there’s a huge part of the world behind that portal, that part is never processed. The number of polygons that are actually processed is thus almost exactly equal to the number of visible polygons, plus the inserted portal polygons.

By now it should also be clear where portals should be inserted in a world: Good spots for portals are doors, corridors, windows and so on. That also makes clear why portal engines suck at outdoor scenes: It’s virtually impossible to pick good spots for portals there, and each sector can ‘see’ virtually every other sector in the world. Portal rendering can be perfectly combined with outdoor engines though: If you render your landscape with another type of engine, you could place portals in entrances of caves, buildings and so on. When the ‘normal’ renderer encounters a portal, you could simply switch to portal rendering for everything behind that portal. That way, a portal engine can even be nice for a ‘space-sim’…
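
To make the quoted recursion concrete, here is a minimal sketch of a portal-style visibility walk in Python. Every name here is mine, not from the quoted article, and the `draw` and `clip` callables stand in for a real rasterizer and a real frustum-against-portal clipper:

```python
# Minimal sketch of the portal recursion described in the quote above.
# 'draw' rasterizes one polygon; 'clip' intersects the current view
# frustum with a portal polygon and returns None when the portal is
# not visible. Both are assumed to be supplied by the caller.

class Portal:
    def __init__(self, polygon, target_sector):
        self.polygon = polygon              # the opening itself (door, window, ...)
        self.target_sector = target_sector  # the sector seen through it

class Sector:
    def __init__(self, polygons, portals):
        self.polygons = polygons            # walls, floor, ceiling of this room
        self.portals = portals              # openings into adjacent sectors

def render_sector(sector, frustum, draw, clip, visited=None):
    """Draw the sector the camera is in, then recurse through visible portals."""
    if visited is None:
        visited = set()
    if id(sector) in visited:               # sectors may see each other; don't loop
        return
    visited.add(id(sector))

    for poly in sector.polygons:            # work is per *visible* sector,
        draw(poly, frustum)                 # not per polygon in the whole world

    for portal in sector.portals:
        narrowed = clip(frustum, portal.polygon)   # frustum restricted to the portal
        if narrowed is not None:                   # invisible portal: tracing stops here
            render_sector(portal.target_sector, narrowed, draw, clip, visited)
```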

So let’s dream and ask whether there is any way to construct the database for the 3D scene such that the rendering process becomes an algorithm for finding the right portals, perhaps one for each pixel. Something to think about. The database itself is not a tree, but, given the position of the viewer as input, the virtually available portals (which could be just pointers attached to the faces of octree cubes, say, pointing to the faces of the smaller cubes that are visible through the bigger face, seen as a portal) organize themselves into a tree. The matter of finding what to put on a screen pixel could therefore be solved by a search algorithm; a rough per-pixel search of this kind is sketched below.
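
One way to make this speculation concrete (my sketch, not anything documented by Euclideon): treat each pixel as a query and descend an octree front to back along the pixel's ray, so the answer for the pixel is the first occupied leaf the search reaches. All names, and the flat child list, are hypothetical simplifications:

```python
# Rough sketch of "finding what to put on a screen pixel as a search":
# descend an octree front to back along the pixel's ray and stop at the
# first occupied leaf.

class OctreeNode:
    def __init__(self, lo, hi, children=None, color=None):
        self.lo, self.hi = lo, hi       # axis-aligned bounds of this cube
        self.children = children or []  # up to 8 child cubes, empty for a leaf
        self.color = color              # set on occupied leaves, None on empty ones

def ray_box_entry(origin, direction, lo, hi, eps=1e-12):
    """Distance along the ray where it enters the box, or None if it misses."""
    t_near, t_far = 0.0, float("inf")
    for o, d, a, b in zip(origin, direction, lo, hi):
        if abs(d) < eps:
            if o < a or o > b:          # ray parallel to this slab and outside it
                return None
        else:
            t0, t1 = (a - o) / d, (b - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far:
                return None
    return t_near

def first_hit(node, origin, direction):
    """Front-to-back search: return the color of the first occupied leaf, or None."""
    if ray_box_entry(origin, direction, node.lo, node.hi) is None:
        return None
    if not node.children:
        return node.color               # leaf: its color (or None if empty)
    # visit children in the order the ray enters them (nearest first)
    entries = [(ray_box_entry(origin, direction, c.lo, c.hi), c) for c in node.children]
    for _, child in sorted((e for e in entries if e[0] is not None), key=lambda e: e[0]):
        color = first_hit(child, origin, direction)
        if color is not None:
            return color
    return None
```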

As a small bonus, here is the link to a patent of Euclideon Pty. Ltd.: “An alternate method for the child rejection process in regards to octree rendering” – AU2012903094.

Or else, real-time path tracing. Related to Brigade 2, read here, and here is a video:

3 thoughts on “Unlimited detail and 3D portal engines, or else real-time path tracing”

  1. I have a different theory. I believe that “unlimited detail” is simply a matter of converting geometry-based data into voxel cubes at multiple scales, since you don’t need to see as much detail at a distance, and then allocating the storage to match the distance you choose to keep. Since each spherical volume shell grows at a rate proportional to the radius squared, and the detail needed within each shell decreases in inverse proportion to the radius, you get a worst case where each shell takes up a fixed amount of memory (see the numerical check sketched after these comments). You could use fog or something like that to make the most distant objects fade out of sight, but in practice that may be completely unneeded, as you can sparsely store flatter terrains in an octree or a similar graph of your choosing and you would tend to reach an overall total limit (more or less; I have neither deducted for duplicates nor penalized the graph overhead, plus half a pixel would be hard to store). Now, as far as efficient retrieval goes, I think that if you just project a ray and pick the nearest voxel for each screen pixel, you’ll get something like what you see in the demonstrations, where the total number of pixels is the main scaling factor. Also, I suppose that with the right algorithms you could pull off the same thing with mesh-based models, but it is rather easier to understand how this all works with voxels.

  2. Basically you are saying that if you project the screen image onto a spherical shell, it can be encoded in the same amount of memory because the scaling of the shell area and of the detail needed cancel one another. But you don’t find the voxels you need in the same shell; or else I don’t understand your argument. Moreover, if you start with a database of coordinates of 3D points (plus some color and whatnot), how do you convert it into voxel cubes at multiple scales? The pre-processed database is the key.

  3. I think we are more or less saying the same thing about the projection. I imagined the projection part would simply be a matter of looking through the shells until you hit a voxel. Now your pixel takes on the color of the voxel.

    So, to brute-force produce a voxel cube from the coordinates, you pick a voxel size and do intersection test after intersection test until you get a cube. I’ll assume there’s a better algorithm, but that’s good enough for our talk. For other scales you could do this again, and again, and again, to any size, but personally I would just choose to scale down by 1/2 on each axis, producing a sequence of cubes, each 1/8 the size of the previous (sketched in code after these comments). I like your “convert it to an iterated function system” idea; very handy for producing additional detail on the fly, especially if you only need to test the 8 voxels known to exist at the smaller scale.
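
A tiny numerical check of the cancellation that comment 1 relies on, under two assumptions of mine: what gets stored per shell is essentially surface detail, and the acceptable voxel edge grows linearly with the viewing distance (so that a voxel still projects to roughly one pixel):

```python
# Back-of-the-envelope check of the "fixed memory per shell" claim:
# shell area ~ r^2 while the allowed voxel edge ~ r, so the number of
# voxels needed to tile the surfaces in a shell stays roughly constant.
from math import pi

def voxels_to_tile_shell(radius, pixels_per_unit_angle=1000.0):
    voxel_edge = radius / pixels_per_unit_angle   # edge that still projects to ~1 pixel
    shell_area = 4.0 * pi * radius**2             # surface available at this radius
    return shell_area / voxel_edge**2             # r^2 / r^2: the radius cancels out

for r in (1.0, 10.0, 100.0, 1000.0):
    print(r, voxels_to_tile_shell(r))             # same count at every radius
```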
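
And a brute-force sketch of the preprocessing comment 3 describes: snap the point coordinates to a voxel grid at a chosen finest size, then repeatedly halve the resolution on each axis so that every coarser cell covers 8 finer ones. The sparse dict layout and the names are only for illustration:

```python
# Brute-force voxelization of a point cloud, plus the "halve each axis"
# pyramid from comment 3. A grid is stored sparsely as a dict mapping
# integer (i, j, k) cells to an averaged color.

def voxelize(points, colors, voxel_size):
    """Snap each (x, y, z) point to the cell it falls in; average the colors."""
    cells = {}
    for (x, y, z), color in zip(points, colors):
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells.setdefault(key, []).append(color)
    return {k: tuple(sum(c) / len(v) for c in zip(*v)) for k, v in cells.items()}

def halve(grid):
    """Next coarser level: each parent cell covers 2 x 2 x 2 cells of the finer grid."""
    parents = {}
    for (i, j, k), color in grid.items():
        parents.setdefault((i // 2, j // 2, k // 2), []).append(color)
    return {k: tuple(sum(c) / len(v) for c in zip(*v)) for k, v in parents.items()}

def build_pyramid(points, colors, finest_voxel_size, levels):
    """Finest grid first, then coarser and coarser copies (1/8 the cells each step)."""
    pyramid = [voxelize(points, colors, finest_voxel_size)]
    for _ in range(levels - 1):
        pyramid.append(halve(pyramid[-1]))
    return pyramid
```

With a pyramid like this, a renderer can pick the level whose voxel edge roughly matches one screen pixel at the current viewing distance, which is the distance-dependent detail both comments are after.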
