
Teaser: 2D UD

Here are two images which may (or may not) give an idea about another fast algorithm for real-time rendering of 3D point cloud scenes (note, though, that the images are drawn for the baby case of 2D point cloud scenes). The secret lies in the database.

I have a busy schedule for the next few weeks and I had to get this out of my system. So, if anybody gets it, please send me a comment here. Has this been done before? Does it work?

Now the images: the first has no name

eucludeon

The second image is a photo of the Stoa Poikile, taken from here:

Stoa_Poikile

Hint: this is a solution to the ray shooting problem (read it) which eliminates trigonometry, ray shooting, and intersection computations, and uses only the addition operation (once the database is well prepared). Moreover, the database organized as in the pictures cannot be bigger than the original one (so it is also a compression of the original database).

_______________

See the solution for an unlimited detail algorithm given by JX here and here.

Diorama, Myriorama, Unlimited detail-orama

Let me retell in plain words JX’s explanation of how a UD algorithm might work (it is not just an idea, it is supported by proofs and experiments; go and see this post).

It is too funny! It is the computer version of a diorama. It is an unlimited-detail-orama.

Before giving the gist of JX’s explanation, let’s think: have you ever seen a totally artificial construction which, when you look at it, tricks your mind into believing you are looking at an actual, vast piece of landscape, full of infinite detail? Yes, right? This is actually a serious matter; it raises a lot of questions about how much the 3D visual experience of a mind-bogglingly huge database of 3D points can be compressed.

Indeed, JX explains that his UD-type algorithm has two parts:

  • indexing: start with a database of 3D points, like a laser scan. Then produce another database of cubemaps centered on a net of equally spaced “centerpoints” which cover the 3D scene. The cubemaps are done at screen resolution, obtained as a projection of the scene onto a reasonably small cube centered at the centerpoint. You may store these cubemaps in various ways; one of them is to link the centerpoint with the 3D points visible from it. Compress (several techniques are suggested). There is no time constraint for this part of the algorithm; it is done before the real-time rendering part.
  • real-time rendering: given where the camera is, take only the points seen from the closest centerpoint, get its cubemap, and improve it by using previous cubemaps and/or neighbouring cubemaps. Take care to fill the holes which appear when you change the point of view. (A rough sketch of both parts is given right after this list.)
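
To make these two parts more tangible, here is a minimal Python sketch, with many assumptions of my own: the point cloud is a plain array, each cubemap stores only the index of the nearest visible point per face pixel, and compression, cubemap reuse and hole filling are left out entirely. The names (build_cubemap, build_atlas, FACE_RES) are invented for the illustration; this is not JX’s code.

```python
import numpy as np

FACE_RES = 256  # per-face resolution, roughly "screen resolution"

def build_cubemap(points, center):
    """Indexing step (offline): project every point onto the six faces of a
    small cube around `center`, keeping only the nearest point per face pixel,
    i.e. linking the centerpoint with the 3D points visible from it."""
    faces = {f: np.full((FACE_RES, FACE_RES), -1, dtype=np.int64) for f in range(6)}
    depth = {f: np.full((FACE_RES, FACE_RES), np.inf) for f in range(6)}
    rel = points - center
    major = np.argmax(np.abs(rel), axis=1)            # dominant axis picks the face
    for i, (v, axis) in enumerate(zip(rel, major)):
        if abs(v[axis]) < 1e-9:                       # point sits on the centerpoint
            continue
        face = int(axis) * 2 + (0 if v[axis] > 0 else 1)
        a, b = [k for k in range(3) if k != axis]
        # central projection onto the cube face, then quantization to a pixel
        u = int((v[a] / abs(v[axis]) * 0.5 + 0.5) * (FACE_RES - 1))
        w = int((v[b] / abs(v[axis]) * 0.5 + 0.5) * (FACE_RES - 1))
        d = np.linalg.norm(v)
        if d < depth[face][u, w]:                     # keep the nearest point only
            depth[face][u, w] = d
            faces[face][u, w] = i
    return faces

def build_atlas(points, spacing):
    """Cover the scene with a regular net of centerpoints and precompute a
    cubemap for each one; there is no time constraint, it all happens offline."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    centers = [np.array([x, y, z])
               for x in np.arange(lo[0], hi[0] + spacing, spacing)
               for y in np.arange(lo[1], hi[1] + spacing, spacing)
               for z in np.arange(lo[2], hi[2] + spacing, spacing)]
    return centers, [build_cubemap(points, c) for c in centers]

def render(camera_pos, centers, cubemaps):
    """Real-time step: pick the centerpoint closest to the camera and reuse its
    cubemap; blending with neighbouring cubemaps and hole filling are omitted."""
    nearest = min(range(len(centers)),
                  key=lambda i: np.linalg.norm(centers[i] - camera_pos))
    return cubemaps[nearest]    # per face pixel, the index of the visible point
```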

Now, let me show you that this has been done before, in meatspace. And even more, with animation! Go and read this, it is too funny:

  • The Daguerre Dioramas. Here’s (actually an improved version of) your cubemap, JX (image taken from the linked wiki page):

Diorama_diagram

  • But maybe you don’t work in the geospatial industry and you don’t have render farms and huge data available. Then you may use a Myriorama, with palm trees, gravel, statues, themselves rendered as dioramas. (image taken from the linked wiki page)

Myriorama_cards

  • Would you like to do animation? Here it is: look at the nice choo-choo train (polygon-rendered, at a scale)

ExeterBank_modelrailway

(image taken from this wiki page)

Please, JX, correct me if I am wrong.

Unlimited detail and 3D portal engines, or else real-time path tracing

Here are two new small pieces which may or may not add to the understanding of how the Unlimited Detail – Euclideon algorithm might work. (The last post on this subject is Unlimited detail, software point cloud renderers; you may want to read it.)

3D portal engines: from this 1999 page “Building a 3D portal engine”, several quotes (boldfaced by me):

Basically, a portal based engine is a way to overcome the problem of the incredible big datasets that usually make up a world. A good 3D engine should run at a decent speed, no matter what the size of the full world is; speed should be relative to the amount of detail that is actually visible. It would of course be even better if the speed would only depend on the number of pixels you want to draw, but since apparently no one has found an algorithm that does that, we’ll go for the next best thing.

A basic portal engine relies on a data set that represents the world. The ‘world’ is subdivided in areas, that I call ‘sectors’. Sectors are connected through ‘portals’, hence the name ‘Portal Engine’. The rendering process starts in the sector that the camera is in. It draws the polygons in the current sector, and when a portal is encountered, the adjacent sector is entered, and the polygons in that sector are processed. This would of course still draw every polygon in the world, assuming that all sectors are somehow connected. But, not every portal is visible. And if a portal is not visible, the sector that it links to doesn’t have to be drawn. That’s logical: A room is only visible if there’s a line of sight from the camera to that room, that is not obscured by a wall.

So now we have what we want: If a portal is invisible, tracing stops right there. If there’s a huge part of the world behind that portal, that part is never processed. The number of polygons that are actually processed is thus almost exactly equal to the number of visible polygons, plus the inserted portal polygons.

By now it should also be clear where portals should be inserted in a world: Good spots for portals are doors, corridors, windows and so on. That also makes clear why portal engines suck at outdoor scenes: It’s virtually impossible to pick good spots for portals there, and each sector can ‘see’ virtually every other sector in the world. Portal rendering can be perfectly combined with outdoor engines though: If you render your landscape with another type of engine, you could place portals in entrances of caves, buildings and so on. When the ‘normal’ renderer encounters a portal, you could simply switch to portal rendering for everything behind that portal. That way, a portal engine can even be nice for a ‘space-sim’…
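
Just to fix ideas, here is a bare-bones Python sketch of the traversal described in the quote, with clip regions reduced to screen-space rectangles and each portal carrying a precomputed screen extent. Everything here (Sector, Portal, the rectangle intersection) is a simplification invented for the example, not the engine from the quoted article.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Rect = Tuple[int, int, int, int]   # screen-space clip region: x0, y0, x1, y1

@dataclass
class Portal:
    screen_bounds: Rect            # projected extent of the portal polygon
    target: "Sector"               # the sector on the other side of the portal

@dataclass
class Sector:
    name: str
    polygons: List[object] = field(default_factory=list)
    portals: List[Portal] = field(default_factory=list)

def intersect(a: Rect, b: Rect) -> Optional[Rect]:
    """Clip one rectangle against another; an empty result means 'not visible'."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

def render_sector(sector: Sector, clip: Rect, visited=None, out=None):
    """Draw the current sector, then recurse only through portals that are
    visible inside `clip`, so geometry behind an invisible portal is never touched."""
    visited = visited if visited is not None else set()
    out = out if out is not None else []
    if sector.name in visited:          # sectors may see each other through portals
        return out
    visited.add(sector.name)
    out.extend(sector.polygons)         # "draw" this sector's polygons
    for portal in sector.portals:
        narrower = intersect(clip, portal.screen_bounds)
        if narrower is not None:        # invisible portal: tracing stops right there
            render_sector(portal.target, narrower, visited, out)
    return out

# Usage: rendering starts in the sector the camera is in, with the full screen
# as the initial clip region.
hall = Sector("hall", polygons=["hall walls"])
room = Sector("room", polygons=["room walls"])
hall.portals.append(Portal(screen_bounds=(100, 80, 220, 200), target=room))
print(render_sector(hall, clip=(0, 0, 640, 480)))   # ['hall walls', 'room walls']
```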

So let’s dream and ask whether there is any way to construct the database for the 3D scene such that the rendering process becomes an algorithm for finding the right portals, one for each pixel maybe. Something to think about. The database is not a tree, but given the position of the viewer as input, the virtually available portals (which could be just pointers attached to the faces of octree cells, say, pointing to the faces of smaller cubes which are visible from the bigger face, seen as a portal) organize themselves into a tree. Therefore the matter of finding what to put on a screen pixel could be solved by a search algorithm; a tentative sketch follows.
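
Purely to make this daydream concrete, here is one possible reading of it in Python: the octree cells play the role of sectors, their faces play the role of portals, and for each pixel a front-to-back search through the tree (as seen from the viewer) returns the first occupied leaf. This is an assumption-laden toy of mine, not Euclideon’s or JX’s algorithm; OctreeNode, first_hit and the slab test are all invented for the example.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class OctreeNode:
    lo: np.ndarray                       # min corner of the cube
    size: float                          # edge length
    children: List[Optional["OctreeNode"]] = field(default_factory=lambda: [None] * 8)
    point: Optional[np.ndarray] = None   # payload of an occupied leaf

    def is_leaf(self):
        return all(c is None for c in self.children)

def ray_hits_cube(origin, direction, lo, size):
    """Slab test: does the pixel's ray enter this cube at all?"""
    hi = lo + size
    with np.errstate(divide="ignore", invalid="ignore"):
        t0 = (lo - origin) / direction
        t1 = (hi - origin) / direction
    tmin = np.nanmax(np.minimum(t0, t1))
    tmax = np.nanmin(np.maximum(t0, t1))
    return tmax >= max(tmin, 0.0)

def first_hit(node, origin, direction):
    """Per-pixel search: descend the tree, visiting children front-to-back as
    seen from the viewer, and return the first occupied leaf encountered."""
    if node is None or not ray_hits_cube(origin, direction, node.lo, node.size):
        return None
    if node.is_leaf():
        return node.point                # may be None for an empty leaf
    # order the candidate children by distance of their centers to the viewer
    order = sorted((c for c in node.children if c is not None),
                   key=lambda c: np.linalg.norm(c.lo + c.size / 2 - origin))
    for child in order:
        hit = first_hit(child, origin, direction)
        if hit is not None:
            return hit
    return None
```

The only point of the sketch is that, once the viewer position is fixed, deciding what to put on a pixel becomes a tree search rather than a geometric intersection computation.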

As a small bonus, here is the link to a patent by Euclideon Pty. Ltd.: An alternate method for the child rejection process in regards to octree rendering – AU2012903094.

Or else real-time path tracing. Related to Brigade 2, read here, and watch a video: