Tag Archives: euclideon

Euclideon Holoverse virtual reality games revealed

Congratulations! (News via a comment by roy.) If you have any other news, you are welcome to share it here, as in the old days.

Bruce Dell has his own way of speaking, of choosing colors and music. Nevertheless, to share the keynote honor with Steve Wozniak is just great.

It rubs me the wrong way when he says, at 7:30, that he has the "world's first new virtual lifeforms". Can they replicate? Do they have a metabolism? On their own, in random conditions?

If I sneeze in a Holoverse room, will they cough the next day? If they run into me, will I dream new ideas about bruises later?

Is there an uncanny valley between reality and computing?

A very deep one, maybe.

I was looking for news about UD and Euclideon. I found this pair of videos [source], the first by BetterReality and the second by The Farm 51.

Now, suppose that we scanned the whole world (or a part of it) and put the data in the cloud. Do we now have a mirror of reality in the cloud? No! But why not? According to mainstream CS ideology, the data is the same: coordinates and tags (color, texture, etc.) in the cloud, the same as in reality.

Think about the IoT: we do have the objects, lots of them, in potentially unlimited detail. But there is still this uncanny valley between reality and computation.

We can’t use the data, because:

  • there is too much data (for our sequential machines? for our dice-and-slice ideology, a manifestation of the cartesian disease?)
  • there is not enough time (because we ask the impossible: to do, on one very limited PC, the work done by huge parts of reality? or because the data is useful only together with the methodology, based on an absolute, God's eye view of reality and on passive space as a receptacle, and the methodology is what stops us?)

I think that we can use the data (after reformatting it) and we can cross the uncanny valley between reality and computing. A way to do this supposes that:

  • we get rid of absolute, passive space and time, and of global views (not because these don't exist, but because this is a hypothesis we don't need!)
  • we go beyond the Turing machine and the von Neumann architecture, and we seriously include the P2P, asynchronous, local, decentralized way of thinking in the model of computation (like CSP, or the Actor model, or, why not, Distributed GLC?)

This is fully compatible with the response given by Neil Gershenfeld to the question

2014 : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT?

(Thank you Stephen P. King for the G+ post which made me aware of that!)

What’s going on in this UD algorithm?

In the post My idea about UD, completely unrelated to its content, b@b made a comment where he gives a link to a post by "BruceRDell", reproduced here:

Okay, I see you understand the basic principles. Now let's move to the more complex things. I can't share the exact UD algorithm (you must understand me, I've spent years mastering it), but I can express in code what I have already said in the interviews. The thing I want to say is: you must use sorting and eliminate raycasting.
The sorting algorithm I introduce here is very, VERY simple. You can optimize it a lot using octrees and more elegant heuristics than that. But keep it simple.
You've mentioned this http://rghost.net/48541594 dataset; I had some problems downloading it, but I managed, and I can say it is good for testing. You may use it with my program.

Here is the code I am talking about. If you’re smart enough, you’ll get benefit from it.

Magio, in this comment on My idea about UD, says:

I’m probably way too late for this. But anyway I made this quick edit of the program:
http://pastebin.com/HiDW5tMB [updated version: http://pastebin.com/kW1t5s1c]

___________________

Question: why does it work?

UPDATE: It's not clear if it really works; the quantitative improvement given by the random permutation trick is unknown. If you care, the BruceRDell thing is a prank.

___________________

I would like to understand the following:

  1. what is the role of the random permutation? (see the sketch after this list)
  2. why does the main loop work?
  3. is this already done? (send links to papers, if any)
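Until someone dissects the pastebin code, here is only my guess, as a minimal Python sketch, of the kind of main loop under discussion: a plain z-buffer splatter which visits the points in a random order. The names and the project function are my inventions, not taken from the pastebin, and, as the update above says, the quantitative benefit of the shuffle is unknown; one naive guess is that a random visiting order covers the screen early, so the loop could be stopped or reordered once most pixels are filled.

```python
import random

def splat_with_random_permutation(points, project, width, height):
    """Splat a point cloud into a z-buffer, visiting the points in a
    random order. 'points' is a list of ((x, y, z), color) pairs and
    'project' is any function mapping a 3D position to (px, py, depth),
    or to None when the position is off-screen."""
    far = float("inf")
    zbuf = [[far] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    order = list(range(len(points)))
    random.shuffle(order)                  # the "random permutation trick"
    for i in order:
        pos, color = points[i]
        hit = project(pos)
        if hit is None:
            continue
        px, py, depth = hit
        if depth < zbuf[py][px]:           # keep only the nearest point
            zbuf[py][px] = depth
            image[py][px] = color
    return image
```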

So, let's have a polite and informative discussion in the comments, if you agree. We all write in some variant of broken English, so I don't see why anybody couldn't participate.

___________________

Not interested in: is this the real Bruce Dell? Is this the real UD algorithm? I am only interested in understanding the guts of it.

I finish with the video from b@b’s comment:

My idea about UD

Here is what I believe UD is doing. This can surely be optimized in many ways, but the basic idea is: put the z-buffer in the 3D space.

I. Preparing the database.

1. Imagine a 3D cubic lattice which is as big as the bounding box of your 3D scenery. Take coordinates aligned with the lattice. Put the camera at coordinates (0,0,0) and surround it with a cube C. The faces of the cube C will serve as the cubemap we want to construct. Each face is covered by pixels at a given resolution. We already have the following parameters to play with, given in the units of the chosen coordinate system: the step of the lattice, the dimension of the cube C, the resolution of a face of C.
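To fix notation for the sketches which follow, here is one way to bundle these parameters as a Python record (the names are mine, not from any UD source):

```python
from dataclasses import dataclass

@dataclass
class LatticeParams:
    lattice_step: float    # distance between neighbouring lattice atoms
    cube_dim: float        # edge length of the cube C around the camera
    face_res: int          # pixels per edge on each face of the cubemap
    ball_diameter: float   # size of the ball/square put at a node (see 2. below)
```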

2. Render, by any means you like, the lattice, as seen from the (0,0,0) pov, on the 6 "screens", the faces of C. We have 6 view frusta; the discussion will be the same for each of them. In order to render them you need to put small balls, or squares facing the relevant face of the cubemap, or whatever, at the lattice nodes, so we have another parameter here: the diameter of the ball or square. As a result of the rendering you now know the following things (a code sketch of this step follows below):

  • for any lattice atom, you know to which pixel on which face it corresponds. You may do better for lattice atoms which are inside the cube C, namely "ignore" them: attribute to them an IGNORE label; otherwise, attribute to each lattice atom the 2D coordinates of the pixel in the cubemap, together with the information saying which face of the cubemap the pixel is on,
  • you have information about the scale, which you attach to the lattice atoms, like this: if two neighbouring lattice atoms project on the same pixel, then attach IGNORE to both. If the ball/square/whatever of a lattice atom projects on more than one pixel, then attach to it a number SCALE approximately proportional to the square root (thanks ine) of the number of pixels it projects to (or to the dimension of the bounding box of those pixels).

Of course, you don't want to take a huge lattice with very small balls. That's all in the choice of parameters.
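Here is a minimal sketch of this projection step, under the convention that the camera sits at the origin and each face of C spans [-1,1] along its axes; this is only my reconstruction, with my names:

```python
def cubemap_pixel(p, face_res):
    """Which cubemap face and pixel a lattice atom at position p = (x, y, z)
    projects to, with the camera at the origin. Returns None (IGNORE) for
    the degenerate atom at the origin itself."""
    x, y, z = p
    ax, ay, az = abs(x), abs(y), abs(z)
    m = max(ax, ay, az)
    if m == 0.0:
        return None
    if m == ax:
        face, u, v = ("+x" if x > 0 else "-x"), y / m, z / m
    elif m == ay:
        face, u, v = ("+y" if y > 0 else "-y"), x / m, z / m
    else:
        face, u, v = ("+z" if z > 0 else "-z"), x / m, y / m
    px = min(int((u + 1.0) / 2.0 * face_res), face_res - 1)
    py = min(int((v + 1.0) / 2.0 * face_res), face_res - 1)
    return face, px, py

def scale_of(p, ball_diameter, face_res):
    """SCALE of a lattice atom: roughly the side, in pixels, of the bounding
    box of the projection of its ball (so roughly the square root of the
    number of pixels it covers)."""
    d = max(abs(c) for c in p)     # distance to the face plane the atom hits
    return ball_diameter / (2.0 * d) * face_res
```

An atom whose SCALE is below 1 shares its pixel with neighbouring atoms, which is exactly the situation where the IGNORE label applies.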

3. Now take your database of 3D points, i.e. the real one, which you eventually want to render UD style. I shall ignore, for the sake of the argument, how the database is implemented: as an octree or otherwise, or even whether the database is made of 3D points or of more complex objects, like polygons. Put this database in the coordinates chosen first, such that, for example, if you are working with octrees, the cells of the lattice correspond to some level of the octree. Attach to each node of the lattice the supplementary information: the points from the database which are within the ball surrounding the atom, or else the label VOID. Alternatively, think that any point from the database is inside some lattice cell and "project" it on each corner of that cell (i.e. attach to each lattice atom a list of points from the database which are in its neighbourhood; the nil list corresponds to VOID).
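The "project on each corner" variant could look like this minimal sketch (again, the names are mine):

```python
from collections import defaultdict

def attach_points_to_lattice(points, lattice_step):
    """Bucket every database point onto the 8 corners of the lattice cell
    containing it. Atoms that never receive a point simply have no entry,
    which plays the role of the VOID label."""
    buckets = defaultdict(list)    # lattice atom (i, j, k) -> list of points
    for p in points:
        i, j, k = (int(c // lattice_step) for c in p)
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    buckets[(i + di, j + dj, k + dk)].append(p)
    return buckets
```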

4. Let's see what we have, supposing for example that we use octrees. We add the 3D lattice to the 3D database of points (by using lists of points, as explained at 3.) and to any lattice atom we also attach the information explained at point 2.

How can we compress this efficiently? There is a new parameter here, namely the level of the octree, and also the way the color information, for example, is stored in the octree. Of course, this is a matter of recursion: at points 1-3 we may take finer and finer resolutions and lattice steps, starting from a very coarse lattice and resolution, and try to figure out a correct recursion procedure. That's work to be done; it is not trivial, but it is somehow straightforward once you figure it out.

II. The pipeline and rendering.

The problem is that we want to be able to get, very fast, from the database constructed at (I), only the points which are needed for realtime rendering when the camera is at coordinates (x,y,z).

This problem splits into two different ones:

  • at the start of the realtime UD rendering, we want to be able to cull something close to the minimum number of 3D points when the camera is at (0,0,0). According to the information given by Euclideon, a good algorithm should be able to do this in about 1 s.
  • then, we need a procedure to take what we need from the database when we change the pov from (x,y,z) to (x+1,y,z) (or alike). This should be much faster, allowing for realtime rendering.

As a preparation, let’s remark that:

  1. if the camera is at (0,0,0), then we already know where each lattice point projects; it is written in the database. So we just need to start from a pixel, at a given resolution (recursion again), and to choose from the database only the lattice atoms which are CLOSE, in decreasing order of SCALE, and from those the real 3D points which are their neighbours (of course we have to use the octree structure). We get, for each pixel, a number of points of the order of log(dimension of the world).
  2. if the camera is at (x,y,z), we also know where each point from the 3D database projects, because we read it from the data attached to the lattice atom which, translated by (x,y,z), is the neighbour of the point. We also get the SCALE parameter from this. (A sketch of this lookup follows the list.)
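Remark 2 is just a translation, so it deserves a one-function sketch. I assume here that lattice_info is the per-atom table built at (I), storing (face, px, py, SCALE) records under the keys used in the earlier sketches:

```python
def projection_at(point, camera, lattice_step, lattice_info):
    """Read off where 'point' projects when the camera sits at 'camera':
    translate into camera coordinates, round to the nearest lattice atom,
    and return its precomputed (face, px, py, SCALE) record, or None if
    the atom is labeled IGNORE / absent."""
    atom = tuple(round((point[i] - camera[i]) / lattice_step) for i in range(3))
    return lattice_info.get(atom)
```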

1. We use remark 1 to solve the first problem, namely what comes through the pipeline from the huge 3D database to the pre-rendering buffer B when we start with the camera at (0,0,0). The buffer contains about (number of pixels) × log(dimension of the world) 3D points, along with the pixel coordinates where they project and with SCALE. This is fed directly to the rendering procedure, which you can choose freely, but which is almost trivial.

2. What happens when we move the camera? We update the pre-rendering buffer B only by updating the pixel and scale information of the 3D points in the buffer, getting the relevant data from the huge database only by a translation (an addition). Two things might happen here: a point exits the buffer, or a hole appears in the image.
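A hedged sketch of this update, reusing the projection_at lookup above and assuming, purely for illustration, buffer entries of the form (point, color, face, px, py, scale):

```python
def move_camera(buffer_B, camera, step, lattice_step, lattice_info):
    """Update the pre-rendering buffer B when the camera moves by 'step',
    e.g. (1, 0, 0): re-read the (face, pixel, SCALE) record of every
    buffered point via the translated lookup, and drop the points whose
    atom is now IGNORE (they exit the buffer). Filling the holes (pixels
    left without any point) needs a separate fetch from the big database,
    not shown here."""
    new_camera = tuple(camera[i] + step[i] for i in range(3))
    new_buffer = []
    for point, color, *_ in buffer_B:      # old pixel data is discarded
        record = projection_at(point, new_camera, lattice_step, lattice_info)
        if record is not None:             # still lands on the cubemap
            face, px, py, scale = record
            new_buffer.append((point, color, face, px, py, scale))
    return new_camera, new_buffer
```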

_________________________

Is this making any sense? Please let me know, because I have been asked repeatedly what I really believe.

The weekend round table on UD

The comments from Is 3D Cube a predecessor of UD? were very useful as concerns the idea of making open source UD-like programs. There are already two projects, created by Bauke and Jim respectively; see the page UD projects for the relevant links.

We cannot possibly know if either of these two proposals is like Bruce Dell's UD, but that does not matter much, because Bauke's and Jim's programs may well turn out to be as good as Dell's, or, why not, even better. (However, I still want to know how exactly the database of Saj's 3D Cube is made, because of the claims he made many years ago, which are almost identical to the ones made by Dell; see the link from the beginning of this post.)

Enough chit-chat; the reason for this post is that I suppose new discussions will follow, for example about Bauke's still pending detailed explanations of his program. Also, any day now Jim might amaze us.

So, please start commenting here instead, if you feel the need to ask or answer questions about the two projects. Or, hey, you are welcome to announce yours!

_________

UPDATE: I add here Bauke's explanations from this comment:

My algorithm consists of three steps:
1. Preparing the quadtree. (This process is very similar to frustum culling; I'm basically culling all the leaf nodes of the quadtree that are not in the frustum.) This step can be omitted at the cost of higher CPU usage.
2. Computing the cubemap. (Or actually only the visible part, if you did step 1.) It uses the quadtree to do occlusion culling. The quadtree is basically an improved z-buffer, though since the octree is descended in front-to-back order, it is sufficient to store only 'rendered' or 'not rendered'. It furthermore allows doing constant-time depth checks for larger regions.

3. Rendering the cubemap. This is just the common cubemap rendering method. I do nothing new here.

My description only explains step 2, as this is the core of my algorithm. Step 1 is an optimization and step 3 is so common that I expect the reader to already know how this step works.

Step 2 does not use the display frustum at all. It does do the perspective, but it does so by computing the nearest octree leaf (for the sake of simplicity I'm ignoring the LOD/mipmap/anti-aliasing here) in the intersection of a cone and the octree model. This is shown in the following 2D images:

[Two 2D illustrations of the cone/octree intersection; see the image source links in the original comment.]

I don't know what you mean by 'scaling of each pixel', but I believe that scaling of pixels only happens in step 3. In steps 1 and 2 I completely ignore that it happens.

I hope this answers your questions. If not, please tell me which of the 3 steps you do not understand.
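To make the "quadtree as an improved z-buffer" of step 2 concrete, here is a minimal sketch of how such a front-to-back descent could be organized. This is my reading of Bauke's description, not his code; the geometric helpers (overlaps, is_leaf, children_front_to_back, draw) are placeholders for the model-specific parts:

```python
class QuadNode:
    """A quadtree cell used as a hierarchical, binary z-buffer: once
    'rendered' is True, everything that projects inside this cell is
    known to be hidden and can be skipped with a single test."""
    def __init__(self, depth):
        self.rendered = False
        self.children = None if depth == 0 else [QuadNode(depth - 1) for _ in range(4)]

def descend(node, quad, overlaps, is_leaf, children_front_to_back, draw):
    """Front-to-back octree descent against the quadtree. 'overlaps' tests
    an octree node's projection against a quad cell, 'draw' plots a leaf."""
    if quad.rendered or not overlaps(node, quad):
        return                             # occluded or outside: skip subtree
    if is_leaf(node) or quad.children is None:
        draw(node, quad)
        quad.rendered = True               # this screen region is now covered
        return
    for child in children_front_to_back(node):   # nearest octree child first
        for sub in quad.children:
            descend(child, sub, overlaps, is_leaf, children_front_to_back, draw)
    if all(c.rendered for c in quad.children):
        quad.rendered = True               # pull the flag up: O(1) checks above
```

Because the octree is visited nearest-first, the first leaf to reach a quad cell is also the visible one, which is why a single boolean per cell replaces the usual depth comparison.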


_________

UPDATE: You may like the techniques used here [source].