Unlimited detail challenge: the easiest formulation

Thanks to Dave H. for noticing the new Euclideon site!

Now, I propose that you think about the easiest formulation of unlimited detail.

You live in a 2D world and you have a 1D screen which has 4 pixels. You look at the world, through the screen, using a 90-degree frustum. The 2D world is finite, with diameter N, measured in the unit lengths of the pixels (so the screen has length 4 and your eye is at distance 2 from the screen). The world contains atoms which are located at integer coordinates in the plane. No two atoms are in the same place. Each atom has at most P bits attached to it, representing its colour. Your position is given by a pair of integer coordinates and the screen points towards N, S, E or W only.

Challenge: give a greedy algorithm which, given your position and the screen direction, chooses 4 atoms from the world which are visible through the screen, in at most O(log N) steps.

Hints:

  • think about what “visible” means in this setting
  • creatively use numbers written in base 2, as words (a minimal sketch of this hint follows below).
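
To make the second hint concrete (this is only the hint made concrete, not a solution of the challenge, and the 16-bit coordinate width is just an assumption): write the two coordinates of an atom in base 2 and interleave their bits into a single word, a Morton (Z-order) key. Atoms sharing a prefix of this word lie in the same quadtree cell, so sorting the atoms by the key stores the quadtree in depth-first order, and descending to a cell inspects one pair of bits per level, i.e. O(log N) levels.

    #include <stdint.h>
    #include <stdio.h>

    /* Interleave the bits of x and y (both < 2^16) into one 32-bit word:
       the "numbers written in base 2, as words" of the hint. Two atoms share
       a prefix of this word exactly when they lie in the same quadtree cell. */
    static uint32_t morton2d(uint16_t x, uint16_t y)
    {
        uint32_t key = 0;
        for (int i = 0; i < 16; i++) {
            key |= (uint32_t)((x >> i) & 1u) << (2 * i);
            key |= (uint32_t)((y >> i) & 1u) << (2 * i + 1);
        }
        return key;
    }

    int main(void)
    {
        /* two atoms of the 2D world, on integer coordinates */
        printf("%08x\n", (unsigned)morton2d(3, 5));   /* prints 00000027, i.e. 100111 in base 2 */
        printf("%08x\n", (unsigned)morton2d(3, 4));
        return 0;
    }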

_______________

The Teaser 2D UD might help.

I’m not giving this challenge because I am a secretive …. but because I enjoy collaborations.

40 thoughts on “Unlimited detail challenge: the easiest formulation”

  1. Anybody wishing to propose a solution?

    Somebody mentioned that raycasting in an octree is logarithmic, but I am not convinced. An octree is logarithmic in the number of objects, but raycasting from a POV is not, AFAIK. Unless you are looking from very far away, and that’s where the idea of perspective might give you an idea for a modification of the octree idea, as suggested in the teaser.

  2. “Dag” mailed me an answer to the challenge, in the form of a file containing an algorithm along with some comments. Here it is (in the WordPress comment format); until now I have had no time to think about it, so what do YOU think?

    “Quadtree-less viewport extrusion & recursive quadrisection in an octree traversed in front-to-back order. No non-visible node is visited; those that are visited are visited only once per frame. There is also a quadtree version. No *, / or float required; not raycasting. Bit level.

    /* cube is an octree node, "in" is a stack of (square, pyramid) pairs; it is non-empty.
    All the in(put) squares are in the cube.
    Pyramids are in practice 4 increments; they advance the vertices of the squares in white cubes, i.e. are deltas. For parallel viewing the pyramids' bases are points.
    UD outputs a perhaps empty stack of (square, pyramid) pairs (square being not in cube, having exited it). */
    UD(cube, in)
    {
        if (gray(cube)) { /* the cube is a mass */
            do { /* make the given squares (they enter this cube) enter this cube's exposed octants */
                (square, pyramid) = pop(in)
                if (square in XYZ(cube)) /* XYZ(cube) is an octant (child) of cube */
                    push((square, pyramid/2), in[XYZ]) /* in[XYZ] is the in stack for the XYZ octant; pyramid/2's height is half that of pyramid */
                else
                    foreach (XY) /* quadrisect the square and its pyramid */
                        push((XY(square), XY(pyramid)), in) /* XY(square) is a quadrant of square, XY(pyramid) is its pyramid (a quadrant of pyramid) */
            } while (in)
            /* the squares entering this cube have now entered its (exposed) octants: front-to-back traverse them */
            foreach (XYZ in front_to_back) { /* front_to_back is a front-to-back enumeration of the cube's octants */
                if (in[XYZ]) { /* visit the child only if its "in" stack isn't empty: an alternate method for the child rejection process in regards to octree rendering */
                    out' = UD(XYZ(cube), in[XYZ])
                    while (out') { /* distribute the squares exiting XYZ(cube) among the cube's octants or push those out of the cube on its "out" stack */
                        (square, pyramid) = pop(out')
                        if (square in XYZ'(cube)) /* XYZ'(cube) is an octant of cube */
                            push((square, pyramid), in[XYZ'])
                        else if (square out of cube)
                            push((square, pyramid*2), out) /* pyramid*2's height is twice that of pyramid */
                        else
                            foreach (XY) /* quadrisect the square and its pyramid */
                                push((XY(square), XY(pyramid)), out')
                    }
                }
            }
        } else if (black(cube)) /* the cube is an atom */
            /* paint each viewport square with the cube's color, do not push anything on the "out" stack: no square shall exit this opaque cube */
            do {
                (square, ) = pop(in)
                color(square) = color(cube)
            } while (in)
        else /* the cube is white (empty) */
            /* advance, at a reasonable delta (pyramid), the squares out of the cube, quadrisecting them as needed */
            do {
                (square, pyramid) = pop(in)
                if (square in cube)
                    push((square + pyramid, pyramid), in) /* the "+" extrudes the viewport square further away from the eye */
                else if (square out of cube)
                    push((square, pyramid), out)
                else
                    foreach (XY) /* quadrisect the square and its pyramid */
                        push((XY(square), XY(pyramid)), in)
            } while (in)
        return out
    }

    This approach is related to:

    0) Ask, and it shall be given you; seek, and ye shall find; knock, and it shall be opened unto you: For every one that asketh receiveth; and he that seeketh findeth; and to him that knocketh it shall be opened (Matthew 7.7)
    1) And again I say unto you, It is easier for a camel to go through the eye of a needle, than for a rich man to enter into the kingdom of God. (Matthew 19.24)

    0) Cube octants are 3-D doors, to knock on them is to have a square in them.
    1) Quadrisect (analyze) squares whose status is undecided: give up its pomp, humble it.”
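
    The pseudocode leaves front_to_back unspecified. Here is a minimal sketch (my own guess, not part of "Dag"'s message) of one standard way to obtain such an enumeration: the octant nearest the eye comes first, and the others follow in order of how many of the cube's three median planes separate them from it.

    #include <stdio.h>

    /* Octants are indexed by 3 bits: bit 0 = x half, bit 1 = y half, bit 2 = z half. */
    static int popcount3(int v) { return (v & 1) + ((v >> 1) & 1) + ((v >> 2) & 1); }

    /* Fill order[0..7] with a front-to-back enumeration of the octants of a cube
       centred at (cx,cy,cz), as seen from the eye at (ex,ey,ez): the octant
       nearest the eye first, the rest sorted by Hamming distance from it. */
    static void front_to_back(double ex, double ey, double ez,
                              double cx, double cy, double cz, int order[8])
    {
        int first = (ex > cx ? 1 : 0) | (ey > cy ? 2 : 0) | (ez > cz ? 4 : 0);
        int n = 0;
        for (int dist = 0; dist <= 3; dist++)
            for (int oct = 0; oct < 8; oct++)
                if (popcount3(oct ^ first) == dist)
                    order[n++] = oct;
    }

    int main(void)
    {
        int order[8];
        front_to_back(10.0, -3.0, 2.0, 0.0, 0.0, 0.0, order); /* eye at (10,-3,2), cube at the origin */
        for (int i = 0; i < 8; i++) printf("%d ", order[i]);  /* prints: 5 1 4 7 0 3 6 2 */
        printf("\n");
        return 0;
    }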

  3. UD is 5-D: 1-D front-to-back enumeration of octrees, 4-D spaces (3-D & 1-D scale).
    It is a portal renderer, as you said. Its portals aren’t 2-D polygons but 3-D cubes. They separate 4-D spaces.

    Description:
    Cube with viewport squares in it.

    If the cube is labeled as an atom then paint the viewport squares with its color.

    If the cube has no subcubes, i.e. no portals, then extrude the squares, step by step, out of it. Squares having some vertices in and some out of the cube are quadrisected and are to be extruded. Return the set of exit squares.

    If the cube has subcubes then first scatter the squares among its subcubes. Squares having their vertices in different subcubes are quadrisected and are to be scattered.
    Then, in front-to-back order, consider each subcube: if there are squares in it then apply what is described here to that cube with its set of squares and scatter the returned squares among the (remaining) subcubes or gather those that are out of the cube in its set of exit squares. Return the set of exit squares.

    The extrusion step is scale-dependent (octree level). When you enter a cube it is halved, hence not squares but rather (square, pyramid) pairs. Pyramids are tensors.

    Start with cube = object space and (view pyramid vertex, view pyramid).

    UD, VD, 5-D.

    1. Thanks, what is VD? Is “V” = 5, as in roman numerals?

      EDIT: 5 is also the dimension of the bundle of straight lines in 3D space. I shall think about all of it when I find some time, but more interesting (for me) is to see people’s (motivated) opinions. So, thanks again, and excuse me for not adding more details to my challenge and hint of a solution; I feel guilty, but I really don’t yet have enough time to check all the nuts and bolts, to be sure. Your message is really intriguing; at first sight I believe it.

      Therefore feel free to add more details, because the more precise you are, the more credible your solution looks. By announcing your solution you also claim precedence; I don’t think anybody well informed still thinks UD is a hoax.

      What about the time estimate, how fast is the algorithm?

  4. There are no non-visible cubes traversed (whether they be out of the view pyramid or behind) and those that are traversed are traversed only once. So, what is depth-first traversed is the subtree that is the intersection between the octree representing the visible volume and the object octree. By the quadtree complexity theorem generalized by D. J. Meagher, the number of cubes in that subtree is of the order of the [surface] of the visible volume plus the log of the object octree’s side length. But that surface is of the order of its front surface (what you see), which is of the order of the viewport’s area. Hence a complexity of the order of the viewport’s area.
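
    For readers who have not met that theorem, here it is stated roughly and from memory (so treat it as an approximation rather than a citation): if q is the octree depth (the log of the side length) and A is the surface area of the represented region, measured in cells of the finest resolution, then

    $$\#\{\text{octree nodes}\} \;=\; O(A + q).$$

    Applied to the visible volume, A is dominated by its front surface, i.e. by the viewport's area, which is the complexity claimed above.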

    Thank you very much for your perseverance and interest. Ideas should be patented only to prohibit their implementation, not to turn them into money (which is idolatry).

    The construction itself is an art, its application to the world an evil parasite. (L. E. J. Brouwer)

    Heal the sick, cleanse the lepers, raise the dead, cast out devils: freely ye have received, freely give. (Matthew 10.8)

  5. I emailed Donald Meagher a couple of months ago in regards to his 1982 paper and subsequent patent on “Efficient Synthetic Image Generation of Arbitrary 3-D Objects”.
    He seemed uninterested in pursuing his patent, but did mention Dell had approached him at a trade fair he was attending. I’ll leave any significance of that up to the reader.
    Searching the Internet for “octree transform” will also get your research juices flowing.
    See also http://www.voxalgroup.com/voxalgroup_flyer.pdf for mobile phones displaying massive data sets.

  6. I know I flood your blog, but Mr. Dell has achieved what I have been after since the late ’90s. A beautiful formulation just occurred to me. I’ll post it here. The problem with the previous attempt is that it accessed memory in a cache-inimical fashion (the algorithm is the same). I hope that what is going to be described soon is memory-hierarchy friendly. Thanks for your patience. Of course I’m a crackpot, so what?: “I have often noticed that people who do not quite make a profession of the trade tend to supply more singular thoughts, concetti più vaghi et più pelegrini [vaguer and more uncommon conceits], where one does not expect them… a person who was not a geometer at all and who had something about geometry printed gave some occasion to my arithmetical quadrature, not to mention other examples.” (G. W. Leibniz, originally in French)

    I believe that in a week’s time, a fast implementation shall be carried through.

    P.S.: If the previous occurrence did not work at all, it is because “dragon.txt” got garbled by Google. Here’s a working version: https://docs.google.com/file/d/0B0Tw1fnDScRsel93NGstbXFiTzg/edit?pli=1

  7. http://www.ipaustralia.com.au/applicant/euclideon-pty-ltd/patents/AU2012905545/

    What is this method? First, sort your points in Z-order, then construct an octree on the sorted list. What you get is a depth-first octree, the front-to-back traversal of which gives efficient streaming (because then, 3-D nearness often = 1-D storage nearness).
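
    A minimal sketch of that preprocessing, assuming 16-bit integer coordinates (the record fields are only illustrative): the comparator orders two points by their 3-D Morton key, so a plain sort of the raw point list is the Z-order sort, and an octree built on the sorted list comes out in depth-first order.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { uint16_t x, y, z; uint8_t r, g, b; } Point;

    /* 3-D Morton (Z-order) key: interleave the bits of x, y and z. */
    static uint64_t morton3d(const Point *p)
    {
        uint64_t key = 0;
        for (int i = 0; i < 16; i++) {
            key |= (uint64_t)((p->x >> i) & 1u) << (3 * i);
            key |= (uint64_t)((p->y >> i) & 1u) << (3 * i + 1);
            key |= (uint64_t)((p->z >> i) & 1u) << (3 * i + 2);
        }
        return key;
    }

    static int cmp_morton(const void *a, const void *b)
    {
        uint64_t ka = morton3d(a), kb = morton3d(b);
        return (ka > kb) - (ka < kb);
    }

    int main(void)
    {
        Point pts[] = { {5, 9, 2, 255, 0, 0}, {4, 8, 2, 0, 255, 0}, {100, 3, 7, 0, 0, 255} };
        size_t n = sizeof pts / sizeof pts[0];

        /* Z-order sort: points sharing a long key prefix become adjacent in storage,
           so 3-D nearness becomes 1-D storage nearness. */
        qsort(pts, n, sizeof pts[0], cmp_morton);

        for (size_t i = 0; i < n; i++)
            printf("(%u,%u,%u)\n", (unsigned)pts[i].x, (unsigned)pts[i].y, (unsigned)pts[i].z);
        return 0;
    }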

    Note that it is suspected that in UD view pyramids aren’t degenerated into rays, i.e. you stop the subdivision when they correspond to viewport tiles.
    Also, one deals with an octree of bricks.

    I again removed VD.zip because embarrassing oversights were noticed.

  8. Hi, you may be on the right track, but I’m not sure I follow you. You can only sort the points once, because of the time taken, so this can’t be done at rendering time. What happens to the Z ordering if the camera has many degrees of movement? If the camera is facing downwards then an increase in Z travels across the screen rather than into it.

  9. Yes the points are sorted once but the reason for their being sorted is to gather together common prefixes. An octree is constructed out of the sorted points. When you proceed in this way, 3-D nearness becomes 1-D nearness (Z-ordered points = depth-first ordered octree). The front-to-back traversal is view-dependent but it considers nearby things, at each scale. So there rarely are big jumps while retrieving octrees (the deeper, the smaller & more frequent the jumps): streaming & cache friendliness (because accesses are amortized).

    I’ll upload a shorter VD for fellow UD investigators: it contains the front-to-back traversal, the children rejection process, the Z sort & the octree-indexed bricks (of arbitrary, but uniform, size). What is lacking is tiling but it’s on its way (this drastically speeds things up). It is hoped that by the month’s end, a reasonable thing shall be output.

    Claim: a necessary condition for UD’s rapidity is the parsimonious quadrisection of the view pyramids (rarely degenerating them into rays).

  10. I’m not convinced that terabytes of data can be handled without huge disc/memory cache hits by simply ordering points in one dimension. You appear to be describing standard octree traversal, but I don’t think that laptop demo would be fast enough if that was happening for every pixel on the screen. Euclideon also claim to not be using any form of ray tracing.
    As an aside, have you considered ordering with a Morton code? It automatically puts everything in a neat octree order with close neighbours from one corner of the area to the opposite corner – all without actually using a sort routine. I’ve been using it in 2D and it’s a really fast way of handling and searching position data.
    Where have you uploaded? I can’t see a link to anything.

  11. > As an aside, have you considered ordering with a Morton code
    Z-order = Morton-order = quadtree in depth-first order

    If you don’t sort the points prior to constructing your (non-hashed) quadtree, its nodes won’t, in general, be beautifully distributed, and thus traversing such a structure may cause more misses higher up the memory hierarchy than necessary.

    > Where have you uploaded? I can’t see a link to anything.
    I’ll try to have something today; will put that under the above link. It likely will still be ridiculously slow (but it follows the rules!).

    It would be interesting if a more serious person were to contribute something (following the rules, however); I’m not even a programmer.

  12. “It would be interesting if a more serious person were to contribute something (following the rules, however); I’m not even a programmer.”
    I’m not sure what you’re implying here. I cannot contribute to theories in a direction I don’t believe in, but I can give hints as to why they would be too slow or impracticable. The whole fun of understanding UD is the fact that it IS fast, so anything overly complex or data-cumbersome just won’t work, no matter how elegant it appears mathematically.

  13. Hi everybody, I shall leave this comment for a couple of hours, then I’ll trash it. It is very nice that people are starting to consider that UD is really a thing and to discuss a possible solution, now that we know how it’s done, i.e. by a clever pre-processing recorded in a clever format of the database. BUT: let’s not deviate from the discussion, please. Maybe somebody could propose an alternative place for discussions, then record the state of them here from time to time?
    Otherwise, how are you doing JX? Any news?

  14. How can you have a conversation about UD if you redact any comments demonstrating that one particular idea is impossible? That’s illogical.

    1. Dave, I am not going to erase any of the posts that have already appeared. It is very nice that such a discussion is taking place here. Only, please, although it is difficult, everybody stay on the subject; that’s my request.
      On the other hand, I shall erase any comment along the lines of “UD is a scam”, just because I believe it does not lead us anywhere.
      What about the database? The “secret” lies there.

      1. I didn’t say it was a scam, who’s saying that here? Please read my posts carefully.
        As a graphics/games/AI/DSP programmer (for 30+ years) I’m responding to 17genr’s comments. If they’re not a programmer then things like serial disc storage and memory caches are probably not on their mind when thinking about the solution for the database. So please, if you don’t want to home in on a solution for UD with different people, then sure, I’ll go elsewhere.

      2. Dave H, you’re welcome, but is this comment, for example, useful? Or constructive? Discuss and criticize as much as you want, but on this blog arguments based on authority are not accepted (I’m a mathematician, remember?).
        There is already a series of comments which is not useful for anybody; that is why I asked everyone to keep their eyes on the ball.
        Eventually I shall keep only the constructive comments (i.e. proposals of solutions and criticism of those) and erase the useless ones (i.e. authority-based, attack-the-messenger, or those of mine like this one and the previous one). Please don’t take this personally, it is just a matter of hygiene.

  15. I was responding to your comment “I shall leave this comment for a couple of hours, then I’ll trash it”
    Of course I’ll take it personally, what did you expect?
    I took your statement as a direct negative approach to any possible debate on the subject of UD. You’ve got to have counter arguments to people’s ideas, otherwise what kind of mess do you think the world would be in? (well, more of a mess!)

  16. OK! Here’s my try:-
    What we know:
    1. UD loads really quickly, and suggests that it can be streamed over the internet.
    So, this means that octree segments from all visible distances cannot be streamed, even in a top down detail stream where the closest octree leaves get more detail. There just isn’t enough time to process it, the views are huge in the demos and the detail is immense.
    2. It renders those images really quickly, in a single thread on a laptop, which doesn’t really leave any room for ray-tracing of any kind. They also say they gave up on ray-tracing because of speed, and show a demo of it displaying a fish-eyed landscape.
    3. So it therefore must be a 2D image-based algorithm?
    I cut this list a bit short, as I wanted to get on with the description.

    So let’s have a look at it in 2D:-

    The red shape is a simple distance field object that describes the scene. The black area is empty space.
    The yellow squares are what I’m going to call Camera Spaces. These spaces get smaller when closer to the image surface, but they are all in empty space, with no overlap. It’s a non-leaf list of a Quad-tree.
    The reasons are based on relative movement: the closer a surface is to the camera, the more movement it’s going to render, so it’ll need more data; but please note that this detail is only needed near a surface.
    The purple centres of the squares are the snap-shot positions, and the corners are the additional data points.
    Snap-shots are taken in cube-map form, of the world at each corner, including the centre, so that makes 9 as we are really dealing with 3D data.
    To create the data, the centre point cube-map pixels are all kept and the process of pixel elimination starts for the corners.
    First project the centre points at a new location, let’s say top-left. You will get the correct image, but with holes in it because of all the extra geometry that isn’t seen from the centre. This top-left cubemap only has to store the points not covered by the centre cube-map, so there are a lot fewer of them and they can be compressed quite easily.
    Now onto the top-right of the cube: re-project the centre AND top-left cubemaps onto a new cubemap at that location, then project and store only the blank areas left by the previous two projections. Keep going until all corners have been processed. The last corner will have only a small amount of data, but that does depend on the scenery around it.

    What you now have is a collection of single points that gives everything visible from any location within that cube, and only that.

    Remember that no scene object intersects the Camera Cube, so you can never go through anything. It may turn out that the centre projection is not actually needed in the final data set.
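
    A minimal sketch of how I’d compute the residual set for one corner (the names, the single +z face and the flat point list are simplifications for illustration only; each stored cube-map pixel plus its depth is treated as a 3-D point): splat the points already kept by the centre (and by earlier corners) into a coverage mask as seen from the corner, then keep only the world points that land on uncovered pixels.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define FACE 256                    /* resolution of one cube-map face */

    typedef struct { float x, y, z; } Vec3;

    /* Pinhole projection onto the +z face of a cube map centred at 'eye';
       returns false if the point does not fall on that face. */
    static bool project(Vec3 eye, Vec3 p, int *px, int *py)
    {
        float dx = p.x - eye.x, dy = p.y - eye.y, dz = p.z - eye.z;
        if (dz <= 0.0f || dx > dz || dx < -dz || dy > dz || dy < -dz)
            return false;
        *px = (int)((dx / dz * 0.5f + 0.5f) * (FACE - 1));
        *py = (int)((dy / dz * 0.5f + 0.5f) * (FACE - 1));
        return true;
    }

    /* Residual set for one corner of the Camera Cube: points already kept
       (centre + earlier corners) cover pixels; only world points landing on
       uncovered pixels are stored for this corner. */
    static size_t residual_for_corner(Vec3 corner,
                                      const Vec3 *kept,  size_t nkept,
                                      const Vec3 *world, size_t nworld,
                                      Vec3 *out)
    {
        static bool covered[FACE][FACE];
        memset(covered, 0, sizeof covered);

        int px, py;
        for (size_t i = 0; i < nkept; i++)
            if (project(corner, kept[i], &px, &py))
                covered[py][px] = true;

        size_t n = 0;
        for (size_t i = 0; i < nworld; i++)
            if (project(corner, world[i], &px, &py) && !covered[py][px]) {
                covered[py][px] = true;          /* one residual point per pixel */
                out[n++] = world[i];
            }
        return n;
    }

    int main(void)
    {
        Vec3 centre_kept[] = { {0.0f, 0.0f, 5.0f}, {0.2f, 0.1f, 5.0f} };
        Vec3 world[]       = { {0.0f, 0.0f, 5.0f}, {0.2f, 0.1f, 5.0f}, {3.0f, -2.0f, 6.0f} };
        Vec3 out[3];
        Vec3 corner = { -0.5f, -0.5f, -0.5f };   /* one corner of the Camera Cube */

        size_t n = residual_for_corner(corner, centre_kept, 2, world, 3, out);
        printf("%zu residual point(s) for this corner\n", n);
        return 0;
    }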

    Compression.
    The structure of the octree that created the Camera Space is in a neighbourly format, so traversing and displaying the octree snapshots in order shows frequent commonalities that can be picked up by movie compression algorithms which use motion detection. The Intel Media library could create a single movie stream for whole chunks of areas, and I’ve been told on the Intel forums that I can go to any frame in the movie quickly if I have regular I-frames present in the MPEG. Therefore a server can find my current ‘picture’ quickly and present the data exactly as needed, including streaming neighbourhood Camera Spaces when not busy.
    They may have used their own JPEG format to compress the data, but we don’t really know of course.
    The projection depth for each Camera Space map is stored as a 2-byte greyscale image, and it may even be better at compression than the picture cube-maps.
    There’s also a slight indication from their videos’ colour quality that each camera cube could have its own 8-bit palette, meaning 1 byte of per-pixel data.
    There are many ways of displaying the final results, and they could be described as ‘old school’, which fits right in with the UD description. But I think I’m out of time…

    Well, that’s one possibility anyway. I’ve got to the stage of rendering the centre cube as a point projection, and it runs at 1250 frames per second at 1024×1024. Nice! 🙂

  17. Who’s Hecker? 🙂 Only a Hoskins here.
    Here’s some screen-shots of it so far.
    I don’t have, or like, any of the freely available point data sets, so I’ve created a so-called fractal Mandelbox for world creation.
    You can see from this image the frame rate in frames per second (FPS) displayed by a third-party program called ‘Fraps’. This is the normal real-time image creation time.

    Here it’s rendering the centre Camera Cube. Note it now has a more sensible field of view.

    And here you can see an offset projection. Note the black gaps in the image from the missing projection; it’s not too obvious in that shot, sorry.

    Nice FPS! But I do have a fast GPU here, so the faster the better.
    Note that the FPS is faster in the last image because some of the pixels have left the screen area and get automatically culled by the graphics card. This gives me hope about rendering the residual points from the other corners, in reasonable time.
    There’s still a lot to do, and it’s getting a little complex so I do wonder whether it’s worth carrying on with it. There’s just so much data to process, plus I still don’t really know how much data each cube will really consume until right near the end of development.

    Cheers,
    Dave.

    Having done all that, the elephant in the room is still the data creation and conversion time for practical usage in certain fields, but we all knew about that, didn’t we! 🙂

    1. Thanks for sharing Dave! Your self-referential “cubemap” construction is a really nice idea. I would love to see it applied to real data, like the data Tony has (see his comments at Applications of UD (part II)).

  18. OK! Now I’ve lost my fortune in secret patent malarkey! Oh well, c’est la vie.

    Yeah, well, what’s real data! Joking aside, the Mandelbox is a 4×4 box of infinite depth; I don’t mind its abstract nature, in fact it’s a good test because of all the tiny holes and spikes on it. But I know what you mean, it should be something for the eye to recognize and be able to reference in the real world.

    1. Dave, yes, a natural environment, like a city. A Mandelbrot fractal has different statistics of, say, discrete curvature than a city. (This “discrete curvature” is a scale-dependent parameter which gives an estimate of the number of cubemaps needed.) I hope Tony sees this; maybe he can already find some uses and tell us. I don’t know if Tony can see your e-mail; I see one, although I don’t know if it’s for real.

      1. A city, with a definite floor and squarish nature, may need fewer camera cubes than the Mandelbox, and that may be one of the reasons UD took that route. It’s impossible to know at this stage though.
        My email address is the one with ‘blueyo****’ in the name.

  19. Dave H, are you willing to give more detail, for example about compression, how you construct “camera space” from the usual database, examples? We could make a post about it, like the one for JX’s comments.

    Everybody, have you seen Dave H’s “camera space” in the literature? If so please tell us where.

    Until now I find this the most articulate proposal; I look forward to more.

  20. Hi everyone!
    I don’t know if I should post here or if I’m bumping an old thread or so. Sorry if I did.
    I just released a video about my idea on the functioning of Euclideon’s UD technology.

    I invite you all to give it a look. It does NOT rely on voxels to achieve a constant-time algorithm that is independent of the size of the input data. It’s a very simple and limited idea, but hopefully it will trigger more ideas from other people.
    I apologize for my bad English…
    Hope you enjoy! 😀

    1. Thanks for sharing! I have two quick questions.
      1. your rays start from a predefined point, but you need to be able to walk around with this origin in the 3D space. Now, suppose that the UD format records the sorting with respect to angles from multiple viewpoints, so that you can use this sorting in realtime (but you have as much time as you want to transform the initial 3D database into the UD format, before using it for realtime virtual walking into the scene). The question is: how can you do that, how can you compress such a sorting into a memory space comparable to that of the initial database of 3D points?
      2. You may have a point in using angles for sorting, for example if we think about how geospatial data is collected, right? The second question is: does anybody know what the initial raw format of the collected data is? I suspect that it is in polar coordinates, i.e. angles and distances from the POV.

      1. Those are good points. And not quick at all! 😀
        Now, concerning your first point, you are right, I can’t move it. But, as I stated in the vid, I’m also pretty sure that the UD is not using angles for sorting (I relied on angles in the vid because of mere simplicity).
        My guess is that they may use multiple sorted arrays of indices that point to the same unsorted (possibly compressed) array of point data.
        I don’t know; probably they sort points by the x component in one array, then by the y component in another array of pointers, and finally by the z component in yet another array.
        Then, there may be a way of “crossing” the information in those 3 arrays so that, given any camera-space matrix, you just “pick” the next element matching a ray orientation from those 3 arrays. The search algorithm will take care of filtering empty space, or something like the issue I described in the vid.
        …And keeping everything constant-time!

        For your second point, no. Data in my program is stored as uncompressed raw …XYXYXY… format and is just sorted by angle in degrees. Total of 8 bytes per point (angle is NOT stored, it is “guessed” by the search thingy).
        Source code is freely available btw.

        I don’t think there is any feasible way of setting up a dynamic POV just by angle orientation of the data. It implies resorting or something, I guess…

        Lastly, I want you to know that I’m no expert in 3D graphics or math or anything. And mine are “raw ideas” that I myself don’t know how to materialize.
        Those are some hard questions for me. But I will try and answer everything hopefully!

  21. Anyone?

    I’ll put my updates here in case someone is interested.
    new demo download (should work on every Windows version):
    http://www.mediafire.com/?38rs08p9a8rrnig

    Lately I have been improving my 2D sorting-based thingy.
    New features include the following:

    pros:
    – movable camera (finally!);
    – perfect precision search (always grabs the correct point in <4 iterations);
    – shape-independent accuracy (I can load arbitrary geometry without a problem);
    – still constant-time!

    cons:
    – 50x more memory consumption (!!);
    – VERY slow initial sorting (I'm working on a faster sorting algorithm);
    – step cam rotations.

    The new algorithm works by sorting points not by angle like before but by the "y-intercept" of the line passing through the point itself, given a specified slope. This makes camera movements possible but not camera rotations (the exact opposite condition from before). So, instead of precomputing every possible cam position we can much more easily precompute cam rotations. This is done by creating many y-offset-sorted pointer arrays (one every ~10 degrees).
    Each of those new arrays is as long as the data array itself. So, if we preload 50 of them in a span of 180 degrees we have data×50 total RAM used… Everything is uncompressed, and that's bad shit.
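
    A minimal sketch of that precomputation (illustrative only): for each chosen slope m, build an index array over the unsorted point data, ordered by the y-intercept b = y - m*x of the line of slope m through each point; a camera ray of slope m then only needs a binary search on the matching array.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { double x, y; } Point2;

    static const Point2 *g_pts;   /* the (never reordered) data array */
    static double g_slope;        /* slope currently being sorted for */

    /* y-intercept of the line of slope g_slope passing through point p */
    static double intercept(const Point2 *p) { return p->y - g_slope * p->x; }

    static int cmp_intercept(const void *a, const void *b)
    {
        double ia = intercept(&g_pts[*(const int *)a]);
        double ib = intercept(&g_pts[*(const int *)b]);
        return (ia > ib) - (ia < ib);
    }

    /* One sorted pointer (index) array per precomputed slope. */
    static void build_arrays(const Point2 *pts, int npts,
                             const double *slopes, int nslopes, int **arrays)
    {
        g_pts = pts;
        for (int s = 0; s < nslopes; s++) {
            for (int i = 0; i < npts; i++) arrays[s][i] = i;
            g_slope = slopes[s];
            qsort(arrays[s], npts, sizeof(int), cmp_intercept);
        }
    }

    int main(void)
    {
        Point2 pts[] = { {1, 5}, {4, 2}, {2, 8}, {6, 1} };
        double slopes[] = { -0.176, 0.0, 0.176 };    /* roughly one array every 10 degrees */
        int n = 4, ns = 3;
        int *arrays[3];
        for (int s = 0; s < ns; s++) arrays[s] = malloc(n * sizeof(int));

        build_arrays(pts, n, slopes, ns, arrays);

        for (int i = 0; i < n; i++) printf("%d ", arrays[1][i]);  /* order for slope 0: 3 1 0 2 */
        printf("\n");
        for (int s = 0; s < ns; s++) free(arrays[s]);
        return 0;
    }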

    Future improvements:
    – Supersampling.
    Instead of precomputing more arrays for more rotation precision, to "fill in the blanks" one can just translate the already loaded rays by small amounts. Most of the time this will bring accurate results.
    – Better initial sorting organization.
    Since the number of arrays to be sorted increases with the number of rotation subdivisions wanted, the program must do _that_ many more array sortings. Again, bad shit.
    I have all the time I need but 1 minute for just 10000 points is way too slow.

    I'll make a new video explaining everything but before that I want a fully-working supersampling feature and a serious memory management.

    Any suggestion is appreciated.

    Notice: if Dell actually used a method similar to mine, I still think Euclideon found a one-way sorting method for both cam motion & rotation. Maybe that's why Dell claimed that their technology is "memory efficient": they don't need more than one sorted array of pointers to do everything. Quaternions?

  22. Hello, I don’t know if this post is still open but I want to share my idea.

    Let’s assume they call themselves Euclideon because they put Euclide On. http://en.wikipedia.org/wiki/Euclidean_space

    So imagine our world is made of points.

    So you have data with the position and the colour like this:

    (1,4):red (3,6):blue ….

    – Step 1: Choose a scale!

    If you don’t want to lose precision, just take the point with the longest decimal expansion (assuming this point is (1.00001, 45), let’s multiply all the points by 100000;
    this one will become (100001, 4500000)).

    – Step 2: Put the zero at the bottom left.

    Look at the point in the bottom left corner; it will be (X, Y). Then just subtract it from all the points.

    If I take the two points I used before, they now look like this:

    (100000 - X, 400000 - Y):red (300000 - X, 600000 - Y):blue …

    – Step 3: Make a new (empty) world with the same scale and (0,0) at the bottom left corner, and save the polar coordinates of each point; sort them by angle and then by distance.

    The angle for the point (a,b) is: arctangent(b/a)
    http://en.wikipedia.org/wiki/Inverse_trigonometric_functions
    and the distance is: sqrt(a²+b²)

    So for every point (a,b) in your world you will have:
    (a,b), arctangent(b/a), sqrt(a²+b²) in memory.

    Sort them by angle
    and then by distance.

    – Step 4: Choose your screen.

    Let’s take 4 pixels on the screen and eye distance 2, like in the original post.

    – Step 5: Compute the angle between the eye and each pixel of the screen (the centre is 0°).

    With the 4-pixel screen and eye distance 2 you get 5 angles:
    (-45°, -26.565°, 0°, 26.565°, 45°)

    – Step 6: It is all done!
    Put your camera anywhere, on a random point of your world, pointing in the direction you want.

    Finding the 4 pixels you need for your camera is easy.

    ———————- Let’s find the first one:
    Assume the direction of the camera is an angle, with the (1,0) direction being 0° and the maximum 360°.

    We place the camera at the point (2,5) with an angle of 23.5°.

    First let’s go into our empty space with the polar coordinate system
    (with the angle-then-distance sorting) and look for the points which have an angle between

    (45° + 23.5°) and (26.565° + 23.5°).
    Save them in Euclidean style (like (x,y) with x and y integers), sorted by distance.

    Now go back into your world and explore, starting at the position of the camera, by adding the points you saved.

    “Imagine the search in the empty space gave you (0,1), (0,2), …
    and your camera is at (2,5); you look at (2,5)+(0,1), (2,5)+(0,2), …”
    The first colour you find is the one you put on the first pixel of your screen.

    Tip: use symmetry for cameras with more than 90°.
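
    Here is a minimal sketch in C of Steps 3 and 5 above (Steps 1 and 2 are assumed already done, and the names are only illustrative): compute the angle and distance of every point from the origin, sort by angle and then by distance, and compute the five screen angles.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    typedef struct { long x, y; double angle, dist; } Atom;

    /* Step 3: sort by angle, then by distance. */
    static int cmp_atom(const void *a, const void *b)
    {
        const Atom *p = a, *q = b;
        if (p->angle != q->angle) return (p->angle > q->angle) - (p->angle < q->angle);
        return (p->dist > q->dist) - (p->dist < q->dist);
    }

    int main(void)
    {
        /* Steps 1-2 (scaling and moving the origin to the bottom left) already done. */
        Atom atoms[] = { { 100000, 400000 }, { 300000, 600000 }, { 200000, 100000 } };
        int n = 3;

        /* Step 3: polar coordinates of every point, then sort. */
        for (int i = 0; i < n; i++) {
            atoms[i].angle = atan2((double)atoms[i].y, (double)atoms[i].x) * 180.0 / M_PI;
            atoms[i].dist  = hypot((double)atoms[i].x, (double)atoms[i].y);
        }
        qsort(atoms, n, sizeof(Atom), cmp_atom);

        /* Step 5: the five angles of the 4-pixel screen with eye distance 2. */
        double screen[5];
        for (int k = 0; k <= 4; k++)
            screen[k] = atan2(k - 2.0, 2.0) * 180.0 / M_PI;  /* -45, -26.565, 0, 26.565, 45 */

        for (int i = 0; i < n; i++)
            printf("(%ld,%ld)  angle %.3f  dist %.1f\n",
                   atoms[i].x, atoms[i].y, atoms[i].angle, atoms[i].dist);
        for (int k = 0; k <= 4; k++) printf("%.3f ", screen[k]);
        printf("\n");
        return 0;
    }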

    I am from Belgium and I speak French; I am sorry for my English.
