Tag Archives: unlimited detail

Another discussion about math, artificial chemistry and computation

I have to record this ongoing discussion from G+, it’s too interesting. I shall do it from my subjective viewpoint; feel free to comment on this, either here or in the original place.

(I did almost the same, i.e. saved here some of my comments from an older discussion, in the post Model of computation vs programming language in molecular computing. That recording was significant, for me at least, because I made those comments while thinking about the work on the GLC actors article, which was then in preparation.)

Further I shall only lightly edit the content of the discussion (for example by adding links).

It started from this post:

 “I will argue that it is a category mistake to regard mathematics as “physically real”.”  Very interesting post by Louis Kauffman: Mathematics and the Real.
I discussed it here: Mathematics, things, objects and brains.
Then followed the comments.
Greg Egan‘s “Permutation City” argues that a mathematical process itself is identical in each and every instance that runs it. If we presume we can model consciousness mathematically, it then means: two simulated minds are identical in experience, development, everything, when run with the same parameters on any machine (anywhere in spacetime).

Also, it shouldn’t matter how a state came to be; the instantaneous experience for the simulated mind is independent of its history (of course the mind’s memory ought to be identical, too). He then levitates further and proposes that it’s not relevant whether the simulation is run at all, because we may find all states of such a mind’s being represented scattered in our universe… If I remember correctly, Egan later admitted to being embarrassed about this bold ontological proposal. You should be able to find him reasoning about the dust hypothesis in the accompanying material on his website. Update: I just saw that Wikipedia’s take on the book has the connection to Max Tegmark in the first paragraph:
> […] cited in a 2003 Scientific American article on multiverses by Max Tegmark.
http://en.wikipedia.org/wiki/Permutation_City

___________________________
+Refurio Anachro  thanks for the reference to Permutation City and to the dust hypothesis, I will read and comment later. For the moment I have to state my working hypothesis: in order to understand the basic workings of the brain (math processing included), one should pass any concept it uses through the filter:
  • local not global
  • distributed not sequential
  • no external controller
  • no use of evaluation.

From this hypothesis, I believe that notions like “state”, “information”, “signal”, “bit” are concepts which don’t pass this filter, which is why they are part of an ideology which impedes the understanding of many wonderful things which have been discovered lately, somehow against this ideology. Again, Nature is a bitch, not a bit 🙂

That is why, instead of boasting against this ideology and jumping to consciousness (which I think is something that will have to wait for understanding until sometime very far in the future), I prefer to offer first an alternative (that’s GLC, chemlambda) which shows that it is indeed possible to do anything which can be done with these ways of thinking coming from the age of the invention of the telephone. And then more.

Afterwards, I prefer to wonder not about consciousness and its simulation, but instead about vision and other much more fundamental processes related to awareness. These are usually taken for granted, and they have the bad habit of contradicting any “bit”-based explanation given to date.

___________________________

the past lives in a conceptual domain
would one argue then that the past
is not real
___________________________
+Peter Waaben  that’s easy to reply to, by way of analogy with The short history of the rhino thing.
Is the past real? Is the rhinoceros horn on the back real? Durer put a horn on the rhino’s back because of past (ancient) descriptions of rhinos as elephant killers. The modus operandi of that horn was that the rhino, as massive as the elephant but much shorter, would go under the elephant’s belly and rip it open with the dedicated horn.  For centuries, in the minds of people, rhinos really had a horn on the back. (This has real implications, much like, for example, the idea that if you stay in the cold then you will catch a cold.) Moreover, and even more real, there are now real rhinos with a horn on the back, like for example Dali’s rhino.
I think we can safely say that the past is a thing, and any of this thing’s reifications are very real.
___________________________
+Peter Waaben, that seems what the dust hypothesis suggests.
+Marius Buliga, I’m still digesting, could you rephrase “- no use of evaluation” for me? But yes, practical is good!
___________________________
[My comment added here: see the posts

]

___________________________

+Marius Buliga :

+Refurio Anachro  concretely, in the model of computation based on lambda calculus you have to add an evaluation strategy to make it work (for example lazy, eager, etc.).  The other model of computation, the Turing machine, is just a machine, in the sense that you have to build a whole architecture around it to use it. For the TM you use states, for example, and things work by knowing the state (and what’s under the reading head).  Even in pure functional programming, besides the need for an evaluation strategy, they live with this cognitive dissonance: on one side they rightfully say that they avoid the use of the states of imperative programming, and on the other side they base their computation on the evaluation of functions! That’s funny, especially if you think about the dual feeling which hides behind “pure functional programming has no side effects” (how elegant, but then how do we get real effects from it?).
In distinction from that, in distributed GLC there is no evaluation needed for computation. There are several causes of this. First, there are no values in this computation. Second, everything is local and distributed. Third, there is no eta reduction (thus no functions!). Otherwise, it resembles pure functional programming if you see the core-mask construction as the equivalent of the input-output monad (only that you don’t have to bend backwards to keep both functions and no side effects in the model).
[My comment added here: see behaviour 5 of a GLC actor explained in this post.
]
Among the effects is that it goes outside the lambda calculus (the condition of being a lambda graph is global), which simplifies a lot of things, like for example the elimination of currying and uncurrying.  Another effect is that it is also very much like an automaton kind of computation, only that it does not rely on a predefined grid, nor on an extra, heavy handbook of how to use it as a computer.
On a more philosophical side, it shows that it is possible to do what the lambda calculus and the TM can do, but it can also do things without needing signals, bits and states as primitives. Coming back a bit to the comparison with pure functional programming, it solves the mentioned cognitive dissonance by saying that it takes into account the change of shape (pattern? like in Kauffman’s post) of the term during reduction (program execution), even if the evaluation of it is an invariant of the computation (the “no side effects” of functional programming). Moreover, it does this by not working with functions.
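[My comment added here: to make the earlier point about evaluation strategies concrete, here is a minimal Python sketch, unrelated to GLC itself, of the general phenomenon: the same "discard the argument" term does different amounts of work under eager and lazy application, which is exactly why a bare calculus needs an added strategy to become a model of computation.]

```python
# Minimal sketch: eager vs lazy application of the same argument-discarding term.
# Illustration only; nothing here is GLC.

calls = {"n": 0}

def expensive():
    calls["n"] += 1          # count how many times the argument is evaluated
    return 42

def apply_eager(f, make_arg):
    # eager / call-by-value: reduce the argument before applying the function
    return f(make_arg())

def apply_lazy(f, make_arg):
    # lazy / call-by-name: pass the unevaluated thunk; f decides whether to force it
    return f(make_arg)

discard = lambda _x: 0       # a term that never uses its argument

apply_eager(discard, expensive)
print(calls["n"])            # 1 -- the unused argument was still evaluated

apply_lazy(discard, expensive)
print(calls["n"])            # still 1 -- the thunk was never forced
```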
___________________________
+Marius Buliga “there are no values in this computation” Not to disagree, but is there a distinction between GLC graphs that is represented by a collection of possible values? For example, topological objects can differ in their chromatic, Betti, genus, etc. numbers. These are not values like those we see in the states, signals and bits of a TM, but they are a type of value nonetheless.
___________________________
+Stephen Paul King  yes, of course, you can stick values to them, but the fact is that you can do without; you don’t need them for the sake of computation. The comparison you make with the invariants of topological objects is good! +Louis Kauffman  made this analogy between the normal form of a lambda term and such kinds of “values”.
I look forward to his comments about this!
___________________________
+Refurio Anachro  thanks again for the Permutation City reference. Yes, it is clearly related to the budding Artificial Connectomes idea of GLC and chemlambda!
It is also related to interests in Unlimited Detail 🙂 , great!
[My comment added here: see the following quote from the Permutation city wiki page

The Autoverse is an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. It is deterministic, internally consistent and vaguely resembles real chemistry. Tiny environments, simulated in the Autoverse and filled with populations of a simple, designed lifeform, Autobacterium lamberti, are maintained by a community of enthusiasts obsessed with getting A. lamberti to evolve, something the Autoverse chemistry seems to make extremely difficult.

Related explorations go on in virtual realities (VR) which make extensive use of patchwork heuristics to crudely simulate immersive and convincing physical environments, albeit at a maximum speed of seventeen times slower than “real” time, limited by the optical crystal computing technology used at the time of the story. Larger VR environments, covering a greater internal volume in greater detail, are cost-prohibitive even though VR worlds are computed selectively for inhabitants, reducing redundancy and extraneous objects and places to the minimum details required to provide a convincing experience to those inhabitants; for example, a mirror not being looked at would be reduced to a reflection value, with details being “filled in” as necessary if its owner were to turn their model-of-a-head towards it.

]

But I keep my claim that that’s enough to understand for 100 years. Consciousness is far away. Recall that at first electricity appeared as a kind of life fluid (from Volta to Frankenstein’s monster), but actually it has been used with tremendous success for other things.
I believe the same about artificial chemistry, computation, on one side, and consciousness on the other. (But ready to change this opinion if faced with enough evidence.)

___________________________

Is there an uncanny valley between reality and computing?

A very deep one, maybe.

I was looking for news about UD and Euclideon. I found this pair of videos [source], the first by BetterReality and the second by The Farm 51.

Now, suppose that we scanned the whole world (or a part of it) and put the data in the cloud. Do we now have a mirror of reality in the cloud? No! Why not? The data, according to mainstream CS ideology, is the same: coordinates and tags (color, texture, etc.) in the cloud, the same as in reality.

Think about the IoT: we do have the objects, lots of them, in potentially unlimited detail. But there is still this uncanny valley between reality and computation.

We can’t use the data, because:

  • there is too much data (for our sequential machines? for our dice and slice ideology,  a manifestation of the cartesian disease ? )
  • there is not enough time (because we ask the impossible: to do, on one very limited PC, the work done by huge parts of reality? or because the data is useful only together with the methodology (based on an absolute, God’s eye view of reality, based on passive space as a receptacle), and the methodology is what stops us?)

I think that we can use the data (after reformatting it) and we can pass the uncanny valley between reality and computing. A way to do this supposes that:

  • we get rid of the absolute, passive space and time, get rid of global views (not because these don’t exist, but because this is a hypothesis we don’t need!)
  • we go beyond the Turing Machine and the Von Neumann architecture, and we seriously include P2P asynchronous, local, decentralized ways of thinking in the model of computation (like CSP, or the Actor Model, or why not Distributed GLC?)

This is fully compatible with  the response given by Neil Gershenfeld to the question

2014 : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT?

(Thank you Stephen P. King for the G+ post which made me aware of that!)

What’s going on in this UD algorithm?

In the post My idea about UD, completely unrelated to the content of it, b@b made a comment where he gives a link to a post by “BruceRDell”, reproduced here:

Okay, I see you understand basic principles. Now let’s move to the more complex things. I can’t share exact UD algorithm (you must understand me, I’ve spent years mastering it) but I can express in a code what I have already said in the interviews. The thing I want to say is: you must use sorting and eliminate raycasting.
Sorting algorithm I introduce here is very VERY simple. You can optimize it a lot using octrees and more elegant heuristics than that. But keep it simple.
You’ve mentioned this http://rghost.net/48541594 dataset, I had some problems downloading it, but I did it and can say it is good for testing, you may use it with my program.

Here is the code I am talking about. If you’re smart enough, you’ll get benefit from it.

Magio, in this comment on My idea about UD, says:

I’m probably way too late for this. But anyway I made this quick edit of the program:
http://pastebin.com/HiDW5tMB     [updated version: http://pastebin.com/kW1t5s1c]

___________________

Question: why does it work?

UPDATE:  It’s not clear if it really works; the quantitative improvement given by the random permutation trick is unknown. If you care, the BruceRDell thing is a prank.

___________________

I would like to understand the following:

  1. what is the role of the random permutation?
  2. why does the main loop work?
  3. is this already done? (send links to papers, if any)

So, let’s have a polite and informative discussion in the comments, if you agree. We all write in some variant of broken English, therefore I don’t see why anybody couldn’t participate.

___________________

Not interested in: is this the real Bruce Dell? Is this the real UD algorithm? I am only interested in understanding the guts of it.

I finish with the video from b@b’s comment:

My idea about UD

Here is what I believe UD is doing. This can surely be optimized in many ways, but the basic idea is: put the z-buffer in the 3D space.

I. Preparing the database.

1. Imagine a 3D cubic lattice which is as big as your 3D scenery bounding box.  Take coordinates aligned with the lattice. Put the camera at coordinates (0,0,0) and surround it with a cube C. The faces of the cube C will serve as the cubemap we want to construct. Each face is covered by pixels of a given resolution. We already have the following parameters to play with, given in the units of the chosen coordinate system: the step of the lattice, the dimension of the cube C, the resolution of a face of C.

2. Render, by any means you like, the lattice, as seen from the (0,0,0) pov, on the 6 “screens”, the faces of C.  We have 6 view frusta; the discussion will be the same for each of them. In order to render them you need to put small balls, or squares facing the relevant face of the cubemap, or whatever, at the lattice nodes, so we have another parameter here: the diameter of the ball or square.  As a result of the rendering you now know the following things:

  • for any lattice atom you know which pixel on which face it corresponds to. You may do better for lattice atoms which are inside the cube C, namely “ignore” them, say you attribute to them an IGNORE label; otherwise you attribute to each lattice atom the 2D coordinates of the pixel from the cubemap and information which says which face of the cubemap the pixel is on,
  • you have information about the scale, which you attach to the lattice atoms, like this: if two neighbouring lattice atoms project onto the same pixel then attach IGNORE to both. If the ball/square/whatever of a lattice atom projects onto more than one pixel then attach to it a number SCALE approximately proportional to the square root (thanks ine) of the number of pixels it projects onto (or the dimension of the bounding box of those pixels); a sketch follows below.

Of course, you don’t want to take a huge lattice with very small balls. That’s all in the choice of parameters.
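To fix ideas, here is a minimal Python sketch of what steps 1 and 2 could compute for one lattice atom: the cubemap face, the pixel and the SCALE. The names (IGNORE, cube_half, resolution) and the exact face numbering and SCALE formula are my own illustrative choices, one possible reading of the steps above, not Euclideon’s code.

```python
IGNORE = None   # label for lattice atoms we drop (inside C, or sub-pixel)

def project_to_cubemap(p, ball_diameter, cube_half, resolution):
    """Project a lattice atom at p = (x, y, z) (camera at the origin) onto the
    cubemap drawn on the cube C of half-size cube_half, with `resolution`
    pixels per face edge.  Returns (face, (u, v), scale), or IGNORE."""
    x, y, z = p
    ax, ay, az = abs(x), abs(y), abs(z)
    dominant = max(ax, ay, az)
    if dominant <= cube_half:
        return IGNORE                        # the atom is inside the cube C
    # the dominant axis and its sign pick one of the 6 faces
    if dominant == ax:
        face, a, b = (0 if x > 0 else 1), y / ax, z / ax
    elif dominant == ay:
        face, a, b = (2 if y > 0 else 3), x / ay, z / ay
    else:
        face, a, b = (4 if z > 0 else 5), x / az, y / az
    u = min(resolution - 1, int((a + 1.0) / 2.0 * resolution))
    v = min(resolution - 1, int((b + 1.0) / 2.0 * resolution))
    # approximate projected size of the ball, in pixels: the SCALE of step 2
    scale = ball_diameter / dominant * (resolution / 2.0)
    if scale < 1.0:
        return IGNORE        # neighbouring atoms would share this pixel
    return face, (u, v), scale
```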

3.  Take now your database of 3D points, i.e. the real one, which you eventually want to render, UD style. I shall ignore, for the sake of the argument, how the database is implemented: as an octree or otherwise, or even whether the database is made of 3D points or of some more complex objects, like polygons.  Put this database in the coordinates chosen first, such that, for example, if you are working with octrees, the cells of the lattice correspond with some level of the octree.  Attach to each node of the lattice the supplementary information: the points from the database which are within the ball surrounding the atom, or else the label VOID. Alternatively, think that any point from the database is inside some lattice cell and “project” it on each corner of that cell (i.e. attach to each lattice atom a list of points from the database which are in the neighbourhood of the lattice atom; the nil list corresponds to VOID).
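A short sketch of the attachment in step 3, with a plain dictionary keyed by lattice coordinates standing in for the octree; the empty list plays the role of the VOID label. Names are illustrative.

```python
from collections import defaultdict

def bucket_points(points, lattice_step):
    """Attach each database point to its nearest lattice atom (step 3).
    A missing key / empty list plays the role of the VOID label."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(round(c / lattice_step) for c in p)   # nearest lattice atom
        buckets[key].append(p)
    return buckets
```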

4.  Let’s see what we have, supposing for example that we use octrees. We add the 3D lattice to the 3D database of points (by using lists of points as explained at 3.) and to any lattice atom we also attach the information explained at point 2.

How to compress this efficiently? There is a new parameter here, namely the level of the octree, and also how the color information, for example, is stored in the octree. Of course, this is a matter of recursion, namely at points 1-3 we may take finer and finer resolutions and lattice steps, and so on, starting from a very coarse lattice and resolution, then trying to figure out a correct recursion procedure. That’s work to be done; it is not trivial but it is somewhat straightforward once you figure it out.

II. The pipeline and rendering.

The problem is that we want to be able to get, very fast, from the database constructed at (I), only the points which are needed for realtime rendering, when the camera is at coordinates (x,y,z).

This problem splits into two different ones:

  • at the start of the realtime UD rendering, we want to be able to cull something close to the minimum number of 3D points, when the camera is at (0,0,0). According to the information given by Euclideon, a good algorithm should be able to do this in about 1 s.
  • then, we need a procedure to take what we need from the database when we change the pov from (x,y,z) to (x+1,y,z) (or alike). This should be much faster, allowing for realtime rendering.

As a preparation, let’s remark that:

  1. if the camera is at (0,0,0), then we already know where each lattice point projects; it is written in the database. So we just need to start from a pixel, at a given resolution (recursion again), and to choose from the database only the lattice atoms which are CLOSE, in decreasing order of SCALE, and from those the real 3D points which are neighbours (of course we have to use the octree structure). We get for each pixel a number of points of the order of log(dimension of the world).
  2. if the camera is at (x,y,z) we also know where each point from the 3D database projects, because we read it from the data attached to the lattice atom which, translated by (x,y,z), is the neighbour of the point. We also get the SCALE parameter from this (see the sketch below).
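Here is a minimal sketch of remark 2, which is the “z-buffer in 3D space” trick: with the camera at (x,y,z), the projection of a database point is read off from the lattice atom nearest to the translated point. lattice_lut is assumed to be the table built in part I (for example with project_to_cubemap from the sketch above); all names are illustrative, not from any real UD codebase.

```python
def lookup_projection(p, camera, lattice_lut, lattice_step):
    """Remark 2: with the camera at `camera`, the database point p projects
    (approximately) where the lattice atom nearest to p - camera projects
    from the origin.  `lattice_lut` maps lattice coordinates to the
    (face, pixel, scale) data precomputed in part I."""
    rel = tuple(pi - ci for pi, ci in zip(p, camera))
    key = tuple(round(c / lattice_step) for c in rel)
    return lattice_lut.get(key)              # None plays the role of IGNORE
```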

1. We use remark 1 to solve the first problem, namely what comes through the pipeline from the huge 3D database to the pre-rendering buffer B, when we start with the camera at (0,0,0). The buffer contains about (number of pixels) × log(dimension of the world) 3D points, along with the pixel coordinates where they project and with SCALE.  This is fed directly to the rendering procedure, which you can choose freely, but it is almost trivial.

2. What happens when we move the camera? We update the pre-rendering buffer B, only by updating the pixel and scale information for the 3D points in the buffer, getting the relevant data from the huge database only by a translation (addition). Two things might happen here: there is a point which exits the buffer, or there is a hole in the image. A sketch follows below.
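And a sketch of this buffer update, reusing lookup_projection from the sketch above; screen_pixels and fetch_from_db are placeholders for parts the description leaves open (the set of all pixels and how exactly holes are refilled from the big database).

```python
def update_buffer(buffer_B, camera, lattice_lut, lattice_step,
                  screen_pixels, fetch_from_db):
    """Sketch of II.2: after a camera move, re-project the buffered points
    using the lattice lookup, drop the points which exit, and fill the holes
    from the big database.  fetch_from_db(pixel) is a placeholder, not part
    of the original description."""
    updated, covered = [], set()
    for point, _old_pixel, _old_scale in buffer_B:
        proj = lookup_projection(point, camera, lattice_lut, lattice_step)
        if proj is None:
            continue                         # the point exits the buffer
        face, pixel, scale = proj
        updated.append((point, (face, pixel), scale))
        covered.add((face, pixel))
    for pixel in screen_pixels - covered:    # holes in the image
        updated.extend(fetch_from_db(pixel))
    return updated
```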

_________________________

Is this making any sense? Please let me know, because I was asked repeatedly what I really believe.

The weekend round table on UD

The comments on Is 3D Cube a predecessor of UD? were very useful concerning the idea of making open source UD-like programs. There are already two projects, created by Bauke and Jim respectively; see the page UD projects here for the relevant links.

We cannot possibly know whether either of these two proposals is like Bruce Dell’s UD, but it does not matter much, because Bauke’s and Jim’s programs may well turn out to be as good as Dell’s, or why not even better. (However, I still want to know how exactly the database of Saj’s 3D Cube is built, because of the claims he made many years ago, which are almost identical with the ones made by Dell; see the link from the beginning of this post.)

Enough chit-chat; the reason for this post is that I suppose new discussions will follow, for example about Bauke’s still pending detailed explanations of his program. Also, any day now Jim might amaze us.

So, please start commenting here instead, if you feel the need to ask or answer questions about the two projects. Or, hey, you are welcome to announce yours!

_________

UPDATE:   I add here Bauke’s explanations from this comment :

My algorithm consists of three steps:
1. Preparing the quadtree. (This process is very similar to frustum culling, I’m basically culling all the leaf nodes of the quadtree that are not in the frustum.) This step can be omitted at the cost of higher CPU usage.
2. Computing the cubemap. (Or actually only the visible part, if you did step 1.) It uses the quadtree to do occlusion culling. The quadtree is basically an improved z-buffer, though since the octree is descended in front-to-back order, it is sufficient to only store ‘rendered’ or ‘not rendered’. It furthermore allows constant-time depth checks for larger regions.

3. Rendering the cubemap. This is just the common cubemap rendering method. I do nothing new here.

My description only explains step 2, as this is the core of my algorithm. Step 1 is an optimization and step 3 is so common that I expect the reader to already know how this step works.

Step 2 does not use the display frustum at all. It does do the perspective, but it does so by computing the nearest octree leaf (for the sake of simplicity I’m ignoring the LOD/mipmap/anti-aliasing here) in the intersection of a cone and the octree model. This is shown in the following 2D images:

[image: sbus — source]

[image: sn29 — source]

I don’t know what you mean by ‘scaling of each pixel’, but I believe that scaling of pixels only happens in step 3. In steps 1 and 2 I completely ignore that this happens.

I hope this answers your questions. If not, please tell which of the 3 steps you do not understand.
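[My comment added here: Bauke’s key point in step 2 is that, because the octree is traversed front to back, the quadtree over the screen needs to remember only ‘rendered’/‘not rendered’, and it can answer that question for whole regions at once. Below is a 1D analogue in Python (a binary tree over one scan line instead of a quadtree over a face), written only to illustrate that idea; it is not taken from Bauke’s program.]

```python
class OcclusionTree:
    """1D analogue of the quadtree in Bauke's step 2: a binary tree over the
    pixels of one scan line that only stores 'rendered' / 'not rendered',
    which is enough because the model is traversed front to back."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi            # pixel range [lo, hi)
        self.full = False                    # True once every pixel in range is rendered
        self.left = self.right = None

    def _split(self):
        if self.left is None:
            mid = (self.lo + self.hi) // 2
            self.left = OcclusionTree(self.lo, mid)
            self.right = OcclusionTree(mid, self.hi)
        return self.left, self.right

    def covered(self, lo, hi):
        """Is the whole range [lo, hi) already rendered?  (occlusion query)"""
        lo, hi = max(lo, self.lo), min(hi, self.hi)
        if lo >= hi or self.full:
            return True
        if self.hi - self.lo == 1:
            return False                     # a single unrendered pixel
        left, right = self._split()
        return left.covered(lo, hi) and right.covered(lo, hi)

    def mark(self, lo, hi):
        """Mark the range [lo, hi) as rendered."""
        lo, hi = max(lo, self.lo), min(hi, self.hi)
        if lo >= hi or self.full:
            return
        if lo == self.lo and hi == self.hi:
            self.full = True
            return
        left, right = self._split()
        left.mark(lo, hi)
        right.mark(lo, hi)
        self.full = left.full and right.full

occ = OcclusionTree(0, 1024)     # one scan line of a cubemap face
occ.mark(100, 400)               # pixels written by a nearer part of the model
print(occ.covered(150, 300))     # True  -> this region can be skipped entirely
print(occ.covered(350, 500))     # False -> the node projecting here must be descended
```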

 

 

_________

UPDATE: You may like techniques used here  [source]

Is 3D Cube a predecessor of UD?

Pablo Hugo Reda has found this old site dedicated to the 3D Cube Project (that’s an even older link sent by appc23, with more access to info), as well as a discussion on the net which looks strangely like the current exchanges around UD.  There are an exe and a database as well. I reproduce further some parts, taken from the two links provided by Pablo (boldfaces only by me):

3DCube is as far as I know, a completely different way to store and display complex 3D images. The building atoms of the 3D data structure are lines of pixels, not polygons, bitmaps, or voxels. As explained above, the main objective of the project was to create a system that would allow very complex, elaborate models, with hundreds of structures, which could take up to an entire CD-ROM but require only a small portion of the model to reside in RAM. A large portion of the challenge was coming up with data organization that would allow keeping the model on CD ROM but be capable of displaying the perspective of the entire model instantly at any time. This is possible since the high detail of the model is only needed for elements close to the view point origin. Almost no processing is needed to load the model or its parts from the disk (please notice how quickly the demo initializes). Therefore, the disk activity processing load related to tracking the view point movement is very small – much lower than required for playing a digital video for example.

The algorithm required to display the image at any angle from the view point is quite simple. No floating point calculations, trigonometric functions, or even division instructions are needed, and use of multiplication instructions is very limited. A simple custom hardware utilizing my method could render the image with the same ease as a video card hardware displays the bitmap stored in its video memory. […]

My rendering algorithm is essentially a DSP-type algorithm, working with 64-bit data, which generates the image scan-line by scan-line with operations being mostly adding of 64-bit data. If the 80×86 just had a few more registers, the entire rendering algorithm would use no temporary RAM data (just the CPU registers) and would render the entire image by reading the model and outputting the resulting scan-lines. The biggest current problem in implementing the algorithm now is the necessity to swap the temporary data in and out of the registers to memory.

3D Cube project was originated with the intention of making it a new generation 3D game engine, allowing unprecedented detail and complexity of “virtual worlds” created. After seeing the performance and model sophistication possible with this method, I realized that the possible applications of the method are far beyond just video games. […]

In addition to the above, 3D Cube could allow things like taking pictures of some scene from different directions, building a model out of it. 3D Cube storage method

It’s not a mistake, the text ends like this.

From the discussion, a comment by the 3D Cube creator, S.A. Janczewski:

Well, let us try to clarify the term voxel.
– If a “voxel” can have different color/attribute from each of 6  directions, is it still a voxel?
– If it is not cubical — can have sloped surfaces, is it still a  voxel?
– If the six colors of a “voxel” are not at all stored together as a unit, is it still a voxel?
– If the data organization of a 3D engine does not have any kind of data structure that would store data through clear (x,y,z)-coordinate granularity, is it still a voxel engine?

If you answer yes to all the above questions then my engine is a voxel engine, but so is any polygon-based engine.

I hope that the above will make it clear that the reasons I did not use  the term voxel anywhere were following:
– My method has absolutely nothing in common with commonly known voxel engines or storage techniques
– Since the term voxel is not clearly defined, using it would not contribute anything (other than confusion) to the descriptions
– Some “voxel engine” techniques are patented — using the term could result in getting myself (without a reason) accused of “patent infringement.”

Please forgive me if you somehow interpreted my previous message as criticism of your input. I did however get accused quite a few times of ignorance (not by you) for not using the term voxel in my description and felt it was appropriate to respond to it.

What do you think?

___________

UPDATE: It is maybe relevant for the discussion to state that the goal is to produce an open source variant of a UD-like algorithm. As you can see, supposing of course that 3D Cube was indeed a precursor of UD, the problems are the same, i.e. a probably very good idea, with a lot of potential for cash and also a game changer for an industry. Communicating it means losing an advantage, but not communicating it leads to disbelief. There is a third way: open source. Right, no direct money from it, but everybody benefits and new possibilities open. So, in case you are working on that, don’t be shy or secretive; that’s not a good idea. Share.