For the background see the previous post Digital materialization, Euclideon and fractal image compression.
As I wrote before, this may have been done already; if so, please tell me.
The idea is that once a 3D image is encoded by a kind of fractal image compression algorithm, the problem of attributing a "3D atom" of the 3D image to a "2D pixel" of the screen becomes a search problem in a tree, which is perhaps what the Unlimited Detail (Euclideon) algorithm does. The kind of encoding I am writing about may have been done by, or may be useful for, the groups working in "Digital materialization".
UPDATE: Here is a YouTube video made by the AEROmetrex company, "a leader in photogrammetric solutions is launching its latest technological service aero3Dpro today: a complete 3D modelling service for Australian and overseas users of geospatial and 3D data".
In the comments, they write that they are using the Geoverse/Unlimited Detail/Euclideon technology, and they mention that

For a 15 GB 3D model the Euclideon Unlimited Format file is about 2 GB
Below I detail my speculation that Euclideon may use a variant of fractal image compression in their format.
I do not want to overstep any boundaries or pretend that I am doing anything new; that is why I repeat my request to tell me if anything like this has been done (and who did it).
1. Let me first recall a famous theorem by Hutchinson, concerning iterated function systems.
Let $(X, d)$ be a complete metric space and let $\mathcal{K}(X)$ be the collection of compact sets in $X$. On the space $\mathcal{K}(X)$ we may put the Hausdorff distance between subsets of $X$, defined in the following way. For any $\varepsilon > 0$ and for any set $A \subset X$, the $\varepsilon$-neighbourhood of $A$ is the set

$\displaystyle A_{\varepsilon} = \bigcup_{x \in A} \bar{B}(x, \varepsilon)$

where $\bar{B}(x, \varepsilon)$ is the closed ball centered at $x$, of radius $\varepsilon$.

The Hausdorff distance between the sets $A$ and $B$ is then

$\displaystyle d_H(A, B) = \inf \{ \varepsilon > 0 \; : \; A \subset B_{\varepsilon} \mbox{ and } B \subset A_{\varepsilon} \}$

The important fact here is that $(\mathcal{K}(X), d_H)$ is a complete metric space. Moreover, if $X$ is compact, then $\mathcal{K}(X)$ is compact too.
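For finite point sets (which are compact) the infimum in the definition is attained, so the Hausdorff distance can be computed directly as a maximum of point-to-set distances. A minimal sketch in Python (the function names are mine):

```python
# Hausdorff distance between two finite point sets in the plane.
# For finite sets the infimum over epsilon reduces to a maximum
# of point-to-set distances, which we compute directly.
import math

def hausdorff(A, B):
    """d_H(A, B) for finite collections of 2D points."""
    def point_to_set(p, S):
        return min(math.dist(p, q) for q in S)
    # The smallest eps with A inside B_eps and B inside A_eps.
    return max(max(point_to_set(a, B) for a in A),
               max(point_to_set(b, A) for b in B))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (0.0, 2.0)]
print(hausdorff(A, B))  # 2.0: the point (0, 2) is at distance 2 from A
```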
An iterated function system (IFS) on the compact $X$ is a finite collection of transformations of $X$, say $f_1, \dots, f_N$, such that every $f_k$ is a contraction: there is a constant $0 < \lambda < 1$ such that

$\displaystyle d(f_k(x), f_k(y)) \leq \lambda \, d(x, y)$

for any $x, y \in X$.
The Hutchinson operator associated to the IFS is the transformation $T : \mathcal{K}(X) \to \mathcal{K}(X)$,

$\displaystyle T(A) = \bigcup_{k=1}^{N} f_k(A)$

Hutchinson's theorem says that $T$ is a contraction with respect to the Hausdorff distance, therefore it has a unique fixed point, i.e. a compact set $K$ such that $T(K) = K$, which is the same as

$\displaystyle K = \bigcup_{k=1}^{N} f_k(K)$
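To see the theorem at work, here is a toy Python sketch (my own code) which iterates the Hutchinson operator of three planar contractions of ratio $1/2$; the iterates converge, in the Hausdorff distance, to the Sierpinski triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$:

```python
# Iterate T(A) = f1(A) ∪ f2(A) ∪ f3(A) for the three contractions
# p -> (p + v)/2, whose unique fixed point is the Sierpinski triangle.

def hutchinson(A, maps):
    """One application of the Hutchinson operator to a finite set A."""
    return {f(p) for f in maps for p in A}

half = lambda v: lambda p: ((p[0] + v[0]) / 2, (p[1] + v[1]) / 2)
maps = [half((0, 0)), half((1, 0)), half((0, 1))]  # ratio-1/2 contractions

A = {(0.3, 0.7)}            # any nonempty compact starting set works
for _ in range(8):
    A = hutchinson(A, maps)
print(len(A))  # 3**8 = 6561 points approximating the attractor
```

Note that the starting set is irrelevant: the contraction property forgets it, which is exactly the content of the fixed-point theorem.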
2. Michael Barnsley had the idea of using this result for fractal image compression. In a few words, fractal image compression is any algorithm which solves the inverse problem: given a compact set $K$, find an IFS which has $K$ as its fixed point. In this way the set $K$ (which represents the image, for example as the graph of the function which associates to any pixel an RGB vector of colors) is "encoded" by the functions of the IFS. More specifically, we may take $X$ to be (a compact subset of) $\mathbb{R}^{2}$ and look for an IFS of affine contractions. The set $K$ is then encoded in (or compressed to) the set of coefficients of the affine transformations of the IFS.
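To make the phrase "compressed to the set of coefficients" concrete, here is a toy sketch (names and storage format are my own choices): the whole Sierpinski-like set below is stored as $3 \times 6 = 18$ affine coefficients, and decoding is just replaying the IFS.

```python
# The "compressed" form of the set is the list of affine coefficients.
# A row (a, b, c, d, e, f) means (x, y) -> (a*x + b*y + e, c*x + d*y + f).
ifs = [(0.5, 0.0, 0.0, 0.5, 0.0, 0.0),   # 18 floats encode the whole set
       (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
       (0.5, 0.0, 0.0, 0.5, 0.0, 0.5)]

def decode(ifs, iters=7):
    """Replay the IFS from a seed point to approximate the attractor."""
    pts = {(0.0, 0.0)}
    for _ in range(iters):
        pts = {(a * x + b * y + e, c * x + d * y + f)
               for (a, b, c, d, e, f) in ifs for (x, y) in pts}
    return pts

print(len(decode(ifs)))  # 3**7 = 2187 points decoded from 18 numbers
```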
3. Without going into the last detail, let us see this construction from the point of view of "Digital materialization". The idea behind it is to characterise a subset $A$ of $X$ by a function $f_A : X \to \mathbb{R}$ such that $x \in A$ if and only if $f_A(x) \geq 0$. If $A$ is described by $f_A$ and $B$ is described by $f_B$, then

– $A \cup B$ is described by $\max(f_A, f_B)$, and

– $A \cap B$ is described by $\min(f_A, f_B)$, and

– for any bijective transformation $g$ of $X$, the set $g(A)$ is described by $f_A \circ g^{-1}$.
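With the common function-representation convention (a point is in the set when the describing function is $\geq 0$), these rules are one-liners. A minimal Python sketch (my code, my names):

```python
# Function representation: a set is {x : f(x) >= 0}, so union is a
# pointwise max, intersection a pointwise min, and a bijection g moves
# the set via f ∘ g^{-1}.  (Convention: f >= 0 inside.)
import math

ball = lambda c, r: lambda p: r - math.dist(p, c)   # >= 0 inside the ball

f_A = ball((0.0, 0.0), 1.0)
f_B = ball((1.0, 0.0), 1.0)
union = lambda p: max(f_A(p), f_B(p))               # describes A ∪ B
inter = lambda p: min(f_A(p), f_B(p))               # describes A ∩ B

p = (0.9, 0.0)                        # inside both balls
print(union(p) >= 0, inter(p) >= 0)   # True True
q = (-0.5, 0.0)                       # inside A only
print(union(q) >= 0, inter(q) >= 0)   # True False
```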
Thus, starting from a library of functions $f$ and from a library of bijective transformations of the space (like translations, rotations, etc., where this makes sense), we may describe a huge collection of sets by a tree which has nodes decorated with $\min$ and $\max$, and edges decorated with compositions $f \circ g^{-1}$, with $f$ from the library of "textures" [my choice of name, maybe not good, correct me please] and $g$ from the library of "legal" bijective transformations.
In fact, we may refine this description a bit by giving: [note added later: is this some version of "Image coding based on a fractal theory of iterated contractive image transformations", A. E. Jacquin, IEEE Trans. on Image Processing, 1992?]
– a master shape $u : X \to \mathbb{R}$, for example taken from the equation of a ball of radius $1$ (say $u(x) = 1 - \|x\|^{2}$), if we suppose that $X$ is a euclidean space. Let $M$ be the set described by $u$.

– a collection of "locators" [I'm totally inventing names here], that is a finite collection of (affine, say) contractions $\ell_1, \dots, \ell_p$ of $X$ such that they send $M$ to a subset of $M$,

– a potentially infinite collection of "textures", one for every IFS constructed from a finite set of locators and the master shape $M$. Therefore, to any finite collection of locators $\ell_{i_1}, \dots, \ell_{i_n}$ we associate as "texture" the fixed point $K$ of the associated Hutchinson operator, i.e. the compact set satisfying $K = \ell_{i_1}(K) \cup \dots \cup \ell_{i_n}(K)$,

– a finite collection of "legitimate translations", which are just affine (say) transformations with the property that they move $M$ to sets which do not intersect $M$, and such that they generate a group which is roughly equivalent with the space (simply put, if $X$ is $\mathbb{R}^{3}$ then the group generated by the legitimate translations contains a copy of $\mathbb{Z}^{3}$).
Given an IFS constructed from compositions of "locators" and "legitimate translations", say $h_1, \dots, h_n$, there is a Hutchinson-like operator which associates to any "shape" $u$ (a function obtained from the application of a finite number of $\min$ and $\max$ operations to a finite collection of functions, each obtained from the master shape composed with locators and legitimate translations) the new shape

$\displaystyle Tu = \max \left\{ u \circ h_{1}^{-1}, \dots, u \circ h_{n}^{-1} \right\}$

Notice that, in terms of the trees which describe the shapes, the tree which describes $Tu$ is easily (recursively) obtained from the tree which describes $u$.
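This recursive structure is easy to make concrete. In the sketch below (the names and the tree encoding are my own choices), a shape is an expression tree with the master shape at the leaves, $\min$/$\max$ at the internal nodes, and transformations on the edges; one step of the Hutchinson-like operator is a purely structural rewrite which wraps the old tree without touching its inside:

```python
# A "shape" as an expression tree: ("master",) is the unit-ball master
# shape, ("compose", g_inv, child) evaluates the child at g^{-1}(p),
# and ("node", op, children) applies min or max to the children.
import math

MASTER = ("master",)                     # unit ball: 1 - |p| >= 0 inside

def evaluate(tree, p):
    kind = tree[0]
    if kind == "master":
        return 1.0 - math.hypot(p[0], p[1])
    if kind == "compose":                # shape moved by g, i.e. u ∘ g^{-1}
        _, g_inv, child = tree
        return evaluate(child, g_inv(p))
    _, op, children = tree
    return op(evaluate(c, p) for c in children)

def T(tree, h_invs):
    """One Hutchinson-like step: the max (union) of the moved copies."""
    return ("node", max, [("compose", h, tree) for h in h_invs])

# Inverses of the two contractions h1(p) = p/2 and h2(p) = p/2 + (1, 0).
h_invs = [lambda p: (2 * p[0], 2 * p[1]),
          lambda p: (2 * p[0] - 2, 2 * p[1])]

shape = MASTER
for _ in range(3):
    shape = T(shape, h_invs)             # the tree only grows at the root
print(evaluate(shape, (0.0, 0.0)) >= 0)  # True: the origin stays inside
```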
Now, a compression algorithm associated to this data is any algorithm which solves the following problem: given a (compact) set $A$ and an $\varepsilon > 0$, find an IFS constructed from "locators" and "legitimate translations" whose fixed point lies inside the $\varepsilon$-neighbourhood $A_{\varepsilon}$ of $A$.
By using locators and legitimate translations, one may define "textures" and "shapes" at any given scale.

The big problem is to find efficient encoding algorithms; but once such an algorithm has been used to encode a shape (which might be time intensive), the decoding is easy!
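As a very naive illustration of the search problem (nothing like a practical encoder, and certainly not Euclideon's actual algorithm), one can brute-force a tiny dictionary of IFSs and keep the one whose attractor is Hausdorff-closest to the target:

```python
# Toy "encoder": search every 3-map IFS drawn from four corner
# contractions of the unit square and keep the Hausdorff-closest one.
import math
from itertools import combinations

def hausdorff(A, B):
    d = lambda p, S: min(math.dist(p, q) for q in S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

def attractor(maps, iters=5):
    A = {(0.5, 0.5)}
    for _ in range(iters):
        A = {f(p) for f in maps for p in A}
    return A

half = lambda v: lambda p: ((p[0] + v[0]) / 2, (p[1] + v[1]) / 2)
corners = [(0, 0), (1, 0), (0, 1), (1, 1)]

# Target: the attractor of three of the four corner maps.
target = attractor([half(c) for c in [(0, 0), (1, 0), (0, 1)]])

# The "encoder": try all 3-element subsets of the dictionary.
best = min(combinations(corners, 3),
           key=lambda cs: hausdorff(attractor([half(c) for c in cs]), target))
print(best)  # ((0, 0), (1, 0), (0, 1)): the right maps are recovered
```

Real encoders replace this exhaustive search with the collage theorem and clever indexing, which is exactly where the hard work lies.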