# Digital materialization, Euclideon and fractal image compression (II)

For the background see the previous post Digital materialization, Euclideon and fractal image compression.

As I wrote before, this may have been done already; if so, please tell me.

The idea is that once a 3D image is encoded by a kind of fractal image compression algorithm, the problem of attributing a “3D atom” of the 3D image to a “2D pixel” on the screen becomes a search problem in a tree, as the Unlimited Detail – Euclideon algorithm may be. The kind of encoding I am writing about may have been done by, or may be useful for, the groups working on “Digital materialization”.

UPDATE: Here is a YouTube video made by the AEROmetrex company, “a leader in photogrammetric solutions is launching its latest technological service aero3Dpro today: a complete 3D modelling service for Australian and overseas users of geospatial and 3D data”

In the comments they write that they are using the Geoverse/Unlimited Detail/Euclideon technology, and they mention that

For a 15Gb 3D model the Euclideon Unlimited Format file is about 2 Gb

Below I detail my speculation that Euclideon may use a variant of fractal image compression in their format.

I do not want to trespass on anyone's territory or to pretend that I am doing anything new; that is why I repeat my request to tell me if anything like this was done before (and who did it).

1. Let me first recall a famous theorem by Hutchinson, concerning iterated function systems.

Let $(X,d)$ be a complete metric space and let $H(X)$ be the collection of non-empty compact subsets of $X$. On the space $H(X)$ we may put the Hausdorff distance between subsets of $X$, defined in the following way. For any $\varepsilon > 0$ and for any set $A \subset X$, the $\varepsilon$-neighbourhood of $A$ is the set

$A_{\varepsilon} = \left\{ y \in X \mbox{ : there is } x \in A \mbox{ with } d(x,y) \leq \varepsilon \right\} = \cup_{x \in A} B(x,\varepsilon)$

where $B(x, \varepsilon)$ is the closed ball centered at $x$, of radius $\varepsilon$.

The Hausdorff distance between sets $A, B \subset X$ is then

$d_{H}(A,B) = \inf \left\{ \varepsilon > 0 \mbox{ : } A \subset B_{\varepsilon} , B \subset A_{\varepsilon} \right\}$

The important fact happening here is that $(H(X), d_{H})$ is a complete metric space. Moreover, if $X$ is compact, then $H(X)$ is compact too.
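For finite point sets the definition above reduces to a max of min distances, which is easy to compute directly. A minimal Python sketch (the function name `hausdorff` is mine):

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite, non-empty point sets,
    following the definition above: the smallest eps such that
    A lies in the eps-neighbourhood of B and B in that of A."""
    d_AB = max(min(math.dist(a, b) for b in B) for a in A)
    d_BA = max(min(math.dist(a, b) for a in A) for b in B)
    return max(d_AB, d_BA)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (1.0, 1.0)]
print(hausdorff(A, B))  # 1.0
```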

An iterated function system (IFS) on the compact metric space $(X,d)$ is a finite collection of transformations of $X$, say $a_{1}, ... , a_{N}$, such that every $a_{i}$ is a contraction: there is $r_{i} \in (0,1)$ such that

$d(a_{i}(x), a_{i}(y)) \leq r_{i} d(x,y)$ for any $x,y \in X$.

The Hutchinson operator associated to the IFS is the transformation

$A \in H(X)$ goes to $T(A) = \cup_{i = 1, ..., N} a_{i}(A)$.

Hutchinson's theorem says that $T$ is a contraction on $(H(X), d_{H})$, therefore it has a unique fixed point, i.e. a compact set $A \subset X$ such that $T(A) = A$, which is the same as

$A = \cup_{i = 1, ..., N} a_{i}(A)$.
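Since $T$ is a contraction, iterating it from any non-empty compact starting set converges to the fixed point in the Hausdorff distance. A toy Python sketch with the classical Sierpinski IFS (three planar contractions of ratio $1/2$; my own illustration, applied to finite approximations of sets):

```python
# three contractions of ratio 1/2 whose Hutchinson fixed point
# is the Sierpinski triangle with vertices (0,0), (1,0), (1/2,1)
maps = [
    lambda p: (0.5 * p[0],        0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.5,  0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.25, 0.5 * p[1] + 0.5),
]

def hutchinson(maps, A):
    """T(A) = union of the images a_i(A), on a finite set A."""
    return {m(p) for m in maps for p in A}

A = {(0.0, 0.0)}   # start from any non-empty set
for _ in range(8):
    A = hutchinson(maps, A)
# each step halves the Hausdorff distance to the attractor
print(len(A))  # 6561 points approximating the triangle
```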

2. Michael Barnsley had the idea of using this result to do fractal image compression. In a few words, fractal image compression is any algorithm which solves the inverse problem: given $A \in H(X)$, find an IFS which has $A$ as its fixed point. In this way the set $A$ (which represents the image, for example as the graph of the function which associates to any pixel an RGB vector of colors) is “encoded” by the functions of the IFS. More specifically, we may take $X$ to be (a compact subset of) $\mathbb{R}^{n}$ and look for an IFS of affine contractions. Then the set $A$ is encoded in (or compressed to) the set of coefficients of the affine transformations of the IFS.

3. Without going into the last detail, let us see this construction from the point of view of “Digital materialization”. The idea behind it is to characterise a subset $A$ of $X$ by a function $\phi: X \rightarrow \mathbb{R}$ such that $x \in A$ if and only if $\phi(x) \leq 0$. If $A$ is described by $\phi$ and $B$ is described by $\psi$, then

– $A \cup B$ is described by $\min \left\{ \phi, \psi \right\}$,

– $A \cap B$ is described by $\max \left\{ \phi , \psi \right\}$, and

– for any bijective transformation $a$ of $X$, the set $a(A)$ is described by $\phi \circ a^{-1}$.

Thus, starting from a library of functions $\phi, \psi, ...$ and from a library of bijective transformations of the space (like translations, rotations, etc., where this makes sense), we may describe a huge collection of sets by a tree which has nodes decorated with $\min$, $\max$ and edges decorated with compositions $\phi \circ a$, with $\phi$ from the library of “textures” [my choice of name, maybe not good; correct me please] $\phi, \psi, ...$ and $a$ from the library of “legal” bijective transformations.
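The three rules above translate directly into code. A minimal Python sketch in the plane, with balls as the only library “texture” (all names here are my own, for illustration):

```python
import math

# a set is described by phi : X -> R, with x in the set iff phi(x) <= 0
def ball(center, r):
    cx, cy = center
    return lambda p: math.hypot(p[0] - cx, p[1] - cy) - r

def union(phi, psi):          # A u B  described by min(phi, psi)
    return lambda p: min(phi(p), psi(p))

def intersection(phi, psi):   # A n B  described by max(phi, psi)
    return lambda p: max(phi(p), psi(p))

def moved(phi, a_inv):        # a(A)   described by phi o a^{-1}
    return lambda p: phi(a_inv(p))

# example: a unit ball translated by (2, 0), then united with
# the original; note that the inverse of the translation shifts back
shifted = moved(ball((0, 0), 1), lambda p: (p[0] - 2, p[1]))
blob = union(ball((0, 0), 1), shifted)
print(blob((2.0, 0.0)) <= 0, blob((3.5, 0.0)) <= 0)  # True False
```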

In fact, we may refine this description a bit by giving: [note added later: is this some version of “Image coding based on a fractal theory of iterated contractive image transformations”, A. E. Jacquin, IEEE Trans. on Image Processing, 1992?]

– a master shape $\phi$, for example taken from the equation of a ball of radius $1$, if we suppose that $X$ is a Euclidean space. Let $M$ be the set described by $\phi$.

– a collection of “locators” [I’m totally inventing names here], that is, a finite collection of (say affine) contractions of $X$ which send $M$ into a subset of $M$.

– a potentially infinite collection of “textures”, one for every IFS constructed from a finite set of locators and the master shape $\phi$. Therefore, to any finite collection of locators $a_{1}, ... , a_{p}$ we associate the “texture”

$\psi = \min \left\{ \phi \circ a_{1}^{-1} , ... , \phi \circ a_{p}^{-1} \right\}$.

– a finite collection of “legitimate translations”, which are just (say affine) transformations with the property that they move $M$ to sets which do not intersect $M$, and such that they generate a group which is roughly equivalent to the space $X$ (simply put, if $X$ is $\mathbb{R}^{n}$ then the group generated by the legitimate translations contains a $\mathbb{Z}^{n}$).

Given an IFS constructed from compositions of “locators” and “legitimate translations”, say $a_{1}, ... , a_{N}$, there is a Hutchinson-like operator which associates to any “shape” $\phi$ (a function obtained by applying a finite number of $\min$ and $\max$ operations to a finite collection of functions, each obtained from the master shape composed with locators and legitimate translations) the new shape

$T(\phi) = \min \left\{ \phi \circ a_{1}^{-1} , ... , \phi \circ a_{N}^{-1} \right\}$.
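A small Python sketch of such an operator, with the unit disk as master shape and two locators halving the disk (the names and the particular locators are mine; this is only meant to make the recursion visible):

```python
import math

# master shape: the closed unit disk, phi(p) <= 0 iff p is inside
def master(p):
    return math.hypot(p[0], p[1]) - 1.0

# two "locators": a1(p) = p/2 + (1/2, 0) and a2(p) = p/2 - (1/2, 0),
# both sending the disk into itself; stored via their inverses,
# since the operator needs phi o a^{-1}
inverses = [
    lambda p: (2 * p[0] - 1, 2 * p[1]),   # a1^{-1}
    lambda p: (2 * p[0] + 1, 2 * p[1]),   # a2^{-1}
]

def T(phi):
    """Hutchinson-like operator on shape functions:
    T(phi) = min_i (phi o a_i^{-1}) describes the union of the a_i(A)."""
    return lambda p: min(phi(a_inv(p)) for a_inv in inverses)

shape = master
for _ in range(10):
    shape = T(shape)
# the iterates approximate the attractor of {a1, a2}, which is
# the segment [-1, 1] x {0} inside the disk
print(shape((0.5, 0.0)) <= 0, shape((0.0, 0.9)) <= 0)  # True False
```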

Notice that, in terms of trees, the tree which describes $T(\phi)$ is easily (recursively) obtained from the tree which describes $\phi$.

Now, a compression algorithm associated to this data is any algorithm which solves the following problem: given a (compact) set $A \subset X$ and an $\varepsilon > 0$, find an IFS constructed from “locators” and “legitimate translations” whose fixed point lies inside $A_{\varepsilon}$.

By using locators and legitimate translations, one may define “textures” and “shapes” at a given scale.

The big problem is to find efficient algorithms, but once such an algorithm is used to encode a shape (which might be time-intensive), the decoding is easy!
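To make the search problem concrete, here is a brute-force toy version of the encoder in Python: given a finite approximation of the target set, it picks from a small library the triple of contractions minimizing the Hausdorff distance between $T(A)$ and $A$; by Barnsley's collage theorem this controls the distance from $A$ to the fixed point of the chosen IFS. Everything here (the library, the exhaustive search) is my own sketch, not Euclideon's algorithm:

```python
import math
from itertools import combinations

def hausdorff(A, B):
    return max(max(min(math.dist(a, b) for b in B) for a in A),
               max(min(math.dist(a, b) for a in A) for b in B))

def attractor(maps, n=4):
    A = {(0.0, 0.0)}           # iterate the Hutchinson operator
    for _ in range(n):
        A = {m(p) for m in maps for p in A}
    return A

# the set to encode: a coarse Sierpinski triangle
true_maps = [lambda p: (p[0] / 2,        p[1] / 2),
             lambda p: (p[0] / 2 + 0.5,  p[1] / 2),
             lambda p: (p[0] / 2 + 0.25, p[1] / 2 + 0.5)]
target = attractor(true_maps)

# library of candidate contractions: ratio 1/2, various offsets
offsets = [(0, 0), (0.5, 0), (0.25, 0.5), (0, 0.5), (0.5, 0.5)]
library = [(lambda p, o=o: (p[0] / 2 + o[0], p[1] / 2 + o[1]))
           for o in offsets]

# collage search: minimize d_H(T(A), A) over triples from the library
best = min(combinations(library, 3),
           key=lambda ms: hausdorff({m(p) for m in ms for p in target},
                                    target))
print(hausdorff(attractor(list(best)), target))  # 0.0: exact recovery
```

Decoding is just iterating the chosen maps, which is why it is cheap compared to the search.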