
Digital materialization and synthetic life

Digital materialization (DM) is not the name of a technology from Star Trek. According to the Wikipedia page:

  DM can loosely be defined as two-way direct communication or conversion between matter and information that enables people to exactly describe, monitor, manipulate and create any arbitrary real object.

I linked to the Digital Materialization Group; here is a quote from their page.

DM systems possess the following attributes:

  • realistic – correct spatial mapping of matter to information
  • exact – exact language and/or methods for input from and output to matter
  • infinite – ability to operate at any scale and define infinite detail
  • symbolic – accessible to individuals for design, creation and modification

Such an approach can be applied not only to tangible objects but can include the conversion of things such as light and sound to/from information and matter. Systems to digitally materialize light and sound already largely exist now (e.g. photo editing, audio mixing, etc.) and have been quite effective – but the representation, control and creation of tangible matter is poorly supported by computational and digital systems.

My initial interest in DM came from possible interactions with the Unlimited Detail idea; see this post written some time ago.

Well, there is much more to this idea if we think about life forms.
In the discussion section of this article by Craig Venter et al. we read:

This work provides a proof of principle for producing cells based on computer-designed genome sequences. DNA sequencing of a cellular genome allows storage of the genetic instructions for life as a digital file.

In his book Life at the Speed of Light, Craig Venter writes (p. 6):

All living cells run on DNA software, which directs hundreds to thousands of protein robots. We have been digitizing life for decades, since we first figured out how to read the software of life by sequencing DNA. Now we can go in the other direction by starting with computerized digital code, designing a new form of life, chemically synthesizing its DNA, and then booting it up to produce the actual organism.

That is clearly a form of Digital Materialization.

Now, we have these two realms, virtual and real, and the two way bridge between them called DM.

It would be really nice if we had the same chemistry-ruled world on both sides:

  • an artificial version for the virtual realm,
  • in direct correspondence with those parts of the real version (from the real world) which are relevant for the DM translation process.

This looks like a horribly complex goal to reach, because of the myriad concrete, real stumbling blocks, but hey, this is what math is for, right? To simplify, to abstract, to define, to understand.

[posted also here]

__________________________________________

Call for analysis of the new UD video

Thanks to preda and to appc23 for showing us the new UD video:

The previous post on this subject, New discussion on UD (updated), has lots of interesting comments. It has become difficult to make sense of the various proposals concerning the UD algorithm, therefore I made this new post, which shall serve first as a place for new discussions and will then be updated.

Maybe it is time to make some sense of the material existing on (or linked from) this blog. That is why I invite everybody who is willing to do this to make their own proposal, preferably with programs, proofs, or evidence (links). Especially, in a random order, this invitation is addressed to:

  • Dave H
  • preda
  • 17genr
  • JX
  • appc23
  • bcmpinc
  • Shea
  • Tony

but anybody who has something coherent to communicate is welcome to make a proposal which will blow our minds.

Make your respective proposals as detailed as you wish, take your time to make them convincing, and then, if you agree, of course, we shall make featured posts here for each of them, so that it becomes easier to follow the different threads.

Let me finish, for the moment, by stating which points are, in my opinion, the most important so far (I stress: in my opinion; feel free to contradict me):

  • UD works like a sorting algorithm,
  • cubemaps are used, but their main utility is to eliminate rotations wrt the POV from the problem,
  • UD is not a rendering algorithm (or, at least, the most interesting part of UD is not one); it is an algorithm for quickly searching the data needed to put a colour on each pixel (a toy sketch follows after this list),
  • UD needs to turn a cloud of points into a database only once, an operation which takes considerably more time than the searching part,
  • it does not need antialiasing,
  • it does not do raycasting for each pixel,
  • it is a mathematical breakthrough, though not a technical CS or math one (i.e. it does not need cutting-edge CS or math research in order to make sense of it, but it is a beautiful idea).
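As promised, here is a toy sketch (in Python, entirely my own illustration and certainly not Euclideon's algorithm) of the "sorting, one point per pixel" reading of the list above: the point cloud is sorted front to back once per frame, projected, and the first point to land on an empty pixel claims it, with no ray cast per pixel.

```python
import numpy as np

def render(points, colors, width=64, height=64, focal=1.0):
    """points: (n, 3) array in camera coordinates with z > 0; colors: (n, 3)."""
    order = np.argsort(points[:, 2])              # sort front to back, once per frame
    pts, cols = points[order], colors[order]
    u = (focal * pts[:, 0] / pts[:, 2] * width / 2 + width / 2).astype(int)
    v = (focal * pts[:, 1] / pts[:, 2] * height / 2 + height / 2).astype(int)
    image = np.zeros((height, width, 3))
    filled = np.zeros((height, width), dtype=bool)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for x, y, c in zip(u[ok], v[ok], cols[ok]):
        if not filled[y, x]:                      # first (nearest) point claims the pixel
            image[y, x], filled[y, x] = c, True
    return image

# a random cloud of 100000 points in front of the camera
pts = np.random.rand(100000, 3) * [2.0, 2.0, 5.0] + [-1.0, -1.0, 1.0]
img = render(pts, np.random.rand(100000, 3))
print(img.shape)                                  # (64, 64, 3)
```

The preprocessing step (building the searchable database once) is not shown here; the sketch only illustrates the per-frame part where exactly one point ends up on each covered pixel.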

Almost finished, but I have to explain a bit my attitude about UD. I am torn between my curiosity about this, explained in other posts (for example by its partial overlap with the goals of Digital Materialization), and the fact that I don't want this blog to be absorbed into this subject only. I have my ideas concerning a possible UD algorithm, especially from a math viewpoint, but unless I produce a program, I can't know if I'm right or wrong (and I am not willing to try yet, because I am sympathetic with the underdog Dell, and also because it would take me at least several months to get the rust off my programming brain, and I am not yet willing to spend real research time on this). Suppose I'm wrong, therefore, and let's try to find out in a collaborative, open way. If we succeed then I would be very happy, in particular because it would get it off my mind.

Applications of Unlimited Detail

The least interesting application of UD is in the games industry. Here are some other, more appealing, at least to me:

  1. Google Street View with UD. Pro: they (Google) only need to change the format of their pictures database (and, for example, deduce some 3D data from different pictures of the same place taken from different POVs; a lot of work, but they surely have the means). Con: such a tool cannot be too precise, for obvious reasons, from security to privacy. Anyway, imagine you walk on the street (in Street View) and you see, instead of a sequence of photos, a continuous 3D world.
  2. Google Earth with UD: you can pass continuously from one scale to another, like flying in a super spaceship. Better yet, at really fine detail you pass continuously from Google Earth to Street View.
  3. Not only geographical databases are interesting to see, but also very complex objects, like in the medical realm. I remember now that in the post on Digital Materialization and Euclideon, I noticed a similarity of goals between the DM project and the UD project. In the video of the interview provided on this blog by Dave H., at a certain point Bruce Dell mentions the difficulty of creating real 3D objects (like a dragon head or whatever) and the ease of looking at the virtual version created by scanning and then visualizing the scan with the UD machine. This is quite similar to one of the applications of DM, which consists in preserving cultural objects. Or by using UD, why not?
  4. If we talk about games, then the world of a MMORPG could be put in UD form, probably.

What else?

UPDATE (20.02.2014): Here is something which would go well together with UD:

Digital materialization, Euclideon and fractal image compression (III)

Bruce Dell definitely oriented his Euclideon-Unlimited Detail technique towards the geospatial industry; see for example the conference summary of the International LiDAR Mapping Forum, Denver 2013, where he will speak about “The impact of unlimited processing power on the geospatial industry”.

He participated in the 2012 Brisbane International Geospatial Forum, July 8-11; here is the presentation program with abstracts. His talk abstract, with the same title, gives a very interesting video link.

The points I understood and want to stress are the following:

– he speaks about the “two bridges” between the virtual world and the real world; this is really very close to the Digital Materialization philosophy. So, I guessed such a link correctly (here and here), from outside both interested parties. My question to the DM groups is: are you going to do something about this, for example collaborations with Euclideon? And even more: does the geospatial industry have things to learn from DM (I think it does)?

– there is definitely a Euclideon format which “may take a while” to obtain from the format of the laser scans used by the geospatial industry. In the video Dell shows an image with the big racks of computers needed to do the format conversion. My question is: is he using some part of the fractal image compression idea (maybe, for example, Dell is not using FREPs, but he might use ideas from fractal image compression in his data structure)? Again, for the DM community, I have a question: given that you use huge files, maybe you can use some Euclideon tricks to ease their use, and blend them with FREPs?

– the Euclideon algorithm (whose fundamental part is the data structure of the Euclideon format) really works well. Looking at the images from the presentation, I am asking myself if the “Euclideon” name comes from some clever embedding of the Euclidean group of isometries into the data structure. I feel there must be something obvious about principal bundles … 🙂 which model an observer in Euclidean space AND the data acquired by laser scans … To think about.

Digital materialization, Euclideon and fractal image compression (II)

For the background see the previous post Digital materialization, Euclideon and fractal image compression.

As I wrote before, this may have been done already; please tell me if so.

The idea is that once a 3D image is encoded by a kind of fractal image compression algorithm, then the problem of attribution of a “3D atom” of the 3D image to a “2D pixel” from the screen becomes a search problem in a tree, as maybe the Unlimited Detail – Euclideon algorithm is. The kind of encoding I am writing about may have been done by, or may be useful for the groups working in “Digital materialization”.

UPDATE: Here is a YouTube video made by the AEROmetrex company, “a leader in photogrammetric solutions is launching its latest technological service aero3Dpro today: a complete 3D modelling service for Australian and overseas users of geospatial and 3D data”

In the comments, they write they are using Geoverse/Unlimited Detail/Euclideon technology and they mention that

For a 15Gb 3D model the Euclideon Unlimited Format file is about 2 Gb

Below I detail my speculation that Euclideon may use a variant of fractal image compression in their format.

I do not want to trespass any boundaries or pretend I am doing anything new; that is why I repeat my request to tell me if anything like this has been done (and who did it).

1. Let me first recall a famous theorem by Hutchinson, concerning iterated function systems.

Let (X,d) be a complete metric space and let H(X) be the collection of compact sets in (X,d). On the space H(X) we may put the Hausdorff distance between subsets of X, defined in the following way. For any \varepsilon > 0 and for any set A \subset X, the \varepsilon neighbourhood of A is the set

A_{\varepsilon} = \left\{ y \in X \mbox{ : } \exists x \in A , \, d(x,y) \leq \varepsilon \right\} = \cup_{x \in A} B(x,\varepsilon)

where B(x, \varepsilon) is the closed ball centered at x, of radius \varepsilon.

The Hausdorff distance between sets A, B \subset X is then

d_{H}(A,B) = \inf \left\{ \varepsilon > 0 \mbox{ : } A \subset B_{\varepsilon} , B \subset A_{\varepsilon} \right\}

The important fact happening here is that (H(X), d_{H}) is a complete metric space. Moreover, if X is compact, then H(X) is compact too.
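For finite point sets this definition reduces to d_{H}(A,B) = \max \left( \max_{x \in A} d(x,B) , \max_{y \in B} d(y,A) \right), which is easy to compute directly. Here is a small numerical sketch (my own illustration, using numpy; nothing here comes from the posts I cite):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A (m, n) and B (k, n)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # pairwise distances
    # sup over A of d(x, B) and sup over B of d(y, A), then take the larger one
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)     # corners of a square
B = A + np.array([3.0, 0.0])                                    # same square shifted by (3, 0)
print(hausdorff(A, B))                                          # 3.0
```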

An iterated function system (IFS) on the compact metric space (X,d) is a finite collection of transformations of X, say a_{1}, ... , a_{N}, such that every a_{i} is a contraction: there is r_{i} \in (0,1) such that

d(a_{i}(x), a_{i}(y)) \leq r_{i} d(x,y) for any x,y \in X.

The Hutchinson operator associated to the IFS is the transformation

A \in H(X) goes to T(A) = \cup_{i = 1, ..., N} a_{i}(A).

Hutchinson's theorem says that T is a contraction of (H(X), d_{H}), therefore it has a unique fixed point, i.e. a compact set A \subset X such that T(A) = A, which is the same as

A = \cup_{i = 1, ..., N} a_{i}(A).

2. Michael Barnsley had the idea of using this result for doing fractal image compression. In a few words, fractal image compression is any algorithm which solves the inverse problem: given A \in H(X), find an IFS which has A as a fixed point. In this way the set A (which represents the image, for example as the graph of the function which associates to any pixel an RGB vector of colors) is “encoded” by the functions from the IFS. More specifically, we may take X to be (a compact subset of) \mathbb{R}^{n} and look for an IFS of affine contractions. Then the set A is encoded in (or compressed to) the set of coefficients of the affine transformations of the IFS.
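To make the “decoding” half concrete, here is a minimal sketch (my own toy example, not Barnsley's actual codec): the whole “compressed file” consists of the coefficients of three affine contractions of the plane, and iterating the Hutchinson operator on any starting set converges, in the Hausdorff distance, to the encoded attractor (a Sierpinski triangle).

```python
import numpy as np

# the whole "compressed file": three affine contractions a_i(x) = x/2 + t_i
translations = np.array([[0.0, 0.0], [0.5, 0.0], [0.25, 0.5]])

def hutchinson(points):
    """One application of T(A) = a_1(A) ∪ a_2(A) ∪ a_3(A) to a finite set."""
    return np.concatenate([0.5 * points + t for t in translations])

A = np.array([[0.2, 0.7]])        # arbitrary starting set: a single point
for _ in range(8):                # T^8(A) already approximates the attractor well
    A = hutchinson(A)
print(A.shape)                    # (6561, 2): 3^8 points near the Sierpinski triangle
```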

3. Without going into the last detail, let us see this construction from the point of view of “Digital Materialization”. The idea behind it is to characterise a subset A of X by a function \phi: X \rightarrow \mathbb{R} such that x \in A if and only if \phi(x) \leq 0. If A is described by \phi and B is described by \psi, then

– A \cup B is described by \min \left\{ \phi, \psi \right\},

– A \cap B is described by \max \left\{ \phi , \psi \right\}, and

– for any bijective transformation a  of X, the set a(A) is described by \phi \circ a^{-1}.

Thus, starting from a library of functions \phi, \psi, ... and from a library of bijective transformations of the space (like translations, rotations, etc., where this makes sense), we may describe a huge collection of sets by a tree, which has nodes decorated with \min, \max and edges decorated with compositions \phi \circ a, with \phi from the library of “textures” [my choice of name is maybe not good, correct me please] \phi, \psi, ... and a from the library of “legal” bijective transformations.
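A minimal sketch of this bookkeeping (the names are mine, not from any DM software): sets are functions with the convention \phi(x) \leq 0 inside, union is min, intersection is max, and a bijective transformation a acts by composition with a^{-1}.

```python
import numpy as np

def ball(center, radius):
    """phi with phi(x) <= 0 exactly on the closed ball."""
    c = np.asarray(center, dtype=float)
    return lambda x: np.linalg.norm(np.asarray(x, dtype=float) - c) - radius

def union(phi, psi):        return lambda x: min(phi(x), psi(x))
def intersection(phi, psi): return lambda x: max(phi(x), psi(x))
def transformed(phi, a_inv):
    """Describe a(A) when A is described by phi, given the inverse map a^{-1}."""
    return lambda x: phi(a_inv(np.asarray(x, dtype=float)))

phi = ball((0, 0, 0), 1.0)
psi = ball((2, 0, 0), 1.0)
shape = union(phi, psi)                                               # two unit balls
moved = transformed(shape, lambda x: x - np.array([3.0, 0.0, 0.0]))   # shape translated by (3, 0, 0)

print(shape((1, 0, 0)) <= 0)      # True: the point is in the union
print(moved((4, 0, 0)) <= 0)      # True: the same point, seen after the translation
```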

In fact, we may refine this description a bit by giving: [note added later: is this some version of “Image coding based on a fractal theory of iterated contractive image transformations”, A.E. Jacquin, IEEE Trans. on Image Processing, 1992?]

– a master shape \phi, for example taken from the equation of a ball of radius 1, if we suppose that X is a Euclidean space. Let M be the set described by \phi.

– a collection of “locators” [I’m totally inventing names here], that is, a finite collection of (affine, say) contractions of X which send M into a subset of M

– a potentially infinite collection of “textures”, one for every IFS constructed from a finite set of locators and the master shape \phi. Therefore, to any finite collection of locators a_{1}, ... , a_{p} we associate the “texture”

\psi = \min \left\{ \phi \circ a_{1}^{-1} , ... , \phi \circ a_{p}^{-1} \right\} .

– a finite collection of “legitimate translations”, which are just affine (say) transformations with the property that they move M to  sets which do not intersect with M, and such that they generate a group which is roughly equivalent with the space X (simply put, if X is \mathbb{R}^{n} then the group generated by legitimate translations contains a \mathbb{Z}^{n}).

Given an IFS constructed from compositions of “locators” and “legitimate translations”, say a_{1}, ... , a_{M}, there is a Hutchinson-like operator which associates to any “shape” \phi (a function obtained from the application of a finite number of “min” and “max” to a finite collection of functions obtained from the master shape composed with locators and legitimate translations) the new shape

T(\phi) = \min \left\{ \phi \circ a_{1}^{-1} , ... , \phi \circ a_{M}^{-1} \right\} .

Notice that, in terms of the tree which describes the shapes, the tree which describes T(\phi) is easily (recursively) obtained from the tree which describes \phi.
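For concreteness, here is a toy version (my own, under the assumption that the a_{i} are contractions of the plane of the form x \mapsto x/2 + t_{i}) of this Hutchinson-like operator acting on function-represented shapes; each application of T refines the described set, exactly as the recursively grown tree does.

```python
import numpy as np

master = lambda x: np.linalg.norm(np.asarray(x, dtype=float)) - 1.0    # master shape: unit disk

translations = [np.array([0.0, 0.0]), np.array([0.5, 0.0]), np.array([0.25, 0.5])]
# inverses of the contractions a_i(x) = x/2 + t_i
inverses = [lambda x, t=t: 2.0 * (np.asarray(x, dtype=float) - t) for t in translations]

def T(phi):
    """T(phi) = min( phi o a_1^{-1}, ..., phi o a_M^{-1} )."""
    return lambda x: min(phi(a_inv(x)) for a_inv in inverses)

phi = master
for _ in range(5):                 # the describing tree grows recursively with each step
    phi = T(phi)

print(phi((0.1, 0.1)))             # sign <= 0 means (0.1, 0.1) lies in the 5-times refined shape
```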

Now, a compression algorithm associated to this data is any algorithm which solves the following problem: given a (compact) set A \subset X and an \varepsilon > 0, find an IFS constructed from “locators” and “legitimate translations” whose fixed point lies inside A_{\varepsilon}.

By using locators and legitimate translations, one may define “textures” and “shapes” at a scale.

The big problem is to find efficient algorithms, but once such an algorithm is used to encode a shape (which might be time intensive) the decoding is easy!

Digital Materialization, Euclideon and Fractal Image Compression

Are these:

Digital Materialization (DM)

Euclideon (Unlimited Detail)

Fractal Image Compression

related? Is this rather simple (from a pure math viewpoint) research program doable, and moreover, has it been DONE by anybody?

Let me explain. My wondering started after searching with Google for

– “Digital materialization” AND “euclideon”   – no answers

– “Digital materialization” AND “Fractal image compression” – no answers

– “Fractal image compression” AND “euclideon” – no answers

1. Quote from the Digital Materialization Group homepage:

Digital Materialization (DM) can loosely be defined as two-way direct communication or conversion between matter and information that enable people to exactly describe, monitor, manipulate and create any arbitrary real object. DM is a general paradigm alongside a specified framework that is suitable for computer processing and includes: holistic, coherent, volumetric modeling systems; symbolic languages that are able to handle infinite degrees of freedom and detail in a compact format; and the direct interaction and/or fabrication of any object at any spatial resolution without the need for “lossy” or intermediate formats.

DM systems possess the following attributes:

  • realistic – correct spatial mapping of matter to information
  • exact – exact language and/or methods for input from and output to matter
  • infinite – ability to operate at any scale and define infinite detail
  • symbolic – accessible to individuals for design, creation and modification

As far as I understand, this works based on Function Representation (FREP); see HyperFun.org. The idea is to define an object in \mathbb{R}^{3}, say, by a function F: \mathbb{R}^{3} \rightarrow \mathbb{R}, which describes the object as the set of points x where F(x) \leq 0. Start with a small library of functions (for example polynomials) and then construct other functions by using min, max, and so on. Therefore an object is described by a tree, with leaves decorated by functions and nodes decorated by min, max, …, operations. This is a very simplistic, but fundamentally precise description. The main point is that there is no explicit object (defined for example by a mesh in space); instead we may check whether a point x in space belongs to the object or not by evaluating the sign of F(x). Moreover, translations, rotations, dilations (or any other easy-to-implement change of parametrization of the space) are implemented by composition with the function F which describes the object.

In particular, we may easily pass to polar coordinates based on the point of view and stay in this  coordinate system for visualization.
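To illustrate the tree description (my own toy sketch, not HyperFun's actual API): leaves hold primitive functions F following the convention used above (F(x) \leq 0 inside), internal nodes hold min (union) or max (intersection), and a rigid motion is applied by composing a leaf with the inverse motion.

```python
import numpy as np

class Leaf:
    """A primitive: F(x) <= 0 means x is inside."""
    def __init__(self, f): self.f = f
    def __call__(self, x): return self.f(np.asarray(x, dtype=float))

class Node:
    """An operation node: op is min (union) or max (intersection)."""
    def __init__(self, op, children): self.op, self.children = op, children
    def __call__(self, x): return self.op(child(x) for child in self.children)

def sphere(center, r):
    c = np.asarray(center, dtype=float)
    return Leaf(lambda x: np.linalg.norm(x - c) - r)

def translated(obj, t):
    # a(A) is described by F o a^{-1}; here a is the translation by t
    t = np.asarray(t, dtype=float)
    return Leaf(lambda x: obj(x - t))

# (sphere ∪ translated sphere) ∩ big clipping sphere, as a tree
tree = Node(max, [Node(min, [sphere((0, 0, 0), 1.0),
                             translated(sphere((0, 0, 0), 1.0), (1.5, 0, 0))]),
                  sphere((0, 0, 0), 2.0)])

print(tree((0.75, 0.0, 0.0)) <= 0)     # True: the point belongs to the object
```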

2. Fractal image compression is based on the fact that any compact set (like an image, described as a compact set in the 5D space = 2D (spatial) times 3D (RGB levels)) can be approximated arbitrarily well, in the Hausdorff distance, by the fixed point of an iterated function system.

This brings me to

PROBLEM 1: Is there any version of the Hutchinson theorem about fixed points of IFS, with the space of compact sets replaced by a space of FREP functions? Correspondingly, is there a “FREP  compression algorithm”?

My guess is that the answer is YES. Let’s assume it is so.

3. In the article “Digital Foundry vs. Unlimited Detail”, the following 2008 post by Bruce Dell is reported:

Hi every one , I’m Bruce Dell (though I’m not entirely sure how I prove that on a forum)

Any way: firstly the system isn’t ray tracing at all or anything like ray tracing. Ray tracing uses up lots of nasty multiplication and divide operators and so isn’t very fast or friendly.
Unlimited Detail is a sorting algorithm that retrieves only the 3d atoms (I wont say voxels any more it seems that word doesn’t have the prestige in the games industry that it enjoys in medicine and the sciences) that are needed, exactly one for each pixel on the screen, it displays them using a very different procedure from individual 3d to 2d conversion, instead we use a mass 3d to 2d conversion that shares the common elements of the 2d positions of all the dots combined. And so we get lots of geometry and lots of speed, speed isn’t fantastic yet compared to hardware, but its very good for a software application that’s not written for dual core. We get about 24-30 fps 1024*768 for that demo of the pyramids of monsters. The media is hyping up the death of polygons but really that’s just not practical, this will probably be released as “backgrounds only” for the next few years, until we have made a lot more tools to work with.

Assuming that we take a database representing a very complex 3D object (like a piece of landscape) and convert it, by using a FREP compression algorithm, into a tree as described at point 1, it becomes easy to imagine how the Unlimited Detail algorithm might work.

Problem 2: Given a FREP representation of a collection of 3D objects, describe an efficient sorting algorithm which uses the representation and outputs the part of the union of objects visible from a given point at infinity.

Conclusion: unless this is utter gibberish, the modus operandi of an “unlimited detail” algorithm could be the following:

1)- start with a database of a collection of 3d objects and compress it into a FREP format

2)- perform a “mass 3d to 2d conversion”, by using a solution of Problem 2, in polar coordinates from the viewpoint.
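Here is a toy sketch of what step 2 might look like (my guess only, not Euclideon's method, and the names are mine): candidate 3D atoms, for example sampled from the FREP, are expressed in spherical coordinates around the viewpoint, the two angles are binned into “pixels”, and the nearest atom per bin is kept, so that each pixel ends up with exactly one atom.

```python
import numpy as np

def mass_convert(atoms, viewpoint, bins=128):
    """Return, for each angular bin ("pixel"), the index of the nearest atom, or -1."""
    d = atoms - viewpoint
    r = np.linalg.norm(d, axis=1)                       # depth measured from the viewpoint
    theta = np.arctan2(d[:, 1], d[:, 0])                # azimuth in [-pi, pi]
    phi = np.arccos(np.clip(d[:, 2] / r, -1.0, 1.0))    # inclination in [0, pi]
    iu = ((theta + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    iv = (phi / np.pi * bins).astype(int).clip(0, bins - 1)
    depth = np.full((bins, bins), np.inf)
    chosen = np.full((bins, bins), -1)
    for k in np.argsort(r):                             # nearest atoms claim their pixel first
        if depth[iv[k], iu[k]] == np.inf:
            depth[iv[k], iu[k]], chosen[iv[k], iu[k]] = r[k], k
    return chosen

atoms = np.random.randn(50000, 3) * 5.0                 # stand-in for atoms sampled from the FREP
screen = mass_convert(atoms, viewpoint=np.zeros(3))
print((screen >= 0).mean())                             # fraction of pixels that got an atom
```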