Someone is claiming to be able to render point clouds with "infinite detail" at a usable FPS.

According to the claims:
- You start with the point cloud data for an object
- The data is searched using a "patent pending" algorithm that picks out only the points that correspond to the pixels on the monitor
- Those specific points are rendered
So for a 1080p monitor you'd only need 1920 × 1080 = 2,073,600 points per frame.
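For what it's worth, the arithmetic behind that figure is just one point per screen pixel (the function name is mine, purely for illustration):

```python
def point_budget(width, height):
    """Points needed per frame if exactly one point maps to each pixel."""
    return width * height

print(point_budget(1920, 1080))  # 1080p
print(point_budget(2560, 1440))  # 1440p
```

Note this says nothing about how many points you have to *search* to find those winners, which is where the claimed algorithm would live.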

This claim seems to be on the level of a perpetual motion machine: unbelievable, but a game changer if true.

My 3d graphics experience is limited, so I'm interested in hearing other people's take on this.
Posted on 2010-04-23 08:34:43 by r22
Years ago at university, I discussed this with someone who was developing a voxel renderer... Given his results back then, I can imagine that it is theoretically possible today to get 'pixel-perfect' rendering, with a clever enough algorithm to determine visibility. As with raytracing, you need to divide and conquer, which you can do in logarithmic time, so it scales well.
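As a purely illustrative sketch of that divide-and-conquer idea (this is certainly not the 'patent pending' algorithm, and every name here is made up): an octree over the unit cube, splatted front-to-back, where any branch whose entire screen footprint is already covered gets pruned without ever touching the points inside it.

```python
def build(points, cx=0.5, cy=0.5, cz=0.5, half=0.5, depth=6):
    """Recursively bucket points (in the unit cube) into 8 octants."""
    if not points:
        return None
    if depth == 0 or len(points) == 1:
        return ("leaf", min(points, key=lambda p: p[2]))  # keep nearest point
    buckets = [[] for _ in range(8)]
    for p in points:
        buckets[(p[0] >= cx) + 2 * (p[1] >= cy) + 4 * (p[2] >= cz)].append(p)
    h = half / 2
    kids = [build(buckets[i],
                  cx + (h if i & 1 else -h),
                  cy + (h if i & 2 else -h),
                  cz + (h if i & 4 else -h),
                  h, depth - 1)
            for i in range(8)]
    return ("node", kids, cx, cy, half)

def splat(node, image, size):
    """Orthographic front-to-back splat; the viewer looks along +z."""
    if node is None:
        return
    if node[0] == "leaf":
        _, (x, y, z) = node
        px, py = min(size - 1, int(x * size)), min(size - 1, int(y * size))
        if image[py][px] is None:    # a nearer point already won this pixel?
            image[py][px] = z
        return
    _, kids, cx, cy, half = node
    # Prune: if every pixel under this node is already filled, the whole
    # branch is occluded and we skip all the points inside it.
    x0, y0 = max(0, int((cx - half) * size)), max(0, int((cy - half) * size))
    x1 = min(size, int((cx + half) * size) + 1)
    y1 = min(size, int((cy + half) * size) + 1)
    if all(image[py][px] is not None
           for py in range(y0, y1) for px in range(x0, x1)):
        return
    for i in sorted(range(8), key=lambda i: i & 4):  # near octants first
        splat(kids[i], image, size)

# demo: three points, a 4x4 'screen'
tree = build([(0.1, 0.1, 0.2), (0.12, 0.12, 0.9), (0.8, 0.3, 0.5)])
screen = [[None] * 4 for _ in range(4)]
splat(tree, screen, 4)
```

On a screen of s² pixels the traversal cost is roughly O(s² · depth), independent of the total point count, which is the 'one point per pixel' property being claimed. The catch is that `build` must be redone whenever the points move.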

I think the caveat is also similar to raytracing: you need acceleration structures to make the clever algorithms work. This means that it works for static geometry only.
Games want animation. A point cloud may be interesting to render, but how does one implement character animation with it? Animating each individual point is going to be WAY too slow.
So I think games won't work because:
1) Your animation would have to regenerate all the point clouds all the time, which is very expensive.
2) The clever algorithm can't be used, because you'd have to regenerate your acceleration structures every frame, which takes too much time for interactive use.
Posted on 2010-04-23 09:01:04 by Scali
I agree it would only be used for static geometry like maps/terrain, but couldn't you render animated meshes on top of it? If the Z values for the points are correct, compositing shouldn't be an issue.

But that brings up another issue: how much space will these point clouds take up?

If one point is float X, float Y, float Z plus byte R, G, B, A, that's 16 bytes (one dqword) per point, and you'd need hundreds of millions of them in your map/terrain file. I suppose certain elements/meshes could be reused, but compared to vertex/texture constructs this seems like it will use a lot more space.
Posted on 2010-04-23 09:46:22 by r22
John Carmack was talking about similar stuff a while ago. The "Unlimited Detail" demo doesn't look all that hot; as Scali says, I doubt it will do animation very well, and "unlimited" isn't true anyway: you need lots of disk space and memory. If the lighting and antialiasing issues are worked out, it could turn out useful for terrain, since it'd be pretty deformable... but then there's the issue of doing physics with "unlimited detail".

Btw, check Agenda. I suggest you watch an online pre-rendered HD version unless you have very powerful hardware (GF8800 + Q6600 doesn't quite cut it).
Posted on 2010-04-23 09:53:07 by f0dder
Lighting requires a surface normal, so that'd be float x, y, z, nx, ny, nz per point, and then either a colour or texcoords.
So you'd be looking at 28-32 bytes per point at the very least.
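A back-of-envelope check of those sizes (the exact layouts are my assumptions: tightly packed, 32-bit floats, 8-bit colour channels, no compression):

```python
import struct

# "<" = little-endian, no padding
plain    = struct.calcsize("<fff4B")     # x, y, z + RGBA
lit      = struct.calcsize("<ffffff4B")  # + normal nx, ny, nz
textured = struct.calcsize("<ffffffff")  # pos + normal + u, v texcoords

points = 300_000_000                     # "hundreds of millions" per map
print(plain, lit, textured)              # bytes per point
print(points * lit / 2**30)              # GiB for the lit layout, uncompressed
```

Even the smallest lit layout lands in the multi-GiB range per map before any instancing or compression, which is the storage problem being raised above.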

It seems like the only way to really manage this chaos is with procedural geometry.
But you could do the same with polygons, e.g. NURBS, isosurfaces and whatnot. I think the 'key' here is a REYES renderer: it uses 'polygons', but it procedurally subdivides them during rendering until they are less than 1 pixel in size ('micropolygons'). Then they can be plotted as if they were points (and antialiasing is easy as well). So in a way it's the same 'point cloud' idea, just based on a description of the surface rather than a generic 3D volume.
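A toy version of that split-until-subpixel loop (real REYES dices parametric patches and shades the micropolygons; this only shows the recursive screen-space splitting, and the function name is invented):

```python
def dice(x, y, w, h, plot, limit=1.0):
    """Split a screen-space rect until each piece is <= `limit` pixels
    across, then hand its centre to `plot` -- the 'micropolygon' stage."""
    if w <= limit and h <= limit:
        plot(x + w / 2, y + h / 2)
        return
    if w >= h:                            # split along the longer axis
        dice(x, y, w / 2, h, plot, limit)
        dice(x + w / 2, y, w / 2, h, plot, limit)
    else:
        dice(x, y, w, h / 2, plot, limit)
        dice(x, y + h / 2, w, h / 2, plot, limit)
```

So a quad covering a 4×2 pixel area ends up as eight 1×1 pieces, each plotted like a point, which is where the similarity to the point-cloud idea comes from.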
Posted on 2010-04-23 10:34:40 by Scali
This surfaced ~ Sept 2008 if my logs are correct. Even if it comes out flawless, I can't see enough appeal. Procedural detail down to sub-pixel sizes is certainly possible nowadays, and comes in nice data structures.

GL3.2 (without tessellation):
(also see the videos of flying into orbit and landing elsewhere on the planet).

Then, with tessellation and height+procedural_material data per pixel, any LOD can be covered.
Posted on 2010-04-23 12:20:10 by Ultrano
Yea, like I said... all static geometry. You can't build a game with that. Just as pointless as realtime raytracing.
Posted on 2010-04-23 12:37:40 by Scali
there has been discussion about this here:

lots of info and comments :)
Posted on 2010-05-15 01:17:36 by HeLLoWorld
Maybe it is a very good technique; remember that everything done in technology is ultimately a copy of the real physical world.
Of course, the physical world is not based on polygons... so maybe algorithmic techniques are better than they were years ago, and make this kind of thing possible.

In fact, it is very possible to do it, but it needs a lot of data.
Posted on 2010-05-15 04:43:42 by edfed
But did anyone see any animation there?
It was all static... all axis-aligned, and seemed to use a lot of instancing.
It doesn't look like a game, does it? It doesn't convince me that you can do a game with this technology.
Aside from that, although they point out the 'popping' with polygons, I can clearly see 'popping' in their stuff as well. There must be some kind of LOD going on in their search algorithm; that's probably what causes it, and probably what stands in the way of true realtime animated characters, just like with raytracing, voxels etc. Again, Pixar uses polygon models for their movies, and there's a good reason for that. In fact, he even says so himself at one point: artists can model in 'unlimited polygon detail' like the movies, and then convert it to their voxel data format. So basically he still depends on polygons :)
Posted on 2010-05-15 06:10:27 by Scali

It's possible to raytrace NURBS or other non-polygonal geometric representations directly (it's been done)... that would give you pixel-perfect rendering of objects with 'infinite detail', within the bounds of numerical precision. I believe some of the 'metaball' demos work this way.

But they say it's not a raytracer.
And they say nothing of the internal / intermediate geometric representation.

Posted on 2010-05-15 07:04:50 by Homer
NURBS is not really non-polygon data.
It's low-resolution polygons ('patches') with polynomial subdivision.
That's why NURBS are easy to model and animate. The artists can just manipulate the control points of the mesh at a high level, to get the results they want at the subdivided level. And that's why Pixar loves them.
Pixar renders them with pixel-perfection because they subdivide NURBS to sub-pixel level (aka micropolygons) with a rendering technique called REYES.
Current GPUs are actually getting quite close to REYES now, with programmable tessellation.

While NURBS can be raytraced, this is far less efficient than the REYES technique, and it provides no visual advantages.
Pixar has released some interesting papers about their Cars movie, where they used raytracing on a large scale for the first time. They explain that although they use raytracing, it is still kept to a minimum because of performance issues. The 'first bounce' is still done with REYES even for raytraced surfaces, and the raytracing is mainly used for reflections and indirect light/shadow detail. But only for 'close ups'. They stick to conventional REYES rendering techniques such as environment maps and shadowmaps for parts of the scene that do not require the added detail that raytracing provides.

I think that is the path that games/GPUs will follow as well.
You pretty much NEED polygon meshes for decent animation (skinning, inverse kinematics, realtime interaction between characters/objects), physics and things like that. Those things are much more complicated and expensive with voxel techniques or other types of geometry.
The biggest advantage of NURBS is that you can manipulate everything with the control points, which is 'instant LOD'. Trying to manipulate a blob of millions of voxels is just not going to happen.
Posted on 2010-05-15 07:18:02 by Scali
words of wisdom :)
Posted on 2010-05-16 06:35:04 by HeLLoWorld
Reyes is effing impressive especially considering its visionary aspect regarding adaptive complexity, so many moons ago.

It's not your breathtaking piece of "beautiful, powerful yet so simple" magical pure equation that solves the problem with elegance, of course; it's a complex, multilayered pipeline. Then again, pieces of technology that get complex jobs done right are rarely simple. One can only be humble about how this sacred artifact, created in ancient times, is still pointing the direction for rendering technology (yes, I'm such a poet!)
Posted on 2010-05-16 06:54:33 by HeLLoWorld
NURBS actually started in the French car industry.
They were among the first to use CAD/CAM technology to design and manufacture their cars.
Bezier-patches and b-splines were developed by these French engineers... which eventually led to the generalized NURBS form.
Posted on 2010-05-16 07:31:17 by Scali
The reason for this is that Pierre Bézier was French; as a note, he wasn't an academic researcher but an engineer working in industry. I'm proud to be one of his fellow countrymen :)
Posted on 2010-05-16 23:09:34 by HeLLoWorld
Yea, and De Casteljau is another famous name for an algorithm (related to Bezier patches), again named after a French engineer.
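For reference, De Casteljau's algorithm is just repeated linear interpolation of the control points until a single point remains (a minimal sketch):

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve with control points `ctrl` at parameter t
    by repeatedly lerping adjacent points."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic curve: at t=0 and t=1 it passes through the end control points
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))
```

The same lerp-until-one-point scheme is what makes Bezier patches so convenient to subdivide, which is exactly what the REYES discussion above relies on.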
Posted on 2010-05-17 03:14:00 by Scali
Oh by the way...
A (former? don't know really) member of our Bohemiq demogroup is now working for this company.
He is working on the actual code to subdivide polygon meshes into very high detail and convert them to point clouds.
Funny, really; apparently they still haven't solved this problem properly, even though we discussed it here a long time ago.
Doesn't seem like the company is really getting anywhere... On the other hand, they are still around, so they must be doing something right.
Posted on 2011-07-28 18:09:40 by Scali
They released a new video yesterday:
It is now called Euclideon rather than just "Unlimited Detail".
Posted on 2011-08-02 08:49:44 by Scali

...and there's still no animation or physics and the lighting still sucks. Has ANYTHING changed? But hey, I guess they're getting investor cash. I wonder how long they can keep draining it ;)
Posted on 2011-08-02 15:39:59 by f0dder