Let's write a Photon Tracing package ( ok, not realtime :grin: ).
Posted on 2003-02-13 06:24:32 by Maverick


including quantum effects? :grin:
Posted on 2003-02-13 06:37:07 by Terab

Yeah! :alright:
Posted on 2003-02-13 07:10:08 by Maverick


Actually, thinking about it a bit, using quantum effects in a photon 'tracing' program would make for some very realistic effects.

Like caustics and soft edges around objects, etc., without having to actually program those features into the renderer. Of course, render times would go up a 'bit' :grin:

Might be an interesting project - using Feynman diagrams to calculate the probabilities.
Posted on 2003-02-13 07:18:28 by Terab

It wasn't my idea.. such projects really exist.. and you're right, they're extremely slow. :)

http://www.google.it/search?q=%22photon+tracing%22&ie=ISO-8859-1&hl=it&lr=
Posted on 2003-02-13 07:57:42 by Maverick
Henrik Wann Jensen is a friend of mine, or at least we're acquaintances. Well, at least we were a long time ago.

Ray tracing means tracing a ray from the eye through a view plane, typically your monitor's screen, into a scene to determine what you actually see. The point you see on an object is then tested against light sources to determine whether it's lit at all, and what its color is. It is also tested against other objects to see if any light is reflected onto the point from them (light interreflection).
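
A minimal sketch of that core loop in C - one sphere, one point light, with all the names made up just to show the shape of the idea:

    /* Shoot a ray from the eye, find the hit, then fire a shadow ray
       toward the light to see whether the hit point is lit. */
    #include <math.h>

    typedef struct { double x, y, z; } Vec;

    static Vec vsub(Vec a, Vec b) { Vec r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
    static double vdot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Distance t along orig + t*dir to the sphere, or -1.0 on a miss. */
    static double sphere_hit(Vec orig, Vec dir, Vec center, double radius)
    {
        Vec oc = vsub(orig, center);
        double a = vdot(dir, dir);
        double b = 2.0 * vdot(oc, dir);
        double c = vdot(oc, oc) - radius * radius;
        double disc = b*b - 4.0*a*c;
        if (disc < 0.0) return -1.0;              /* ray misses the sphere  */
        double t = (-b - sqrt(disc)) / (2.0 * a);
        return t > 1e-6 ? t : -1.0;               /* ignore hits behind us  */
    }

    /* 1.0 if the point sees the light, 0.0 if the sphere blocks it. */
    static double shadow_test(Vec point, Vec light, Vec center, double radius)
    {
        Vec dir = vsub(light, point);             /* shadow ray toward light */
        double t = sphere_hit(point, dir, center, radius);
        return (t > 0.0 && t < 1.0) ? 0.0 : 1.0;  /* blocked before light?   */
    }

A real tracer just does this against every object in the scene and keeps the nearest hit.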

Photon mapping is a newer method where packets of photons are emitted from light sources toward objects in the scene. Each object receives a percentage of the total scene illumination, which is mapped onto it in a method similar to radiosity. Then rays are traced from the eye into the scene to determine the lighting for each point. The method is great for caustics, which are the concentrated patterns of focused light similar to what you might see from a magnifying glass.
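
The first (photon) pass could look roughly like this - the flat photon store and trace_to_surface() are stand-ins for the real thing (a kd-tree and a proper intersection routine):

    /* Fire packets of photons from the light in random directions and
       record where they land; each photon carries an equal share of the
       light's total power. */
    #include <math.h>
    #include <stdlib.h>

    typedef struct { double x, y, z; } Vec;
    typedef struct { Vec pos; Vec power; } Photon;  /* hit position + RGB flux */

    #define MAX_PHOTONS 100000
    static Photon photon_map[MAX_PHOTONS];
    static int    photon_count = 0;

    static double frand(void) { return (double)rand() / RAND_MAX; }

    extern int trace_to_surface(Vec orig, Vec dir, Vec *hit);  /* hypothetical */

    void emit_photons(Vec light_pos, Vec light_power, int n)
    {
        for (int i = 0; i < n && photon_count < MAX_PHOTONS; i++) {
            Vec dir, hit;
            double len2;
            do {        /* rejection-sample a direction on the unit sphere */
                dir.x = 2.0*frand() - 1.0;
                dir.y = 2.0*frand() - 1.0;
                dir.z = 2.0*frand() - 1.0;
                len2 = dir.x*dir.x + dir.y*dir.y + dir.z*dir.z;
            } while (len2 > 1.0 || len2 < 1e-6);
            double len = sqrt(len2);
            dir.x /= len; dir.y /= len; dir.z /= len;

            if (trace_to_surface(light_pos, dir, &hit)) {
                Photon p;
                p.pos = hit;
                p.power.x = light_power.x / n;
                p.power.y = light_power.y / n;
                p.power.z = light_power.z / n;
                photon_map[photon_count++] = p;
            }
        }
    }

In the second pass, the eye rays estimate the lighting at a hit point by gathering the nearest stored photons around it.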

This method's memory requirements are somewhat high, but the operation is relatively fast and it creates very realistic images. Using importance sampling, i.e., determining which reflections and rays are important to sample, greatly speeds up the operation.
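
One classic importance-sampling trick for diffuse bounces is cosine-weighted hemisphere sampling - directions near the surface normal, which contribute the most light, get picked more often. A sketch, assuming the normal is +z:

    #include <math.h>
    #include <stdlib.h>

    #define PI 3.14159265358979323846

    typedef struct { double x, y, z; } Vec;

    static double frand(void) { return (double)rand() / RAND_MAX; }

    /* Pick a point on the unit disk, then lift it onto the hemisphere;
       the resulting directions are distributed proportional to cos(theta). */
    Vec cosine_sample_hemisphere(void)
    {
        double u = frand(), v = frand();
        double r = sqrt(u);              /* radius on the unit disk  */
        double phi = 2.0 * PI * v;       /* angle around the normal  */
        Vec d;
        d.x = r * cos(phi);
        d.y = r * sin(phi);
        d.z = sqrt(1.0 - u);             /* lift onto the hemisphere */
        return d;
    }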
Posted on 2003-02-13 08:48:30 by drhowarddrfine
AmkG
----------
Here are some tutorials to help you understand what raytracing is:
- http://www.2tothex.com/raytracing/index.html
- http://come.to/polygone (go to Docs section)
There are more out there - I am sure you can find some with Google :)

EvilHomer2k
-----------------
I mean like in raytracing... (see above links) modeling objects with MATH formulae :)

Maverick and others
----------------------------
Hihihi .... yeah, let's add photons and quantum effects to this :) but as "an option", and let's make it the fastest available... in time CPUs will gain speed and we will "evolve" to realtime ...
Posted on 2003-02-13 18:12:56 by BogdanOntanu
Bogdan: yep, that's what I was driving at: formulaic objects...
NURBS and B-splines are systems for defining complex curves bound by weighted control vertices. I worked on high-end systems for a plastic injection tool company contracted to a large auto manufacturer, and worked on 3D curved surface geometry for bumper bars.
These systems used curves instead of lines, and created meshes from "patches" defined as a grid between the control vertices.
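
The simplest instance of the idea is a cubic Bezier curve evaluated with de Casteljau's algorithm (repeated linear interpolation between the control vertices); B-splines and NURBS generalize this with knot vectors and per-vertex weights. A sketch in C:

    typedef struct { double x, y, z; } Vec;

    static Vec lerp(Vec a, Vec b, double t)
    {
        Vec r;
        r.x = a.x + t * (b.x - a.x);
        r.y = a.y + t * (b.y - a.y);
        r.z = a.z + t * (b.z - a.z);
        return r;
    }

    /* cv[0..3] are the four control vertices; t runs from 0 to 1. */
    Vec bezier_point(const Vec cv[4], double t)
    {
        Vec a  = lerp(cv[0], cv[1], t);     /* first level of blending */
        Vec b  = lerp(cv[1], cv[2], t);
        Vec c  = lerp(cv[2], cv[3], t);
        Vec ab = lerp(a, b, t);             /* second level            */
        Vec bc = lerp(b, c, t);
        return lerp(ab, bc, t);             /* the point on the curve  */
    }

A patch works the same way in two directions: evaluate along each row of a 4x4 control grid, then evaluate once more across those four results.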
The software was capable of creating not only the geometry, but also toolpaths for robotic machinery to cut the geometry in the real world with known cutting tool dimensions, as well as male and female core and cavity objects to recreate the geometry as a moulding. Now they are doing flow analysis of the injection tool they have designed to find the optimal gating points (where plastic is injected), using particle physics to model known fluids at known pressure and temperature, drawing on a large database of known plastics...
Anyway, I was uncomfortable with the idea of making more stuff out of plastic; I now run a company involved in plastic recycling.
During my education as a precision engineer I took a keen interest in the fine points of trigonometry and Euclidean and non-Euclidean geometry, which I'd managed to sleep through at school.
So anyway, would your geometry system for non-terrain objects be based on deformed spheres, like complex ellipsoid formulae, or would they be arrays of weighted vertices for curve formulae, or both, or something else?
And how would it deal with edge blurring for straight edges?
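For concreteness, the "deformed sphere" option means an implicit formula: in the axis-aligned case a point lies on an ellipsoid with semi-axes a, b, c when x*x/(a*a) + y*y/(b*b) + z*z/(c*c) = 1, so a ray can be intersected with it directly, no mesh needed. As code:

    /* Implicit ellipsoid test: < 0 inside, == 0 on the surface, > 0 outside. */
    double ellipsoid_value(double x, double y, double z,
                           double a, double b, double c)
    {
        return (x*x)/(a*a) + (y*y)/(b*b) + (z*z)/(c*c) - 1.0;
    }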
Posted on 2003-02-14 04:00:45 by Homer
Thanks Bogdan and drhoward!

I think we got a bit into that in our DRAFTING class... To do perspective we would draw lines from the viewpoint to points on the object, then mark where they intersected a 'view line' (or view plane, in 3D space). He he he, I was one of the two who finished our finals for that one...
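
That drafting construction is exactly what the math does. A minimal sketch with the eye at the origin looking down +z and a view plane at distance d (names are illustrative):

    /* Similar triangles: the line from the eye to (x, y, z) crosses the
       view plane at distance d, so the point's coordinates scale by d / z. */
    void project(double x, double y, double z,   /* point in eye space, z > 0  */
                 double d,                       /* distance to the view plane */
                 double *sx, double *sy)         /* projected 2D coordinates   */
    {
        *sx = x * d / z;
        *sy = y * d / z;
    }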

Anyway, most of the other stuff you are talking about now is just above my head; maybe with a bit of effort I can start to understand it...
Posted on 2003-02-14 04:32:25 by AmkG
Has everybody seen this renderer?
http://www.winosi.onlinehome.de/

I really like the image quality, though it is very slow at present.
Check the docs section for a quick look at how it works.
Posted on 2003-02-14 08:50:12 by bitRAKE
This stuff is very advanced; I think I will need an in-depth tutorial. How is it different from 3D graphics?
Posted on 2003-02-14 20:06:13 by x86asm
From what little I know, it is simply another way of rendering 3D graphics: instead of rendering whole polygons, you render individual pixels.
Posted on 2003-02-15 02:39:10 by AmkG
The implementation is relatively simple, and you should try not to think it's harder than it is. What makes it simple(r) is that you're just shooting a line from your eye through a pixel into the scene and finding what object it hits. You then determine whether that point can be seen by a light (so it's lit and not in the dark).

It starts to get complicated when you have to determine if that point is lit by reflections - light that bounces off another object and hits the point you're currently seeing.

Then there's stochastic sampling, where you "jitter" the samples for each pixel. In other words, each pixel is at the center of a rectangle with neighboring pixels at the corners; you randomly sample within that rectangle so you don't miss any objects in between pixels. You get anti-aliasing as well.
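
A sketch of that jittering, using the simpler convention of jittering within each pixel's own square; trace() stands in for the actual eye-ray routine:

    #include <stdlib.h>

    static double frand(void) { return (double)rand() / RAND_MAX; }

    extern double trace(double sx, double sy);  /* hypothetical: returns a shade */

    /* Take nsamples random samples inside pixel (px, py) and average them;
       the averaging is what gives the anti-aliasing. */
    double sample_pixel(int px, int py, int nsamples)
    {
        double sum = 0.0;
        for (int i = 0; i < nsamples; i++) {
            double jx = px + frand();    /* random point within */
            double jy = py + frand();    /* the pixel's square  */
            sum += trace(jx, jy);
        }
        return sum / nsamples;
    }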

Of course, all this takes time. Figure a 1k x 1k screen is a million samples just to start; ten samples per pixel is 10 million, and that's just for the first hit, not considering reflections/refractions.

So time is the biggest enemy, not complexity.
Posted on 2003-02-15 10:43:03 by drhowarddrfine
For some good direction about ray tracing:
http://graphics.stanford.edu/courses/cs348b-02/
Posted on 2003-02-16 18:02:55 by bitRAKE
I read something recently about an innovation to raytracing where there are two passes: on the first pass, you throw N random rays from each (colored) light source in a box around the viewer, and record the points they hit against the corresponding pixels in the viewport. On the second pass, you cast your eye rays through each pixel as usual, but you modulate the color by blending it with the weighted average color of any prerecorded lighting hitpoints within a given radius.
You can then repeat this process as many times as the hardware allows, until you hit a low framerate threshold (say, 30 fps).
The more times you repeat it, the clearer the image will be.
What you get is a "watery", dynamic view of the scene - especially with moving lightsources.
If you begin your raytracing at the centre of the screen, and have timed the hardware so you know how many iterations you can perform, you can overlap calls to your renderer and produce an image that has blurry peripheral vision.
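
A rough sketch of that second-pass blend - LightHit, the falloff weight and the 50/50 blend are all made up for illustration:

    /* For one eye-ray pixel at (px, py): average the colors of first-pass
       light hitpoints recorded within `radius`, weighting nearer hits more,
       then blend the result into the traced pixel color. */
    typedef struct { double px, py; double r, g, b; } LightHit; /* viewport pos + color */

    void blend_light_hits(double px, double py, double radius,
                          const LightHit *hits, int nhits,
                          double *r, double *g, double *b)
    {
        double sr = 0.0, sg = 0.0, sb = 0.0, wsum = 0.0;
        for (int i = 0; i < nhits; i++) {
            double dx = hits[i].px - px, dy = hits[i].py - py;
            double d2 = dx*dx + dy*dy;
            if (d2 > radius * radius) continue;         /* outside the radius */
            double w = 1.0 - d2 / (radius * radius);    /* nearer = heavier   */
            sr += w * hits[i].r;
            sg += w * hits[i].g;
            sb += w * hits[i].b;
            wsum += w;
        }
        if (wsum > 0.0) {            /* blend 50/50 with the traced color */
            *r = 0.5 * (*r) + 0.5 * (sr / wsum);
            *g = 0.5 * (*g) + 0.5 * (sg / wsum);
            *b = 0.5 * (*b) + 0.5 * (sb / wsum);
        }
    }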
Posted on 2003-02-16 23:50:59 by Homer