This is an XBox game:


How did they achieve the reflection, with waves and all? I noticed there are no shadows, and it looks as if they create a viewport whose 2D size is as big as the visible water's area on the 2D screen, and maybe later apply some whirling+blurring effect to the rendered data of that viewport. If only it had shadows, too, it'd be awesome :)
Posted on 2004-06-11 12:50:32 by Ultrano
I think it's a method called ray tracing.
Posted on 2004-06-11 13:45:50 by wizzra
No, it's not. Raytracing is too slow to be useful in realtime graphics.
It's reflection mapping. Basically, they mirror the camera in the water plane, then render that view to a texture, then map the texture onto a wavy mesh.
Something like that.
Shadows should be possible too, if you also create mirrored light sources and apply the shadowing technique both ways.
Posted on 2004-06-11 13:49:04 by Scali
First of all, how can you know this is a real reflection of these objects? For example, I don't see a reflection of the man in the water.
Posted on 2004-06-11 14:22:19 by AceEmbler
Thanks, Scali, that's what I suspected too :) but I didn't know about such meshes.
AceEmbler: Look at the source of the image and you'll see the other screenshots, though it's not necessary: you can see that the rocks and the building get reflected in the water.
I presume the game uses a Z-buffer:
it draws the sky, beach, and jet, makes a reflection map of the already-rendered stuff, draws the water, then the man, and finally some effects like the water stream from the jet. I guess the coders chose this sequence because they don't want low fps when many people are shown on screen. For example, a crowd of 20-50 people, drawn twice because of the reflection map :), might be heavy for the XBox.
I am a newbie in 3D, so sorry if I'm writing rubbish :o
Posted on 2004-06-11 15:24:46 by Ultrano
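The pass order Ultrano is guessing at could be sketched like this (speculative pseudocode, not the game's actual rendering code):

```
clear the Z-buffer
draw sky, beach, jet                    ; opaque world geometry
render the mirrored world to a texture  ; the reflection map
draw the water, sampling that texture
draw the man                            ; skipped in the mirror pass, so no reflection
draw effects: jet's water stream, spray
```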
Well, come to think of it... it doesn't have to be a mesh per se.
It could also be a flat plane with distortion generated by creative use of dependent texture reads. But these waves seem too large to be generated that way alone. The two can be combined, though: small 'noisy' waves with dependent texture reads, and large waves with a mesh.
You can also extend the technique to do both reflection and refraction: render a refraction map of the 'inside' of the water, calc refraction from the vertex normals rather than reflection, then blend the reflection over it.
Posted on 2004-06-11 15:32:46 by Scali

The method Scali described would mean they have to render the scene twice, right? Ouch.
Don't they use pixel shaders? How would they use pixel shaders to accomplish this effect, Scali (are you a game programmer?)?
Posted on 2004-06-11 16:58:20 by x86asm
Well, if the camera remains in this position, they can render the scene once and mirror it down the middle, as Ultrano said, I suppose.
But even if they have to render it twice, what's the big deal? :P
They could use a lower resolution for the water reflection anyway; because of the distortion, it will not be noticeable. They could also render the world with lower LOD settings for the mirror, etc.
When you use dynamic cubemaps, you have to render the scene six times for each cubemap, and even that can be done in realtime.

Pixel shaders aren't really required for this effect. If they do it the way I think they do (render to texture), then they would just need vertex shaders to calc the reflection vectors, and that functionality is also present in fixed-function hardware. Only the refraction calculation would require vertex shaders.
If they opt for the dependent texture reads as well, they could still use fixed function on hardware that supports environment-mapped bumpmapping, because basically that's just a dependent texture read. So even there, pixel shaders aren't really required.
Posted on 2004-06-11 17:19:10 by Scali