I consider depth of field one of the most beautiful post-processing effects in "next-gen" games.
It was natural for me to choose it as the first shader demo to implement after months of inactivity; as a matter of fact, GLSL_impgro was really just a testbed for basic post-processing techniques, such as Frame Buffer Objects.
I studied the theory from an ATI paper included in the ShaderX2 book, titled "Real-Time Depth of Field Simulation"; I chose the first of the two implementations it presents and converted it from Direct3D and HLSL to OpenGL and GLSL.
Of course, being a post-processing effect, the rendering is actually divided into two passes:
- Rendering the scene, storing the depth of every vertex, and computing a per-fragment blur amount
- Applying the blur per fragment based on the value from the previous step
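As a rough sketch of the first pass (the uniform names and the linear blur ramp are my own illustration, not necessarily what the original demo or the ATI paper uses), the per-fragment blur factor could be derived from view-space depth and packed into the alpha channel for the second pass to read:

```glsl
// First pass (hypothetical sketch): shade the scene and store a
// blur factor derived from depth in the alpha channel.
uniform float focalDistance; // distance that is perfectly in focus (assumed name)
uniform float focalRange;    // distance over which blur reaches its maximum (assumed name)

varying float viewDepth;     // view-space depth, passed down from the vertex shader
varying vec4  shadedColor;   // lit color computed per vertex

void main()
{
    // Blur grows linearly with distance from the focal plane, clamped to [0, 1].
    float blur = clamp(abs(viewDepth - focalDistance) / focalRange, 0.0, 1.0);

    // RGB = scene color, A = blur factor for the second pass.
    gl_FragColor = vec4(shadedColor.rgb, blur);
}
```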
The second pass fragment shader, the one that actually applies the blur, is slow even on my 8600GT, because it performs several calculations for each of the twelve fragments that contribute to the blur of the center one.
Another interesting aspect is that, in order to compute a good approximation of the circular blur needed to simulate circles of confusion, these twelve pixels are sampled around the center according to a Poisson disc distribution, which produces far fewer artifacts than a small convolution kernel stretched to sample far away from the center.
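The second pass could look something like the sketch below (again, my own illustrative names; the Poisson disc offsets are assumed to be precomputed and uploaded as a uniform array, and the tap weighting is one common way to reduce sharp pixels leaking into blurred areas, not necessarily the exact scheme of the demo):

```glsl
// Second pass (hypothetical sketch): 12-tap Poisson disc blur.
uniform sampler2D sceneTex;   // RGB = color, A = blur factor from pass one
uniform vec2  texelSize;      // 1.0 / texture resolution
uniform float maxCoCRadius;   // max circle-of-confusion radius, in texels
uniform vec2  poisson[12];    // precomputed Poisson disc offsets on the unit circle

void main()
{
    vec2 uv     = gl_TexCoord[0].st;
    vec4 center = texture2D(sceneTex, uv);

    // Scale the sampling radius by how blurred this fragment should be.
    float radius = center.a * maxCoCRadius;

    vec4 sum = center;
    for (int i = 0; i < 12; i++)
    {
        vec2 offset = poisson[i] * radius * texelSize;
        vec4 tap    = texture2D(sceneTex, uv + offset);

        // Blend each tap by its own blur factor, so sharp (in-focus)
        // neighbors contribute less to a blurred center pixel.
        sum += mix(center, tap, tap.a);
    }

    gl_FragColor = sum / 13.0; // center tap + 12 Poisson taps
}
```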
Just like the previous demo, you can see it in action on my YouTube Channel, but I really suggest you take a look at the high-definition 720p version instead, hosted together with the others on my Vimeo page. 😉
Hi, I'm the same age as you, I use Arch Linux, and I attend the same faculty as you in Naples; I'd be glad to have a chat… send me an email, or come visit us on #archlinux on Azzurra…
Greetings, and congratulations on the blog
Good work, I like your animations a lot and I have two questions for you…
1) Depth of field seems slow on an NVIDIA GeForce 8600GT; do you think performance will improve with a GeForce 9800GX2, or does that kind of graphics adapter need different source code?
2) Depth of field works on next-gen consoles too. Does it use the same source code as on PC?
Sorry for my English and for my questions, which may seem stupid to you…
waiting for your answer
First of all, I have to say that the same effect can be achieved in many different ways; some may be less precise or more approximate, or based on a more complex or simpler model of what happens in reality.
1) Performance should scale with raw GPU power, of course, but I think that, for this effect to be included in a 3D game engine, it would have to be modified a lot to become less heavy; perhaps the whole implementation and mathematical model would have to be swapped for something else.
2) These post-processing effects all use shaders; on Xbox 360 the shader language is really similar to HLSL (the shader language used by Direct3D). As for the PS3, it should use OpenGL and GLSL, like my demos. 😉
But maybe you meant to ask whether the implementation I use is the same one other programmers use in their games; in that case, I really don't know. 😉
But the reusable method, I think, is to store a depth texture; it will help with other post-processing effects, like motion blur. But I am in doubt about which depth should be stored in the depth texture: the Z in view space, or the Z/W after projection. Expecting your reply. My MSN is firstname.lastname@example.org