Monthly Archives: March 2008

I love depth of field

I consider depth of field one of the most beautiful post-processing effects in “next-gen” games.
It was natural for me to choose it as the first shader demo to implement after months of inactivity; as a matter of fact, GLSL_imgpro was really just a testbed for basic post-processing techniques, like Frame Buffer Objects.

GLSL_DoF

I studied the theory from an ATI paper included in the ShaderX2 book, titled Real-Time Depth of Field Simulation. I chose the first of the two implementations it describes and converted it from Direct3D and HLSL to OpenGL and GLSL.

Of course, being a post-processing effect, the rendering is actually divided into two passes:

  1. Rendering the scene, storing the depth of every vertex and calculating the amount of blur per fragment (a sketch of this pass follows the list)
  2. Applying the per-fragment blur based on the value from the previous step
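
As a minimal sketch of the first pass, assuming hypothetical FocalDist and FocalRange uniforms and an eye-space Depth varying (the actual demo's names and depth mapping may differ), the idea is to turn the distance from the focal plane into a normalized blur factor and store it in the alpha channel:

/* Pass 1 fragment shader sketch; FocalDist and FocalRange are
   illustrative uniforms, not taken from the demo */
uniform float FocalDist;
uniform float FocalRange;
varying float Depth; /* eye-space depth written by the vertex shader */

void main()
{
    /* 0.0 = perfectly in focus, 1.0 = maximum circle of confusion */
    float blur = clamp(abs(Depth - FocalDist) / FocalRange, 0.0, 1.0);
    gl_FragColor = vec4(gl_Color.rgb, blur);
}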

The second pass fragment shader, the one actually applying the blur effect, is slow even on my 8600GT, because it performs several calculations for each of the twelve samples that contribute to the blur of the center fragment.

Another interesting aspect is that, in order to approximate the circular blur needed to simulate circles of confusion, these twelve pixels are sampled around the center according to a Poisson disc distribution, which creates far fewer artifacts than a small convolution matrix scaled up too much in order to sample far away from the center.
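
A minimal sketch of that sampling loop, assuming the offsets are uploaded by the application and that Tex0, Taps, PixelSize and MaxCoC uniforms exist (these names are mine, not the demo's):

uniform sampler2D Tex0;
uniform vec2 Taps[12];  /* Poisson disc offsets uploaded by the application */
uniform vec2 PixelSize; /* vec2(1.0/width, 1.0/height) */
uniform float MaxCoC;   /* maximum blur radius in pixels */

void main()
{
    vec4 center = texture2D(Tex0, gl_TexCoord[0].st);
    vec4 sum = center;
    /* scale each tap by the blur amount the first pass stored in alpha */
    for(int i = 0; i < 12; i++)
        sum += texture2D(Tex0, gl_TexCoord[0].st +
                               Taps[i] * center.a * MaxCoC * PixelSize);
    gl_FragColor = sum / 13.0; /* the center sample plus twelve taps */
}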

Just like the previous demo, you can view it in action on my YouTube Channel, but I really suggest you take a look at the high definition 720p version instead, hosted together with the other ones on my Vimeo page. 😉

glUniform1f() is working!

I faced this problem for the first time a year ago, while working on my parallax mapping demo, and I met it again these days, while busy fine-tuning my depth of field demo to permit keyboard-driven parameter tweaking.

Bug

The issue I’m talking about is quite serious: on my machine it is impossible to pass a float uniform variable to a shader, and I’m not the only one reporting it:

The first link is a forum thread from GameDev written by a girl whose applications suffer from this annoying issue; she has written a proof of concept which works perfectly on my box, i.e. float uniforms are NOT passed. 🙂
But it was the third one that made me think about how to fix the problem: it reports that, after calling glewInit(), the glUniform*f() functions work again.
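
A minimal repro along the lines of that proof of concept could look like this (prog is assumed to be a linked program object with a float uniform named "myFloat"; this is my reconstruction, not her code):

/* upload a float uniform, then read it back to check it arrived */
GLint loc = glGetUniformLocation(prog, "myFloat");
glUniform1f(loc, 0.5f);

GLfloat val = -1.0f;
glGetUniformfv(prog, loc, &val);
printf("expected 0.5, got %f\n", val); /* stays 0.0 on affected machines */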

The first thing I did, of course, was to download the GLEW sources and investigate what was happening inside that magic function. What it actually does is redefine all the GL function pointers, calling glXGetProcAddress() for every one of them. I thought it would be a good idea to replicate this behaviour in my programs, and I was right! 😀

This is what I added to my sources to make the offending function work:

/* re-fetch the entry point at runtime, just like GLEW does internally */
PFNGLUNIFORM1FPROC glUniform1f = NULL;
glUniform1f = (PFNGLUNIFORM1FPROC)glXGetProcAddress((const GLubyte*)"glUniform1f");

This also seems to explain why my Python shader demo didn’t suffer from any of this: I think PyOpenGL initializes itself by retrieving the addresses of all the GL functions it needs.

IMPORTANT UPDATE
M3xican, the shader master, came up with THE solution: just add -DGL_GLEXT_PROTOTYPES to CFLAGS.
Hail to the master! 😀
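
In other words, when that macro is defined before the GL headers are included, glext.h declares real prototypes for the extension entry points, so no manual glXGetProcAddress() juggling is needed:

#define GL_GLEXT_PROTOTYPES /* same effect as -DGL_GLEXT_PROTOTYPES in CFLAGS */
#include <GL/gl.h>
#include <GL/glext.h>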

Image post-processing with shaders

I’m back to work after many months; university exams really take a lot of time…
Since I am a bit rusty at GLSL programming, but still willing to learn new things, I have decided to begin with a simple yet interesting topic: image processing.

GLSL_imgpro

The whole thing, actually, needs two rendering passes and relies heavily on Frame Buffer Objects, because (a minimal FBO sketch follows the list):

  1. You render the scene to an off-screen texture.
  2. You render a quad covering the entire screen, bound to the previously written texture.
  3. You make a shader process the fragments resulting from rendering this textured quad, i.e. post-processing the original scene.
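
Here is a minimal render-to-texture sketch using the EXT_framebuffer_object entry points of the time (width, height, drawScene() and drawFullscreenQuad() are placeholders, not the demo's actual code):

GLuint fbo, tex;

/* the off-screen color buffer the scene will be rendered into */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* attach it to a framebuffer object
   (a depth attachment, omitted here, would be needed for real scenes) */
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

/* pass 1: the scene ends up in the texture */
drawScene();

/* pass 2: back to the window, post-process the textured quad */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindTexture(GL_TEXTURE_2D, tex);
drawFullscreenQuad();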

In this program, post-processing is delegated to convolution matrices built from these kernels:

GLfloat kernels[7][9] = {
    { 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f}, /* Identity */
    { 0.0f,-1.0f, 0.0f,-1.0f, 5.0f,-1.0f, 0.0f,-1.0f, 0.0f}, /* Sharpen */
    { 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f}, /* Blur */
    { 1.0f, 2.0f, 1.0f, 2.0f, 4.0f, 2.0f, 1.0f, 2.0f, 1.0f}, /* Gaussian blur */
    { 0.0f, 0.0f, 0.0f,-1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f}, /* Edge enhance */
    { 1.0f, 1.0f, 1.0f, 1.0f, 8.0f, 1.0f, 1.0f, 1.0f, 1.0f}, /* Edge detect */
    { 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f,-1.0f}  /* Emboss */
};

The final fragment color is calculated by a simple shader which, at the core, just performs the following:

for(i = -1; i <= 1; i++)
    for(j = -1; j <= 1; j++) {
        coord = gl_TexCoord[0].st + vec2(float(i) * (1.0/float(Width)) * float(Dist),
                                         float(j) * (1.0/float(Height)) * float(Dist));
        sum += Kernel[i+1][j+1] * texture2D(Tex0, coord.xy);
        contrib += Kernel[i+1][j+1];
    }

/* guard against kernels whose weights sum to zero, like Emboss */
gl_FragColor = (contrib != 0.0) ? sum/contrib : sum;

Dividing by contrib, the sum of the kernel weights, keeps the overall brightness unchanged (the Gaussian blur kernel, for instance, sums to 16).

When the user chooses a filter, the application updates the kernel currently in use with a call to:

loc = glGetUniformLocation(sh.p2, "Dist");
glUniform1i(loc, dist);
loc = glGetUniformLocation(sh.p2, "Kernel");
glUniformMatrix3fv(loc, 1, GL_FALSE, kernels[curker]);

Dist is a user defined parameter (you can change it using the arrow keys) that defines the distance in pixels from the center to the contributing sample.

A month ago I created a YouTube Channel, so now you can get an idea of how this demo works without downloading and compiling the source code: have a look at this link! 😉