I started exploring deferred shading to render multiple light sources and ended up writing a demo featuring eight different lighting techniques and a PyOpenGL class library. 🙂
The whole story started more than a month ago: just after releasing the first depth of field demo I began studying deferred shading, but I extended my scope to include other lighting methods, like single- and multi-pass fixed-pipeline lighting, per-vertex and per-pixel single- and multi-pass shader lighting and, of course, deferred shading.
While writing the C code, I thought it would be fun to also port it to Python; this way I could also have a look at the “new” (ArchLinux adopted it quite late 🙂 ) ctypes-based PyOpenGL, aka PyOpenGL 3.
Unfortunately, many little but annoying issues delayed me until today:
- not setting glDepthFunc(GL_LEQUAL) explicitly (or, alternatively, not clearing the depth buffer at each pass) for multi-pass scene rendering, which caused every pass except the first to be discarded.
- trying to make a buggy Python glDrawBuffers() wrapper work.
I actually had no luck with this and gave up on MRT support in PyOpenGL.
- trying to figure out why VBOs didn’t work in PyOpenGL; I gave up on this too. 🙂
- using a uniform variable to index the gl_LightSource structure array, which prevented the shader from running on Shader Model 3.0 cards
- exploring every possibility that could lead to the “the brick room is very dark in fixed-pipeline mode” issue, only to discover today that it was a mere scaled-normals problem.
It was easily solved by enabling GL_RESCALE_NORMAL.
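To see why scaled normals darken a fixed-pipeline scene, recall that the diffuse term is proportional to N·L: a glScale on the modelview matrix scales the normals too, so an unnormalized, shrunken normal dims every surface, which is exactly what GL_RESCALE_NORMAL undoes. A minimal Python sketch of the math (illustrative values, not the demo's actual data):

```python
# Illustration of the "dark brick room" bug: fixed-pipeline diffuse
# lighting is proportional to N.L, so a normal shrunk by a uniform
# glScale darkens every surface until GL_RESCALE_NORMAL renormalizes it.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(normal, light_dir):
    # Lambert term as the fixed pipeline computes it: max(N.L, 0)
    return max(dot(normal, light_dir), 0.0)

normal = (0.0, 1.0, 0.0)                      # unit surface normal
light = (0.0, 1.0, 0.0)                       # unit direction to the light

print(diffuse(normal, light))                 # 1.0: full brightness
scaled = tuple(0.25 * n for n in normal)      # after, e.g., glScale(0.25, ...)
print(diffuse(scaled, light))                 # 0.25: the "very dark" room
length = dot(scaled, scaled) ** 0.5           # what GL_RESCALE_NORMAL undoes
rescaled = tuple(n / length for n in scaled)
print(diffuse(rescaled, light))               # 1.0 again
```
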
At last I made it: a multi-light demo that includes deferred lighting (although still very rough and not optimized at all) and shows coherent lighting in all rendering modes.
The PyOpenGL class library almost works: no MRTs or VBOs, but it is functional enough to support complete DoF2 and multi-light demo conversions (without deferred mode, which relies on MRTs, of course).
It’s not news anymore that you can see it in action on my YouTube Channel, or in a high-definition 720p version hosted on my Vimeo page.
All’s well that ends well. 🙂
Lately I’ve been really disappointed by the poor performance of my first depth of field implementation, so I decided to do something about it…
The most natural next step was to have a look at the second Direct3D example from the same paper I used for the first one, as I was sure it would lead to more satisfactory results.
I spent the last two nights converting, correcting and fine-tuning it, and I was rewarded for being right: even though it is a five-pass algorithm using four different Frame Buffer Objects, it is about 2.5 times faster than my previous implementation!
I think the speed boost comes down to the following two factors:
- image blurring is achieved with a separable Gaussian filter, computed along the X and Y axes in two distinct passes; this approximates a standard 2D kernel, but it also means the per-pixel convolution cost drops from quadratic to linear in the kernel size.
- this filter operates only on a downsampled FBO (1/4th of the screen resolution, actually)
Another nice thing about this new implementation is that there are only two focal parameters, focus depth and focus range, which really helps in setting up a correct scene.
Now let’s review the five passes in detail:
- Render the scene normally while calculating a blur amount per vertex, then store the interpolated per-pixel value in the alpha component of the fragment.
The calculation at the vertex shader is just:
Blur = clamp(abs(-PosWV.z - focalDistance) / focalRange, 0.0, 1.0);
- Downsample the scene rendered in the previous pass, storing it in a smaller FBO
- Apply the Gaussian filter along the X axis on the downsampled scene and store it in a new FBO
- Apply the Gaussian filter along the Y axis on the already X-blurred scene and store it in a new FBO
- Calculate a linear interpolation between the full-resolution FBO from the first pass and the downsampled XY-blurred one
This is performed in the fragment shader as:
gl_FragColor = Fullres + Fullres.a * (Blurred - Fullres);
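The two shader formulas above can be mocked up in plain Python to check the numbers: the per-vertex blur factor from pass 1 (view-space z is negative, hence the -z), and the final composite from pass 5, which is exactly a lerp between sharp and blurred weighted by the alpha stored in pass 1 (equivalent to GLSL's mix()). The parameter values are made up for illustration:

```python
# Hypothetical mock-up of the blur-amount and composite formulas, with
# illustrative focal values (not taken from the demo).
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

def blur_amount(pos_wv_z, focal_distance, focal_range):
    # Pass 1: distance from the focal plane, normalized by the focal
    # range and clamped to [0, 1]; view-space z is negative, hence -z.
    return clamp(abs(-pos_wv_z - focal_distance) / focal_range, 0.0, 1.0)

def composite(fullres_rgb, fullres_a, blurred_rgb):
    # Pass 5: lerp between sharp and blurred color, weighted by the
    # blur factor stored in the alpha channel in pass 1.
    return tuple(f + fullres_a * (b - f)
                 for f, b in zip(fullres_rgb, blurred_rgb))

print(blur_amount(-10.0, 10.0, 5.0))   # 0.0: exactly in focus
print(blur_amount(-12.0, 10.0, 5.0))   # 0.4: slightly out of focus
print(blur_amount(-30.0, 10.0, 5.0))   # 1.0: fully blurred (clamped)

sharp, blurry = (1.0, 0.0, 0.0), (0.5, 0.5, 0.5)
print(composite(sharp, 0.4, blurry))   # (0.8, 0.2, 0.2)
```
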
Again, you can view it in action on my YouTube Channel, or in a high definition 720p version hosted on my Vimeo page. 😉