Category Archives: Programming

Close to deferring the deferred demo post…

Yeah, I was quite close to postponing the writing of this post until tomorrow, but I won't. πŸ™‚
The title is humorous but clear: my latest demo deals once again with deferred shading.

GLSL_deferred with SSAO, iterative parallax mapping and Depth of Field

This time I made every effort to do things right: there is support for directional, point and spot lights and, much more important for performance, lights are rendered only inside their influence volumes using projective texturing.
This means that while directional lights affect everything and have to be rendered as full-screen quads, point lights are only rendered as cubes and spot lights as pyramids.
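To give an idea of the projective texturing involved, here is a minimal sketch of what a point-light volume fragment shader can look like; all the uniform and texture names are illustrative, not the demo's actual ones.

// sketch of a point-light volume fragment shader (illustrative names)
uniform sampler2D positionMap; // eye-space position G-buffer
uniform sampler2D normalMap;   // eye-space normal G-buffer
uniform vec3 LightPos;
uniform vec3 LightColor;
uniform float Radius;
uniform vec2 ScreenSize;

void main()
{
	// the cube covers the light volume on screen, so the G-buffer
	// is addressed with the fragment's own window coordinates
	vec2 uv = gl_FragCoord.xy / ScreenSize;
	vec3 P = texture2D(positionMap, uv).xyz;
	vec3 N = normalize(texture2D(normalMap, uv).xyz);

	vec3 L = LightPos - P;
	float dist = length(L);
	if (dist > Radius)
		discard; // outside the influence volume

	float atten = 1.0 - dist / Radius;
	gl_FragColor = vec4(LightColor * max(dot(N, normalize(L)), 0.0) * atten, 1.0);
}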

Comparison between disabled and enabled SSAO

This is slightly different from the usual approach, which uses spheres and cones but is also a bit more vertex-heavy. πŸ™‚
Rendering bounding shapes barely affects performance, but it makes the camera-position check a bit different.
The check is disabled by default, as culling front faces and inverting the depth test is sufficient to avoid double-lighting issues and to benefit from some depth optimizations.
Enabling the check exposes my poor approximation, which tests whether the camera is inside a pyramid using a formula meant for a cone, so it is better left disabled; as long as the volume does not intersect the far plane, everything should behave as expected… πŸ˜€
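For reference, an exact point-inside-cone test, the kind of formula I am borrowing for the pyramids, can be written like this (GLSL syntax; apex, axis, height and halfAngle are illustrative parameters, with axis assumed normalized).

// sketch of an exact point-inside-cone test (illustrative parameters)
bool insideCone(vec3 p, vec3 apex, vec3 axis, float height, float halfAngle)
{
	vec3 d = p - apex;
	float h = dot(d, axis); // distance along the cone axis
	if (h < 0.0 || h > height)
		return false;
	// the point is inside if the angle between d and the axis
	// is smaller than the half aperture
	return dot(normalize(d), axis) >= cos(halfAngle);
}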

Comparison between normal mapping and iterative parallax mapping

The demo is also a bigger attempt at integrating effects, and it will serve as a test bed for a more organized, higher-level framework.
As a matter of fact, it also features Screen Space Ambient Occlusion, Iterative Parallax Mapping and Depth of Field.
Very cool stuff, but I already have a list of things that I would like to implement sooner or later, like a Light Pre-Pass, a couple of advanced shadow mapping techniques, the integration of water rendering, HDR illumination and the reconstruction of position from depth.

Normal rendering and Depth of Field blur

I have published the sources and a video (Vimeo | YouTube), which for the first time comes with a nice soundtrack: it's Nova Siberia by Big Giant Circles (Jimmy Hinson) from OverClocked ReMix! πŸ˜‰

Note: I had a hard time with glc, x264, mencoder and ffmpeg, but YouTube still doesn't accept my video together with the sound; for the moment I have uploaded a mute version.

PacStats revamped, enhanced and published

Many months ago I was dealing with OpenGL, shaders and C++, the usual tools of a graphics programmer wannabe. πŸ™‚
This time I'm writing about something completely different: the recent developments of PacStats.

The new PacStats logo

PacStats is a 2007 project that was born as a toy experiment borrowing a lot of GL O.B.S. code: it is a program that analyzes the log of pacman, the ArchLinux package manager, and then plots a bunch of statistical charts about its activity.
Have you ever wondered how many packages have a name that begins with the letter "F", or which packager has contributed the most? With PacStats you can easily answer those questions. πŸ˜‰

But what have been the main changes since 2007?
Well, first of all I have renewed the code as already done for GL O.B.S., so the GUI is now based on GtkBuilder, Matplotlib imports NumPy and the Python print statements have been converted to functions.
The program has also gained a toon-shaded, Blender-made logo, and it can now be installed via distutils through a very simple PKGBUILD that is already on AUR.
While some database tables have been refactored, the parsers made more robust and the relationship between the base and derived chart classes polished, the most important news for end users is the ability to configure the program through the GUI or a text file, the addition of a database information window with a menu item to optimize the database, and a couple of new charts with a toolbar to control them.

The Preferences window


The Database Information window

The original project was hosted for a long time on my personal site, but the effort spent rebooting it suggested publishing it in a more "official" way: as a matter of fact, you can now find it on Google Code.
I have been interested in this open source project hosting platform for a long time, and so far I'm really satisfied with its streamlined yet flexible interface.
I'm also very glad to have adopted Mercurial for source revision control: after being positively struck by the git experience and having advocated distributed systems for quite some time, choosing it over Subversion felt very natural. It really is a nice tool, easier to grasp than git yet capable of most of what its famous contender can do, and, although written in Python, it doesn't suffer much when it comes to performance.
Just like every other piece of free software I have been responsible for, it also has a project page on Ohloh: have a look at it and enjoy the charts! πŸ˜‰

Gimme some (real-time) water!

Computer-generated water has always interested me; since the days of POV-Ray on the Amiga I have been trying to simulate it in some way.
Some days ago, while studying another technique, I put everything aside because in that particular moment I felt the urge to implement a water shader. πŸ˜€

water_crop

I began looking for existing implementations and found Reimer's XNA tutorial, a simple approach which I think could be optimized, but which already produces nice-looking water.

The technique is composed of four passes:

  • Rendering the reflection map
  • Rendering the refraction map
  • Rendering the scene
  • Rendering the water plane, linearly interpolating the two maps with a Fresnel term

One drawback is that the whole scene is rendered three times during the first three passes; I'm sure this procedure could be optimized, but I was too lazy to run any further test. πŸ™‚
On the other hand, having every pass clearly separated helps with debugging and lets the application display each of them one at a time.

The scene is rendered with parallax mapping enabled, more evident than ever thanks to the new brick textures, but with an altered shader that also performs user clip plane clipping, essential for the first two passes.
As for the Fresnel term, I have implemented both a naive version (nothing more than a dot(V, N)) and a better approximation based on Nvidia's Fresnel reflection paper.
You will find both in the source, but only the first one is actually used, as it works better in this demo's scene.
You can easily see that the waves are fake: the water is composed of just two triangles, and the ripple animation is generated by the fragment shader, which perturbs the texture coordinates based on a normal map and a time variable.
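The core of the water fragment shader boils down to a few lines; what follows is a sketch with illustrative names (viewDir is the tangent-space view vector), not a verbatim excerpt from the demo.

// sketch of the water fragment shader core (illustrative names)
uniform sampler2D reflectionMap;
uniform sampler2D refractionMap;
uniform sampler2D normalMap;
uniform float Time;
varying vec3 viewDir;    // tangent-space view vector
varying vec4 projCoords; // clip-space position for the projective lookup

void main()
{
	// fake ripples: scroll the normal map over time
	vec2 ripple = texture2D(normalMap, gl_TexCoord[0].st + vec2(Time * 0.03, 0.0)).rg * 2.0 - 1.0;

	// projective texturing of the two maps, perturbed by the ripples
	vec2 uv = projCoords.xy / projCoords.w * 0.5 + 0.5;
	vec3 reflection = texture2D(reflectionMap, uv + ripple * 0.02).rgb;
	vec3 refraction = texture2D(refractionMap, uv + ripple * 0.02).rgb;

	// naive Fresnel term: grazing angles reflect more
	vec3 N = normalize(vec3(ripple, 1.0));
	float fresnel = clamp(dot(normalize(viewDir), N), 0.0, 1.0);

	gl_FragColor = vec4(mix(reflection, refraction, fresnel), 1.0);
}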

Of course you can have a look at videos on YouTube (GLSL_water, GLSL_water_HD) or Vimeo (GLSL_water, GLSL_water_HD) and download the sources.

High Dynamic Range galore

In January, during my internship, I was researching the field of HDR imaging; today I finally had the time to polish up and release the two demos I made back then.

They both load an RGBE image (the two you see here are courtesy of Paul Debevec's Light Probe Image Gallery) through Bruce Walter's library.

Light probe at different exposure levels (hdr_load1)

The first demo implements the technique described in the article High Dynamic Range Rendering published on GameDev.net and is based on five passes and four FBOs:

  1. Rendering of the floating-point texture in an FBO
  2. Down-sampling in a 1/4 FBO and high-pass filter
  3. Gaussian filter along the X axis
  4. Gaussian filter along the Y axis
  5. Tone-mapping and composition

The algorithm is very simple: it first renders the original scene, then the second pass extracts the bright parts, merely discarding fragments below a specified threshold:

// excerpt from hipass.frag
if (colorMap.r > 1.0 || colorMap.g > 1.0 || colorMap.b > 1.0)
	gl_FragColor = colorMap;
else
	gl_FragColor = vec4(0.0);
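The third and fourth passes blur this mask with a separable Gaussian filter; here is a sketch of the horizontal pass (the vertical one just swaps the offset axis), with an illustrative 5-tap kernel rather than the demo's actual weights.

// sketch of the horizontal Gaussian pass (illustrative weights)
uniform sampler2D colorMap;
uniform float TexelWidth; // 1.0 / render target width

void main()
{
	vec2 uv = gl_TexCoord[0].st;
	vec4 sum = texture2D(colorMap, uv) * 0.2270;
	sum += texture2D(colorMap, uv + vec2(TexelWidth, 0.0)) * 0.3162;
	sum += texture2D(colorMap, uv - vec2(TexelWidth, 0.0)) * 0.3162;
	sum += texture2D(colorMap, uv + vec2(2.0 * TexelWidth, 0.0)) * 0.0702;
	sum += texture2D(colorMap, uv - vec2(2.0 * TexelWidth, 0.0)) * 0.0702;
	gl_FragColor = sum;
}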

Once the mask has been blurred, the last pass mixes it with the first FBO and applies exposure and gamma to achieve the bloom effect.

// excerpt from tonemap.frag
gl_FragColor = colorMap + Factor * (bloomMap - colorMap);
gl_FragColor *= Exposure;
gl_FragColor = pow(gl_FragColor, vec4(Gamma));

Light probe at different exposure levels (hdr_load2)

The second demo implements the technique described in the article High Dynamic Range Rendering in XNA published on Ziggyware and is based on seven passes and more than five FBOs:

  1. Rendering of the floating-point texture in an FBO
  2. Calculating maximum and mean luminance for the entire scene
  3. Bright-pass filter
  4. Gaussian filter along the X axis
  5. Gaussian filter along the Y axis
  6. Tone-mapping
  7. Bloom layer addition

This approach is far more complex than the previous one and is based on converting the scene to its luminance version (defined as Y = 0.299*R + 0.587*G + 0.114*B). The mean and maximum values can be calculated with a special downsampling shader working over multiple passes, each one rendering to an FBO with a smaller resolution than the previous, until the last pass renders the luminance of the entire scene to a 1×1 FBO.
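To give an idea, one step of that reduction might look like the sketch below (illustrative names, not the demo's actual shader); to track the maximum you would take the max() of the four taps instead of their mean.

// sketch of one luminance reduction step (illustrative names)
uniform sampler2D lumMap;
uniform vec2 TexelSize; // 1.0 / source FBO size

void main()
{
	vec2 uv = gl_TexCoord[0].st;
	float sum = texture2D(lumMap, uv + TexelSize * vec2(-0.5, -0.5)).r
	          + texture2D(lumMap, uv + TexelSize * vec2( 0.5, -0.5)).r
	          + texture2D(lumMap, uv + TexelSize * vec2(-0.5,  0.5)).r
	          + texture2D(lumMap, uv + TexelSize * vec2( 0.5,  0.5)).r;
	gl_FragColor = vec4(sum * 0.25); // mean of the 2x2 neighbourhood
}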

As usual you can have a look at YouTube (GLSL_hdrload1, GLSL_hdrload2) or Vimeo (GLSL_hdrload1, GLSL_hdrload2) videos and download the sources.

GL O.B.S.: two years after

You're reading about GL O.B.S. after *exactly* 730 days (I swear it was not done on purpose ;-)): today revision 50 has been committed.
No new features have been added; I've dedicated the last few days to a case study on updating an application after a very long time, a process that mainly involved removing deprecated things and adding support for new ones.

GL O.B.S. GUI is now based on GtkBuilder

Let's read together some lines from the revision log:

  • GUI migrated from libglade to GtkBuilder
    This was, without a doubt, the most time-consuming task: the gtk-builder-convert tool was not very reliable, and I was compelled to recreate the GUI from scratch with Glade. I have also raised the minimum required GTK version to 2.16.
  • using JSON format for storing benchmark information dictionaries
    This is a way to get rid of the deprecated execfile() built-in function: instead of executing a Python script to read a dictionary, the Benchmark class now parses a JSON file with the newly integrated json module.
    Comparing an old and a new file you can see how simple the conversion was; I've only lost the ability to comment out a line. πŸ˜‰
  • importing numpy instead of numerix in matplotlib
    This is another deprecation-related issue: matplotlib is now based upon numpy.
  • matplotlib canvas gets refreshed instead of being destroyed and then recreated
    I don't know why, two years ago, I was destroying and recreating the object instead of just calling the draw() method every time it had to be refreshed. πŸ˜€
  • some more attributes and objects integrated in the GtkBuilder XML file
    I've integrated more attributes into the XML GUI file, like the default size for secondary windows or Paned positions.
    Moreover, thanks to GtkBuilder, even some objects have been integrated, like Adjustment or TextBuffer.
    TreeModel objects should also be configurable inside Glade and integrated in the XML, but I didn't manage to make them work. πŸ™‚

Other remarkable changes related to future Python 3 support and the removal of deprecated features were:

  • all the print statements converted to functions
    Well, we all know the content of PEP 3105… πŸ˜‰
  • using hashlib.md5 instead of md5
    The md5 module has been deprecated in favour of the hashlib one
  • using “key in dict” instead of dict.has_key()
    The has_key() method no longer exists in Python 3
  • using subprocess.Popen instead of os.popen()
    os.popen() has been deprecated in favour of the subprocess module
  • using urllib2.urlopen() instead of urllib.urlopen()
    Another substitution suggested by deprecation, from urllib.urlopen() to urllib2.urlopen()
  • removed "copyright" argument and added "classifiers" in setup.py
    The distutils.core.setup() function loses the "copyright" argument but gains "classifiers". πŸ™‚
  • dropped support for pysqlite2 and old webbrowser module
    Now that the minimum requirement is Python 2.6, there is no need for pysqlite2 or old webbrowser module support
  • I have also removed the deprecated Options class in some SCons scripts.

I'm not sure I will have time to continue development in the future, but this was a nice "reboot" and I have some new ideas on my to-do list. πŸ˜‰

Yet another toon shader

Maybe it's true, as I wrote in the README file, that I coded this demo because I felt like the only one who hadn't yet implemented a toon shader. πŸ™‚
Actually this is not the only reason: the inspiration came to me while I was presenting the first part of my updated Modern GPUs slides at the university; this time the event was organized by some students and advertised with leaflets. πŸ˜‰
So, for the second part, which will be held next Wednesday, I'm planning to include an explanation of this demo's internals.

From untextured to textured with outlines

Getting a basic toon shader working was quick and easy, thanks to the Lighthouse 3D tutorial.
This version uses a cascade of if-then-elses instead of the more usual 1D texture lookup but, judging from the tests I have run, it's not a performance issue, at least on GeForce 8 and newer cards.
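The quantization itself fits in a handful of lines; here is a sketch of the cascade (thresholds and shade levels are illustrative, the demo's values may differ).

// sketch of the toon shading cascade (illustrative thresholds)
uniform sampler2D colorMap;
varying vec3 normal, lightDir;

void main()
{
	float intensity = max(dot(normalize(normal), normalize(lightDir)), 0.0);
	float shade;
	if (intensity > 0.95)
		shade = 1.0;
	else if (intensity > 0.5)
		shade = 0.6;
	else if (intensity > 0.25)
		shade = 0.4;
	else
		shade = 0.2;
	vec4 base = texture2D(colorMap, gl_TexCoord[0].st);
	gl_FragColor = vec4(base.rgb * shade, base.a);
}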

For edge detection I wanted to exploit the fragment shader's capabilities, working in screen space with the Sobel operator and thus staying independent of geometric complexity.

The only problem was about *what* to filter.

  1. The first test was straightforward: I filtered the rendered image, a grey version of the textured and lit MrFixit head, but the results were poor, as edge detection outlined the toon lighting shades too.
  2. In the second one I filtered the depth buffer; I could get rid of the colour-to-grey conversion but, again, the results were not satisfactory: there were no outlines inside the model, just a contour all around it.
    Maybe it could have been corrected by tuning the clip planes per model, but I gave up.
  3. With the third test I filtered the unlit colour texture and the results were better. Unfortunately it relied on the presence of a texture and outlined too many details.
  4. I think the fourth approach, the one used in this demo, is the best.
    I used MRTs to save the eye-space normal buffer during the toon shader pass, then I filtered a grey version of it, outlining the contour plus some other geometric details.

A small note: saving an already grey-converted buffer in the toon shader pass speeds up the demo a bit, but storing the normal in a single 8-bit component of the texture causes a loss of precision that leads to some visible artefacts.
Using a floating-point texture helps with the precision issue but makes the demo too slow.
Maybe I should try using a single-component texture or some kind of RGBA packing algorithm…
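For reference, the screen-space pass boils down to a Sobel filter over the grey-converted normal buffer; here is a sketch with illustrative names (Threshold controls how strong a gradient must be to become an outline).

// sketch of the Sobel outline pass (illustrative names)
uniform sampler2D greyNormalMap;
uniform vec2 TexelSize;
uniform float Threshold;

float grey(vec2 offset)
{
	return texture2D(greyNormalMap, gl_TexCoord[0].st + offset * TexelSize).r;
}

void main()
{
	// sample the 3x3 neighbourhood
	float tl = grey(vec2(-1.0,  1.0));
	float t  = grey(vec2( 0.0,  1.0));
	float tr = grey(vec2( 1.0,  1.0));
	float l  = grey(vec2(-1.0,  0.0));
	float r  = grey(vec2( 1.0,  0.0));
	float bl = grey(vec2(-1.0, -1.0));
	float b  = grey(vec2( 0.0, -1.0));
	float br = grey(vec2( 1.0, -1.0));

	// horizontal and vertical Sobel kernels
	float gx = -tl - 2.0 * l - bl + tr + 2.0 * r + br;
	float gy = -tl - 2.0 * t - tr + bl + 2.0 * b + br;

	// black outline where the gradient magnitude exceeds the threshold
	gl_FragColor = vec4(length(vec2(gx, gy)) > Threshold ? 0.0 : 1.0);
}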

As usual you can have a look at YouTube or Vimeo videos and download the sources.

Converting weights to vertex colors

If you have read my LinkedIn profile lately you should already know that some months have now passed since I started my pre-graduation internship at Raylight (and since I signed my first NDA πŸ˜‰).
The real-time graphics R&D work that I'm doing there for my thesis is enjoyable and stimulating, but that is not the topic of this post…

Raylight logo

Some days ago a 3D artist on the team, Alessandro, asked me for a script that would help him use Blender for one more task in the company's asset creation pipeline: weight painting.
He needed a simple script to convert vertex weights into per-bone vertex colour layers, in order to bake them to per-bone UV maps and later import them into 3D Studio Max.

Weight painting

Vertex painting

At first I didn't even know where to start, or how to extract and match per-vertex weight data with the per-face vertex colour data, but my second try went as smoothly as honey.
The core algorithm is, indeed, very simple:

# Blender 2.4x API: assumes one vertex colour layer already exists for
# (and is named after) each vertex group; the mesh name is hypothetical
import Blender

me = Blender.Mesh.Get('MyMesh')
for f in me.faces:
    for i, v in enumerate(f.v):
        infs = me.getVertexInfluences(v.index)  # [(group, weight), ...]
        for vgroup, w in infs:
            me.activeColorLayer = vgroup  # select the per-bone layer
            col = f.col[i]
            col.r = col.g = col.b = int(255 * w)

Well, he has not yet taken advantage of it, nor do I know if he ever will; nevertheless the script works and I have shared it on my site, as usual. πŸ˜‰

Habemus OpenGL 3.0!

We had been waiting for OpenGL 3.0 for ages, and we were all very excited about the wonderful features the ARB was promising; then, on the 11th of August 2008, the specs were released…
I, like everyone else, was really disappointed: the Architecture Review Board was not only really late on schedule, but it didn't keep its word about many key features that should have been introduced with this release.
Nevertheless I'm still confident in the future, when older API functions will actually be removed and not simply tagged as deprecated, as in the current version.

OpenGL3 Logo

Now, after this long but necessary introduction, let's talk about what matters: today the ArchLinux team moved the new stable 180.22 driver release from the [testing] to the [extra] repository.
Well, apart from the equally important CUDA 2.1 and VDPAU support, this release comes with OpenGL 3.0 and GLSL 1.30 support, so let's have a look at how to create an OpenGL 3.0 context.

First of all, it seems there is no way to open the new context without getting your hands dirty, that is, talking directly to GLX.
What follows is a snippet that creates an OpenGL 3.0 context integrated with SDL 1.2, which still doesn't support it natively.

To begin with, you need some new defines.

#define GLX_CONTEXT_DEBUG_BIT_ARB              0x00000001
#define GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB 0x00000002
#define GLX_CONTEXT_MAJOR_VERSION_ARB          0x2091
#define GLX_CONTEXT_MINOR_VERSION_ARB          0x2092
#define GLX_CONTEXT_FLAGS_ARB                  0x2094

You also need to retrieve the address of the following new GLX function.

typedef GLXContext ( * PFNGLXCREATECONTEXTATTRIBSARBPROC) 
	(Display *dpy, GLXFBConfig config, GLXContext share_context, Bool direct, const int *attrib_list);
PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB = 
	(PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddress((GLubyte*)"glXCreateContextAttribsARB");

Then you have to define a bunch of GLX-related variables.

Display *dpy;
GLXDrawable draw, read;
GLXContext ctx, ctx3;
GLXFBConfig *cfg;
int nelements;
int attribs[]= {
	GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
	GLX_CONTEXT_MINOR_VERSION_ARB, 0,
	GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
	0
};

At last, after having called SDL_SetVideoMode(), create a new context, make it current and destroy the old one.

ctx = glXGetCurrentContext();
dpy = glXGetCurrentDisplay();
draw = glXGetCurrentDrawable();
read = glXGetCurrentReadDrawable();
cfg = glXGetFBConfigs(dpy, 0, &nelements); /* just take the first returned FBConfig */
ctx3 = glXCreateContextAttribsARB(dpy, *cfg, 0, 1, attribs); /* no sharing, direct rendering */
glXMakeContextCurrent(dpy, draw, read, ctx3);
glXDestroyContext(dpy, ctx);

Don't forget to add some querying code, just to be sure the whole process worked. πŸ˜€

const GLubyte* string;

string = glGetString(GL_VENDOR);
printf("Vendor: %s\n", string);
string = glGetString(GL_RENDERER);
printf("Renderer: %s\n", string);
string = glGetString(GL_VERSION);
printf("OpenGL Version: %s\n", string);
string = glGetString(GL_SHADING_LANGUAGE_VERSION);
printf("GLSL Version: %s\n\n", string);

On my workstation I get this:
Vendor: NVIDIA Corporation
Renderer: GeForce 8600 GT/PCI/SSE2
OpenGL Version: 3.0 NVIDIA 180.22
GLSL Version: 1.30 NVIDIA via Cg compiler

If you also want to retrieve the extension list, a new function can help simplify the process.

typedef const GLubyte * (APIENTRYP PFNGLGETSTRINGIPROC) (GLenum name, GLuint index);
PFNGLGETSTRINGIPROC glGetStringi = (PFNGLGETSTRINGIPROC)glXGetProcAddress((GLubyte*)"glGetStringi");

You can then use it like this:

GLint numExtensions = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);

printf("Extension list: \n");
for (int i = 0; i < numExtensions; ++i) {
	printf("%s ", glGetStringi(GL_EXTENSIONS, i));
}
printf("\n");

This new OpenGL version seems to perform just like the old one at the moment: drivers do not honour the GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB attribute, which means everything is still in place, backward compatible and unoptimized. πŸ™

Composing renders in a strip

First of all, happy new year to everyone; now let's talk about this post's topic… πŸ™‚

These days I was relaxing and practicing subdivision modelling; after a long time away from Blender I was back to the dream of creating a convincing human head model, but my programming side won the day. πŸ˜€
While I was studying in detail the topology of some key parts of the face from here, I noticed the PiP-like composite images attached to the first post…

Showing camera keyframes

Last night I was thinking of a way to automate the process, and today it has become reality in the form of a Blender Python script: it produces an image composed of multiple rendered frames; think of a daily comic strip and you'll understand the name ;).

The user can select which frames to render by specifying a string like the following: "1-3, 5, 7, 9-11".
It is also possible, of course, to choose the size of a single frame and the dimensions of the composite image table, i.e. how many rows and columns it should have.
Have a look at how well my topology study renders fit the script's purpose. πŸ˜‰

The resulting composed image

This second script is a bit more complex than my first one, making use of the Registry module to load and save options and the Draw.PupBlock() method to display a bigger GUI.

Of course it is released under the GNU GPL and available online; download it from here.

Blurring the parallax

Today I have published the first demo making use of my new C++ class library, which I designed to be easily ported to a strict GL3 profile or to OpenGL ES 2.0.

From plain rendering to depth of field

As a matter of fact, it doesn't make use of the fixed pipeline or deprecated functions at all:

  • No immediate mode, only VBOs
  • No use of OpenGL matrix stacks, I have my classes handling transformations and passing matrices to shaders directly
  • No OpenGL lighting, only per-fragment lighting in shaders
  • No quads or polygons, just triangles

Normal versus parallax mapping

I couldn't release something only to show changes "under the hood"; I had to make something cool, so I decided to mix together parallax mapping (which, as you can see in the screenshot, is a lot more pronounced now) and depth of field, with the small addition of Stanford PLY mesh loading. πŸ˜€

The Mr.Fixit model and maps (the character players portray in Sauerbraten) are courtesy of John Siar; thank you, John. πŸ˜‰

As usual, you can have a look at the Vimeo videos (640×480, 1280×720) and download the sources.