Tag Archives: Blender

Converting weights to vertex colors

If you have read my LinkedIn profile lately you already know that some months have passed since I started my pre-graduation internship at Raylight (and since I signed my first NDA 😉 ).
The real-time graphics R&D work I'm doing there for my thesis is enjoyable and stimulating, but that is not the topic of this post…

Raylight logo

A few days ago a 3D artist on the team, Alessandro, asked me for a script that would help him use Blender for one more task in the company's asset-creation pipeline: weight painting.
He needed a simple script to convert vertex weights into per-bone vertex color layers, in order to bake them to per-bone UV maps and later import them into 3ds Max.

Weight painting

Vertex painting

At first I didn't even know where to start, or how to extract per-vertex weight data and match it with per-face vertex color data, but my second attempt went as smoothly as honey.
The core algorithm is, indeed, very simple:

for f in faces:
    for i, v in enumerate(f):
        # per-vertex bone influences as (vertex_group_name, weight) pairs
        infs = me.getVertexInfluences(v.index)
        for vgroup, w in infs:
            # activate the color layer named after the bone's vertex group
            me.activeColorLayer = vgroup
            col = f.col[i]
            # encode the weight as an 8-bit grayscale value
            col.r = col.g = col.b = int(255 * w)
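Since the baked colors are meant to be re-imported as weights later, it is worth noting that `int(255*w)` is just an 8-bit quantization, so each weight survives the round trip with an error of less than one step (1/255). A standalone sketch of the encode/decode pair, in plain Python with no Blender required (the function names are mine):

```python
def weight_to_color(w):
    """Quantize a weight in [0.0, 1.0] to an 8-bit grayscale value."""
    return int(255 * w)

def color_to_weight(c):
    """Recover an approximate weight from an 8-bit grayscale value."""
    return c / 255.0

# Round trip: truncation loses at most one quantization step (1/255).
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    c = weight_to_color(w)
    assert abs(color_to_weight(c) - w) <= 1 / 255
```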

Well, he has not yet taken advantage of it, nor do I know if he ever will; nevertheless the script works and I have shared it on my site, as usual. 😉

Composing renders in a strip

First of all, happy new year to everyone; now let's talk about this post's topic… 🙂

These days I was relaxing and practicing subdivision modeling; after a long time away from Blender I was back to the dream of creating a convincing human head model, but my programming side won the day. 😀
While I was studying in detail the topology of some key parts of the face from here, I noticed the PiP-like composite images attached to the first post…

Showing camera keyframes

Last night I was thinking of a way to automate the process, and today it became reality in the form of a Blender Python script: it can produce an image composed of multiple rendered frames; think of a daily comic strip and you'll understand the name ;).

The user can select which frames to render by specifying a string like the following one: "1-3, 5, 7, 9-11".
Moreover it is possible, of course, to choose the size of a single frame and the dimensions of the composed image table, i.e. how many rows and columns it should have.
Have a look at how well my topology study renders fit the script's purpose. 😉
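A parser for that frame-selection syntax, plus the row/column arithmetic for placing each frame in the composed grid, can be sketched like this (plain Python; the names are my own, not necessarily the ones the script uses):

```python
def parse_frames(spec):
    """Expand a string like "1-3, 5, 7, 9-11" into a sorted list of frames."""
    frames = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            frames.update(range(int(lo), int(hi) + 1))
        else:
            frames.add(int(part))
    return sorted(frames)

def cell_position(index, columns, frame_w, frame_h):
    """Pixel offset of the index-th frame in a grid with `columns` columns."""
    row, col = divmod(index, columns)
    return col * frame_w, row * frame_h

print(parse_frames("1-3, 5, 7, 9-11"))  # [1, 2, 3, 5, 7, 9, 10, 11]
```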

The resulting composed image

This second script is a bit more complex than my first one, making use of the Registry module to load and save options and the Draw.PupBlock() method to display a bigger GUI.

Of course it is released under the GNU GPL and available online; download it from here.

Automatic parallax map generation with Blender

It has been a long time since I last wrote something here; in the meantime two things worth mentioning have happened: first of all, I'm really close to graduation!
Well, actually I still need to pass one last exam and complete an internship of at least four months; nothing is certain yet, but I'm in close contact with a game development company… 😉

The second thing is closely related to this post: a couple of months ago I began converting my OpenGL demos to object-oriented C++, following M3xican's advice.
What I have now is really not much; nevertheless my class library can load a Stanford PLY model, it is ES 2.0 compliant (which means it will be easy to port to "pure" OpenGL 3.x), and it can already display both parallax mapping and depth of field!

I'm not going to publish any screenshots for now because I think it's not time yet; what I'm showing you instead is a simple script, my first one, which I wrote last night using the Blender Python API.
What it does is really simple yet time-saving: you select a high-poly and a low-poly model, run the script from the Object → Scripts menu, and watch Blender bake your normal and height maps and then save them.

Blender parallax maps

I have also set up a simple compositing node configuration to mix the two images into a single parallax map, with the height data encoded in the alpha channel.
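Per pixel, what the node setup amounts to is trivial: keep the normal map's RGB and substitute the height map's grayscale value as the alpha channel. A minimal sketch of that packing, using plain pixel tuples instead of Blender's compositor (hypothetical helper, not the actual node graph):

```python
def pack_parallax(normal_rgb, height_gray):
    """Combine normal-map RGB pixels and height-map grayscale pixels into
    RGBA pixels, with the height stored in the alpha channel."""
    return [(r, g, b, h) for (r, g, b), h in zip(normal_rgb, height_gray)]

normals = [(128, 128, 255), (120, 135, 250)]  # tangent-space normal pixels
heights = [0, 200]                            # 8-bit height values
print(pack_parallax(normals, heights))  # [(128, 128, 255, 0), (120, 135, 250, 200)]
```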

Blender parallax maps nodes

You can download the script from here.
Everything is very simple (and fun!) with the astonishing power of Blender! 🙂

Blender 2.43

Today is a memorable day for Blender!
First of all blender.org got a massive redesign and it’s now really cool and professional. For instance, have a look at the new features page, I think it’s a perfect showcase for our beloved program.

Moreover Blender 2.43 was released!

Blender 2.43 Splash

As usual the release log is full of appealing new functionalities that I’m eager to try.
Some of them can be previewed in feature videos or tested using the blend files which come inside the test243.zip archive.

I would like to express my congratulations to the whole Blender community.

Mighty SysAdmins

It has been some time now since I first used university servers for Blender or Yafray renderings. 🙂

I have an account on these ones:

  • server1: 2 x Xeon 2.8GHz and 3GB RAM
  • server2: 4 x Xeon 3.2GHz and 4GB RAM
  • server3: 4 x Xeon 3.4GHz and 4GB RAM

The first one is accessible from the outside; from it you can log in to the other two, which serve our computer labs.

Servers Room

But just as my appetite was whetted I faced a serious problem: on long renderings the renderer process got killed unexpectedly.
The first thing that came to my mind was some kind of CPU limitation, but both /etc/limits and /etc/security/limits.conf had all their lines commented out, and no relevant PAM module appeared to be in use.

Today I discovered that it is a lot simpler to just run ulimit -t instead of inspecting system configuration files. 🙂
That way I found out that there is a 1s CPU-time limit on server2 (which, as a matter of fact, made it quite unusable for anything serious), 1000s on server3, and no limit on server1 (but it runs network daemons, so it is much more closely watched than the other two). 🙁
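The same limit can also be queried from inside a script with Python's standard resource module (Unix only), which is handy for a sanity check before kicking off a long render:

```python
import resource

# RLIMIT_CPU is the maximum CPU time in seconds;
# RLIM_INFINITY means no limit is set.
soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
if soft == resource.RLIM_INFINITY:
    print("no CPU-time limit")
else:
    print("CPU-time limit: %d s" % soft)
```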

I feel defeated by the mighty power of the root user who, like a demiurge, can decide everything and enforce his will with unbreakable rules. 🙁
As if the CPU limitation were not enough, today I also spotted one of the admins encoding an MPEG file of "Young Frankenstein" to XviD with MEncoder!
Why can he use a university server for a task as personal as encoding a movie (forgetting for a moment about any legal issues…) while I, a student who pays for his education, cannot use it for something as noble as learning computer graphics?

While overwhelmed by these thoughts I had a nice idea: render with Yafray's regions option and compose the pieces with ImageMagick:

  • nice ./yafray -c 4 -r -1:1:-1:0 /tmp/YBtest.xml
  • mv /tmp/YBtest.tga /tmp/YBtest1.tga
  • nice ./yafray -c 4 -r -1:1:0:1 /tmp/YBtest.xml
  • mv /tmp/YBtest.tga /tmp/YBtest2.tga
  • composite -compose difference /tmp/YBtest1.tga /tmp/YBtest2.tga YBtest.png

Actually -compose difference just ignores black regions, which can lead to wrong results; I should look in the composite manual for a compose operator that works with image coordinates.

I'm too lazy to do it, but I'm sure the region subdivision could be automated with a recursive script that splits every region that cannot be rendered in less than 1000s…
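A sketch of that idea: recursively halve a Yafray region (in its -1..1 normalized coordinates) until an estimated render time fits under the budget. The cost model and the 1000 s budget here are placeholders; a real script would run yafray -r xmin:xmax:ymin:ymax on each resulting region and estimate the cost from a quick low-quality pass:

```python
def split_region(region, estimate, budget, depth=0, max_depth=8):
    """Return a list of (xmin, xmax, ymin, ymax) regions, each estimated
    to render within `budget` seconds. Splits alternately in x and y."""
    if estimate(region) <= budget or depth >= max_depth:
        return [region]
    xmin, xmax, ymin, ymax = region
    if depth % 2 == 0:  # split vertically
        mid = (xmin + xmax) / 2.0
        halves = [(xmin, mid, ymin, ymax), (mid, xmax, ymin, ymax)]
    else:               # split horizontally
        mid = (ymin + ymax) / 2.0
        halves = [(xmin, xmax, ymin, mid), (xmin, xmax, mid, ymax)]
    out = []
    for h in halves:
        out.extend(split_region(h, estimate, budget, depth + 1, max_depth))
    return out

# Toy cost model: render time proportional to region area (full frame = 4000 s).
cost = lambda r: 1000.0 * (r[1] - r[0]) * (r[3] - r[2])
regions = split_region((-1.0, 1.0, -1.0, 1.0), cost, budget=1000.0)
print(len(regions))  # a 4000 s frame ends up split into 4 quadrant-sized pieces
```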
But anyway, mission accomplished! ๐Ÿ˜‰