Why I would have liked to buy a PS3

When I first read about the PS3 I was so astonished by its specs that I immediately fell in love with it.
When Unreal Engine 3.0 was first presented at E3 2005, Tim Sweeney suggested a GeForce 6800 Ultra just to get a decent framerate, and that was the top-of-the-line hardware of the time!
The RSX GPU inside the PlayStation 3 was based on nVidia's new G70 architecture and had more power than anything on the consumer market in 2005.

PlayStation 3

I was also intrigued by the Cell, a truly innovative CPU whose strongly multi-threaded design, with a PPE controlling seven SPUs, promised excellent parallel performance.
A hard disk with Linux pre-installed, plus the fact that it would use OpenGL ES, completed the fantastic image I had of the PS3: the perfect geek solution for high-powered computing (and awesome programming possibilities) at a reasonable price.

While not just I but the entire world was dreaming about the new Sony console, Microsoft was working hard, and by the end of that same year the Xbox 360 had become reality.
OK, it had no Blu-ray (which Sony claims is a must-have for next-gen games…), it had a more conventional (still great!) dual-threaded tri-core PowerPC Xenon, and instead of a “Reality Synthesizer” it had the more modestly named Xenos GPU.
The Xenos is actually modest only in name: its unified shader architecture was ahead of its time, something nVidia adopted only with the GeForce 8 (and it is not a DirectX 10 requirement, as one might believe); the same could be said of its embedded DRAM, a solution rarely seen before.

What is really irritating, at least from my point of view, is the usual Microsoft way of doing things: they planned a thousand ways to raise a fence around their console. They absolutely don't want you to run anything other than their kernel, and they enforce this concept with every possible hardware measure, like hardwiring a different encryption key into every Xenon CPU produced!
I should also admit that Microsoft (like Sony) gives you some kind of support for homebrew games but, being mainly a software company, it again forces you to use only its software: you cannot program directly on your console, running Linux, Vim and GCC as you could on the PlayStation 3; you must use XNA Game Studio Express (which can now be downloaded for free) on a PC, inside its Visual Studio environment…

We have established that the Xbox 360 is a great machine, but not a viable solution for a geek seeking a flexible and powerful programming platform, at least until someone figures out how to crack it and install Linux, as was done on the first Xbox (though this time I think it will be much harder).
And so we come to the PlayStation 3. It seems perfect: it has a parallel CPU that makes you learn and stress multi-threaded programming (and not only that, since the Cell is so innovative that it demands a whole new approach), a powerful high-end GPU, and a price which is undeniably high but which, as time passes, can only drop.

I thought: “Great, now I know what I will buy next: a PlayStation 3!”. I was also considering graduating with a thesis on numerical analysis performed on RSX shaders; it would have been so cool and geeky. 😀
The only problem is that we're not in 2005 anymore! We are in 2007: what was the future is now the present (like Unreal Engine 3, like Gears of War, which runs only on the Xbox 360).
Now, I know that the PS3 is also the present, that it is real, but it is not yet distributed in Europe, not yet produced in high volumes and, worst of all, it has been on the market for only a couple of months and sells at a high price that won't decrease for a very long time!

I was feeling abandoned and betrayed, orphaned of what I thought would be my future rendering and development platform, but then, little by little, PCs won back my confidence. 🙂
Now I think that a Core 2 Quad Q6400 and a GeForce 8600 Ultra (whose specs were rumored this very day) can't be that bad. I'm eager to see them in shops, and this time we can be sure they won't be postponed for a year. 😉

Code Freeze

The semester is over, and it's time for university examinations.
This means that all my projects are frozen until the end of February.
Anyway, let’s take a look at what will come next.

Freeze or Stop

Mars. M3xican is studying too (we attend the same university, after all), so he has less time to dedicate to Mars; nevertheless, he's currently writing design documents for the game. In March we will get back to work on the gameplay side!

GL O.B.S. The plan is to release version 0.2 as soon as possible; I still need to adapt the submission code to the new site and DB interface developed by cowa. An RC version may also appear before the final release.

PyLOT. This is a project I started working on in December: a prototype 3D engine written in Python. At the time of writing it has more shortcomings than features, and it's little more than a port of an old C++ project plus code and ideas from some OpenGL demos I wrote, but in the future M3xican will eventually spend some time designing a game for it (we already have a bunch of ideas :)).

Mighty SysAdmins

It has been some time now since I first started using the university servers for Blender and Yafray renderings. 🙂

I have an account on these machines:

  • server1: 2 x Xeon 2.8GHz and 3GB RAM
  • server2: 4 x Xeon 3.2GHz and 4GB RAM
  • server3: 4 x Xeon 3.4GHz and 4GB RAM

The first one is accessible from the outside; from there you can log into the other two, which serve our computer labs.
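In practice that means a two-hop login, something like this (the hostname and the username here are made up for the example):

  • ssh encelo@server1.myuni.example — the only machine reachable from the Internet
  • ssh server2 — then hop to one of the lab servers from inside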

Server Room

But as soon as I had whetted my appetite I ran into a serious problem: on long renderings, the renderer process was killed unexpectedly.
The first thing that came to mind was some kind of CPU limitation, but both /etc/limits and /etc/security/limits.conf had all their lines commented out, and no PAM limits module appeared to be in use.

Today I discovered that it's a lot simpler to just run ulimit -t instead of inspecting system configuration files. 🙂
This way I found out that there is a 1s limit on server2 (which, as a matter of fact, made it quite unusable for anything serious), a 1000s limit on server3, and no limit on server1 (but that one runs network daemons, so it's much more closely watched than the other two). 🙁
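
For reference, the check is a one-liner in the shell (the output below is what server3 would print; -a dumps every limit, not just CPU time):

$ ulimit -t    # per-process CPU time limit, in seconds
1000
$ ulimit -a    # the full list of resource limits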

I feel defeated by the mighty power of the root user who, like a demiurge, can decide everything and enforce his will with unbreakable rules. 🙁
As if the CPU limitation were not enough, today I also spotted one of the roots encoding an MPEG file of “Young Frankenstein” to XviD with mencoder!
Why can he use a university server for a task as personal as encoding a movie (let's forget for a moment about any legal issue…) while I, a student who pays for his education, cannot use it for something as noble as learning computer graphics?

While overwhelmed by these thoughts I had a nice idea: render with Yafray's regions option and compose the pieces with ImageMagick:

  • nice ./yafray -c 4 -r -1:1:-1:0 /tmp/YBtest.xml
  • mv /tmp/YBtest.tga /tmp/YBtest1.tga
  • nice ./yafray -c 4 -r -1:1:0:1 /tmp/YBtest.xml
  • mv /tmp/YBtest.tga /tmp/YBtest2.tga
  • composite -compose difference /tmp/YBtest1.tga /tmp/YBtest2.tga YBtest.png

Actually, -compose difference works here only because the unrendered part of each image is pure black: the operator computes the per-pixel absolute difference, so wherever both images contained rendered content the result would be wrong. I should look in the composite manual for an operator that pastes at explicit image coordinates; alternatively, cropping and stacking the pieces, as in the sketch below, would sidestep the problem.
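
Something along these lines should do it, assuming a 640x480 render where YBtest1.tga holds the lower half of the frame and YBtest2.tga the upper one (the resolution and crop offsets are assumptions, not values from the actual scene):

  • convert /tmp/YBtest2.tga -crop 640x240+0+0 +repage /tmp/top.tga
  • convert /tmp/YBtest1.tga -crop 640x240+0+240 +repage /tmp/bottom.tga
  • convert /tmp/top.tga /tmp/bottom.tga -append YBtest.png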

I'm too lazy to do it, but I'm sure the task of region subdivision could be automated with a recursive script that splits every region that cannot be rendered in under 1000s; a rough sketch follows.
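
Just to sketch the idea (a hypothetical, untested outline: it assumes yafray exits with status 0 on success, so that a process killed by the CPU limit reports a failure to the shell, and it leaves the final composition of the pieces to be done by hand):

#!/bin/bash
# Render the region x0:x1:y0:y1; if the renderer gets killed by the
# CPU time limit, split the region in two along y and recurse.
render()
{
    local x0=$1 x1=$2 y0=$3 y1=$4 out=$5

    if nice ./yafray -c 4 -r $x0:$x1:$y0:$y1 /tmp/YBtest.xml; then
        mv /tmp/YBtest.tga "$out"
    else
        local ym=$(awk "BEGIN { print ($y0 + $y1) / 2 }")
        render $x0 $x1 $y0 $ym "${out%.tga}a.tga"
        render $x0 $x1 $ym $y1 "${out%.tga}b.tga"
    fi
}

render -1 1 -1 1 /tmp/YBpiece.tga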
But anyway, mission accomplished! 😉

Mars r594 and the vflip hack

For my first entry on this blog, let me tell you a tale about the OpenGL framebuffer and vertical flipping…

Mars 0.1.1 1st

Once upon a time a little fool called Encelo used to perform, in a little testing program, a vertical flip of the entire OpenGL framebuffer this way:

if(_flags & SDL_OPENGL)
{
  GLvoid * pixels;

  // one RGBA byte quadruplet per pixel
  pixels = (GLvoid *) malloc(_width * _height * 4);

  glPushAttrib(GL_COLOR_BUFFER_BIT | GL_CURRENT_BIT | GL_PIXEL_MODE_BIT);
  // grab the frame from the front buffer...
  glReadBuffer(GL_FRONT);
  glReadPixels(0, 0, _width, _height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
  // ...draw it upside down into the back buffer, thanks to the negative
  // vertical zoom factor and the raster position at the top-left corner...
  glDrawBuffer(GL_BACK);
  glRasterPos2f(-1.0f, 1.0f);
  glPixelZoom(1.0f, -1.0f);
  glDrawPixels(_width, _height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
  // ...then read the flipped frame back from the back buffer
  glReadBuffer(GL_BACK);
  glReadPixels(0, 0, _width, _height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
  glPopAttrib();

  output_surf = SDL_CreateRGBSurfaceFrom(pixels, _width, _height, 32, _surface->pitch, rmask, gmask, bmask, amask);
}

It wasn't really bad: it performed some interesting tricks with buffers and, as a matter of fact, Encelo was really proud of this implementation. 🙂
But… it didn't work in Mars. Yes, no matter how much Encelo tested, changed, and tested again, it simply didn't work in anything other than the original testing program.
A decision had to be taken soon: to persevere or not to persevere? That was the question.

Encelo chose not to persevere and to try a completely different approach… memcpy() flipping! 😀
Yeah, something as simple, elegant, fast and smart as this:

if(_flags & SDL_OPENGL)
{
  int row, stride;
  GLubyte * swapline;
  GLubyte * pixels;

  stride = _width * 4; // length of a line in bytes
  pixels = (GLubyte *) malloc(stride * _height);
  swapline = (GLubyte *) malloc(stride);

  glReadPixels(0, 0, _width, _height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

  // vertical flip: swap row i with row (_height - i - 1)
  for(row = 0; row < _height/2; row++)
  {
    memcpy(swapline, pixels + row * stride, stride);
    memcpy(pixels + row * stride, pixels + (_height - row - 1) * stride, stride);
    memcpy(pixels + (_height - row - 1) * stride, swapline, stride);
  }
  free(swapline);

  // note: SDL_CreateRGBSurfaceFrom() does not copy the pixel data,
  // so "pixels" must stay alive as long as the surface does
  output_surf = SDL_CreateRGBSurfaceFrom(pixels, _width, _height, 32, _surface->pitch, rmask, gmask, bmask, amask);
}

This story is true, and it happened exactly a month ago, on the 30th of November 2006.
As proof, have a look at the r594 log and at the Screen.cpp changes. 😉