"Pixar Quality Graphics" is 720 Gflops

For at least 10 years GPU vendors have been talking about "Pixar Quality" graphics. But what does that mean? Well, according to this lecture on The Design of RenderMan, the original goals for the REYES architecture were:
  • 3000 x 1667 pixels (5 MP)
  • 80M micropolygons (each 1/4 of a pixel in size, depth complexity of 4)
  • 16 samples per pixel
  • 150K geometric primitives
  • 300 shading flops per micropolygon
  • 6 textures per primitive
  • 100 1MB textures
That's a shading rate of 80M × 300 × 30 Hz = 720 Gflops. (They were probably more concerned with film's 24 Hz, but for games 30 Hz is the better target.)
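As a sanity check, here's that back-of-the-envelope arithmetic as a tiny Python sketch (the inputs are the REYES goals listed above; 30 Hz is the game frame rate rather than film's 24 Hz):

```python
# Shading rate implied by the REYES goals above.
micropolygons_per_frame = 80e6   # 80M micropolygons per frame
flops_per_micropolygon = 300     # 300 shading flops each
frame_rate_hz = 30               # games target; film would use 24

shading_gflops = micropolygons_per_frame * flops_per_micropolygon * frame_rate_hz / 1e9
print(f"{shading_gflops:.0f} Gflops")  # -> 720 Gflops
```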

In comparison, I think the peak shader flops of high-end 2008-era video cards are in the 1 Tflop range. (The Xbox 360's Xenos is 240 Gflops; the PS3 is a bit less.) Now, GPU vendors typically count a multiply-accumulate as two flops, since that doubles the headline number. So it's more realistic to say that 2008-era video cards are in the 500 Gflop range. So we really are entering the era of "Pixar Quality" graphics.
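A minimal sketch of that adjustment (assuming every quoted flop pair is one multiply-accumulate):

```python
# Vendors count a multiply-accumulate as 2 flops, so halve the
# quoted peak to get a rougher, more realistic number.
quoted_peak_gflops = 1000        # ~1 Tflop, vendor-quoted, high-end 2008 GPU
realistic_gflops = quoted_peak_gflops / 2
print(f"~{realistic_gflops:.0f} Gflops")  # -> ~500 Gflops, vs. the 720 Gflops target
```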

Comments

Ho Ho said…
Riiight. If we don't consider the efficiency of batching, memory fetches, shader changes, and the metric ton of other things that need to be done to get an image to the screen, then that really simple calculation holds true.

Unfortunately, in the real world we can't just calculate performance with simple formulas. I'd say we are about 2-3 orders of magnitude short of true Pixar-quality images.

If I remember correctly, REYES spent around 2-4 hours per frame on an average 2P rendering node a few years ago. At that time, high-end 2P machines had around 20 Gflops of theoretical peak computing power. That means around 150-300k Gflop of work could have been performed per frame, and that was around 8 years ago; today's Pixar movies take several times more computing power. I'd say current high-end 2P nodes should have around 100-150 Gflops of peak throughput, but rendering takes just as long, if not longer, than back then.

Also, that 1 Tflop figure for GPU performance is vastly exaggerated. You'll be lucky to get more than 20% of it in any real-world scenario. In GPGPU you might get a bit more; in games, less.
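Ho Ho's arithmetic does multiply out; as a quick sketch, taking the commenter's estimates at face value:

```python
# Total work a 20 Gflops (peak) 2P node could do over a 2-4 hour frame.
peak_gflops = 20                               # commenter's estimate, ~8 years prior
for hours in (2, 4):
    total_gflop = peak_gflops * hours * 3600   # Gflops * seconds = Gflop of work
    print(f"{hours} h -> {total_gflop:,.0f} Gflop")  # -> 144,000 / 288,000, i.e. ~150-300k
```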
