An impending revolution in image rendering
By Ajith Ram | November 1, 2016
- GPU-accelerated rendering promises great savings in movie production costs
- Industries like product design and architectural visualisation are also impacted
Rendering the individual frames of an animated movie like Toy Story or Frozen is one of the most processor-intensive tasks in the computing industry. For decades, ever since the release of Pixar's seminal Luxo Jr. short in 1986, Hollywood animated movies and the special effects in live-action movies have been rendered on CPUs.
Typically, rendering a single frame takes several hours, sometimes even days. Although CPU speeds have increased dramatically since 1986, rendering times have remained roughly the same because studios keep increasing the complexity of their effects.
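To put that in perspective, here is a back-of-the-envelope calculation; the per-frame render time and farm size below are assumptions for illustration, not figures from the article.

```python
# Back-of-the-envelope render-farm arithmetic (all numbers assumed).
frames = 90 * 60 * 24          # a 90-minute feature at 24 frames per second
hours_per_frame = 8            # assumed average CPU render time per frame
farm_nodes = 2000              # assumed number of render-farm nodes

total_cpu_hours = frames * hours_per_frame
wall_clock_days = total_cpu_hours / farm_nodes / 24

print(f"{frames:,} frames -> {total_cpu_hours:,} CPU-hours")
print(f"~{wall_clock_days:.0f} days of wall-clock time on {farm_nodes:,} nodes")
```

Even on an assumed 2,000-node farm, a full feature works out to roughly three weeks of around-the-clock rendering, which is why studios care so much about per-frame times.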
In recent years, however, this arduous process of rendering individual frames on CPUs has been getting some help. Thanks to the introduction of highly programmable and powerful GPUs, what used to take hours can now be done in real time, just as in video games.
This is nothing short of a revolution. An animated movie that once required hundreds of servers and thousands of CPUs can now be finished with a fraction of that hardware, yielding massive savings in time and money.
The impact of GPU acceleration is felt well beyond the movie business: product design, vehicle manufacturing and architectural visualisation are also feeling the GPU winds of change.
One of the software solutions that enable GPU rendering is Nvidia's Iray; others include V-Ray RT, OctaneRender and Redshift.
DNA interviews Phillip Miller, Nvidia's Director of Product Management, about Iray and GPU rendering. He is a 17-year veteran of the professional software tools industry and joined Nvidia in 2008.
Previously, Phillip managed entertainment products at Autodesk, web products at Adobe and business development at Mental Images. He holds a Master of Architecture from the University of Illinois, is a registered architect, and has co-authored multiple books on computer modeling and animation.
DNA: Please give our readers a brief history and overview of Iray.
After adding physically based rendering (PBR) to mental ray (for Maya 2007), it was clear that the “ease-of-use” possible with PBR was not easily achieved in a renderer with so many legacy ways of being non-physical.
Work then started on a purely PBR renderer to truly deliver “camera-like” simplicity, one that was also designed from the beginning for interactivity, scalability, and remote access – traits not historically found in renderers. The rendering approach chosen for Iray was path tracing, which delivers accuracy without burdening the user with complexity.
It is also progressive – meaning you can see the overall image improve as it “resolves” and finishes. When this is combined with interactivity, it allows for quick decision making as you often get enough information right after making an edit. The quicker the feedback, the quicker the understanding of your choices, which also makes it far easier to learn and faster to use.
The downside of path tracing is that it is computationally expensive. That’s where the parallel processing power of GPUs makes the rendering times practical. We then looked to make Iray even more interactive by providing two additional rendering modes: a much faster, approximate ray tracing mode (called Iray Interactive) and a very fast OpenGL mode (called Iray Realtime).
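A minimal sketch of the progressive behaviour Miller describes, in Python rather than anything from Iray: each pass adds one noisy but unbiased sample per pixel, the running average is the image shown to the user, and the error shrinks roughly as 1/sqrt(passes) – the usual Monte Carlo convergence behind path tracing.

```python
# Illustrative only: a stand-in for progressive Monte Carlo rendering,
# not Iray code. The "truth" image plays the role of the converged render.
import numpy as np

rng = np.random.default_rng(0)
WIDTH, HEIGHT = 64, 64

# Stand-in for the fully converged image: a smooth gradient.
truth = np.outer(np.linspace(0, 1, HEIGHT), np.linspace(0, 1, WIDTH))

accum = np.zeros_like(truth)
for passes in range(1, 257):
    # One "render pass": an unbiased but noisy estimate of every pixel,
    # playing the role of one path-traced sample per pixel.
    accum += truth + rng.normal(scale=0.5, size=truth.shape)
    image = accum / passes            # the progressive image shown so far
    if passes in (1, 4, 16, 64, 256):
        rmse = np.sqrt(np.mean((image - truth) ** 2))
        print(f"pass {passes:3d}: RMSE {rmse:.3f}")

# The error roughly halves for every 4x more passes (1/sqrt(N) convergence),
# which is why a progressive image is usable for decisions long before it
# fully "finishes".
```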
For this to be practical, we needed a material model usable by all of them, so that users would never have to rework their scene when switching modes. This led us to create the Material Definition Language (MDL), which we designed to be usable by any PBR renderer and which is now supported by the likes of mental ray and V-Ray.
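As an illustration of that design idea (the class and function names below are hypothetical; this is not MDL syntax or the Iray API), one physically based material description can be consumed unchanged by both an accurate mode and a fast preview mode:

```python
# Hypothetical sketch of a renderer-agnostic material model; not MDL.
from dataclasses import dataclass

@dataclass
class PBRMaterial:
    base_color: tuple      # linear RGB
    roughness: float       # 0.0 = mirror-smooth, 1.0 = fully diffuse
    metallic: float        # 0.0 = dielectric, 1.0 = metal

def render_accurate(m: PBRMaterial) -> str:
    # Stand-in for a path-traced mode using the full physical description.
    return f"path-traced: color={m.base_color}, roughness={m.roughness}"

def render_preview(m: PBRMaterial) -> str:
    # Stand-in for a fast rasterized mode approximating the same material.
    return f"raster preview: color={m.base_color}, roughness={m.roughness}"

steel = PBRMaterial(base_color=(0.7, 0.7, 0.72), roughness=0.35, metallic=1.0)
print(render_accurate(steel))
print(render_preview(steel))   # same definition; no rework to switch modes
```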
DNA: What sets Iray apart from other real-time renderers such as V-Ray RT and Redshift?
Iray is physically based and highly accurate – all the time. Redshift is intentionally not PBR, so that expert artists can break the laws of physics. V-Ray RT is somewhere in between: you can be PBR or not, you just have to keep all your options straight to be accurate.
DNA: When can we see Iray integrated with RenderMan?
Renderers don’t integrate with other renderers, so this wouldn’t happen. Now that the latest RenderMan has a PBR method, it could support MDL materials though.
DNA: Is it possible to integrate Iray with game engines such as Unreal and Unity 3D?
Yes, this is possible and MDL could connect the content from real-time to photorealism.
DNA: What is the current status of its integration with non-Autodesk digital content creation (DCC) applications like LightWave and Blender?
We have nothing to announce with those products at this time. Our latest adoptees have been Siemens NX and Daz Studio.
DNA: In terms of image quality, would you put Iray in the same category as Maxwell?
Maxwell and Iray are very similar in their approach, and it can be very difficult to tell whether an image was done by one or the other. Which one is best really comes down to your 3D tool of choice and how well the renderer supports its workflow. Maxwell only recently added GPU acceleration, so users can invest in the same GPU hardware and use it for many renderers – Maxwell, Iray, V-Ray RT, Octane, Redshift, etc.
DNA: What features do you have planned for 2017?
Quite a few. One area of research for us right now is deep learning and how it can be applied to make various aspects of rendering faster or easier. We expect some real breakthroughs, especially on GPUs since they can process deep learning problems so quickly.