Maya and mental ray Hate Me: The Linear Workflow
Today, we’re going to dive into the murky waters of the linear rendering workflow in Maya and mental ray. Hold on to your hats, because it’s going to be a bumpy ride. Much, much more below the fold.
Let’s get something straight right from the start:
mental ray sucks. (Apologies, Zap)
Computer monitors suck.
The human eye sucks.
The real world sucks.
Why? Because none of these things can seem to agree on reality. Okay, so the real world is kind of the last word on reality, but it sucks too for being so difficult to simulate properly. So I guess what I’m saying is, physics sucks too. Harrumph. If you’ve stuck with me so far, you’re probably wondering where the hell I’m going with this. Or, maybe you’re wondering why Darwin’s only mention of the giant tortoises in the Galapagos was a note about how absolutely delicious they were. No way for me to be sure. But, you are reading this, so I’ll get on with my point. Ahem.
What I’m saying is, the human eye and brain don’t actually see the world very accurately: we compress the raw data down into a visual facsimile of color and shade. Computer monitors and other displays have the same problem: they have to take a huge range of luminance information and squeeze it into a limited number of bits. If you’re reading this on a regular computer monitor, that means 8 bits per color channel. Unfortunately, 8 bits just isn’t enough data to hold all of the dynamic range that the human eye can see, so computer monitors use a trick called gamma correction to better match the way our eyes see the world.
And what is gamma correction?
Gamma correction essentially takes the color values that your computer outputs and raises them to a power called gamma (the Greek letter symbolizing a huge pain in the ass). If everything goes according to plan, the monitor outputs an image that looks reasonably close to the way it would if we were to look at it in the real world. On the vast majority of monitors, this gamma value is 2.2.
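To make that concrete, here’s the math as a few lines of plain Python. This is just a numeric sketch, not anything Maya or your monitor actually runs; the 2.2 is the typical monitor gamma mentioned above.

```python
GAMMA = 2.2

def encode(linear):
    """Apply the display gamma curve: linear light -> gamma-encoded value."""
    return linear ** (1.0 / GAMMA)

def decode(encoded):
    """Remove the gamma curve: gamma-encoded value -> linear light."""
    return encoded ** GAMMA

# A mid-gray of 0.5 in linear light gets stored as roughly 0.73 after
# encoding, which is why gamma-encoded images look brighter than linear data.
stored = encode(0.5)
back = decode(stored)  # round-trips back to 0.5
```

The important property is that encode and decode are exact inverses: you can strip the curve off, do your math in linear light, and put the curve back at the end. That round trip is the entire linear workflow in miniature.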
Alright, I’ll bite… So what’s the problem?
The problem is that 8-bit images are usually pre-corrected for this gamma curve before they ever get to Maya. Whenever you’re creating textures or using digital photos, you’re looking at an image that has been prepped for this gamma-correction process. These images are non-linear, meaning they’ve had the gamma curve encoded into them. Most of the time, this is a good thing. Your average grandmother or corporate CEO doesn’t even have to know about gamma — it’s all in the background, and the gamma-encoded image that comes to them in the end looks pretty. But you and I, well, we have to make sure we take this entire process very seriously. What we want to do is work with linear data, without the gamma encoding.
Again… for the cheap seats, what’s the problem?
Alright, so maybe an example is necessary. Until very recently, most 3D renderers and photo-editing software didn’t bother with gamma at all. They assumed that gamma = 1 from start to finish, which meant that all of their calculations were done with these pre-corrected, gamma-encoded images. Let’s dig into a simple example in Photoshop. In standard 8-bit mode, I’m going to start with a simple red color, and add a layer with a gradient of green-to-transparent. I also added a soft brushstroke of the green color across the bottom. This is the result:
Take a look at the way Photoshop handles the transparency of the gradient. It looks as if it’s dipping to a darker shade between the red and the green, creating a poo-colored halo around the transparency, especially around the brushstroke at the bottom. The intensity of the green and red aren’t changing, so why should the mix between them be darker? This is caused by a bad move on Photoshop’s part — using the non-linear values for these colors to calculate the transition. In 8-bit mode, this is the default. In 32-bit mode, Photoshop uses floating-point colors, which are not gamma-encoded. So, what will happen if I do the exact same thing in 32-bit mode?
The dark haloing is gone, and the color values transition in the same way you’d expect them to in the real world.
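You can reproduce both Photoshop results with a few lines of plain Python. This isn’t Photoshop’s actual code, just the same blend math; red and green are the gamma-encoded values as they’d be stored in an ordinary 8-bit file.

```python
GAMMA = 2.2

def decode(c):
    """Gamma-encoded color -> linear light, per channel."""
    return [v ** GAMMA for v in c]

def encode(c):
    """Linear light -> gamma-encoded color, per channel."""
    return [v ** (1.0 / GAMMA) for v in c]

def mix(a, b, t=0.5):
    """Straight 50/50 blend of two colors, channel by channel."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

red, green = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]

# Wrong (8-bit default): blend the gamma-encoded values directly.
naive = mix(red, green)

# Right (32-bit / linear): decode, blend in linear light, re-encode.
correct = encode(mix(decode(red), decode(green)))
```

The naive blend lands at 0.5 per channel, while the linear blend re-encodes to roughly 0.73 per channel. The naive midpoint is noticeably darker than either endpoint — that’s the poo-colored halo.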
So you start to see where the problem comes from. Working non-linearly, you’re essentially creating your textures incorrectly in Photoshop, and then bringing them into Maya to be rendered incorrectly. Everything you do in a non-linear workflow is wrong.
This is not to say that you can’t get pretty images non-linearly. You really can, and we’re pretty used to seeing 3D art produced this way; however, it means that you’ll have to work especially hard to compensate and fix things that shouldn’t have been broken in the first place.
Before we move into Maya, how the hell do I fix this in Photoshop?
Well, they certainly don’t make it intuitive. However, there is a well-hidden option to change the default behavior of 8-bit mode. Go to Edit > Color Settings, click More Options on the right, and you’ll find a checkbox at the bottom labeled “Blend RGB Colors Using Gamma”. Turn the checkbox on, make sure the gamma is set to 1, and you’ll be good to go.
What does all this have to do with Maya?
Like I established at the beginning, both Maya and mental ray (apologies again, Zap) suck. Neither one of them operates linearly out-of-the-box, although both are capable of it. This is where I introduce one of my new heroes, Master Zap. Zap Andersson has been a tireless defender of the linear workflow at mental images, the creator of mental ray. He’s the shader writer responsible for the mia_material and the subsurface materials, and his blog and FxPHD class are the source of pretty much all the information in this blog post. Zap has been working hard to revolutionize the way that the 3D world thinks about gamma and photo-realistic rendering.
He’s been pretty successful with his efforts in 3ds Max, which has recently included what amounts to a “linear workflow” button inside the application. Maya users are not so lucky. Mental ray is capable of working linearly, but you’ll have to twist Maya’s arm to get it to cooperate.
So why should I work linearly in Maya?
Hopefully by this point you’ve got a fairly good grasp of what exactly the problem is. If we calculate our renders using gamma-encoded data, then the end result will be incorrect. Here’s a pretty dramatic example using a standard Maya area light, a plane with a texture, and a sphere with a gray lambert. The area light has a falloff type set to “quadratic,” which is supposed to be physically accurate.
This is what we get working non-linearly:
This image sucks for a lot of reasons. The area light produces an unacceptably bright highlight on the plane, and even with a massively bright value of 300 in the intensity slot, the sphere in the scene is barely even lit. Maya’s default lights suck in general, but this is just sad. Working non-linearly, this scene would only be salvageable if we added in all kinds of bounce or fill lights, or resorted to hacky tricks to fake our way through, like using linear falloff instead of quadratic. So let’s take that same scene and use the linear workflow (don’t worry, I’ll show you how it’s done in a second).
The area light behaves the way we would expect it to, and the superbright highlight is diminished. The light still has a massive intensity value of 300, but the end result is much, much better. It still isn’t perfect, but this is a serious improvement.
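The arithmetic behind that difference is easy to check. Here’s a toy Python sketch with made-up numbers (an intensity of 300 seen from 30 units away), not mental ray’s actual shading code:

```python
GAMMA = 2.2

def quadratic_falloff(intensity, distance):
    """Physically based falloff: brightness drops with the square of distance."""
    return intensity / distance ** 2

# Hypothetical scene values: an intensity-300 light, 30 units from the surface.
linear = quadratic_falloff(300.0, 30.0)  # about 1/3 in linear light

# Skip the gamma re-encode, and the monitor's own 2.2 curve crushes it toward
# black -- this is the "barely even lit" non-linear render.
shown_raw = linear ** GAMMA

# Re-encode first (the lens shader's job), and the monitor's curve undoes the
# encoding exactly, displaying the brightness the physics actually computed.
shown_corrected = (linear ** (1.0 / GAMMA)) ** GAMMA
```

A surface receiving a third of full brightness should look like a third of full brightness; without the re-encode it displays down around 0.09, which is why non-linear scenes seem to demand absurd intensities and armies of fill lights.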
Let’s do this. What’s the workflow in Maya?
The workflow in Maya is actually pretty simple, once we figure out how to go about it. Essentially, what we want to do is linearize or decode the gamma on our colors and textures before any calculations are done using them. Then, we want to reapply the gamma curve to the image so that it will display properly on your monitor.
STEP ONE: Linearizing Textures
In order to linearize your textures, you’re going to use a Maya utility node called “gammaCorrect”. The gammaCorrect node is found on the Maya side of the render nodes list in the Hypershade.
The icon for gammaCorrect is three curves in red, green, and blue. I always thought it looked more like a professional cyclist’s helmet as seen from above, but whatever.
The gammaCorrect node will be connected between the texture and the shader. The outColor of the texture is plugged into the “value” attribute of the gamma node, and the outValue of the gamma node connects to the “color” or “diffuse” attribute of the shader. So, my shading network for the example above looks like this:
Inside the gammaCorrect node, set the gamma for R, G, and B each to 0.4545 (that’s 1/2.2, the inverse of the monitor gamma).
If you’re using a solid color as your “color” or “diffuse” value, then you can use the “value” color box inside the gamma node as the input.
STEP TWO: Re-applying the Gamma Curve
In order to get the rendered image to look pretty again once you’ve done all of your calculations, you’re going to need to reapply the gamma curve that you previously removed from all of your textures. To do this, you can use a lens shader in mental ray. For this example, I will use a “mia_exposure_simple” node, found on the mental ray side of the node list in the Hypershade window under the “lens” category. Its gamma value defaults to 2.2, which will apply the 2.2 gamma curve to your image. If you have a good grasp of photographic principles, you can also use “mia_exposure_photographic”.
Plug this exposure node into the “lens shader” in the mental ray rollout of your render camera.
And that’s pretty much it. When you render, your final image will have been produced using a linear workflow, and should make realistic lighting and rendering much, much easier.
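Boiled down to a single pixel, the whole workflow is just three lines of math. This is a toy Python sketch with hypothetical values, not actual Maya or mental ray code:

```python
GAMMA = 2.2

def linear_workflow(texture_px, light):
    """One pixel through the pipeline: decode -> shade -> re-encode."""
    linear_tex = texture_px ** GAMMA      # step one: the gammaCorrect node (0.4545)
    lit = linear_tex * light              # the renderer's math, now on linear data
    return lit ** (1.0 / GAMMA)          # step two: the lens shader (2.2)

# A mid-gray texture sample under a half-strength light:
out = linear_workflow(0.5, 0.5)
```

Every texture gets decoded before the renderer touches it, all the lighting math happens on linear values, and the gamma curve goes back on exactly once, at the very end.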
I hope you’ve enjoyed this little journey into the strange world of gamma and linearity! Check out Zap’s blog, and do a little digging around on the internets if you want to learn more.