GPU acceleration gives the same level of render quality as Maximum Render Quality, even if you don’t check the Maximum Render Quality box.
But instead of a performance hit, you gain a 6-7x performance boost.
Let me start off by saying that Jan Ozer is my favourite video technology writer and speaker. I first met him in Jacksonville, FL at the now-defunct 4EVERGroup's Video '07 convention, where we were both speakers. Steve Nathans, EventDV Magazine's editor-in-chief, was also there and recruited me to join the EventDV team. Since then I have been writing alongside Jan at EventDV, and there is definitely some overlap in our interests and areas of expertise. For example, we both specialize in Adobe Premiere Pro and have experience with webcasting and dance recital video production.
To this day I am still a big fan of his and try to read everything he writes, which is challenging as he writes so much and for so many different publications.
Yesterday I was reading some articles at OnlineVideo.net and came across one of his new video tutorials. The title, How to Dramatically Improve Your Video Quality in Adobe Premiere, caught my attention and I immediately watched the video, which I have embedded below.
In his tutorial, Ozer discusses the difference in render quality when you check the Use Maximum Render Quality box in the Export Settings. He also comments on the use of NVIDIA graphics cards and their impact on render time. I reviewed Adobe Production Premium CS5.5, the suite that includes Premiere Pro and Adobe Media Encoder, for EventDV, and received some reviewer training from Adobe while the suite was still in beta. Part of that training included a discussion of the impact of GPU acceleration, provided by approved NVIDIA CUDA cards, on both render time and render quality.
At the time I was performing my own tests and writing my review, I didn't spend much time analyzing the difference Maximum Render Quality made, because it was explained to me that when you use a CUDA card for GPU acceleration, the graphics card, not the CPU, processes the video, and the GPU does a better and faster job than non-GPU rendering. To allow non-CUDA card owners to still produce high-quality renders, Adobe offers the Maximum Render Quality option, which uses a higher-bit-depth colour space, but at the cost of a dramatic increase in render time. I was also told that if you used the Maximum Render Quality option with GPU acceleration, it increased your render time but did nothing for the render quality.
I run a very high-volume video production company, so render time is very important to me. I had already been using an NVIDIA CUDA card since CS5, and my Premiere Pro CS5 benchmarks showed a dramatic reduction in rendering time. Because of all this, I never looked into quality differences, although in my beta tests I did notice that with GPU acceleration enabled, the Maximum Render Quality box did increase rendering time.
So what did Jan Ozer conclude?
Ozer concluded that the Maximum Render Quality box improved render quality – not so much in the video itself, but in the titles and drop shadows. Below are screenshots of one of the titles where he noticed a difference.
Here is what Ozer concluded regarding the use of NVIDIA CUDA cards (and although he didn't specify it, in order to enable Adobe Premiere Pro GPU acceleration, you need to use a certified NVIDIA CUDA card from this list):
So this left me wondering two things:
1) How many scenarios did Jan test? Max Render Quality on or off is only one variable; within each setting, there are two further options – GPU acceleration and software only (CPU only).
2) What was the render time and render quality difference between the four different render options?
Software Only, No Max Render Quality
Software Only, Max Render Quality
GPU, No Max Render Quality
GPU, Max Render Quality
So rather than ask Ozer, I decided to test for myself and share the results.
I designed a simple 60-second timeline using some 1920×1080 30P AVCHD footage, shot on my Sony NEX-FS100 video camera at 24Mbps (Sony FX setting). There were six clips, each exactly ten seconds long. I applied a title with a transparent gradient background, colour-corrected half of the clips using the Fast Color Corrector, and inserted a second layer of SD video, a motion background, in the upper left corner of the first ten seconds. So, pretty much an average timeline in terms of effects and layers.
I decided that because I was using a 720P HD monitor, I would export my timeline four times, once with each of the four above scenarios, to 720P, so that I would be able to view the rendered video with 1:1 pixel mapping and no scaling. I didn’t add the render to the render queue, which opens Adobe Media Encoder, but chose to export it directly from Premiere Pro CS5.5 as my previous tests have shown me this results in a slight improvement in render time.
1920×1080 30P* sequence with AVCHD footage
1280×720 30P * render, H.264 Variable Bitrate 2 pass, 6 Mbps target
No Max, Software Only: 2m01s
Max Render Quality, Software Only: 5m49s
No Max, GPU Enabled: 1m05s
Max Render Quality, GPU Enabled: 1m05s
*(dropframe 29.97 frames per second)
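The ratios discussed below can be checked with a few lines of arithmetic. Here is a quick Python sketch using the times from the table above:

```python
# Render times measured for the 60-second test timeline, in seconds.
times = {
    "software, no MRQ": 2 * 60 + 1,   # 2m01s
    "software, MRQ":    5 * 60 + 49,  # 5m49s
    "GPU, no MRQ":      1 * 60 + 5,   # 1m05s
    "GPU, MRQ":         1 * 60 + 5,   # 1m05s
}

gpu = times["GPU, no MRQ"]
for name, t in times.items():
    # Compare each option to the GPU time and to the 60s timeline duration.
    print(f"{name}: {t}s = {t / gpu:.1f}x the GPU time, {t / 60:.1f}x real time")
```

Software-only with Maximum Render Quality comes out at roughly 5.4x the GPU time, and software-only without it at roughly 1.9x, which is where the range quoted below comes from.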
Clearly there was a dramatic difference between the four options: GPU acceleration provided an advantage over software-only rendering of between 1.9x and 5.3x, depending on whether Maximum Render Quality was enabled. Also interesting to note was that there was no difference in render time between the two GPU-enabled options. You may recall that I mentioned earlier that I had previously experienced a render time difference when comparing these two options.
Here are some 1:1 scale crops of the non-GPU renders and the master for comparison purposes. Well, they are 1:1 if you click on them to open a full size version.
Can you see the difference? I'll give you a hint: look at the thumb on the right-hand side of the middle image. You can see a very subtle difference in the way Adobe Premiere Pro CS5.5 handles colour, transparencies, and gradients between GPU and software-only rendering. GPU does a better job and is virtually identical to the master. The software-only render with Maximum Render Quality enabled was very similar to the master and both GPU renders.
So if you are like me, you are probably thinking: what is the big deal, and is it worth the extra render time? In software-only mode it took 2.9 times longer to render my video with Maximum Render Quality enabled than with it disabled, and nearly 6x real time (5m49s for a 60-second timeline) is just too long for me. By the way, I should share the specs of my editing system so you know that I'm not editing on a five-year-old dual-core computer.
Intel Core i7 2600K CPU
Asus P8Z68 Motherboard
16 GB RAM
64 bit Windows 7 Professional
RAID 0 video drives
SSD operating system drive
NVIDIA GTX470 graphics card
Maybe I wasn't seeing a more dramatic difference because I gave my video too high a bitrate (6Mbps) and used 2-pass variable bitrate encoding. So I rewatched Ozer's tutorial and looked closer at his render settings:
I was a bit confused about why he was taking a widescreen source and outputting a 4:3 render (Ozer himself taught me that 854×480 is the widescreen square-pixel equivalent of NTSC 720×480 with a widescreen pixel aspect ratio, and that 640×480 is the square-pixel equivalent of NTSC 4:3 SD video), and why his source and output were both 30fps.
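Those square-pixel equivalents fall out of simple aspect-ratio arithmetic. A sketch (the round-up-to-even step is my assumption about why 853.3 becomes 854, since H.264 wants even frame dimensions):

```python
import math

height = 480
widescreen_exact = height * 16 / 9            # 853.33... square pixels wide
# Round up to the next even width for H.264-friendly dimensions -> 854
widescreen = math.ceil(widescreen_exact / 2) * 2
fullscreen = round(height * 4 / 3)            # 4:3 gives exactly 640
print(widescreen, fullscreen)
```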
There is a lot of interpolation that happens when you scale, and going from non-square to square to a different non-square pixel aspect ratio (I'm not entirely sure of the path that Ozer's footage took) can't be good for transparencies, colour gradients, and drop shadows. Also, I've been ranting in my aforementioned previous Premiere Pro reviews (CS5 & CS5.5) that Adobe has problems with their export presets in that they have seemingly random frame rates. Unless there is a valid reason to do so, you always want to shoot, edit, and deliver video at the same frame rate. Changing frame rate at any stage in the process requires additional render time (especially when you go from drop-frame 29.97fps to 30fps, as extra frames need to be synthesized). So I question Ozer's source frame rate of 30fps, although perhaps this is valid, as he did mention DSLR footage at one point and I know that some of the Canon DSLRs shoot at a true 30fps and not 29.97fps. Regardless, if his footage was in fact 29.97fps and Adobe had to convert it to 30fps, this could explain why Ozer experienced a difference, as he concluded, of up to 10x while mine was slightly over 5x.
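To put a number on that 29.97-to-30fps conversion: NTSC "29.97" is exactly 30000/1001 frames per second, so a 30fps target needs frames that don't exist in the source. A quick sketch of how many per hour:

```python
from fractions import Fraction

ntsc = Fraction(30000, 1001)        # exact "29.97" fps
seconds = 3600                      # one hour of footage
ntsc_frames = ntsc * seconds        # ~107892.1 source frames
extra = 30 * seconds - ntsc_frames  # frames the encoder must synthesize
print(float(extra))                 # roughly 108 extra frames per hour
```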
I'm also wondering why, if Ozer was rendering with 1-pass variable bitrate encoding, he concluded that GPU rendering should not exceed 1-2x real time. My first result showed 0.9x real time with 2-pass variable bitrate encoding, which is only slightly slower than real time.
I decided to test my same original footage again, but this time I reduced the bitrate and number of passes.
In the interest of time, I only exported the first 10 seconds, which was not as precise as the initial 60-second render when you factor in my manual timekeeping. But at this point I wasn't as concerned with time as I was with render quality anyway.
854×480 30P* render, H.264 Variable Bitrate 1 pass, 3 Mbps target

No Max, Software Only: 07.3s
Max Render Quality, Software Only: 32.3s
No Max, GPU Enabled: 04.5s
Max Render Quality, GPU Enabled: 04.5s
The GPU times were consistent with the previous render and were now just over 2x faster than real time per pass. The software-only times had different ratios, but because of the margin of error in my manual timing I'm not going to delve into them too much, other than to say that the difference between the fastest and slowest times was around 7x.
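The same kind of arithmetic backs up the numbers in this paragraph, using the second table's times:

```python
clip = 10.0  # seconds of timeline exported in the second test
times = {"software, no MRQ": 7.3, "software, MRQ": 32.3,
         "GPU, no MRQ": 4.5, "GPU, MRQ": 4.5}

slowest, fastest = max(times.values()), min(times.values())
print(f"fastest-to-slowest spread: {slowest / fastest:.1f}x")   # ~7x
print(f"GPU speed: {clip / times['GPU, MRQ']:.1f}x real time")  # just over 2x
```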
Here are screenshots of the new lower bitrate renders – you will have to click on them to get a full size version in order to see the difference:
The full-size versions (click on the individual photos) show a more dramatic quality difference. I did test the other two combinations that aren't shown here, but as with the previous example, I wasn't able to discern a difference between the GPU-accelerated and software-only Maximum Render Quality versions. Well, I should clarify that a bit more – there were differences I could see in the compression on the black of the speaker's suit, but I can't say one was better or worse than the other. What was obvious was that the software-only (non-GPU), non-Maximum Render Quality version was the least like the master. There was a loss of detail in the speaker's face, a general softness, and the transparency in the lower-third colour gradient below the title was noticeably different from the master, especially on the yellow/green side.
Here are my conclusions and tips, as they pertain to obtaining the Maximum Render Quality with Premiere Pro CS5.5:
1) If you don’t have an NVIDIA CUDA card that is on the certified list, you can’t enable GPU acceleration.
2) GPU acceleration gives the same level of render quality as Maximum Render Quality, even if you don’t check the Maximum Render Quality box. But instead of a performance hit, you gain a 6-7x performance boost.
3) The cost of a certified card is relatively low for the time and performance benefit you gain.
4) You don’t need the most expensive card. In fact, I found that the less expensive video gaming line of GeForce cards outperformed some of the more expensive line of Quadro cards.
5) If you don’t have an approved card you can still get great render quality but you need to check the maximum render quality box and wait a lot longer.
6) Always shoot, edit, and render footage with the same frame rate. And pay attention to pixel aspect ratios, frame size, and number of passes.
7) With GPU acceleration you can achieve render speeds twice as fast as real time per encoding pass.