Original Link: https://www.anandtech.com/show/1174



I never knew working an average of 18 hours a day and sleeping every other night could be so incredibly enjoyable. These past two weeks have been so full of benchmarking and analysis that I hardly have time to breathe. Of course, when people come up to me and tell me "man, I wish I could play games for a living too," I can't help but laugh out loud. I tell them: it's not about games, it's about trying to understand the hardware. Of course, that is my kind of fun. The only problem is that I don't get to see what the picture looks like until I benchmark games for 50 hours.

When we sat down to start working on this series, I was very excited. I know that it's taken a long time to get the whole picture out in the open, but we wanted to be very thorough. Some of the motivation behind Part 1 was to give everyone an idea of how these two cards perform vs. mid/high end cards that are already out. We wanted to give a basis for comparison so that the numbers for the 9800XT and NV38 had some way to relate back to what we already know. So now we can get on with trying to push these cards to their limits and beyond. The only other card we will be testing in Part 2 is the GeForce FX 5900 Ultra, with both the 52.14 and the publicly available 45.23 WHQL drivers. We will also be doing a separate article on ATI's Catalyst 3.8 drivers when they are released.

This time around we tested at 1280x1024 (or 1280x960 in some cases) and 1600x1200. At each of these resolutions, we tested with AA and AF off and on where possible. Some games brought both cards to their knees, while others provided little more than a bump in the road. There is an incredible amount of information in this article, so you may want to set aside some time to digest it all. We've done one unconventional test that will at least be a very good point of discussion, and there are plenty of surprises within.

The series is far from over and the next thing on the plate is a value/mid-range roundup to show you some cards that are actually feasible to purchase.

We hope you will enjoy reading this as much as we did putting it together.



An even more updated Test Suite

In Part 1 we introduced our new test suite; this edition brings some refinements and four additions, but (believe it or not) it is still not complete. It isn't our goal to simply throw numbers into space and see what happens, so we are really focusing on honing our benchmarks to make them as robust and accurate as possible. As such, we have had to forgo a few additions that we really wanted to make, and we've had to drop one of the titles we had included in Part 1. This is how the new suite looks as things stand for this article:

Aquamark3

C&C Generals: Zero Hour

EVE: The Second Genesis

F1 Challenge '99-'02

Final Fantasy XI

GunMetal

Halo

Homeworld 2

Jedi Knight: Jedi Academy

Neverwinter Nights: Shadows of Undrentide

SimCity 4

Splinter Cell

Tomb Raider: Angel of Darkness

Tron 2.0

Unreal Tournament 2003

Warcraft III: Frozen Throne

Wolfenstein: Enemy Territory

X2: The Threat

Our previous Flight Simulator benchmark just didn't push the game far enough, and we are hard at work trying to find a benchmark that better reflects gameplay and is completely repeatable. We have really appreciated your feedback, and we ask that you continue to suggest games for possible inclusion in the suite. Just so you'll know what we already have slated to make it in "when they're done" (to borrow from 3DRealms), these games will be added either as we finalize a benchmarking procedure for them or as they are released:

Doom3

MS Flight Simulator 2004

Battlefield 1942: Secret Weapons of WWII

Half-Life 2

FIFA Soccer 2004

We wanted to include Battlefield in this review; unfortunately, we were still unable to come up with a repeatable test. We have looked at other tests on the net, but we would rather use something a bit more scientific if possible, and that is going to take some more time. If anyone from the Battlefield community has any suggestions on how to reliably benchmark the game, we're all ears.

 

As we received some criticism that the CPU we used in Part 1 wasn't fast enough, we upgraded our testbed for Part 2; the test system we used is as follows:

AMD Athlon 64 FX-51

1GB DDR400 (2x512MB)

nForce3 motherboard

With all of that out of the way, it's time to get to the benchmarks…



Aquamark3 IQ

Aquamark3 is a synthetic benchmark based on a game called Aquanox, and it uses the DX9 API. In Part 1 we included it for completeness, but our main focus this time around is image quality. Plenty of IQ problems found around the net have helped earn NVIDIA's 51.xx drivers the title "cheatenators". With the 52.14 drivers we have been using, we didn't notice any image quality issues on a visual inspection, which was a very good sign. Let's take a closer look at frame 4000 to point out problems:

 

NVIDIA 45.23

Note the ground around the explosion

 

NVIDIA 52.14

Note the difference between the new and old drivers

 

ATI Catalyst 3.7

Identical to NVIDIA

As we can see, the lighting problem that was once there no longer exists, and the image is consistent with what it should be. On top of this, the 52.14 drivers give us more than a 50% performance improvement over the 45.23 drivers. We have to remember that this is a benchmark, and any company that wants to sell anything is going to try to fix benchmark problems first. Of course, that's just another reason I really don't like synthetic benchmarks: I'd rather have NVIDIA and ATI hard at work on the games I'm playing than on something I'll never care about using.

Here we can also see a comparison of the images with the highest quality AA setting in the game and 8xAF:

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Aquamark3 Performance

Even with the image quality issues cleared up, we still don't think this is a good benchmark for judging performance in future games. That being said, beating these scores is going to be a little difficult unless your system is very nicely overclocked.

 

 

As we can see, the 52.14 drivers show a vast improvement, but they still fall just short of ATI. Honestly, if anyone actually played this game, there wouldn't be a significant difference between any of these three cards, and all of them would be very good choices for playing this particular DX9 game.



C&C Generals: Zero Hour IQ no AA/AF

We used version 1.0 of Zero Hour (as that's the only one out right now), which is a DX8 real-time strategy game. This game looks really nice on both cards, and we didn't notice any visual quality issues. Have a look for yourself:

 

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



C&C Generals: Zero Hour Performance no AA/AF

In looking at these graphs, you'll notice that not much changes when we move to higher resolutions from 1024x768:


The interesting thing to note is that all of the cards have the exact same minimum frame rate (not depicted here). It is unfortunate that your video card choice isn't going to save you from choppy frame rates during a big battle scene, as this is more of a CPU limitation. Of course, since we are running on an Athlon 64 FX-51, it seems reasonable to assume that the way the game was written has more to do with these numbers than processor speed.



C&C Generals: Zero Hour Performance 4xAA/8xAF





EVE: The Second Genesis Performance no AA/AF

EVE is one of our new additions and was chosen as a representative of the MMORPG group of games. Of course, this game is a bit different from most MMORPGs, but it has the advantage that one of its most graphically intensive aspects is just sitting in a space station all alone with a bunch of transparent windows open to manage your character.

Of course, the first thing Anand said about this game was "Oh look, it's the Linux desktop!" And, to be honest, it really does look more like my visions of Longhorn than any other game I've ever played.

This game was very slow on my computer at home, so these new cards really do have a high impact on performance.

We can see that NVIDIA is slightly ahead, but this isn't a significant difference.



EVE: The Second Genesis Performance 4xAA/8xAF

We see some really interesting numbers when we turn on AA and AF.

 

 

The ATI card pulls way ahead in this test and is almost 50% faster than the NVIDIA card. If this is the game you play, ATI is definitely the way to go.



F1 Challenge '99-'02 IQ no AA/AF

The first thing that needs to be mentioned about this game is that NVIDIA has some known issues that we saw pop up. When driving, the car and driver will kind of shimmy back and forth. It's slightly irksome, but very playable. Of course, in the replay feature, this problem is amplified and is almost more annoying than sitting through the 10 minute X2 demo 16 times.

 

The individual image comparisons really don't show this effect at all. NVIDIA is aware of the problem and they're working on a fix.

 

NVIDIA 45.23

 

NVIDIA 52.14

 

ATI Catalyst 3.7



F1 Challenge '99-'02 Performance no AA/AF

The jittering issue we mentioned happened across the board on NVIDIA hardware, with both new and old drivers.

 


 

ATI is the clear leader, but honestly, the bug with NVIDIA could very well be hampering performance as well. We will just have to wait and see what happens when this issue is resolved.



F1 Challenge '99-'02 IQ 4xAA/8xAF

NVIDIA 45.23

 

NVIDIA 52.14

 

ATI Catalyst 3.7



F1 Challenge '99-'02 Performance 4xAA/8xAF




Final Fantasy XI Performance

We couldn't take any screenshots of Final Fantasy XI because the demo would quit if we hit any key at all. Final Fantasy doesn't really show image quality differences, though, in my opinion, overall image quality could be greatly improved. The graphics are often motion blurred, AA and AF can't be used, and it really does look like a PlayStation port to the PC. Of course, there are some very cool effects, and the overall look is definitely on par with Square's previous work.

 

Again, we take the frame count provided by the benchmark and divide by its runtime (loosely 275 seconds). The actual number we divide by doesn't make a difference for comparison's sake, as it's simply a scaling factor. Since this is the same number we used for the previous article, we can get a little taste of Final Fantasy CPU scaling.
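For clarity, here is a minimal sketch of that conversion in C++; the frame count below is a made-up placeholder, not a measured result:

    // Convert the benchmark's raw frame count into an average frame rate.
    // The divisor is approximate, so the scores are only meaningful relative
    // to other scores computed with the same divisor.
    #include <iostream>

    int main() {
        const double frames  = 5000.0; // hypothetical frame count reported by the demo
        const double runtime = 275.0;  // approximate runtime of the demo in seconds
        std::cout << "Average FPS: " << frames / runtime << std::endl; // ~18.2
        return 0;
    }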

 

 

As we can see, ATI is still ahead in this benchmark.



GunMetal IQ

GunMetal is a game somewhat in the tradition of Robotech (giant robots that can turn into planes). This is a DX9 game that uses VS2.0, but only PS1.1 (PS2.0 is one of the biggest advantages of DX9 and isn't employed here). Yeti Studios was kind enough to provide the world with a benchmark featuring two different scenarios, both of which we used for this article.

 

One of the more interesting features of this benchmark is that it's impossible to turn off anisotropic filtering, and antialiasing must be set at 2x or higher. The argument is that these effects are becoming necessary for higher image quality in games, and that playing without them will become a thing of the past. In fact, there are quite a few next generation games that are not compatible with driver AA settings (either because they implement their own AA, or they don't do AA at all). Apparently, NVIDIA has known issues with this game, but after going through it plenty of times we really didn't notice anything going wrong. Take a look:

 

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



GunMetal Performance

The performance of this game was very slow compared to what we are used to seeing, but, again, 2xAA and AF are always on, so take that into consideration when looking at the graphs.





In both benchmarks, all the cards perform very similarly. This test is definitely a good one for pushing the GPUs, but we really aren't able to get any deep level of understanding about the differences between the two cards from this. In our opinion, if this benchmark had been written with PS2.0 shaders, we would have gotten much more useful data.



Halo IQ

The long awaited PC port of Halo is DX9 based. We ran our benchmark using the game's built-in timedemo feature from the command line (-timedemo -use20 -vidmode x,y,r). Unfortunately, Halo does not currently support antialiasing, so we are unable to bring you those numbers.
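Just for illustration, a hypothetical run at 1600x1200 with a 60Hz refresh rate would be launched something like this (the executable name and the x,y,r values here are made-up examples; the flags are the ones listed above):

    halo.exe -timedemo -use20 -vidmode 1600,1200,60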

Of course, we have gotten plenty of emails pointing out the hack that allows AA to work with Halo. This involves adding a flag to tell Halo to DisableRenderTargets for your particular video card. The problem with this is that rendered textures are used all throughout the game, and many of the awesome PS2.0 effects are lost. Water effects disappear, walls go from dirty and grimy looking to plastic and shiny, and the image quality we get from the hack takes away too much to be worth it in our opinion.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7

It is also nice to see that there isn't any image quality difference between NVIDIA and ATI cards with this game.



Halo Performance

While performance on this game is a little lower than we'd like to see for an FPS at high resolutions, the visual effects and eye candy are very cool. The sun shining through the trees provides a nice visual effect.



From the data, ATI and NVIDIA are in a dead heat on this game with negligible difference in performance. It is important to note that anyone out there running the current 45.23 Detonators will get something along the lines of a 65% performance increase with no loss in image quality for the FX series of cards with the new 52 series of drivers due out late this month.

Yes, they actually did do that. NVIDIA got a 65% performance increase out of their new drivers. Drivers really are making that big a difference, as the way the code gets to the hardware plays a larger role as the complexity of the hardware increases. As ATI and NVIDIA diverge from each other while still needing to maintain support for two common APIs, we will be seeing more and more differences in how each card performs based on that common code. But I'll touch on that a little more later…



Homeworld 2 IQ no AA/AF

The image quality in Homeworld 2 is pretty good across both cards. The game has some very pretty particle effects and does cool things like geometric LOD (level of detail) to reduce overhead when large battles are going on farther from the viewer. Of course, for our test, we turned this feature off to add some stress to the cards.
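For those unfamiliar with the technique, here is a minimal sketch of the general idea behind distance-based geometric LOD; the thresholds and level count are invented for illustration and have nothing to do with Homeworld 2's actual implementation:

    #include <cstdio>

    // Pick a mesh detail level based on distance from the camera.
    // A real engine tunes these thresholds per model and may blend
    // between levels to hide the transitions.
    int SelectLOD(float distance) {
        if (distance < 100.0f) return 0; // full-detail mesh
        if (distance < 400.0f) return 1; // reduced polygon count
        return 2;                        // coarse stand-in for distant ships
    }

    int main() {
        printf("LOD at 50 units: %d\n", SelectLOD(50.0f));   // 0
        printf("LOD at 800 units: %d\n", SelectLOD(800.0f)); // 2
        return 0;
    }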

 

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7

Again, we have to note that NVIDIA has known issues with this game, but we will clarify this in a few pages.



Homeworld 2 Performance no AA/AF

Again, we ran two different benchmarks, one on a cut scene, and one on the battle following the cut scene.



ATI leads this one all the way at 1600x1200, but NVIDIA takes the lower resolution win.



NVIDIA does much better in the second test, which has lots of cool particle effects and explosions. ATI still holds on to its lead at the highest resolution, though.



Homeworld 2 IQ 4xAA/8xAF

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7

Upon closer inspection, we can see that ATI actually does AA while NVIDIA does not. This is a known issue, and we hope to see it corrected as soon as possible.



Jedi Knight: Jedi Academy IQ no AA/AF

This is an OpenGL game based on the Quake III engine. Of course, Raven managed to add plenty of bells and whistles, making this game more visually appealing than Quake III as well as more GPU limited. The image quality on this game was very nice and showed no noticeable differences.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Jedi Knight: Jedi Academy Performance no AA/AF

These benchmarks were done during the same cut scene with the Wookiee and a gunfight going on.


NVIDIA is the clear leader in this benchmark. Each card had very similar minimum framerate numbers, so there isn't really one card that has a distinct advantage, and either ATI or NVIDIA will work just fine for running this game at the highest resolution and the highest quality.



Jedi Knight: Jedi Academy IQ 4xAA/8xAF

Anisotropic filtering is selectable in the game, and we recommend that you use their slider if you want to use it. For our tests, however, we set AF to 8x in the drivers and left it off in the game. There were no noticeable performance or image quality differences between the two ways of setting this option, but you get a finer granularity of control from the in game setting.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Jedi Knight: Jedi Academy Performance 4xAA/8xAF



In looking at these numbers, NVIDIA still leads at both resolutions. This isn't really surprising as NVIDIA seems to consistently lead ATI in OpenGL performance. Of course, the industry (with the noble exception of John Carmack's Id team) is moving towards DirectX as the API of choice. Whether that is a fortunate or unfortunate thing remains to be seen.



Neverwinter Nights: Shadows of Undrentide IQ no AA/AF

This time we have to point out that ATI has known image quality issues. There have been many complaints, ranging from problems with AA to missing/flickering shadows and a lack of shiny water.

When this game was released, it focused heavily on using NVIDIA hardware and even has an AA slider in the game featuring Quincunx. Obviously this won't work with ATI, and even though we got the 4x AA setting in the game to work a couple of times (if you toggle it on and off while the game is running), we got much better results when setting AA in the driver.

We didn't notice the problems with water that have been discussed online (though we didn't find a really good body of water at which to look), but we did see some shadow flickering occurring. There weren't any missing shadows as we can see from the screenshots. In reading some forums across the web, we have seen indications that there weren't any issues with ATI cards when Neverwinter was released, and that future driver versions broke functionality for the game. We aren't able to confirm this ourselves, but problems such as these have existed for ATI in the past, and we are very hopeful that this serpent won't again rear its ugly head.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Neverwinter Nights: Shadows of Undrentide Performance no AA/AF

The benchmark didn't change from the previous article, and simply consisted of the first in game cut scene in the expansion pack.



We can see that NVIDIA's old drivers just inched out ATI's performance, while the 52.14s help both NVIDIA cards extend their lead to a significant one. The 52.14 series had the added bonus of smoothing out some random bumps in the road, and image quality is top notch across both drivers.



Neverwinter Nights: Shadows of Undrentide IQ 4xAA/8xAF

Once we actually got AA working on the ATI card, everything looked good. We don't have any image quality complaints beyond the known issues with the game here.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Neverwinter Nights: Shadows of Undrentide Performance 4xAA/8xAF

NVIDIA maintains its lead across both resolutions with AA and AF enabled, and we continue to see that driver improvements have helped nudge performance upward for NVIDIA's cards.




SimCity 4 IQ no AA/AF

This is a fairly impressive game, and continues the SimCity tradition very well. There weren't any noticeable image quality differences between these cards for this test. Again we used the same scrolling test we used previously.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



SimCity 4 Performance no AA/AF


The lead belongs to ATI, which holds on to the fps advantage handily. It is very worth mentioning that the minimum fps are similar across all the cards, but the high for ATI is somewhere near 200 every time while NVIDIA tops out at (a surprisingly consistent) 77.



SimCity 4 IQ 4xAA/8xAF

There are some differences between ATI and NVIDIA on this one. One way Maxis handled all the data that SimCity has to deal with was to not render everything completely as it entered the screen while scrolling (if you hadn't seen it recently), but to approximate the detail instead. This leads to a bit of blockiness while scrolling. It isn't a big deal, because as soon as the scrolling stops or changes direction, the entire screen is drawn as it should be. Of course, ATI antialiased this blockiness like crazy while the NVIDIA card left it alone for the most part. I'm not really sure which one is right or better, so I'm going to leave that decision up to the user. For the purpose of comparing image quality during normal gameplay, we took screenshots while stationary.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



SimCity 4 Performance 4xAA/8xAF


Again, ATI runs away with this one. The minimum fps numbers for ATI are also higher than for NVIDIA, which is a significant plus for ATI in this benchmark.



Splinter Cell Performance

Image quality on Splinter Cell is very nearly identical between the competitors. There isn't a difference to speak of in this game, as all the benchmarks looked great.

The GeForce FX 5950 leads in performance here at 1280x1024, but the lead changes to ATI at 1600x1200. Of course, at each of these resolutions, the performance difference is completely negligible and we can't really say that one card is better than the other under this game.



Tomb Raider: Angel of Darkness

So, as I'm writing this, the phrase "be careful what you wish for" comes to mind. I had my reasons for not wanting to benchmark this game, and in order for me to feel comfortable with handing out the numbers I need to touch on some of the more important issues. The inclusion of TRAOD in this benchmark suite is based on the demand of the community (as everything here always will be). But it's also our duty to try to make sure the information you get here is complete (which is a daunting task for this particular game).

Our initial thinking was that TRAOD simply isn't a very good game, nor would it be representative of future DX9 games. The graphics features are nowhere near as impressive as something along the lines of Half-Life 2 and its high dynamic range effects, and it looks more like a DX7 game running on DX9 shaders. It is our opinion that this game won't be heavily played and is more of a synthetic benchmark people want to see in order to try to predict future performance.

Unfortunately, future performance can't be predicted until we have games from the future. No one seems to want to lend me a time machine, so I can't get those numbers yet. Looking back though, I can offer this advice: don't spend $500 on a video card until the game you want to play on it comes out. Trying to buy something now in order to be ready for games of the future only means that you won't have that money to spend on the newest best card that's out at that point. I also feel comfortable saying that TRAOD performance is a predictor of nothing but TRAOD performance.

In taking this stance, we have decided to do things a little differently than most other sites when it comes to TRAOD. We have turned this game into a sort of stress test that pushes the cards as far as they can go in order to test the real world impact of DX9 pixel shaders alone. We did four tests at each resolution in order to see the performance differences with and without PS2.0 and with and without AA. For each card, we used the application to set all the features and left the drivers alone. Part of the reasoning behind this was that AA in Tomb Raider only works if set by the application. Anisotropic filtering is selectable in the game, and was left off for all tests. The reason we check AA and not AF is that AF happens during texturing, while AA is implemented via shaders in TRAOD, so it stresses the card in more of the way we want to test. And since we are comparing the performance of each card to itself in order to see a performance delta, the actual settings shouldn't be a problem. Beyond3D has some extensive documentation of the TRAOD settings and all the options. If you'd like to learn more, I would point you to them.

For our tests, the only really important information is that we use the NVIDIA Cg compiler rather than the DX9 HLSL default compiler (there was no performance difference between the two on NVIDIA cards for the most part, only image quality improvements).



Let's talk Compilers...

Creating the perfect compiler is one of the more difficult problems in computing. Compiler optimization and scheduling is an NP-complete problem (think of the search space in a game of chess), so we can't simply "solve" it. Compounding the issue is that the best compiled code comes from a compiler that is written specifically for a certain processor and knows it inside and out. If we use a standard compiler to produce generic x86 code, our program will run much slower than if we tell the compiler we have a P4 with SSE2 and all the goodies that go along with it. I know this all seems pretty obvious, but allow me to illustrate a little.

Since I've always been interested in 3D graphics, back in 1998 I decided to write a 3D engine with a friend of mine for a project in our C++ class. It only did software rendering, but we implemented a software z-buffer and did back-face culling with flat shading. Back then, my dad had a top-of-the-line PII 300, and I had acquired an AMD K6 200. Using a regular Borland C++ compiler with no real optimizations turned on, our little software 3D engine ran faster on my K6 than it did on my dad's PII. Honestly, I have no idea why that happened. But the point is that the standard output of the compiler ran faster on the slower platform while both systems produced the same output. Now, if I had had a compiler from Intel optimized for the PII that knew what it was doing (or if I had hand-coded the program in assembly for the PII), my code could have run insanely faster on my dad's box.
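As an aside, for anyone curious what that culling step looks like, here is a minimal modern C++ sketch of a back-face test (a reconstruction of the general technique, not our original code): a triangle whose face normal points away from the viewer can be skipped before rasterization.

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    Vec3 Cross(const Vec3& a, const Vec3& b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    float Dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // True if triangle (v0, v1, v2), wound counter-clockwise, faces away
    // from the eye point and can be culled before rasterization.
    bool IsBackFacing(const Vec3& v0, const Vec3& v1, const Vec3& v2, const Vec3& eye) {
        Vec3 e1 = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
        Vec3 e2 = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
        Vec3 normal = Cross(e1, e2);
        Vec3 view = { v0.x - eye.x, v0.y - eye.y, v0.z - eye.z };
        return Dot(normal, view) >= 0.0f;
    }

    int main() {
        Vec3 a{0,0,0}, b{1,0,0}, c{0,1,0}, eye{0,0,5};
        printf("back facing: %d\n", IsBackFacing(a, b, c, eye)); // 0: faces the viewer, keep it
        return 0;
    }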

So, there are some really important points here. Intel and AMD processors were built around the same ISA (Instruction Set Architecture) and had a great deal in common back in 1998. Yet, performance varied in favor of the underpowered machine for my test. When you look at ATI and NVIDIA, their GPUs are completely and totally different. Sure, they both have to be able to run OpenGL and DirectX9, but this just means they are able to map OGL or DX9 function calls (via their drivers) to specific hardware routines (or even multiple hardware operations if necessary). It just so happens that the default Microsoft compiler generates code that runs faster on ATI's hardware than on NVIDIA's.

The solution NVIDIA has is to sit down with developers and help hand-code things to run better on their hardware. Obviously this is an inelegant solution, and it has caused quite a few problems (*cough* Valve *cough*). NVIDIA's goal is to eliminate this extended development effort via their compiler technology.

Obviously, if NVIDIA starts "optimizing" their compiler to the point where their hardware is doing things not intended by the developer, we have a problem. I think it's very necessary to keep an eye on this, but it's helpful to remember that such things are not advantageous to NVIDIA. Over at Beyond3D, there is a comparison of the different compiler options (DX9 HLSL and NV Cg) for NVIDIA's shaders.

We didn't have time to delve into comparisons with the reference rasterizer for this article, but our visual inspections confirm Beyond3D's findings. Since going from the game code to the screen is what this is all about, as long as image quality remains pristine, we think using the Cg compiler makes perfect sense. It is important to know that the Cg compiler doesn't improve performance (except for a marginal gain while using AA), but it does a lot for image quality over the 45.xx Dets.



Back to the game...

Since NVIDIA can't do floating point textures, PS2.0 shadows were left off, and we didn't use the NVIDIA shadow (depth sprites) for our cards, as ATI doesn't support that. We have decided that since the glow effect uses PS2.0 (and we are using this as a DX9 stress test rather than an actual game), it needs to be enabled. The 'goodness' of the glow effect has been questioned, but we aren't here to critique the quality of the implementation. We simply want to test the raw power each card has to push TRAOD PS2.0 code. Personally, I think the effect glow had on the wall lights in the Paris demo was one of the only "pretty" things in the game.

Depth of Field (DoF) is also on. After watching this demo hundreds of times, it really seems to me that using PS2.0 for DoF in TRAOD was overkill for what they ended up with. It seems like they could have gotten similar results (with better frame rates) using lower detail (frequency) mipmaps and dynamic reduction of geometry. Of course, I could be way off base, but it just seems like there were better things that could have been done with PS2.0 in this game.
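To make that alternative concrete, here is a minimal sketch of the sort of cheap depth-of-field approximation being suggested: bias texture sampling toward blurrier mip levels as a surface gets farther from the focal plane. All constants here are invented for illustration; this is speculation about an approach, not TRAOD's code.

    #include <cmath>
    #include <cstdio>

    // Returns a mip level bias for a surface at 'depth': 0 at the focal
    // plane, rising to 'maxBias' at the edge of the focal range. The bias
    // would be fed to the texture sampler so out-of-focus surfaces pick
    // smaller (blurrier) mipmaps.
    float DofMipBias(float depth, float focalDepth, float focalRange, float maxBias) {
        float d = std::fabs(depth - focalDepth) / focalRange;
        if (d > 1.0f) d = 1.0f;
        return d * maxBias;
    }

    int main() {
        printf("bias at focus: %.2f\n", DofMipBias(10.0f, 10.0f, 20.0f, 3.0f)); // 0.00
        printf("bias far away: %.2f\n", DofMipBias(60.0f, 10.0f, 20.0f, 3.0f)); // 3.00
        return 0;
    }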

We note that there have been issues with the accuracy of the Depth of Field post processing, but we think that the new 50 series of Detonators (along with the Cg compiler) will alleviate this issue. Of course, there are still some IQ issues in ATI's 3.7 cats.

As games and hardware move forward, post effects like DoF and rendered textures are going to get more and more complex, and the way hardware handles these things will differ slightly. It's less important to look at pixel level "sameness" between two solutions than at overall image quality and the impact of the effect. The user experience is what matters in this arena, and some things are going to be subjective. Pixel shader effects are much more intricate than geometry or T&L, and differences in architecture, precision, and drivers will all contribute to slight differences where no solution can clearly be labeled more correct than another. Of course, that makes our job harder, but it will definitely be an interesting ride.

Anyway, in order to try to understand exactly how DX9 PS2.0 affects each graphics card, we are doing two tests at each resolution (with and without AA). In the first test, everything not needed to see the scene and retain PS2.0 functionality is disabled. For the second test, the only thing we do is turn off PS2.0 and run the benchmark again. The scores we will be giving you are in the form of percent decrease in performance when PS2.0 is enabled. This should give us some idea of how this implementation of PS2.0 scales on each card, and give us a good solid glimpse into the implications of DX9 in TRAOD (as this is the only game that will ever use this engine).
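In other words, the scores on the following pages are computed as below; the frame rates in this sketch are invented placeholders, not our measurements:

    #include <cstdio>

    // Percent decrease in frame rate when PS2.0 is enabled.
    double PercentDecrease(double fpsWithoutPS20, double fpsWithPS20) {
        return (fpsWithoutPS20 - fpsWithPS20) / fpsWithoutPS20 * 100.0;
    }

    int main() {
        // e.g. a card that falls from 60 fps to 45 fps when PS2.0 is turned on
        printf("PS2.0 hit: %.1f%%\n", PercentDecrease(60.0, 45.0)); // 25.0%
        return 0;
    }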



Tomb Raider: Angel of Darkness IQ no AA

NVIDIA 52.14

ATI Catalyst 3.7

Even though we can't see it in these screenshots, the ATI card seemed to have problems rendering walls and water consistently. As Lara slides down the hill at the beginning of this demo, the walls flicker as they approach. I would have simply thought that this was due to AF not being enabled but for the fact that NVIDIA's card renders the walls smoothly and consistently, giving a much better experience.

Also, at the bottom of the hill when Lara is running toward the water, the ATI card has some issues with the rendered texture on the water being consistent. The texture will turn off and on until she gets to a certain distance from the water. After this distance is reached, everything looks fine.

I really wanted to create some movies to show this, but time constraints and logistics have prevented us from doing so at this point.



Tomb Raider: Angel of Darkness Performance no AA

Since we are looking at percent decrease in performance with PS2.0, we should be able to get a good idea of how each card responds to the PS2.0 code in the game. I know everyone will want to take these numbers and say that they universally describe the DX9 performance hit on NVIDIA hardware, but we have had plenty of other benchmarks today that show very different results. In the end, so much of performance comes down to how the game was coded, and what is actually going on. Without further ado:




From this we can see that ATI's performance doesn't drop as much as NVIDIA's when PS2.0 is enabled. These are some very interesting numbers, even though they came out as we expected them to. Essentially, what these numbers have done is eliminate many of the outside factors that could have contributed to poor performance and focus on the PS2.0 code alone. We can now clearly see what everyone suspected: ATI handles rendering PS2.0 effects much more efficiently than NVIDIA's cards in TRAOD. Please remember that the context of the previous sentence is completely dependent on the inclusion of the final prepositional phrase.



Tomb Raider: Angel of Darkness IQ with AA and AF

NVIDIA 52.14

ATI Catalyst 3.7

Here we can see that the image quality in the stills doesn't show much difference between ATI and NVIDIA rendering. Unfortunately, not even the anisotropic filtering enabled in these screenshots is able to prevent the walls from flickering on ATI's card. The water still has the same issues this time around as well.



Tomb Raider: Angel of Darkness Performance with AA



Here, we see the same kind of performance trend that we did without AA on. It is very clear that the way ATI handles rendering TRAOD's PS2.0 code is more efficient.



Tron 2.0 Performance no AA/AF

Tron is a DX9 game, but I have been unable to find out the extent to which it uses the different pixel shader versions. The game is actually pretty cool and incorporates some interesting effects. We benchmarked this game in one of the light cycle levels. Image quality wasn't an issue in this game at all.

The ATI card had consistently higher frame rates. The average, low, and high frame rates were all higher on the ATI card than on NVIDIA's.




Tron 2.0 Performance 4xAA/8xAF

Again, image quality wasn't an issue; everything looked as it should. It is worth noting that AF doesn't seem to be useful here, since most textures are solid colors.


Again, we see that the ATI card pulls ahead of the NVIDIA at every level of comparison.



Unreal Tournament 2003 IQ no AA/AF

We did the same old benchmark with UT2003 we always do. Here are the screenshots we took.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Unreal Tournament 2003 Performance no AA/AF

Keep in mind that we aren't looking so much at how the game will perform in the flyby demos as at how the engine stresses the GPUs. This is a popular engine, and lots of other games use it, so the general performance of the engine is a very relevant factor to gameplay in general.


We can see that NVIDIA takes this benchmark from ATI with both old and new drivers.



Unreal Tournament 2003 IQ 4xAA/8xAF

Again, here are the shots.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Unreal Tournament 2003 Performance 4xAA/8xAF



This time we see ATI pull ahead of the NVIDIA cards. No doubt the extra memory bandwidth the 9800XT received from its higher memory clock has contributed to these numbers.



Warcraft III: Frozen Throne IQ no AA/AF

Ah, Warcraft, the RTS connoisseur's game; the benchmark we ran was the same auto-camera replay at 8x speed we used in Part 1. Here are some screenshots for your viewing pleasure:

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Warcraft III: Frozen Throne Performance no AA/AF


As in Part 1, even at higher resolutions, ATI pushes much higher frame rates in this game. The minimum fps are very similar at 1280x1024, but at 1600x1200, ATI has about a 25% lead.



Warcraft III: Frozen Throne IQ 4xAA/8xAF

Again, we didn't notice any problems with image quality during gameplay, but here are the screenshots so you can all take a look.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Warcraft III: Frozen Throne Performance 4xAA/8xAF


Here we see that ATI maintains its lead. If you're an avid WC3 player, the 9800XT will definitely eliminate any kind of choppiness from your gameplay.



Wolfenstein: Enemy Territory IQ no AA/AF

This is an OpenGL game based on the Quake III engine. The entire thing is available for download if you have the patience/bandwidth for the 250+ MB download. We used the same benchmark we used last time. Again, gameplay was fine on both cards. Here are the screenshots for your viewing pleasure.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Wolfenstein: Enemy Territory Performance no AA/AF


We see the ATI card is ahead of the NVIDIA card for this part of the testing. The difference is not significant at 1280x1024, but the gap widens at 1600x1200.



Wolfenstein: Enemy Territory IQ 4xAA/8xAF

More of the same with a little frosting on top; here are the screenshots.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



Wolfenstein: Enemy Territory Performance 4xAA/8xAF


Again, the numbers are close at 1280x1024. But this time we see the NVIDIA card pulling ahead at the higher resolution. This is a very interesting aspect to note, and might have something to do with the fact that the FX 5950 still has more memory bandwidth than the 9800XT.



X2: The Threat IQ no AA/AF

X2 is a DX9 game slated for release in November. X2 is definitely a pretty game, with all kinds of particle and lighting effects for nebulas and engines in space. We used this demo in Part 1 as well and mentioned that there were some smoothness issues with NVIDIA's cards. The problem seems to be that every couple of seconds, when something overly graphically intensive is going on, the card will drop a few frames and move on to the next ones.

Other than the jerkiness/motion issues of the NVIDIA card, we didn't notice any visual quality problems on either card for this test. Here are the shots.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



X2: The Threat Performance no AA/AF

It is very important to note that even where frame rates were lower on the ATI cards, the experience was of much higher quality, as NVIDIA didn't seem to be able to maintain a constant frame rate. Please take that into account when looking at these numbers.

The 5950 Ultra showed some solid improvements over the 5900 (showing this game's benefit from the NV38 clock and memory clock speed). All the NVIDIA drivers had the same problems, so it's not really useful to point fingers at the unreleased driver as the culprit. Of course, this is also an unreleased game, so we are hoping all parties involved will work quickly to resolve these issues.



X2: The Threat IQ 4xAA/8xAF

For this test the jittery choppy motion of the NVIDIA card was more pronounced. Also, it looked to us as if the NVIDIA card wasn't doing AA as well as it should have. Of course, here are the shots, and we'll let you be the judge.

NVIDIA 45.23

NVIDIA 52.14

ATI Catalyst 3.7



X2: The Threat Performance 4xAA/8xAF


Again we see NVIDIA leading, but again, we can't call this a good representation of gaming performance as the quality of the gaming experience is severely wounded by the problems this game has on NVIDIA hardware.



Final Words

If anyone actually made it this far without skipping around, please let me express my sincere appreciation for your dedication. This article has definitely been an entity with a mind of its own, and it continued to grow regardless of how much we hacked at it. There are benchmarks we had to leave out, and there is still so much more I want to do with these cards and games.

The 5950 hasn't been shown to perform much better than the 5900, but it delivers an acceptable performance increase for a fall refresh product. So far, we like what we have seen from the 9800XT, and we are anxious to test out ATI's Overdrive feature.

The new 52.14 drivers are much better than either the 51.xx or the 45.xx series. The image quality issues from the 51.xx are corrected, and a lot of speed has been eked out over the 45.xx drivers. We have actually been very impressed with the speed, image quality, and playability enhancements we have seen. As long as NVIDIA doesn't take a step backwards before the official 50 series drivers are released, we think everyone who owns a GeForce FX card will be very pleased with what they get. NVIDIA should never have pushed the press to benchmark with the 51 series; no one used it for Half-Life 2, and in the end the bugs in those drivers did nothing more than tarnish NVIDIA's name. Regaining the credibility they have lost will definitely take NVIDIA some time.

If you made it all the way through the section on TRAOD, you'll remember the miniboss named compilers. The very large performance gains we saw in Halo, Aquamark3, X2, and Tomb Raider can be attributed to the enhancements of NVIDIA's compiler technology in the 52.xx series of drivers. Whether a developer writes code in HLSL or Cg, NVIDIA's goal is to be able to take that code and find the optimum way to achieve the desired result on their hardware. Eliminating the need for developers to spend extra time hand optimizing code specifically for NVIDIA hardware is in everyone's best interest. If NVIDIA can continue to extract the kinds of performance gains from unoptimized DX9 code that they have with the 52.14 drivers (without sacrificing image quality), they will be well on their way to taking the performance crown back from ATI by the time NV40 and R400 drop. NVIDIA's GPU architecture is a solid one; it just needs to be treated the right way. From our angle, at this point, compiler technology is NVIDIA's wildcard. Depending on what they are able to do with it, things could go either way.

Right now NVIDIA is at a disadvantage; ATI's hardware is much easier to code for, and Microsoft's HLSL compiler clearly favors the R3x0 over the NV3x. NVIDIA has a long road ahead of them in order to improve their compilers to the point where game developers won't have to hand-code special NV3x codepaths, but for now ATI seems to have won the battle. Next year will be the year of DX9 titles, and it will be under the next generation of games that we will finally be able to crown a true DX9 winner. Until then, anyone's guess is fair game.

ATI is still the recommendation, but NVIDIA is not a bad card to have by any stretch of the imagination. We still urge our readers not to buy a card until the game they want to play shows up on the street. For those of you who need a card now, we'll be doing a value card round up as part of this series as well.

Keep in mind that ATI's Catalyst 3.8 drivers are coming out this week, and rest assured that we will be doing a follow up as quickly as possible to fill in the gaps. To say this has been a very interesting month in the graphics world would be a definite understatement. If this hasn't been an overload of information, stay tuned, because there is so much more to come.
