Super7 Video Accelerator Comparison - Part 2: Mid-Range Gaming Solutions (June 99)
by Anand Lal Shimpi on July 2, 1999 12:51 AM EST - Posted in GPUs
As with all other computer users, Super7 owners can be classified into two distinct categories from a gaming perspective: high-end gamers and mid-range gamers. The reason for leaving out a low-end classification is simple: the slowest 3DNow! Super7 processor is the K6-2 300, which hardly qualifies as low-end, a label pretty much reserved for K6 200 and K6 233 users. Therefore, the next logical step in the performance comparison is the more commonplace mid-range Super7 system.
While Part 1 of AnandTech's coverage was reserved for the bleeding edge of video cards on the fastest Super7 systems, Part 2 caters to the majority of the Super7 population, covering all previous and current generation video cards on mid-range as well as high-end systems. The Super7 market was initially intended as a "value" alternative to Intel-based systems; however, since the release of the K6-3, the term Super7 hasn't been reserved for low-end systems only. Unfortunately, a limiting factor until recently has been poor attention to driver support for Super7 systems, as discussed in Part 1 of this article. Part 1 addressed quite a few important topics that are worth reading over before proceeding to the next step in this progression.
Armed with the results from the first article and a few new test configurations, let's proceed to the second part: Mid-Range Gaming performance.
Graphics Clock vs CPU Clock
One of the most obvious deductions you can make from the results of the high-end testing is that the graphics clock (core speed) becomes an influential factor as the resolution increases beyond 1024 x 768. The same holds once you drop the CPU speed, but the performance boost achieved from an increased graphics clock is not as great. With the CPU as the limiting factor, overclocking your graphics card on a slower system won't yield nearly as great a performance increase as doing the same on a higher-end system would.
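A minimal sketch of this relationship, assuming frame time is simply dominated by whichever of the CPU and the graphics core needs longer to finish its share of the work each frame; the clock speeds and per-frame workload figures below are hypothetical, chosen only to illustrate the scaling behavior:

    # Minimal sketch: frame time is taken to be whichever of the CPU and the
    # graphics core needs longer per frame.  Workloads are expressed in
    # thousands of clock cycles per frame and are purely illustrative.

    def frame_rate(cpu_mhz, gfx_mhz, cpu_work=12_000, gfx_work=3_000):
        cpu_ms = cpu_work / cpu_mhz     # time the CPU needs per frame (ms)
        gfx_ms = gfx_work / gfx_mhz     # time the graphics core needs per frame (ms)
        return 1000.0 / max(cpu_ms, gfx_ms)

    for cpu_mhz in (300, 450):          # mid-range vs. high-end Super7 clock
        stock = frame_rate(cpu_mhz, gfx_mhz=100)
        overclocked = frame_rate(cpu_mhz, gfx_mhz=125)
        print(f"{cpu_mhz} MHz CPU: {stock:.0f} fps stock, "
              f"{overclocked:.0f} fps with a 25% core overclock")

In this toy model the 300 MHz system sees no benefit at all from the core overclock because the CPU never gets out of the way, while the 450 MHz system picks up the full gain, which mirrors the pattern in the high-end results.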
Z-Buffering: Trick or no Trick?
There is much debate as to whether 16-bit Z-buffering, in combination with 32-bit color rendering, is used by manufacturers as a performance illusion to lessen the noticeable performance drop when moving from 16-bit to 32-bit color. The "trick" first truly came to the attention of the public and the media with the release of the first ATI Rage 128 reviews, which pretty much all reached the same conclusion: a very small performance drop when going from 16-bit to 32-bit color.
Upon further inspection, the fact that the drivers were using a 16-bit Z-buffer with 32-bit color instead of a 32-bit Z-buffer made a few people do a double take. After 32-bit Z-buffering support was enabled, the performance drop grew to something more than the virtually non-existent drop experienced with the first set of drivers for the card. Theoretically, a manufacturer could disable 32-bit Z-buffering to make a greater impact in a review when their particular card exhibits a very small or nonexistent drop in performance when moving from 16-bit to 32-bit color. That is not to say that any current manufacturers would dream of doing such a thing, but it is always a possibility. Now that more people check the Z depth before dropping their jaws at performance numbers, the "trick," assuming that it is one, isn't as effective as it once was or might have been. This brings us to the next question: does it really matter?
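Why would a narrower Z-buffer shrink the performance gap in the first place? A rough back-of-the-envelope estimate of framebuffer traffic makes the point; the resolution, the overdraw factor, and the assumption that every covered pixel costs one Z read, one Z write, and one color write (ignoring texture fetches) are simplifying assumptions rather than measurements of any particular card:

    # Rough estimate of framebuffer traffic per frame at 1024x768.
    # The overdraw factor and per-pixel access pattern (one Z read, one Z
    # write, one color write) are simplifying assumptions.

    WIDTH, HEIGHT = 1024, 768
    OVERDRAW = 2.5                      # average number of times each pixel is touched

    def traffic_mb(color_bytes, z_bytes):
        pixels = WIDTH * HEIGHT * OVERDRAW
        bytes_per_pixel = 2 * z_bytes + color_bytes   # Z read + Z write + color write
        return pixels * bytes_per_pixel / (1024 ** 2)

    print(f"32-bit color, 16-bit Z: {traffic_mb(4, 2):.1f} MB per frame")
    print(f"32-bit color, 32-bit Z: {traffic_mb(4, 4):.1f} MB per frame")

Halving the Z-buffer depth trims the depth traffic in half, so a card that is memory bandwidth limited in 32-bit color will naturally show a smaller hit relative to 16-bit color.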
It always seems that everyone is quick to attack, yet very few are willing to understand why they're attacking, or toward what goal. So what does 32-bit Z-buffering buy you? The closest parallel to the argument of whether 32-bit Z is important when rendering in 32-bit color (in games) is the classic 16-bit vs 32-bit color debate. Some can notice the difference and it bothers them tremendously; others either can't, or don't care, and the sketch below gives a sense of the scale involved. Which brings us to the next argument...
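As for what the extra depth precision actually buys, the following sketch estimates the smallest world-space distance a 16-bit and a 32-bit Z-buffer can tell apart near the far plane, assuming the conventional non-linear perspective depth mapping; the near and far plane distances are hypothetical:

    # Approximate world-space depth resolution near the far plane for 16-bit
    # and 32-bit Z-buffers.  Assumes the standard perspective mapping
    # z_buf(z) = FAR * (z - NEAR) / (z * (FAR - NEAR)); the frustum values
    # are hypothetical.

    NEAR, FAR = 1.0, 4096.0             # hypothetical near/far planes, in world units

    def depth_step(eye_z, bits):
        levels = 2 ** bits - 1                              # quantization levels
        slope = FAR * NEAR / ((FAR - NEAR) * eye_z ** 2)    # d(z_buf)/d(eye_z)
        return 1.0 / (levels * slope)                       # world units per Z step

    for bits in (16, 32):
        step = depth_step(eye_z=4000.0, bits=bits)
        print(f"{bits}-bit Z: ~{step:.4f} world units distinguishable at depth 4000")

With only 16 bits, distant surfaces separated by hundreds of units can land on the same Z value and flicker through one another, while 32 bits keeps them cleanly ordered; whether that is visible, or worth the bandwidth, is the same kind of judgment call as 16-bit versus 32-bit color.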