Compare any two graphics cards:
GeForce 8800 GTX vs Radeon HD 5770
Intro
The GeForce 8800 GTX uses a 90 nm design. NVIDIA has set the core speed at 575 MHz. The GDDR3 memory works at a frequency of 900 MHz on this card. It features 128 SPUs along with 64 texture address units (TAUs) and 24 raster operation units (ROPs).
Compare those specifications to the Radeon HD 5770, which uses a 40 nm design. AMD has set the core clock at 850 MHz. The GDDR5 memory runs at a frequency of 1200 MHz on this model. It features 800 (160 × 5) SPUs along with 40 texture address units and 16 raster operation units.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Theoretically, the GeForce 8800 GTX should perform slightly faster than the Radeon HD 5770 overall.
Texel Rate
The GeForce 8800 GTX should be slightly (about 8%) better at anisotropic filtering (AF) than the Radeon HD 5770.
Pixel Rate
The GeForce 8800 GTX should be slightly (roughly 1%) faster with regards to anti-aliasing (AA) than the Radeon HD 5770, and should be able to handle higher screen resolutions while still performing well.
Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) vary at least a bit.
Please note that the price comparisons are based on search keywords - sometimes it might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in MB per second) that can be transferred across the card's external memory interface. It is calculated by multiplying the interface width by the memory clock speed. For DDR-type memory (including GDDR3), the result is multiplied by 2; for GDDR5, it is multiplied by 4, since GDDR5 transfers four bits per pin per clock. The higher the bandwidth, the faster the card will be in general. It especially helps with anti-aliasing, HDR and higher screen resolutions.
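The bandwidth calculation described above can be sketched as follows. Note that the memory interface widths (384-bit for the 8800 GTX, 128-bit for the HD 5770) are taken from the cards' reference specifications and are not stated in this comparison, so treat them as assumptions:

```python
def memory_bandwidth_gbs(bus_width_bits, mem_clock_mhz, transfers_per_clock):
    # bytes per clock = bus width in bits / 8; MHz * MB = GB when divided by 1000
    return (bus_width_bits / 8) * mem_clock_mhz * transfers_per_clock / 1000

# GeForce 8800 GTX: assumed 384-bit bus, 900 MHz GDDR3 (double data rate -> x2)
gtx = memory_bandwidth_gbs(384, 900, 2)    # 86.4 GB/s

# Radeon HD 5770: assumed 128-bit bus, 1200 MHz GDDR5 (quad-pumped -> x4)
hd5770 = memory_bandwidth_gbs(128, 1200, 4)  # 76.8 GB/s

print(f"8800 GTX: {gtx} GB/s, HD 5770: {hd5770} GB/s")
```

Despite its older memory technology, the 8800 GTX's much wider bus gives it the higher bandwidth here.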
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second. This number is calculated by multiplying the number of texture units by the core clock speed of the chip. The higher the texel rate, the better the video card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels processed per second.
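Plugging the two cards' figures from the intro into this formula reproduces the roughly 8% texel-rate edge quoted earlier:

```python
def texel_rate_mtexels(texture_units, core_clock_mhz):
    # texture units * core clock (MHz) = millions of texels per second
    return texture_units * core_clock_mhz

gtx = texel_rate_mtexels(64, 575)     # 8800 GTX: 36,800 Mtexels/s
hd5770 = texel_rate_mtexels(40, 850)  # HD 5770: 34,000 Mtexels/s

print(f"ratio: {gtx / hd5770:.3f}")   # about 1.08, i.e. ~8% faster
```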
Pixel Rate: Pixel rate is the maximum number of pixels that the graphics chip can possibly write to its local memory per second - measured in millions of pixels per second. The figure is calculated by multiplying the number of ROPs by the card's clock speed. ROPs (Raster Operations Pipelines - also sometimes called Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel output rate also depends on many other factors, most notably the memory bandwidth - the lower the memory bandwidth, the lower the ability to reach the maximum fill rate.
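The same style of calculation with ROP counts from the intro accounts for the roughly 1% pixel-rate difference mentioned above:

```python
def pixel_rate_mpixels(rops, core_clock_mhz):
    # ROPs * core clock (MHz) = millions of pixels per second
    return rops * core_clock_mhz

gtx = pixel_rate_mpixels(24, 575)     # 8800 GTX: 13,800 Mpixels/s
hd5770 = pixel_rate_mpixels(16, 850)  # HD 5770: 13,600 Mpixels/s

print(f"ratio: {gtx / hd5770:.3f}")   # about 1.01, i.e. ~1% faster
```

As the text notes, these are theoretical ceilings; limited memory bandwidth can keep either card from reaching its maximum fill rate.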