Compare any two graphics cards:
GeForce 8800 GTS (G92) vs Radeon HD 5770
Intro
The GeForce 8800 GTS (G92) has a core clock of 650 MHz on the GPU and 970 MHz on its 512 MB of GDDR3 memory. It features 128 SPUs along with 64 TAUs and 16 ROPs.
Compare that to the Radeon HD 5770, which comes with a GPU clock speed of 850 MHz and 1024 MB of GDDR5 memory running at 1200 MHz through a 128-bit bus. It is made up of 800 (160x5) SPUs, 40 TAUs, and 16 ROPs.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Performance-wise, the Radeon HD 5770 should in theory be quite a bit faster than the GeForce 8800 GTS (G92) overall, thanks to its much higher shader count and core clock.
Texel Rate
The GeForce 8800 GTS (G92) should be noticeably (approximately 22%) faster at anisotropic filtering than the Radeon HD 5770.
Pixel Rate
The Radeon HD 5770 is significantly (roughly 31%) faster at FSAA than the GeForce 8800 GTS (G92), and should also be able to handle higher screen resolutions while still performing well.
Please note that the 'benchmarks' above are purely theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) differ at least a bit.
Please note that the price comparisons are based on search keywords - sometimes it might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Memory Bandwidth: Memory bandwidth is the maximum amount of data (in MB per second) that can be moved over the external memory interface. It is worked out by multiplying the interface width by the memory clock speed; for DDR memory (including GDDR3) the result is doubled, and for GDDR5 it is doubled again. The higher the memory bandwidth, the better the card will perform in general - it especially helps with AA, HDR, and high resolutions.
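The bandwidth calculation above can be sketched in a few lines of Python. The HD 5770's specs (128-bit bus, 1200 MHz GDDR5) come from the comparison above; the 8800 GTS (G92)'s 256-bit bus width is not listed in this comparison, so treat it as an assumption drawn from the card's published specs.

```python
def bandwidth_gbs(bus_bits: int, mem_mhz: int, pumps: int) -> float:
    """Peak memory bandwidth in GB/s.

    pumps: data transfers per clock - 2 for DDR/GDDR3,
    4 for GDDR5 (double data rate, doubled again).
    """
    bytes_per_transfer = bus_bits / 8          # bus width in bytes
    effective_mts = mem_mhz * pumps            # effective transfer rate (MT/s)
    return bytes_per_transfer * effective_mts / 1000

# Radeon HD 5770: 128-bit bus, 1200 MHz GDDR5
print(bandwidth_gbs(128, 1200, 4))   # 76.8 GB/s
# GeForce 8800 GTS (G92): 970 MHz GDDR3; 256-bit bus assumed (not listed above)
print(bandwidth_gbs(256, 970, 2))    # 62.08 GB/s
```

Despite its narrower 128-bit bus, the HD 5770 comes out ahead because GDDR5 moves twice as much data per clock as GDDR3.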
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed in one second. It is calculated by multiplying the total number of texture units on the card by the core clock speed of the chip. The higher this number, the better the graphics card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels processed per second.
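Applying that formula to the figures from the comparison above (64 TAUs at 650 MHz versus 40 TAUs at 850 MHz) reproduces the ~22% texel-rate lead claimed earlier:

```python
def texel_rate(texture_units: int, core_mhz: int) -> int:
    """Peak texel rate in millions of texels per second (Mtexels/s)."""
    return texture_units * core_mhz

gts_8800 = texel_rate(64, 650)   # GeForce 8800 GTS (G92) -> 41600 Mtexels/s
hd_5770 = texel_rate(40, 850)    # Radeon HD 5770        -> 34000 Mtexels/s

lead_pct = (gts_8800 / hd_5770 - 1) * 100
print(f"8800 GTS texel-rate lead: {lead_pct:.0f}%")   # ~22%
```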
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of Render Output Units by the core clock speed of the card. ROPs (Raster Operations Pipelines - also called Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel fill rate also depends on many other factors, especially memory bandwidth - the lower the bandwidth, the lower the chance of reaching the maximum fill rate.
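The same arithmetic backs the ~31% pixel-rate gap quoted earlier. Both cards have 16 ROPs, so the difference comes down entirely to core clock:

```python
def pixel_rate(rops: int, core_mhz: int) -> int:
    """Peak pixel fill rate in millions of pixels per second (Mpixels/s)."""
    return rops * core_mhz

hd_5770 = pixel_rate(16, 850)    # Radeon HD 5770         -> 13600 Mpixels/s
gts_8800 = pixel_rate(16, 650)   # GeForce 8800 GTS (G92) -> 10400 Mpixels/s

lead_pct = (hd_5770 / gts_8800 - 1) * 100
print(f"HD 5770 pixel-rate lead: {lead_pct:.0f}%")   # ~31%
```

Keep in mind this is the theoretical ceiling; as noted above, whether a card can sustain it depends heavily on memory bandwidth.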