Compare any two graphics cards:
GeForce 8800 GTS (G92) vs Radeon HD 5770
Intro
The GeForce 8800 GTS (G92) has clock speeds of 650 MHz on the GPU and 970 MHz on its 512 MB of GDDR3 RAM. It features 128 SPUs, 64 texture address units (TAUs), and 16 raster operation units (ROPs).
Compare those specs to the Radeon HD 5770, which has a GPU core clock of 850 MHz and 1024 MB of GDDR5 RAM running at 1200 MHz over a 128-bit bus. It is built from 800 (160x5) SPUs, 40 texture address units, and 16 ROPs.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
The Radeon HD 5770 should, in theory, be much faster than the GeForce 8800 GTS (G92) in general: it leads in shader count (800 vs. 128 SPUs), core clock (850 vs. 650 MHz), and memory bandwidth (GDDR5 vs. GDDR3).
Texel Rate
The GeForce 8800 GTS (G92) will be noticeably (about 22%) more effective at texture filtering than the Radeon HD 5770, since its 64 texture units at 650 MHz outpace the 5770's 40 texture units at 850 MHz.
Pixel Rate
If running at high resolutions is important to you, then the Radeon HD 5770 is the clearly better choice: with the same 16 ROPs but a higher core clock and more memory bandwidth, its pixel fill rate is substantially higher.
Please note that the above 'benchmarks' are purely theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) differ at least a bit.
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in megabytes) that can be transferred across the external memory interface per second. It is calculated by multiplying the interface width by the memory clock speed. If the card uses DDR-type memory (such as GDDR3), the result is multiplied by 2; if it uses GDDR5, multiply by 4 instead. The higher the bandwidth, the faster the card will generally be. It especially helps with anti-aliasing (AA), High Dynamic Range (HDR), and high resolutions.
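The calculation above can be sketched in a few lines of Python. The function name is hypothetical, and note that the 8800 GTS (G92)'s 256-bit bus width comes from the card's published specifications, not from the specs listed on this page:

```python
def memory_bandwidth_mb_s(bus_width_bits, mem_clock_mhz, multiplier):
    """Theoretical bandwidth in MB/s: bytes per transfer x clock x data rate.

    multiplier is 2 for DDR-type memory (e.g. GDDR3) and 4 for GDDR5.
    """
    return (bus_width_bits // 8) * mem_clock_mhz * multiplier

# GeForce 8800 GTS (G92): 256-bit bus (published spec), 970 MHz GDDR3
gts_8800 = memory_bandwidth_mb_s(256, 970, 2)    # 62080 MB/s (~62.1 GB/s)

# Radeon HD 5770: 128-bit bus, 1200 MHz GDDR5
hd_5770 = memory_bandwidth_mb_s(128, 1200, 4)    # 76800 MB/s (76.8 GB/s)
```

Despite its narrower 128-bit bus, the HD 5770 comes out ahead because GDDR5 moves four transfers per clock where GDDR3 moves two.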
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second. It is calculated by multiplying the total number of texture units on the card by the core clock speed of the chip. The higher this number, the better the video card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels per second.
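As a minimal sketch of that formula (the function name is made up for illustration), using the texture-unit counts and core clocks listed earlier:

```python
def texel_rate_mtexels(texture_units, core_clock_mhz):
    """Theoretical texel rate in millions of texels per second."""
    return texture_units * core_clock_mhz

gts_8800 = texel_rate_mtexels(64, 650)   # 41600 Mtexels/s
hd_5770 = texel_rate_mtexels(40, 850)    # 34000 Mtexels/s

# 41600 / 34000 is roughly 1.22 - the ~22% texture-filtering
# advantage for the 8800 GTS mentioned above.
```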
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second - measured in millions of pixels per second. It is calculated by multiplying the number of Render Output Units by the card's core clock speed. ROPs (Raster Operations Pipelines, also called Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel output rate also depends on quite a few other factors, most notably memory bandwidth - the lower the bandwidth, the harder it is to reach the theoretical maximum fill rate.
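The same kind of sketch works for pixel rate (again, the function name is hypothetical), using the ROP counts and clocks from the intro:

```python
def pixel_rate_mpixels(rops, core_clock_mhz):
    """Theoretical pixel fill rate in millions of pixels per second."""
    return rops * core_clock_mhz

# Both cards have 16 ROPs, so the higher-clocked HD 5770 wins here.
gts_8800 = pixel_rate_mpixels(16, 650)   # 10400 Mpixels/s
hd_5770 = pixel_rate_mpixels(16, 850)    # 13600 Mpixels/s
```

Since the HD 5770 also has more memory bandwidth, it is better placed to actually sustain its higher theoretical fill rate at high resolutions.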