Compare any two graphics cards:
GeForce GTX 560 Ti vs Radeon HD 5770
Intro
The GeForce GTX 560 Ti runs its GPU core at 822 MHz and its 1024 MB of GDDR5 memory at 1002 MHz. It features 384 SPUs, as well as 64 TAUs and 32 ROPs.
Compare that to the Radeon HD 5770, which features a GPU core speed of 850 MHz and 1024 MB of GDDR5 memory running at 1200 MHz through a 128-bit bus. It is made up of 800 (160 × 5) SPUs, 40 texture address units, and 16 ROPs.
(No game benchmarks for this combination yet.)
Power Usage and Theoretical Benchmarks
Theoretically, the GeForce GTX 560 Ti should be a lot faster than the Radeon HD 5770 overall.
Texel Rate
The GeForce GTX 560 Ti will be noticeably (about 55%) faster at anisotropic filtering than the Radeon HD 5770.
Pixel Rate
If using a high screen resolution is important to you, then the GeForce GTX 560 Ti is the winner, by far.
Please note that the above 'benchmarks' are purely theoretical: the results are calculated from each card's specifications, and real-world performance may (and probably will) differ at least slightly.
Please note that the price comparisons are based on search keywords - sometimes it might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Memory Bandwidth: Memory bandwidth is the maximum rate at which data can be transferred across the card's external memory interface, usually quoted in gigabytes per second. It is worked out by multiplying the card's bus width by its memory clock speed. For DDR-type memory, the result should be multiplied by 2; for GDDR5, multiply by 4 instead. The higher the card's memory bandwidth, the better the card will be in general. It especially helps with anti-aliasing, High Dynamic Range, and high resolutions.
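As a quick sketch, the bandwidth formula above can be applied to the Radeon HD 5770's figures from this comparison (128-bit bus, 1200 MHz GDDR5); the function name and units here are our own, not from any particular tool:

```python
def memory_bandwidth_gb_s(bus_width_bits, mem_clock_mhz, ddr_multiplier):
    # bits per second = bus width x memory clock x DDR multiplier;
    # divide by 8 for bytes, then by 1e9 for gigabytes.
    bits_per_second = bus_width_bits * mem_clock_mhz * 1e6 * ddr_multiplier
    return bits_per_second / 8 / 1e9

# Radeon HD 5770: 128-bit bus, 1200 MHz GDDR5 (multiplier 4)
print(memory_bandwidth_gb_s(128, 1200, 4))  # 76.8 GB/s
```

This matches the HD 5770's published 76.8 GB/s figure, which is a good sanity check on the formula.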
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second. This figure is calculated by multiplying the total number of texture units by the core clock speed of the chip. The higher this number, the better the graphics card will be at texture filtering (anisotropic filtering, or AF). It is measured in millions of texels applied per second.
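Applying this formula to the two cards' numbers from the intro shows where the "about 55%" texel-rate gap comes from (the function name is illustrative):

```python
def texel_rate_mtexels(core_clock_mhz, texture_units):
    # Mtexels/s = core clock (MHz) x number of texture units
    return core_clock_mhz * texture_units

gtx_560_ti = texel_rate_mtexels(822, 64)  # 52608 Mtexels/s
hd_5770 = texel_rate_mtexels(850, 40)     # 34000 Mtexels/s
print(gtx_560_ti / hd_5770)               # about 1.55, i.e. roughly 55% faster
```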
Pixel Rate: Pixel rate is the maximum number of pixels the video card can write to its local memory per second, measured in millions of pixels per second. Pixel rate is calculated by multiplying the number of Raster Operations Pipelines by the core clock speed. ROPs (Raster Operations Pipelines, sometimes also referred to as Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel output rate also depends on quite a few other factors, most notably the memory bandwidth: the lower the memory bandwidth, the lower the potential to reach the maximum fill rate.
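The same kind of back-of-the-envelope calculation explains the pixel-rate verdict above; again, the function name is just for illustration:

```python
def pixel_rate_mpixels(core_clock_mhz, rops):
    # Mpixels/s = core clock (MHz) x number of ROPs
    return core_clock_mhz * rops

print(pixel_rate_mpixels(822, 32))  # GTX 560 Ti: 26304 Mpixels/s
print(pixel_rate_mpixels(850, 16))  # HD 5770:    13600 Mpixels/s
```

With 32 ROPs against 16, the GTX 560 Ti's theoretical fill rate is nearly double that of the HD 5770, which is why it pulls so far ahead at high resolutions.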