Compare any two graphics cards:
GeForce GTX 560 vs Radeon HD 7770
Intro
The GeForce GTX 560 has clock speeds of 810 MHz on the GPU core and 1001 MHz on its 1024 MB of GDDR5 memory. It features 336 SPUs along with 56 TAUs and 32 ROPs.
Compare those specs to the Radeon HD 7770, which has clock speeds of 1000 MHz on the GPU core and 1125 MHz on its 1024 MB of GDDR5 memory. It features 640 SPUs as well as 40 TAUs and 16 ROPs.
(No game benchmarks for this combination yet.)
Power Usage and Theoretical Benchmarks
Memory Bandwidth
Thanks to its much higher memory bandwidth, the GeForce GTX 560 should theoretically be much faster than the Radeon HD 7770 in general.
Texel Rate
The GeForce GTX 560 is a bit (roughly 13%) better at anisotropic filtering (AF) than the Radeon HD 7770.
Pixel Rate
The GeForce GTX 560 is quite a bit (approximately 62%) better at full-screen anti-aliasing than the Radeon HD 7770, and should handle higher resolutions better.
Please note that the 'benchmarks' above are purely theoretical: the results were calculated from each card's specifications, and real-world performance may (and probably will) vary at least a bit.
Please note that price comparisons are based on search keywords, so they can occasionally include cards with very similar names that are not identical to the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Memory Bandwidth: Memory bandwidth is the maximum amount of data (in megabytes per second) that can be transferred over the external memory interface. It is calculated by multiplying the card's interface width by the speed of its memory. If the card uses DDR-type memory, the result is multiplied by 2; for GDDR5, multiply by 4 instead. The higher the bandwidth, the better the card will perform in general. It especially helps with anti-aliasing, High Dynamic Range, and high resolutions.
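The calculation above can be sketched in a few lines of Python. Note that the memory bus widths used here (256-bit for the GTX 560, 128-bit for the HD 7770) come from the cards' published specifications, not from this page:

```python
def memory_bandwidth_gbps(bus_width_bits, mem_clock_mhz, data_rate_multiplier):
    """Theoretical bandwidth in GB/s: bytes per clock x effective clock rate.

    data_rate_multiplier is 2 for DDR-type memory, 4 for GDDR5.
    """
    bytes_per_clock = bus_width_bits / 8
    effective_clock_hz = mem_clock_mhz * 1e6 * data_rate_multiplier
    return bytes_per_clock * effective_clock_hz / 1e9

# GTX 560: 256-bit bus (published spec), 1001 MHz GDDR5
gtx_560 = memory_bandwidth_gbps(256, 1001, 4)   # ~128.1 GB/s
# HD 7770: 128-bit bus (published spec), 1125 MHz GDDR5
hd_7770 = memory_bandwidth_gbps(128, 1125, 4)   # ~72.0 GB/s
```

The roughly 78% bandwidth advantage of the GTX 560 is what drives the "much faster in general" verdict above.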
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second. It is worked out by multiplying the total number of texture units by the core clock speed of the chip, and is measured in millions of texels per second. The higher this number, the better the video card will be at texture filtering (anisotropic filtering, or AF).
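As a quick check of the "roughly 13%" figure quoted earlier, the same multiplication can be applied to both cards' specs from the intro:

```python
def texel_rate_mtexels(texture_units, core_clock_mhz):
    """Theoretical texel fill rate in millions of texels per second."""
    return texture_units * core_clock_mhz

gtx_560 = texel_rate_mtexels(56, 810)    # 45360 Mtexels/s
hd_7770 = texel_rate_mtexels(40, 1000)   # 40000 Mtexels/s
advantage = gtx_560 / hd_7770 - 1        # ~0.134, i.e. about 13%
```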
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second. It is worked out by multiplying the number of ROPs by the core clock speed of the card. ROPs (Raster Operations Pipelines, also called Render Output Units) are responsible for drawing the pixels (the image) on the screen. The actual pixel output rate also depends on several other factors, most notably memory bandwidth: the lower the memory bandwidth, the lower the chance of reaching the theoretical maximum fill rate.
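The same pattern reproduces the "approximately 62%" pixel-rate gap quoted in the comparison, using the ROP counts and core clocks from the intro:

```python
def pixel_rate_mpixels(rops, core_clock_mhz):
    """Theoretical pixel fill rate in millions of pixels per second."""
    return rops * core_clock_mhz

gtx_560 = pixel_rate_mpixels(32, 810)    # 25920 Mpixels/s
hd_7770 = pixel_rate_mpixels(16, 1000)   # 16000 Mpixels/s
advantage = gtx_560 / hd_7770 - 1        # 0.62, i.e. 62%
```

Keep in mind this is the theoretical ceiling: as noted above, a card short on memory bandwidth may not reach it in practice.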