GeForce GTX 280 vs Radeon HD 5770
Intro
The GeForce GTX 280 features a clock speed of 602 MHz and a GDDR3 memory frequency of 1107 MHz. It also features a 512-bit bus, and uses a 65 nm design. It consists of 240 SPUs, 80 Texture Address Units, and 32 ROPs.
Compare those specifications to the Radeon HD 5770, which has a GPU clock speed of 850 MHz, and 1024 MB of GDDR5 memory running at 1200 MHz through a 128-bit bus. It is made up of 800 (160x5) Stream Processors, 40 TAUs, and 16 Raster Operation Units.
Power Usage and Theoretical Benchmarks
As far as performance goes, the GeForce GTX 280 should in theory be a lot better than the Radeon HD 5770 in general: going by the specifications above, it leads in memory bandwidth (roughly 141.7 GB/s versus 76.8 GB/s) as well as in both texel and pixel fill rate.
Texel Rate
The GeForce GTX 280 is much (approximately 42%) better at anisotropic filtering than the Radeon HD 5770. Its 80 texture units at a 602 MHz core clock work out to about 48.2 GTexel/s, versus about 34 GTexel/s from the HD 5770's 40 units at 850 MHz.
Pixel Rate
If running with lots of anti-aliasing is important to you, then the GeForce GTX 280 is a better choice, by far. Its 32 ROPs at 602 MHz give a theoretical fill rate of about 19.3 GPixel/s, compared to roughly 13.6 GPixel/s from the HD 5770's 16 ROPs at 850 MHz.
Please note that the above 'benchmarks' are all just theoretical - the results were calculated based on the card's specifications, and real-world performance may (and probably will) vary at least a bit.
Please note that the price comparisons are based on search keywords - sometimes it might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in MB per second) that can be moved across the external memory interface. The number is worked out by multiplying the interface width (in bytes) by the speed of its memory. If the card has DDR-type memory, the result must be multiplied by 2; if it uses GDDR5, multiply by 4 instead. The higher the card's memory bandwidth, the faster the card will be in general. It especially helps with AA, HDR and high resolutions.
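The bandwidth formula above can be sketched in a few lines of Python, using the specs quoted in this comparison (the function name and MB/s units are illustrative choices, not part of any library):

```python
def memory_bandwidth_mb_s(bus_width_bits, mem_clock_mhz, pumping):
    """Theoretical bandwidth in MB/s: interface width in bytes x memory
    clock x data-rate multiplier (2 for DDR/GDDR3, 4 for GDDR5)."""
    return (bus_width_bits // 8) * mem_clock_mhz * pumping

# Specs from the comparison above.
gtx280 = memory_bandwidth_mb_s(512, 1107, pumping=2)  # GDDR3, 512-bit
hd5770 = memory_bandwidth_mb_s(128, 1200, pumping=4)  # GDDR5, 128-bit

print(gtx280)  # 141696 MB/s, i.e. ~141.7 GB/s
print(hd5770)  # 76800 MB/s, i.e. ~76.8 GB/s
```

The GTX 280's wide 512-bit bus more than makes up for its slower GDDR3 memory here, which is why it wins this metric despite the HD 5770's quad-pumped GDDR5.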
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second. This number is calculated by multiplying the total number of texture units of the card by the core speed of the chip. The higher this number, the better the graphics card will be at handling texture filtering (anisotropic filtering - AF). It is measured in millions of texels processed in one second.
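As a quick check of the texel-rate formula against the cards in this comparison (function name is illustrative):

```python
def texel_rate_mtexel_s(texture_units, core_clock_mhz):
    # Texture units x core clock -> millions of texels per second.
    return texture_units * core_clock_mhz

gtx280 = texel_rate_mtexel_s(80, 602)  # 48160 MTexel/s
hd5770 = texel_rate_mtexel_s(40, 850)  # 34000 MTexel/s

# The ratio reproduces the "approximately 42%" advantage quoted above.
print(round(gtx280 / hd5770 - 1, 2))  # 0.42
```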
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card could possibly write to its local memory per second - measured in millions of pixels per second. Pixel rate is calculated by multiplying the number of Raster Operations Pipelines by the clock speed of the card. ROPs (Raster Operations Pipelines - also sometimes called Render Output Units) are responsible for filling the screen with pixels (the image). The actual pixel output rate also depends on quite a few other factors, most notably the memory bandwidth of the card - the lower the bandwidth is, the harder it is to reach the maximum fill rate.
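The same kind of back-of-the-envelope calculation applies to pixel rate; this sketch (function name is illustrative) reproduces the fill rates behind the anti-aliasing comparison earlier in the article:

```python
def pixel_rate_mpixel_s(rops, core_clock_mhz):
    # ROPs x core clock -> millions of pixels per second (theoretical peak;
    # real output can be capped by memory bandwidth and other factors).
    return rops * core_clock_mhz

gtx280 = pixel_rate_mpixel_s(32, 602)  # 19264 MPixel/s, ~19.3 GPixel/s
hd5770 = pixel_rate_mpixel_s(16, 850)  # 13600 MPixel/s, ~13.6 GPixel/s
```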