Compare any two graphics cards:
Radeon HD 5770 vs Radeon HD 6770
Intro
The Radeon HD 5770 has a core clock frequency of 850 MHz and a GDDR5 memory speed of 1200 MHz. It uses a 128-bit memory bus and is built on a 40 nm process. It features 800 (160x5) Stream Processing Units, 40 Texture Address Units, and 16 Raster Operation Units.
Compare those specs to the Radeon HD 6770, which features a GPU core clock speed of 900 MHz, and 512 MB of GDDR5 memory running at 1050 MHz through a 128-bit bus. It also comprises 800 Stream Processors, 40 TAUs, and 16 ROPs.
Power Usage and Theoretical Benchmarks
Both cards have the same power consumption.
The Radeon HD 5770 should in theory perform just a bit faster than the Radeon HD 6770 overall, mainly because its faster 1200 MHz memory gives it more memory bandwidth (76.8 GB/s versus 67.2 GB/s over the same 128-bit bus), which outweighs the 6770's slightly higher core clock.
Texel Rate
The Radeon HD 6770 is just a bit (approximately 6%) more effective at anisotropic filtering than the Radeon HD 5770, because its higher 900 MHz core clock drives the same 40 texture units (36,000 versus 34,000 Mtexels/s).
Pixel Rate
If running with lots of anti-aliasing is important to you, then the Radeon HD 6770 is the winner, though only just barely: with 16 ROPs on both cards, its higher core clock gives it a pixel rate of 14,400 Mpixels/s against the 5770's 13,600.
Please note that the above 'benchmarks' are all theoretical - the results were calculated from each card's specifications, and real-world performance may (and probably will) vary at least a bit.
Please note that the price comparisons are based on search keywords - sometimes it might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in megabytes per second) that can be transferred across the external memory interface in one second. It is calculated by multiplying the interface width by the clock speed of the memory. For DDR memory, the result is then multiplied by 2; for GDDR5, multiply by 4 instead. The higher the bandwidth, the faster the card will be in general. It especially helps with AA, HDR and higher screen resolutions.
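As a quick sketch, the bandwidth formula above works out like this for the two cards in this comparison (the helper name is just for illustration):

```python
def memory_bandwidth_mb_s(mem_clock_mhz, bus_width_bits, multiplier=4):
    """Theoretical bandwidth in MB/s.

    multiplier: 2 for DDR, 4 for GDDR5 (transfers per clock).
    Bus width is divided by 8 to convert bits to bytes.
    """
    return mem_clock_mhz * multiplier * (bus_width_bits / 8)

# Specs from the comparison above: both cards use a 128-bit GDDR5 bus.
hd5770 = memory_bandwidth_mb_s(1200, 128)  # 76800 MB/s = 76.8 GB/s
hd6770 = memory_bandwidth_mb_s(1050, 128)  # 67200 MB/s = 67.2 GB/s
```

This is why the 5770 comes out slightly ahead overall despite its lower core clock: its memory runs 150 MHz faster on an otherwise identical interface.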
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second. It is calculated by multiplying the total number of texture units by the core clock speed of the chip. The higher this number, the better the card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels processed per second.
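Applying that formula to the specs above reproduces the "approximately 6%" figure quoted earlier (the helper name is illustrative):

```python
def texel_rate_mtexels_s(texture_units, core_clock_mhz):
    """Theoretical texel fill rate in millions of texels per second."""
    return texture_units * core_clock_mhz

# Both cards have 40 texture units; only the core clocks differ.
hd5770 = texel_rate_mtexels_s(40, 850)  # 34000 Mtexels/s
hd6770 = texel_rate_mtexels_s(40, 900)  # 36000 Mtexels/s

advantage = hd6770 / hd5770 - 1  # ~0.059, i.e. roughly 6%
```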
Pixel Rate: Pixel rate is the maximum number of pixels the graphics chip could possibly write to its local memory in one second, measured in millions of pixels per second. It is calculated by multiplying the number of Render Output Units by the core clock speed. ROPs (Raster Operations Pipelines, also known as Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel output rate also depends on many other factors, most notably the memory bandwidth - the lower the memory bandwidth, the lower the ability to reach the maximum fill rate.
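The same style of calculation gives the pixel rates behind the anti-aliasing comparison above (again, the function name is just for illustration):

```python
def pixel_rate_mpixels_s(rops, core_clock_mhz):
    """Theoretical pixel fill rate in millions of pixels per second."""
    return rops * core_clock_mhz

# Both cards have 16 ROPs, so the 6770's higher clock wins, but narrowly.
hd5770 = pixel_rate_mpixels_s(16, 850)  # 13600 Mpixels/s
hd6770 = pixel_rate_mpixels_s(16, 900)  # 14400 Mpixels/s
```

Note that these are ceilings, not guarantees: as the paragraph above points out, the 6770's lower memory bandwidth makes it less able to sustain its maximum fill rate in practice.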