Compare any two graphics cards:
GeForce GTX 580 vs Radeon HD 6770
Intro
The GeForce GTX 580 uses a 40 nm design. nVidia has set the core speed at 772 MHz. The GDDR5 RAM runs at a frequency of 1002 MHz on this specific card. It features 512 SPUs as well as 64 TAUs and 48 Rasterization Operator Units.
Compare those specs to the Radeon HD 6770, which runs at 900 MHz on the GPU and 1050 MHz on its 512 MB of GDDR5 memory. It features 800 SPUs as well as 40 TAUs and 16 Rasterization Operator Units.
Benchmarks
These are real-world performance benchmarks submitted by Hardware Compare users. The scores shown here are the average of all benchmarks submitted for each respective test and piece of hardware.
3DMark Fire Strike Graphics Score
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Performance-wise, the GeForce GTX 580 should theoretically be significantly faster than the Radeon HD 6770 in general.
Texel Rate
The GeForce GTX 580 will be much better (by about 37%) at anisotropic filtering (AF) than the Radeon HD 6770.
Pixel Rate
If running with high levels of anti-aliasing (AA) is important to you, then the GeForce GTX 580 is by far the better choice.
Please note that the above 'benchmarks' are all just theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) vary at least a bit.
Please note that the price comparisons are based on search keywords - sometimes they may show cards with very similar names that are not exactly the one chosen in the comparison. We do our best to filter out the wrong results, though.
Memory Bandwidth: Memory bandwidth is the maximum amount of data (in megabytes per second) that can be moved across the external memory interface. It is calculated by multiplying the bus width by the memory clock speed. If the card uses DDR memory, the result is multiplied by 2; if it uses GDDR5, multiply by 4 instead. The higher the card's memory bandwidth, the better the card will perform in general. It especially helps with anti-aliasing, HDR and higher screen resolutions.
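As a rough illustration, the following sketch applies the formula above to both cards. Note that the 384-bit and 128-bit bus widths are assumptions typical of reference GTX 580 and HD 6770 boards, not figures taken from this comparison; the memory clocks are the ones quoted above.

```python
def memory_bandwidth_gbs(bus_width_bits, mem_clock_mhz, multiplier=4):
    """Theoretical memory bandwidth in GB/s.

    multiplier is the data-rate factor: 2 for DDR, 4 for GDDR5.
    """
    bytes_per_clock = bus_width_bits / 8          # bus width in bytes
    return bytes_per_clock * mem_clock_mhz * multiplier / 1000

# Assumed bus widths (not listed in the comparison above):
gtx580 = memory_bandwidth_gbs(384, 1002)  # ~192.4 GB/s
hd6770 = memory_bandwidth_gbs(128, 1050)  # ~67.2 GB/s
print(f"GTX 580: {gtx580:.1f} GB/s, HD 6770: {hd6770:.1f} GB/s")
```

Under these assumed bus widths, the GTX 580 would have nearly three times the memory bandwidth, which is why it copes so much better with AA and high resolutions.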
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second, measured in millions of texels per second. It is calculated by multiplying the total number of texture units by the core clock speed of the chip. The higher the texel rate, the better the card will be at texture filtering (anisotropic filtering - AF).
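Using only the TAU counts and core clocks quoted in the intro, a quick sketch of the texel-rate calculation shows where the "about 37%" figure comes from:

```python
def texel_rate_mtexels(texture_units, core_clock_mhz):
    """Theoretical texel rate in millions of texels per second."""
    return texture_units * core_clock_mhz

gtx580 = texel_rate_mtexels(64, 772)   # 49,408 Mtexels/s
hd6770 = texel_rate_mtexels(40, 900)   # 36,000 Mtexels/s
advantage = (gtx580 / hd6770 - 1) * 100
print(f"GTX 580 leads by about {advantage:.0f}%")  # prints "GTX 580 leads by about 37%"
```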
Pixel Rate: Pixel rate is the maximum number of pixels the graphics chip can write to local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of colour ROPs by the card's clock speed. ROPs (Raster Operations Pipelines - also known as Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel output rate also depends on several other factors, especially memory bandwidth - the lower the bandwidth, the harder it is to reach the maximum fill rate.
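The same kind of sketch works for pixel rate, using the ROP counts and core clocks quoted in the intro; keep in mind these are theoretical peaks, and memory bandwidth usually caps the real figure well below them:

```python
def pixel_rate_mpixels(rops, core_clock_mhz):
    """Theoretical pixel fill rate in millions of pixels per second."""
    return rops * core_clock_mhz

gtx580 = pixel_rate_mpixels(48, 772)  # 37,056 Mpixels/s
hd6770 = pixel_rate_mpixels(16, 900)  # 14,400 Mpixels/s
print(f"GTX 580: {gtx580} Mpixels/s, HD 6770: {hd6770} Mpixels/s")
```

With over 2.5x the theoretical pixel rate, the GTX 580's advantage grows as AA levels and resolution rise, which is exactly where fill rate matters most.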