Compare any two graphics cards:
GeForce 8500 GT vs Radeon HD 5770
Intro
The GeForce 8500 GT has a GPU clock speed of 450 MHz, and its 512 MB of DDR2 RAM runs at 400 MHz over a 128-bit bus. The chip is made up of 16 SPUs, 8 Texture Address Units, and 4 Raster Operation Units.
Compare all that to the Radeon HD 5770, which comes with a clock frequency of 850 MHz and GDDR5 memory running at 1200 MHz. It also uses a 128-bit bus, and is built on a 40 nm process. The chip is made up of 800 (160x5) SPUs, 40 Texture Address Units, and 16 Raster Operation Units.
Power Usage and Theoretical Benchmarks
As far as performance goes, the Radeon HD 5770 should theoretically be much faster than the GeForce 8500 GT across the board: it has a higher core clock, far more shader and texture units, and much greater memory bandwidth.
Texel Rate
The Radeon HD 5770 should be much better (more or less 844%) at anisotropic filtering than the GeForce 8500 GT.
Pixel Rate
The Radeon HD 5770 is a lot better (more or less 656%) at full-screen anti-aliasing than the GeForce 8500 GT, and will be able to handle higher screen resolutions without slowing down too much.
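The percentages above can be reproduced from the specs listed in the intro. A minimal sketch (units and clocks taken from this comparison; rates come out in millions per second):

```python
# Theoretical rate comparison from the quoted specs.

def texel_rate(texture_units, core_mhz):
    """Texture units x core clock -> MTexels/s."""
    return texture_units * core_mhz

def pixel_rate(rops, core_mhz):
    """ROPs x core clock -> MPixels/s."""
    return rops * core_mhz

# GeForce 8500 GT: 450 MHz core, 8 texture units, 4 ROPs
# Radeon HD 5770:  850 MHz core, 40 texture units, 16 ROPs
texel_8500, texel_5770 = texel_rate(8, 450), texel_rate(40, 850)
pixel_8500, pixel_5770 = pixel_rate(4, 450), pixel_rate(16, 850)

print(texel_5770 / texel_8500 - 1)  # ~8.44, i.e. "more or less 844%" better
print(pixel_5770 / pixel_8500 - 1)  # ~6.56, i.e. "more or less 656%" better
```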
Please note that the above 'benchmarks' are all just theoretical - the results were calculated based on the card's specifications, and real-world performance may (and probably will) vary at least a bit.
Memory Bandwidth: Memory bandwidth is the maximum amount of data (in megabytes per second) that can be moved across the card's external memory interface. It is calculated by multiplying the interface width by the memory clock speed; for DDR-type RAM the result is multiplied by 2 again, and for GDDR5 by 4 instead. The higher the memory bandwidth, the better the card will perform in general. It especially helps with anti-aliasing, HDR and high resolutions.
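The formula above can be sketched in a few lines, assuming the memory clock is given in MHz and the result comes out in MB/s (the interface width is converted from bits to bytes before multiplying):

```python
# Transfers per clock for each RAM type, as described above.
DATA_RATE = {"SDR": 1, "DDR": 2, "DDR2": 2, "GDDR3": 2, "GDDR5": 4}

def memory_bandwidth_mb_s(bus_width_bits, mem_clock_mhz, ram_type):
    """Interface width (bytes) x memory clock x data-rate multiplier."""
    return (bus_width_bits // 8) * mem_clock_mhz * DATA_RATE[ram_type]

# GeForce 8500 GT: 128-bit bus, 400 MHz DDR2 -> 12,800 MB/s (12.8 GB/s)
print(memory_bandwidth_mb_s(128, 400, "DDR2"))
# Radeon HD 5770: 128-bit bus, 1200 MHz GDDR5 -> 76,800 MB/s (76.8 GB/s)
print(memory_bandwidth_mb_s(128, 1200, "GDDR5"))
```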
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied in one second. This number is worked out by multiplying the total number of texture units by the core clock speed of the chip. The higher this number, the better the graphics card will be at handling texture filtering (anisotropic filtering - AF). It is measured in millions of texels per second.
Pixel Rate: Pixel rate is the maximum number of pixels that the graphics card can possibly write to its local memory per second, measured in millions of pixels per second. The number is worked out by multiplying the number of Render Output Units by the core clock speed. ROPs (Raster Operations Pipelines - sometimes also referred to as Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel output rate also depends on lots of other factors, especially the memory bandwidth of the card - the lower the memory bandwidth, the lower the potential to reach the maximum fill rate.
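The bandwidth dependence can be illustrated with a back-of-envelope check for the HD 5770's numbers from this comparison. The 4 bytes per pixel here is our own simplifying assumption (a plain 32-bit colour write, ignoring blending, Z and texture traffic), not a figure from the article:

```python
# Can the HD 5770's memory feed its peak pixel rate?
PIXEL_RATE_MPIX_S = 16 * 850             # 16 ROPs x 850 MHz = 13,600 MPixels/s
BANDWIDTH_MB_S = (128 // 8) * 1200 * 4   # 128-bit GDDR5 @ 1200 MHz = 76,800 MB/s

BYTES_PER_PIXEL = 4  # assumed: simple 32-bit colour write, no blending/Z traffic
needed_mb_s = PIXEL_RATE_MPIX_S * BYTES_PER_PIXEL  # 54,400 MB/s

print(needed_mb_s <= BANDWIDTH_MB_S)  # enough headroom in this simplified case
```

With heavier per-pixel traffic (blending reads, depth tests, texture fetches) the required bandwidth grows quickly, which is why a card with low memory bandwidth may never reach its theoretical fill rate.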