Compare any two graphics cards:
GeForce GTS 250 512MB vs Radeon HD 5770
Intro
The GeForce GTS 250 512MB makes use of a 65/55 nm design. nVidia has clocked the core speed at 738 MHz. The GDDR3 RAM is set to run at a frequency of 1100 MHz on this specific card. It features 128 SPUs along with 64 Texture Address Units and 16 ROPs (Render Output Units).
Compare that to the Radeon HD 5770, which comes with a clock speed of 850 MHz and a GDDR5 memory speed of 1200 MHz. It also features a 128-bit bus and uses a 40 nm design. It is made up of 800 (160x5) SPUs, 40 Texture Address Units, and 16 ROPs.
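The specifications above can be collected into a small table for the calculations used in the theoretical benchmarks below. This is just a sketch; note that the GTS 250's 256-bit bus width is not stated in the comparison text and is added here as an assumption.

```python
# Specs as listed in the comparison; the GTS 250's 256-bit bus width
# is an assumption (the text only states the HD 5770's 128-bit bus).
cards = {
    "GeForce GTS 250 512MB": {
        "core_mhz": 738, "mem_mhz": 1100, "mem_type": "GDDR3",
        "bus_bits": 256, "spus": 128, "tmus": 64, "rops": 16,
    },
    "Radeon HD 5770": {
        "core_mhz": 850, "mem_mhz": 1200, "mem_type": "GDDR5",
        "bus_bits": 128, "spus": 800, "tmus": 40, "rops": 16,
    },
}

# Example: texel rate (Mtexels/s) = texture units x core clock (MHz)
for name, s in cards.items():
    print(name, s["tmus"] * s["core_mhz"], "Mtexels/s")
```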
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
In theory, the Radeon HD 5770 should be a bit faster than the GeForce GTS 250 512MB overall.
Texel Rate
The GeForce GTS 250 512MB is much (about 39%) better at texture filtering than the Radeon HD 5770.
Pixel Rate
The Radeon HD 5770 should be just a bit (roughly 15%) better at anti-aliasing than the GeForce GTS 250 512MB, and also able to handle higher screen resolutions without losing too much performance.
Please note that the above 'benchmarks' are all just theoretical - the results were calculated based on the card's specifications, and real-world performance may (and probably will) vary at least a bit.
Memory Bandwidth: Memory bandwidth is the maximum amount of data (in megabytes per second) that can be transferred across the external memory interface. It's worked out by multiplying the bus width (in bytes, i.e. bits divided by 8) by the memory clock speed. If the card has DDR-type RAM, the result is multiplied by 2; if it uses GDDR5, multiply by 2 again (4x in total). The greater the card's memory bandwidth, the better the card will be in general. It especially helps with AA, HDR and high resolutions.
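The memory bandwidth calculation described above can be sketched as a small function. The multiplier encodes the RAM type: 2 for (G)DDR3, 4 for GDDR5.

```python
def memory_bandwidth_mb_s(bus_width_bits, mem_clock_mhz, multiplier):
    """Theoretical bandwidth in MB/s: bus width in bytes x clock x
    data-rate multiplier (2 for DDR/GDDR3, 4 for GDDR5)."""
    return (bus_width_bits / 8) * mem_clock_mhz * multiplier

# Radeon HD 5770: 128-bit bus, 1200 MHz GDDR5
print(memory_bandwidth_mb_s(128, 1200, 4))  # 76800.0 MB/s, i.e. 76.8 GB/s
```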
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second, measured in millions of texels applied per second. This number is worked out by multiplying the total number of texture units by the core clock speed of the chip. The higher the texel rate, the better the card will be at handling texture filtering (anisotropic filtering - AF).
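Applying this formula to the two cards' listed texture units and core clocks reproduces the "about 39%" difference claimed above:

```python
def texel_rate_mtexels(texture_units, core_clock_mhz):
    # Mtexels/s = texture units x core clock in MHz
    return texture_units * core_clock_mhz

gts250 = texel_rate_mtexels(64, 738)   # 47232 Mtexels/s
hd5770 = texel_rate_mtexels(40, 850)   # 34000 Mtexels/s
print(round(gts250 / hd5770, 2))       # 1.39, i.e. about 39% faster
```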
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second. The number is calculated by multiplying the number of ROPs by the core clock speed. ROPs (Raster Operations Pipelines - sometimes also referred to as Render Output Units) are responsible for filling the screen with pixels (the image). The actual pixel output rate also depends on many other factors, especially the memory bandwidth of the card - the lower the memory bandwidth, the lower the potential to reach the maximum fill rate.
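The same kind of calculation, using each card's 16 ROPs and core clock, yields the roughly 15% pixel-rate advantage for the HD 5770 mentioned above:

```python
def pixel_rate_mpixels(rops, core_clock_mhz):
    # Mpixels/s = ROPs x core clock in MHz
    return rops * core_clock_mhz

gts250 = pixel_rate_mpixels(16, 738)  # 11808 Mpixels/s
hd5770 = pixel_rate_mpixels(16, 850)  # 13600 Mpixels/s
print(round(hd5770 / gts250, 2))      # 1.15, i.e. roughly 15% faster
```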