Compare any two graphics cards:
GeForce 8800 GTX vs Radeon HD 5770
Intro
The GeForce 8800 GTX features clock speeds of 575 MHz on the GPU and 900 MHz on its 768 MB of GDDR3 memory. It features 128 SPUs along with 64 Texture Address Units and 24 Raster Operation Units.
Compare all that to the Radeon HD 5770, which comes with a core clock speed of 850 MHz and a GDDR5 memory speed of 1200 MHz. It also features a 128-bit memory bus, and is built on a 40 nm process. It consists of 800 (160x5) SPUs, 40 TAUs, and 16 Raster Operation Units.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Performance-wise, the GeForce 8800 GTX should, in theory, be slightly superior to the Radeon HD 5770 overall.
Texel Rate
The GeForce 8800 GTX is slightly (about 8%) faster at texture filtering than the Radeon HD 5770.
Pixel Rate
The GeForce 8800 GTX is marginally (roughly 1%) more effective at anti-aliasing than the Radeon HD 5770, and is also better able to keep performing well at higher resolutions.
Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) vary at least a bit.
Please note that the price comparisons are based on search keywords - sometimes they may include cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in MB per second) that can be transferred over the external memory interface. The number is worked out by multiplying the interface width by the memory speed. For DDR memory such as GDDR3, multiply by 2; for GDDR5, multiply by 2 once more (4x in total). The higher the memory bandwidth, the better the card will be in general. It especially helps with AA, High Dynamic Range and high resolutions.
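As a quick sketch, the formula above can be applied to both cards in this comparison. Note that the 8800 GTX's 384-bit memory bus is the card's published spec rather than a number listed here, so treat it as an assumption:

```python
# Memory bandwidth = (bus width in bytes) * memory clock * data-rate multiplier.
# Multiplier: GDDR3 (double data rate) = 2, GDDR5 = 4.

def memory_bandwidth_gbps(bus_width_bits, mem_clock_mhz, data_rate):
    """Return bandwidth in GB/s (1 GB/s = 1000 MB/s, as spec sheets use)."""
    return bus_width_bits / 8 * mem_clock_mhz * data_rate / 1000

# 8800 GTX: 384-bit bus (published spec, assumed here), 900 MHz GDDR3
gtx_8800 = memory_bandwidth_gbps(384, 900, 2)
# HD 5770: 128-bit bus, 1200 MHz GDDR5
hd_5770 = memory_bandwidth_gbps(128, 1200, 4)

print(f"8800 GTX: {gtx_8800:.1f} GB/s, HD 5770: {hd_5770:.1f} GB/s")
# -> 8800 GTX: 86.4 GB/s, HD 5770: 76.8 GB/s
```

This shows why the older card can still hold its own at high resolutions despite the 5770's much faster memory clock: the wider bus more than makes up the difference.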
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second. This figure is worked out by multiplying the total texture units of the card by the core speed of the chip. The higher this number, the better the graphics card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels applied in one second.
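Using the specs listed in the intro, this formula reproduces the "about 8%" texel rate gap mentioned above:

```python
# Texel rate = texture units * core clock (MHz), in millions of texels/second.

def texel_rate_mtexels(texture_units, core_clock_mhz):
    return texture_units * core_clock_mhz

gtx_8800 = texel_rate_mtexels(64, 575)  # 64 TAUs at 575 MHz
hd_5770 = texel_rate_mtexels(40, 850)   # 40 TAUs at 850 MHz

print(f"8800 GTX: {gtx_8800} Mtexels/s, HD 5770: {hd_5770} Mtexels/s")
print(f"Difference: {gtx_8800 / hd_5770 - 1:.0%}")
# -> 36800 vs 34000 Mtexels/s, about 8% in the 8800 GTX's favour
```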
Pixel Rate: Pixel rate is the maximum number of pixels the video card could possibly write to its local memory per second - measured in millions of pixels per second. Pixel rate is worked out by multiplying the number of ROPs by the card's clock speed. ROPs (Raster Operations Pipelines - sometimes also referred to as Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel output rate also depends on quite a few other factors, especially the memory bandwidth - the lower the memory bandwidth, the lower the ability to reach the maximum fill rate.
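The same kind of back-of-the-envelope calculation with the listed ROP counts and core clocks recovers the "roughly 1%" pixel rate gap from earlier:

```python
# Pixel rate = ROPs * core clock (MHz), in millions of pixels/second.

def pixel_rate_mpixels(rops, core_clock_mhz):
    return rops * core_clock_mhz

gtx_8800 = pixel_rate_mpixels(24, 575)  # 24 ROPs at 575 MHz
hd_5770 = pixel_rate_mpixels(16, 850)   # 16 ROPs at 850 MHz

print(f"8800 GTX: {gtx_8800} Mpixels/s, HD 5770: {hd_5770} Mpixels/s")
# -> 13800 vs 13600 Mpixels/s, a difference of well under 2%
```

Keep in mind the caveat above: this is a theoretical ceiling, and the card with less memory bandwidth will have a harder time actually sustaining it.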