Compare any two graphics cards:
GeForce 8500 GT vs Radeon HD 5450
Intro
The GeForce 8500 GT comes with a GPU core clock speed of 450 MHz, and 512 MB of DDR2 memory running at 400 MHz. It features 16 SPUs along with 8 Texture Address Units and 4 Raster Operation Units.
Compare all that to the Radeon HD 5450, which features a GPU core clock speed of 650 MHz, and 512 MB of DDR3 RAM running at 800 MHz through a 64-bit bus. It also features 80 (16 x 5) SPUs, 8 TAUs, and 4 Raster Operation Units.
(No game benchmarks for this combination yet.)
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Memory Bandwidth
Both cards have exactly the same memory bandwidth, so in memory-bound scenarios they should theoretically deliver about the same performance.
Texel Rate
The Radeon HD 5450 should be quite a bit (roughly 44%) faster at texture filtering (AF) than the GeForce 8500 GT.
Pixel Rate
The Radeon HD 5450 should be considerably (roughly 44%) faster at anti-aliasing than the GeForce 8500 GT, and capable of handling higher screen resolutions without losing too much performance.
Please note that the above 'benchmarks' are all theoretical: the results were calculated from each card's specifications, and real-world performance may (and probably will) differ at least slightly.
Please note that the price comparisons are based on search keywords, so they may sometimes show cards with very similar names that are not exactly the one chosen in the comparison. We do our best to filter out incorrect results.
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in GB per second) that can be transferred over the external memory interface. It is calculated by multiplying the interface width (in bytes) by the clock speed of its memory. If the card has DDR memory, the result should be multiplied by 2; if GDDR5, multiply by 4 instead. The higher the card's memory bandwidth, the faster the card will be in general. It especially helps with anti-aliasing, HDR and higher screen resolutions.
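As a rough sketch, the bandwidth formula can be applied to both cards in Python. The comparison above does not list the GeForce 8500 GT's bus width, so the common 128-bit figure is assumed here:

```python
def memory_bandwidth_gbps(bus_width_bits, memory_clock_mhz, ddr_multiplier=2):
    """Peak memory bandwidth in GB/s: bytes per transfer x effective clock."""
    bytes_per_transfer = bus_width_bits / 8
    effective_clock_hz = memory_clock_mhz * 1e6 * ddr_multiplier
    return bytes_per_transfer * effective_clock_hz / 1e9

# GeForce 8500 GT: 128-bit bus (assumed), 400 MHz DDR2 (multiplier 2)
print(memory_bandwidth_gbps(128, 400))  # 12.8

# Radeon HD 5450: 64-bit bus, 800 MHz DDR3 (multiplier 2)
print(memory_bandwidth_gbps(64, 800))   # 12.8
```

Under these assumptions both cards land at 12.8 GB/s, which is why the comparison calls their memory bandwidths identical.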
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second. This figure is worked out by multiplying the total number of texture units by the core clock speed of the chip. The higher the texel rate, the better the card will be at handling texture filtering (anisotropic filtering - AF). It is measured in millions of texels processed per second.
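Using the texture-unit counts and core clocks listed above, the texel rates (and the roughly 44% gap) can be sketched like this:

```python
def texel_rate_mtexels(texture_units, core_clock_mhz):
    """Peak texel fill rate in millions of texels per second."""
    return texture_units * core_clock_mhz

geforce = texel_rate_mtexels(8, 450)  # 8 TAUs x 450 MHz = 3600 MTexels/s
radeon = texel_rate_mtexels(8, 650)   # 8 TAUs x 650 MHz = 5200 MTexels/s
print(round((radeon / geforce - 1) * 100))  # 44 (% advantage)
```

Both cards have 8 texture units, so the difference comes entirely from the clock speeds (650 vs 450 MHz).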
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second. The figure is worked out by multiplying the number of colour ROPs by the clock speed of the card. ROPs (Raster Operations Pipelines - also sometimes called Render Output Units) are responsible for filling the screen with pixels (the image). The actual pixel output rate also depends on many other factors, most notably the memory bandwidth of the card: the lower the memory bandwidth, the lower the potential to reach the maximum fill rate.
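The same pattern applies to pixel rate; with the ROP counts and clocks from the comparison above, the calculation looks like this:

```python
def pixel_rate_mpixels(rops, core_clock_mhz):
    """Peak pixel fill rate in millions of pixels per second."""
    return rops * core_clock_mhz

geforce = pixel_rate_mpixels(4, 450)  # 4 ROPs x 450 MHz = 1800 MPixels/s
radeon = pixel_rate_mpixels(4, 650)   # 4 ROPs x 650 MHz = 2600 MPixels/s
print(round((radeon / geforce - 1) * 100))  # 44 (% advantage)
```

As with the texel rate, both cards have the same number of ROPs (4), so the Radeon's roughly 44% edge again comes from its higher core clock. In practice, memory bandwidth can prevent either card from sustaining its peak fill rate.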