GeForce GT 220 GDDR3 vs Radeon HD 5570
Intro
The GeForce GT 220 GDDR3 runs its GPU core at 625 MHz and its 512 MB of GDDR3 memory at 1012 MHz. It features 48 SPUs along with 16 texture address units (TAUs) and 8 raster operation units (ROPs).
Compare those specs to the Radeon HD 5570, which features a GPU core clocked at 650 MHz and 512 MB of DDR3 memory running at 900 MHz over a 128-bit bus. It is made up of 400 (80 x 5) SPUs, 20 TAUs, and 8 raster operation units.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
The GeForce GT 220 GDDR3 should, in theory, perform slightly faster than the Radeon HD 5570 overall.
Texel Rate
The Radeon HD 5570 is considerably (approximately 30%) better at anisotropic filtering than the GeForce GT 220 GDDR3.
Pixel Rate
The Radeon HD 5570 should be slightly (about 4%) better at anti-aliasing (AA) than the GeForce GT 220 GDDR3, and able to handle higher screen resolutions while still performing well.
Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) differ at least a bit.
Memory Bandwidth: Bandwidth is the maximum amount of data (in MB per second) that can be transferred over the external memory interface. It's calculated by multiplying the interface width by the memory clock speed; if the card uses DDR-type RAM, the result is multiplied by 2 again, and if it uses GDDR5, by 4 instead. The higher the bandwidth, the faster the card will generally be - it especially helps with AA, HDR and higher screen resolutions.
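As a sketch, the bandwidth formula above can be written as a small Python function and applied to the two cards' listed memory clocks. Note one assumption: the text only states the HD 5570's 128-bit bus, so the same width is assumed for the GT 220 GDDR3 here.

```python
def memory_bandwidth_gb_s(bus_width_bits, mem_clock_mhz, ddr_multiplier=2):
    """Bandwidth in GB/s: bytes moved per clock times clocks per second.

    ddr_multiplier is 2 for DDR/GDDR3-type RAM and 4 for GDDR5,
    per the rule described in the glossary entry above.
    """
    bytes_per_clock = bus_width_bits / 8
    return bytes_per_clock * mem_clock_mhz * ddr_multiplier / 1000

# GeForce GT 220 GDDR3: 1012 MHz memory (128-bit bus assumed)
gt220 = memory_bandwidth_gb_s(128, 1012)   # ~32.4 GB/s
# Radeon HD 5570: 900 MHz DDR3 on a 128-bit bus
hd5570 = memory_bandwidth_gb_s(128, 900)   # ~28.8 GB/s
```

Under that bus-width assumption, the GT 220 GDDR3 comes out ahead on raw bandwidth, which is consistent with the "slightly faster overall" note earlier.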
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed in one second. It is worked out by multiplying the total number of texture units by the core clock speed of the chip. The higher this number, the better the graphics card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels applied per second.
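Plugging the two cards' texture-unit counts and core clocks into this formula reproduces the roughly 30% gap quoted in the Texel Rate comparison above:

```python
def texel_rate_mtexels(texture_units, core_clock_mhz):
    # Texels per second (in millions): texture units x core clock (MHz)
    return texture_units * core_clock_mhz

gt220 = texel_rate_mtexels(16, 625)    # GeForce GT 220 GDDR3: 10000 Mtexels/s
hd5570 = texel_rate_mtexels(20, 650)   # Radeon HD 5570: 13000 Mtexels/s
advantage = (hd5570 - gt220) / gt220   # 0.30, i.e. the ~30% quoted above
```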
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory in one second, measured in millions of pixels per second. The figure is worked out by multiplying the number of ROPs by the clock speed of the card. ROPs (Raster Operations Pipelines - sometimes also referred to as Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel fill rate also depends on many other factors, most notably the memory bandwidth - the lower the bandwidth, the lower the potential to reach the maximum fill rate.
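The same kind of back-of-the-envelope check works for pixel rate. With 8 ROPs on each card, the difference comes down entirely to core clock, giving the roughly 4% edge quoted in the Pixel Rate comparison above:

```python
def pixel_rate_mpixels(rops, core_clock_mhz):
    # Theoretical fill rate (millions of pixels/s): ROPs x core clock (MHz)
    return rops * core_clock_mhz

gt220 = pixel_rate_mpixels(8, 625)    # GeForce GT 220 GDDR3: 5000 Mpixels/s
hd5570 = pixel_rate_mpixels(8, 650)   # Radeon HD 5570: 5200 Mpixels/s
advantage = (hd5570 - gt220) / gt220  # 0.04, i.e. the ~4% quoted above
```

As the glossary entry notes, this is an upper bound: memory bandwidth and other factors keep real fill rates below these theoretical figures.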