Compare any two graphics cards:
GeForce GT 440 3GB vs Radeon HD 5570
Intro
The GeForce GT 440 3GB uses a 40 nm design. NVIDIA has set the core frequency at 594 MHz. The GDDR3 RAM runs at 900 MHz on this card. It features 144 SPUs along with 24 TAUs and 24 ROPs.
Compare that to the Radeon HD 5570, which has a GPU clock speed of 650 MHz and 512 MB of DDR3 RAM running at 900 MHz through a 128-bit bus. It also features 400 (80×5) Stream Processors, 20 TAUs, and 8 Raster Operation Units.
(No game benchmarks for this combination yet.)
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Memory Bandwidth
In theory, the GeForce GT 440 3GB should be a lot faster than the Radeon HD 5570 in general.
Texel Rate
The GeForce GT 440 3GB will be just a bit (approximately 10%) better at anisotropic filtering than the Radeon HD 5570.
Pixel Rate
The GeForce GT 440 3GB should be considerably (roughly 174%) faster at full-screen anti-aliasing than the Radeon HD 5570, and should be able to handle higher resolutions more effectively.
Please note that the above 'benchmarks' are purely theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) differ at least a bit.
Memory Bandwidth: Memory bandwidth is the maximum amount of data (in megabytes per second) that can be transferred across the external memory interface. It is calculated by multiplying the bus width (in bytes) by the memory clock speed; if the card uses DDR-type RAM, the result is multiplied by 2, and if it uses GDDR5, by 4. The higher the bandwidth, the better the card will perform in general. It especially helps with anti-aliasing, HDR and higher screen resolutions.
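The calculation above can be sketched in a few lines of Python, using the Radeon HD 5570's specs from this comparison (128-bit bus, 900 MHz DDR3, so a 2× effective-rate multiplier); the function name is just an illustrative choice:

```python
def memory_bandwidth_mb_s(bus_width_bits, mem_clock_mhz, rate_multiplier):
    # bytes transferred per clock * clocks per microsecond (MHz) -> MB/s
    return (bus_width_bits // 8) * mem_clock_mhz * rate_multiplier

# Radeon HD 5570: 128-bit bus, 900 MHz DDR3 (DDR-type, so multiply by 2)
hd5570 = memory_bandwidth_mb_s(128, 900, 2)
print(hd5570)  # 28800 MB/s, i.e. 28.8 GB/s
```

For a GDDR5 card you would pass 4 as the multiplier instead of 2.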
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second. It is calculated by multiplying the total number of texture units by the core clock speed of the chip. The higher the texel rate, the better the card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels per second.
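Plugging in the specs from this comparison (24 TAUs at 594 MHz for the GT 440 3GB, 20 TAUs at 650 MHz for the HD 5570) reproduces the roughly 10% gap quoted earlier; this is just the formula above expressed as code:

```python
def texel_rate_mtexels(texture_units, core_clock_mhz):
    # texture units * core clock (MHz) -> millions of texels per second
    return texture_units * core_clock_mhz

gt440_3gb = texel_rate_mtexels(24, 594)   # 14256 Mtexels/s
hd5570 = texel_rate_mtexels(20, 650)      # 13000 Mtexels/s
print(gt440_3gb / hd5570)  # ~1.10, i.e. roughly a 10% advantage
```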
Pixel Rate: Pixel rate is the maximum number of pixels the video card can write to its local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of Raster Operations Pipelines by the core clock speed of the card. ROPs (Raster Operations Pipelines - also called Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel output rate also depends on many other factors, especially memory bandwidth - the lower the bandwidth, the harder it is to reach the maximum fill rate.
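The same kind of back-of-the-envelope calculation, using the ROP counts and clocks given in this comparison (24 ROPs at 594 MHz vs 8 ROPs at 650 MHz), yields the roughly 174% figure quoted in the pixel rate section:

```python
def pixel_rate_mpixels(rops, core_clock_mhz):
    # ROPs * core clock (MHz) -> millions of pixels per second (peak fill rate)
    return rops * core_clock_mhz

gt440_3gb = pixel_rate_mpixels(24, 594)  # 14256 Mpixels/s
hd5570 = pixel_rate_mpixels(8, 650)      # 5200 Mpixels/s
print(gt440_3gb / hd5570)  # ~2.74, i.e. roughly 174% faster
```

Remember that this is a theoretical peak; as noted above, low memory bandwidth can keep a card from ever reaching its maximum fill rate.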