Compare any two graphics cards:
GeForce GTS 250 512MB vs Radeon HD 5770
Intro

The GeForce GTS 250 512MB comes with a GPU clock speed of 738 MHz, and its 512 MB of GDDR3 memory runs at 1100 MHz through a 256-bit bus. It also features 128 SPUs, 64 texture address units (TAUs), and 16 raster operation units (ROPs).
Compare those specs to the Radeon HD 5770, which has clock speeds of 850 MHz on the GPU and 1200 MHz on its 1024 MB of GDDR5 memory. It features 800 (160 x 5) SPUs as well as 40 TAUs and 16 ROPs.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Theoretically, the Radeon HD 5770 should be slightly faster than the GeForce GTS 250 512MB overall.
Texel Rate

The GeForce GTS 250 512MB should be quite a bit (about 39%) faster at anisotropic filtering (AF) than the Radeon HD 5770.
Pixel Rate

If using lots of anti-aliasing is important to you, then the Radeon HD 5770 is the better choice, but only just.
Please note that the above 'benchmarks' are theoretical only - the results are calculated from each card's specifications, and real-world performance may (and probably will) differ at least a bit.
Please note that the price comparisons are based on search keywords, so they may sometimes show cards with very similar names that are not exactly the one chosen in the comparison. We do our best to filter out the wrong results, though.
Memory Bandwidth: Memory bandwidth is the maximum amount of data (counted in megabytes per second) that can be transferred across the external memory interface in one second. It is worked out by multiplying the width of the interface by the speed of its memory. If the card uses DDR memory, the result is multiplied by 2; if it uses GDDR5, it is multiplied by 4, since GDDR5 transfers four bits of data per pin per clock. The higher the card's memory bandwidth, the faster the card will be in general. It especially helps with anti-aliasing, High Dynamic Range, and high resolutions.
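Applying that formula to the two cards above gives a quick sketch of why the GDDR5 card keeps up despite a narrower bus. Note the HD 5770's 128-bit bus width is not stated in the intro above; it is the card's standard specification.

```python
# Theoretical memory bandwidth = bus width (in bytes) x memory clock x transfers per clock.
# The multiplier is 2 for DDR/GDDR3 (double data rate) and 4 for GDDR5 (quad data rate).

def memory_bandwidth_gbs(bus_bits, mem_clock_mhz, multiplier):
    """Return theoretical memory bandwidth in GB/s."""
    return bus_bits / 8 * mem_clock_mhz * multiplier / 1000

gts_250 = memory_bandwidth_gbs(256, 1100, 2)  # GDDR3 on a 256-bit bus
hd_5770 = memory_bandwidth_gbs(128, 1200, 4)  # GDDR5 on a 128-bit bus

print(f"GTS 250 512MB: {gts_250:.1f} GB/s")   # 70.4 GB/s
print(f"HD 5770:       {hd_5770:.1f} GB/s")   # 76.8 GB/s
```

So the HD 5770's quad-rate GDDR5 more than compensates for its half-width bus, giving it a modest bandwidth edge.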
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second. It is calculated by multiplying the number of texture address units on the card by the core clock speed of the chip. The higher this number, the better the graphics card will be at handling texture filtering (anisotropic filtering - AF). It is measured in millions of texels per second.
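Running the numbers from the intro shows where the "about 39%" figure quoted above comes from:

```python
# Theoretical texel rate = texture address units x core clock (MHz),
# giving millions of texels per second (Mtexels/s).

def texel_rate_mtexels(tau_count, core_clock_mhz):
    return tau_count * core_clock_mhz

gts_250 = texel_rate_mtexels(64, 738)  # 47232 Mtexels/s
hd_5770 = texel_rate_mtexels(40, 850)  # 34000 Mtexels/s

print(f"GTS 250 advantage: {gts_250 / hd_5770 - 1:.0%}")  # 39%
```

The GTS 250's 64 TAUs outweigh the HD 5770's higher core clock, hence its theoretical AF advantage.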
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can possibly write to its local memory in one second - measured in millions of pixels per second. The number is worked out by multiplying the number of ROPs by the core clock speed. ROPs (Raster Operations Pipelines - also sometimes called Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel fill rate also depends on many other factors, especially the memory bandwidth of the card - the lower the bandwidth, the lower the ability to reach the maximum fill rate.
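With both cards carrying 16 ROPs, the pixel-rate comparison comes down to core clock alone, which is why the HD 5770's lead above is described as slim:

```python
# Theoretical pixel rate = ROP count x core clock (MHz),
# giving millions of pixels per second (Mpixels/s).

def pixel_rate_mpixels(rop_count, core_clock_mhz):
    return rop_count * core_clock_mhz

print(pixel_rate_mpixels(16, 738))  # GTS 250 512MB: 11808 Mpixels/s
print(pixel_rate_mpixels(16, 850))  # Radeon HD 5770: 13600 Mpixels/s
```

As the paragraph above notes, these are ceilings: a card short on memory bandwidth will fall below its theoretical fill rate in practice.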