Compare any two graphics cards:
GeForce GTX 550 Ti vs Radeon HD 5770
Intro
The GeForce GTX 550 Ti uses a 40 nm design. NVIDIA has clocked the core speed at 900 MHz. The GDDR5 RAM runs at a speed of 1026 MHz on this particular card. It features 192 SPUs as well as 32 Texture Address Units and 24 Raster Operation Units (ROPs).
Compare those specs to the Radeon HD 5770, which comes with a GPU clock speed of 850 MHz and 1024 MB of GDDR5 RAM running at 1200 MHz. It features 800 (160 × 5) SPUs as well as 40 Texture Address Units and 16 Raster Operation Units (ROPs).
(No game benchmarks for this combination yet.)
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Theoretically, the GeForce GTX 550 Ti should perform noticeably faster than the Radeon HD 5770 in general.
Texel Rate
The Radeon HD 5770 should be slightly (approximately 18%) faster at anisotropic filtering (AF) than the GeForce GTX 550 Ti.
Pixel Rate
The GeForce GTX 550 Ti should be much (approximately 59%) better at full-screen anti-aliasing than the Radeon HD 5770, and should be able to handle higher screen resolutions without losing as much performance.
Please note that the above 'benchmarks' are all theoretical - the results were calculated from each card's specifications, and real-world performance may (and probably will) differ at least a bit.
Memory Bandwidth: Bandwidth is the largest amount of data (in MB per second) that can be moved over the external memory interface in one second. It is calculated by multiplying the memory bus width (in bytes) by the memory clock speed. If the card uses DDR memory, the result is multiplied by 2; if it uses GDDR5, it is multiplied by 4, since GDDR5 performs four data transfers per clock. The higher the memory bandwidth, the better the card will perform in general. It especially helps with anti-aliasing, HDR, and higher screen resolutions.
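As a sketch of the arithmetic just described, the formula can be applied to both cards. Note that the memory bus widths (192-bit for the GTX 550 Ti, 128-bit for the HD 5770) are the cards' published specifications and do not appear in the spec lists above:

```python
# Peak theoretical memory bandwidth in MB/s:
#   memory clock (MHz) x transfers per clock x bus width (bits) / 8
# GDDR5 performs 4 data transfers per clock; plain DDR performs 2.
def memory_bandwidth_mb_s(mem_clock_mhz, bus_width_bits, transfers_per_clock=4):
    return mem_clock_mhz * transfers_per_clock * bus_width_bits // 8

# Bus widths below are the cards' published specs (an assumption here,
# since they are not listed in the text above).
gtx_550_ti = memory_bandwidth_mb_s(1026, 192)  # 98,496 MB/s (~98.5 GB/s)
hd_5770    = memory_bandwidth_mb_s(1200, 128)  # 76,800 MB/s (~76.8 GB/s)
```

On these numbers the GTX 550 Ti has roughly 28% more theoretical bandwidth, which is consistent with the "a lot faster in general" claim above.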
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second. It is calculated by multiplying the total number of texture units by the core clock speed of the chip. The higher this number, the better the card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels applied per second.
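Plugging the unit counts and core clocks from the spec lists above into this formula reproduces the approximate 18% texel-rate advantage claimed for the HD 5770:

```python
# Texel rate = number of texture units x core clock (MHz), in Mtexels/s.
def texel_rate_mtexels(texture_units, core_clock_mhz):
    return texture_units * core_clock_mhz

gtx_550_ti = texel_rate_mtexels(32, 900)  # 28,800 Mtexels/s
hd_5770    = texel_rate_mtexels(40, 850)  # 34,000 Mtexels/s

# HD 5770's advantage as a percentage: ~18%
advantage_pct = round((hd_5770 / gtx_550_ti - 1) * 100)
```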
Pixel Rate: Pixel rate is the maximum number of pixels the video card can write to its local memory per second - measured in millions of pixels per second. It is calculated by multiplying the number of Render Output Units by the core clock speed of the card. ROPs (Raster Operations Pipelines - also known as Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel output rate also depends on many other factors, especially the memory bandwidth of the card - the lower the bandwidth, the harder it is to reach the maximum fill rate.
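The same arithmetic, using the ROP counts and core clocks from the spec lists above, yields the approximate 59% pixel-rate advantage claimed for the GTX 550 Ti:

```python
# Pixel rate = number of ROPs x core clock (MHz), in Mpixels/s.
def pixel_rate_mpixels(rops, core_clock_mhz):
    return rops * core_clock_mhz

gtx_550_ti = pixel_rate_mpixels(24, 900)  # 21,600 Mpixels/s
hd_5770    = pixel_rate_mpixels(16, 850)  # 13,600 Mpixels/s

# GTX 550 Ti's advantage as a percentage: ~59%
advantage_pct = round((gtx_550_ti / hd_5770 - 1) * 100)
```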