Compare any two graphics cards:
GeForce GT 340 1GB vs Radeon HD 4850 512MB
Intro
The GeForce GT 340 1GB is built on a 40 nm process. nVidia has clocked the core at 550 MHz, and the card's GDDR5 RAM runs at 850 MHz. It features 96 SPUs as well as 32 TAUs and 8 ROPs.
Compare all of that to the Radeon HD 4850 512MB, which runs its GPU core at 625 MHz and its 512 MB of GDDR3 RAM at 993 MHz. It features 800 (160x5) SPUs as well as 40 TAUs and 16 ROPs.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Memory Bandwidth
The Radeon HD 4850 512MB should in theory perform a bit faster than the GeForce GT 340 1GB in general, thanks to its higher memory bandwidth (explained below).
Texel Rate
The Radeon HD 4850 512MB is quite a bit (about 42%) better at anisotropic filtering (AF) than the GeForce GT 340 1GB, thanks to its higher texel rate (explained below).
Pixel Rate
If using high levels of anti-aliasing (AA) is important to you, then the Radeon HD 4850 512MB is the winner, by far, thanks to its much higher pixel rate (explained below).
Please note that the above 'benchmarks' are all just theoretical - the results were calculated based on the cards' specifications, and real-world performance may (and probably will) vary at least a bit.
Memory Bandwidth: Memory bandwidth is the largest amount of data (typically measured in gigabytes per second) that can be transported over the external memory interface in one second. It's worked out by multiplying the memory clock speed by the card's interface width in bits (divided by 8 to convert to bytes). If the card uses DDR-type RAM, the result is multiplied by 2 again; for GDDR5, multiply by 4 instead. The higher the card's memory bandwidth, the better the card will be in general. It especially helps with anti-aliasing, HDR and high resolutions.
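The formula above can be sketched as a quick calculation. Note that the bus widths used here (128-bit for the GT 340, 256-bit for the HD 4850) are assumed typical values for these cards, not figures taken from the specs above:

```python
def memory_bandwidth_gbs(clock_mhz, bus_width_bits, ddr_multiplier):
    """Peak theoretical memory bandwidth in GB/s.

    clock_mhz       - base memory clock in MHz
    bus_width_bits  - external memory interface width in bits
    ddr_multiplier  - 2 for DDR/GDDR3-type RAM, 4 for GDDR5
    """
    bytes_per_transfer = bus_width_bits / 8  # bits -> bytes
    return clock_mhz * 1e6 * bytes_per_transfer * ddr_multiplier / 1e9

# GeForce GT 340: 850 MHz GDDR5 (x4), assumed 128-bit bus
gt340 = memory_bandwidth_gbs(850, 128, 4)    # ~54.4 GB/s

# Radeon HD 4850: 993 MHz GDDR3 (x2), assumed 256-bit bus
hd4850 = memory_bandwidth_gbs(993, 256, 2)   # ~63.6 GB/s
```

With those assumed bus widths, the Radeon comes out ahead despite its slower GDDR3 memory, because the wider interface more than makes up for the smaller DDR multiplier.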
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed in one second. This figure is worked out by multiplying the total texture units by the core speed of the chip. The higher the texel rate, the better the graphics card will be at handling texture filtering (anisotropic filtering - AF). It is measured in millions of texels per second.
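Plugging in the texture-unit counts and core clocks from the intro reproduces the "about 42%" gap quoted above:

```python
def texel_rate_mtexels(texture_units, core_clock_mhz):
    """Peak theoretical texel rate in millions of texels per second."""
    return texture_units * core_clock_mhz

gt340 = texel_rate_mtexels(32, 550)    # 17600 Mtexels/s
hd4850 = texel_rate_mtexels(40, 625)   # 25000 Mtexels/s

ratio = hd4850 / gt340  # ~1.42, i.e. roughly 42% higher
```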
Pixel Rate: Pixel rate is the most pixels the graphics card can possibly record to its local memory in one second - measured in millions of pixels per second. The figure is worked out by multiplying the number of Render Output Units by the clock speed of the card. ROPs (Raster Operations Pipelines - aka Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel fill rate also depends on many other factors, most notably the memory bandwidth - the lower the memory bandwidth is, the lower the potential to reach the max fill rate.
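The same spec-sheet arithmetic shows why the Radeon wins this category so decisively: it has twice the ROPs and a higher clock, so its theoretical pixel rate is more than double the GeForce's:

```python
def pixel_rate_mpixels(rops, core_clock_mhz):
    """Peak theoretical pixel fill rate in millions of pixels per second."""
    return rops * core_clock_mhz

gt340 = pixel_rate_mpixels(8, 550)     # 4400 Mpixels/s
hd4850 = pixel_rate_mpixels(16, 625)   # 10000 Mpixels/s
```

Keep in mind the caveat from the definition above: the real-world fill rate also depends on memory bandwidth, so these peak numbers are upper bounds.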