Compare any two graphics cards:
GeForce GTX 650 vs Radeon HD 4350
Intro
The GeForce GTX 650 uses a 28 nm design. NVIDIA has set the core frequency at 1058 MHz. The GDDR5 memory runs at 1250 MHz on this specific card. It features 384 SPUs as well as 32 TAUs and 16 ROPs.
Compare that to the Radeon HD 4350, which uses a 55 nm design. AMD has clocked the core frequency at 575 MHz. The DDR2 memory works at a frequency of 500 MHz on this particular card. It features 80 (16x5) SPUs along with 8 TAUs and 4 ROPs.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Memory Bandwidth
Theoretically speaking, the GeForce GTX 650 will be 900% faster than the Radeon HD 4350 in general, due to its much higher memory bandwidth.
Texel Rate
The GeForce GTX 650 is a lot (approximately 636%) better at anisotropic filtering than the Radeon HD 4350.
Pixel Rate
The GeForce GTX 650 is much (about 636%) faster at full-screen anti-aliasing than the Radeon HD 4350, and also better able to handle higher screen resolutions.
Please note that the above 'benchmarks' are purely theoretical - the results were calculated from each card's specifications, and real-world performance may (and probably will) differ at least a bit.
Please note that the price comparisons are based on search keywords - they can sometimes include cards with very similar names that are not exactly the same model as the one chosen in the comparison. We do try to filter out the wrong results as best we can.
Memory Bandwidth: Memory bandwidth is the maximum amount of data (measured in megabytes per second) that can be transferred across the card's external memory interface. It is calculated by multiplying the bus width (in bytes) by the memory clock speed; for DDR memory the result is then multiplied by 2, and for GDDR5 by 4. The higher the card's memory bandwidth, the faster the card will generally be. It especially helps with anti-aliasing, HDR and higher screen resolutions.
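As a sketch, the bandwidth formula can be applied to the two cards above. Note that the memory bus widths (128-bit for the GTX 650, 64-bit for the HD 4350) are not stated in this comparison - they are assumed reference-design values:

```python
def memory_bandwidth_mb_s(bus_width_bits, mem_clock_mhz, multiplier):
    # Bytes moved per transfer, times clock (MHz), times transfers per clock.
    # multiplier: 2 for DDR-type memory, 4 for GDDR5.
    return (bus_width_bits / 8) * mem_clock_mhz * multiplier

# Bus widths below are assumptions (reference designs), not article data.
gtx650 = memory_bandwidth_mb_s(128, 1250, 4)  # GDDR5 at 1250 MHz
hd4350 = memory_bandwidth_mb_s(64, 500, 2)    # DDR2 at 500 MHz

print(gtx650)  # 80000.0 MB/s (80 GB/s)
print(hd4350)  # 8000.0 MB/s (8 GB/s)
print((gtx650 / hd4350 - 1) * 100)  # 900.0 -> the "900% faster" figure above
```

Under these assumed bus widths, the ratio works out to exactly the 900% difference quoted in the comparison.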
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second, measured in millions of texels per second. It is calculated by multiplying the total number of texture units of the card by the core clock speed of the chip. The higher the texel rate, the better the card will be at handling texture filtering (anisotropic filtering - AF).
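Plugging in the texture unit counts and core clocks listed in the intro gives a quick sanity check of the texel-rate comparison:

```python
def texel_rate_mtexels(texture_units, core_clock_mhz):
    # Texel rate in millions of texels per second (Mtexels/s):
    # texture units times core clock in MHz.
    return texture_units * core_clock_mhz

gtx650 = texel_rate_mtexels(32, 1058)  # 33856 Mtexels/s
hd4350 = texel_rate_mtexels(8, 575)    # 4600 Mtexels/s
print(round((gtx650 / hd4350 - 1) * 100))  # 636 -> the ~636% figure above
```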
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second. The figure is calculated by multiplying the number of ROPs (Raster Operations Pipelines - also called Render Output Units) by the core clock speed of the card. ROPs are responsible for drawing the pixels (image) on the screen. The actual pixel rate also depends on quite a few other factors, especially the memory bandwidth of the card - the lower the memory bandwidth, the harder it is to reach the maximum fill rate.
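The same kind of check works for pixel rate, using the ROP counts and core clocks from the intro:

```python
def pixel_rate_mpixels(rops, core_clock_mhz):
    # Peak pixel fill rate in millions of pixels per second (Mpixels/s):
    # ROP count times core clock in MHz.
    return rops * core_clock_mhz

gtx650 = pixel_rate_mpixels(16, 1058)  # 16928 Mpixels/s
hd4350 = pixel_rate_mpixels(4, 575)    # 2300 Mpixels/s
print(round((gtx650 / hd4350 - 1) * 100))  # 636 -> the ~636% figure above
```

Both the texel and pixel ratios come out to the same ~636% because the two cards' ratios of units (32:8 and 16:4) are identical; in practice, as noted above, memory bandwidth limits how close a card gets to this peak fill rate.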