GeForce GTX Titan vs Radeon HD 5870
Intro
The GeForce GTX Titan is built on a 28 nm process. NVIDIA has clocked the core at 837 MHz, and the GDDR5 memory on this card runs at 1502 MHz. It features 2688 SPUs as well as 224 texture address units (TAUs) and 48 raster operation units (ROPs).
Compare all of that to the Radeon HD 5870, which runs at 850 MHz on the GPU and 1200 MHz on its 1024 MB of GDDR5 memory. It features 1600 (320 x 5) SPUs along with 80 TAUs and 32 ROPs.
(No game benchmarks for this combination yet.)
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
On paper, the GeForce GTX Titan should be a lot faster than the Radeon HD 5870 overall.
Texel Rate
The GeForce GTX Titan should be much (about 176%) faster at anisotropic filtering than the Radeon HD 5870.
Pixel Rate
The GeForce GTX Titan should be much (about 48%) more effective at anti-aliasing than the Radeon HD 5870, and should be capable of handling higher resolutions without losing as much performance.
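The "about 176%" and "about 48%" figures follow directly from the unit counts and core clocks listed in the Intro; a quick sketch of the arithmetic:

```python
# Texel rate: texture units x core clock (MHz), in millions of texels per second.
titan_texel = 224 * 837    # 187488 Mtexels/s
hd5870_texel = 80 * 850    # 68000 Mtexels/s
texel_gain = titan_texel / hd5870_texel * 100 - 100  # ~175.7 -> "about 176%"

# Pixel rate: ROPs x core clock (MHz), in millions of pixels per second.
titan_pixel = 48 * 837     # 40176 Mpixels/s
hd5870_pixel = 32 * 850    # 27200 Mpixels/s
pixel_gain = titan_pixel / hd5870_pixel * 100 - 100  # ~47.7 -> "about 48%"
```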
Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) vary at least a bit.
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in MB per second) that can be transferred over the external memory interface in one second. It is calculated by multiplying the card's bus width (in bytes) by the speed of its memory. If the card has DDR-type memory, the result is multiplied by 2; if it uses GDDR5, it is multiplied by 2 again (4x in total). The higher the memory bandwidth, the better the card will perform in general. It especially helps with anti-aliasing, High Dynamic Range and high resolutions.
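The formula above can be sketched for the two cards in this comparison. Note that the bus widths (384-bit on the GTX Titan, 256-bit on the HD 5870) are not listed in this comparison and are taken from the cards' published specifications:

```python
def memory_bandwidth_mb_s(memory_clock_mhz, bus_width_bits, pump_factor):
    """Bandwidth in MB/s: memory clock x data-rate multiplier x bus width in bytes.

    pump_factor is 2 for plain DDR, 4 for GDDR5 (2 x 2, as described above).
    """
    return memory_clock_mhz * pump_factor * bus_width_bits / 8

titan = memory_bandwidth_mb_s(1502, 384, 4)   # 288384.0 MB/s (~288.4 GB/s)
hd5870 = memory_bandwidth_mb_s(1200, 256, 4)  # 153600.0 MB/s (~153.6 GB/s)
```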
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied in one second. It is calculated by multiplying the number of texture units by the core clock speed of the chip. The higher this number, the better the graphics card will be at handling texture filtering (anisotropic filtering - AF). It is measured in millions of texels processed per second.
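Applied to the two cards here, using the texture unit counts and core clocks from the Intro:

```python
def texel_rate_mtexels_s(texture_units, core_clock_mhz):
    # Millions of texels per second: texture units x core clock in MHz.
    return texture_units * core_clock_mhz

titan = texel_rate_mtexels_s(224, 837)  # 187488 Mtexels/s
hd5870 = texel_rate_mtexels_s(80, 850)  # 68000 Mtexels/s
```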
Pixel Rate: Pixel rate is the maximum number of pixels the video card can write to its local memory in one second, measured in millions of pixels per second. The figure is calculated by multiplying the number of Raster Operations Pipelines by the core clock speed of the card. ROPs (Raster Operations Pipelines - aka Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel output rate also depends on quite a few other factors, most notably the memory bandwidth of the card - the lower the bandwidth, the harder it is to reach the maximum fill rate.
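The same calculation, using the ROP counts and core clocks from the Intro:

```python
def pixel_rate_mpixels_s(rops, core_clock_mhz):
    # Millions of pixels per second: ROPs x core clock in MHz.
    # This is a theoretical peak; low memory bandwidth can keep a card
    # from ever reaching it in practice, as noted above.
    return rops * core_clock_mhz

titan = pixel_rate_mpixels_s(48, 837)   # 40176 Mpixels/s
hd5870 = pixel_rate_mpixels_s(32, 850)  # 27200 Mpixels/s
```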