
GeForce GTX Titan vs Radeon HD 5870

Intro

The GeForce GTX Titan is built on a 28 nm design. nVidia has set the core speed at 837 MHz. The GDDR5 RAM runs at a speed of 1502 MHz on this particular model. It features 2688 SPUs along with 224 texture address units and 48 raster operation units (ROPs).

Compare that to the Radeon HD 5870, which uses a 40 nm design. AMD has clocked the core frequency at 850 MHz. The GDDR5 RAM is set to run at a speed of 1200 MHz on this specific card. It features 1600 (320x5) SPUs along with 80 texture address units and 32 raster operation units (ROPs).

(No game benchmarks for this combination yet.)

Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

Radeon HD 5870 188 Watts
GeForce GTX Titan 250 Watts
Difference: 62 Watts (33%)

Memory Bandwidth

Performance-wise, the GeForce GTX Titan should, in theory, be much faster than the Radeon HD 5870 overall.

GeForce GTX Titan 288384 MB/sec
Radeon HD 5870 153600 MB/sec
Difference: 134784 (88%)

Texel Rate

The GeForce GTX Titan is roughly 176% faster at texture filtering (anisotropic filtering) than the Radeon HD 5870.

GeForce GTX Titan 187488 Mtexels/sec
Radeon HD 5870 68000 Mtexels/sec
Difference: 119488 (176%)

Pixel Rate

The GeForce GTX Titan is about 48% faster at anti-aliasing than the Radeon HD 5870, and should also handle higher resolutions with less of a performance hit.

GeForce GTX Titan 40176 Mpixels/sec
Radeon HD 5870 27200 Mpixels/sec
Difference: 12976 (48%)

Please note that the 'benchmarks' above are purely theoretical - the results are calculated from the cards' specifications, and real-world performance may (and probably will) differ at least a little.
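For reference, the absolute and percentage differences quoted in the tables above follow from simple arithmetic - the percentage is taken relative to the lower of the two values. A minimal sketch in Python (the `compare` helper is illustrative, not part of any site tooling):

```python
# Absolute and relative difference between two cards' values,
# with the percentage taken relative to the lower value.
def compare(a, b):
    diff = abs(a - b)
    pct = round(diff / min(a, b) * 100)
    return diff, pct

print(compare(250, 188))        # TDP:       (62, 33)
print(compare(288384, 153600))  # bandwidth: (134784, 88)
print(compare(187488, 68000))   # texel rate: (119488, 176)
print(compare(40176, 27200))    # pixel rate: (12976, 48)
```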


Specifications

Model GeForce GTX Titan Radeon HD 5870
Manufacturer nVidia AMD
Year February 2013 September 23, 2009
Code Name GK110 Cypress XT
Fab Process 28 nm 40 nm
Bus PCIe 3.0 x16 PCIe 2.1 x16
Memory 6144 MB 1024 MB
Core Speed 837 MHz 850 MHz
Shader Speed 837 MHz N/A
Memory Speed 1502 MHz (6008 MHz effective) 1200 MHz (4800 MHz effective)
Unified Shaders 2688 1600 (320x5)
Texture Mapping Units 224 80
Render Output Units 48 32
Bus Type GDDR5 GDDR5
Bus Width 384-bit 256-bit
DirectX Version DirectX 11 DirectX 11
OpenGL Version OpenGL 4.3 OpenGL 3.2
Power (Max TDP) 250 watts 188 watts
Shader Model 5.0 5.0
Bandwidth 288384 MB/sec 153600 MB/sec
Texel Rate 187488 Mtexels/sec 68000 Mtexels/sec
Pixel Rate 40176 Mpixels/sec 27200 Mpixels/sec

Memory Bandwidth: Bandwidth is the maximum amount of data (measured in MB per second) that can be transferred over the card's external memory interface. It is calculated by multiplying the bus width (in bytes) by the memory clock speed; for DDR-type memory the result is doubled, and for GDDR5 it is quadrupled, to account for the extra transfers per clock. The higher the memory bandwidth, the faster the card will generally be. It especially helps with AA, HDR and higher screen resolutions.
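As a quick sanity check, that formula reproduces the bandwidth figures in the table above. A minimal sketch in Python (the function name is illustrative; the specs are taken from the table):

```python
# Theoretical bandwidth = bus width (bytes) * memory clock (MHz) * data-rate multiplier.
# GDDR5 transfers four data words per clock, so its multiplier is 4.
def memory_bandwidth_mb_s(bus_width_bits, memory_clock_mhz, multiplier=4):
    return (bus_width_bits // 8) * memory_clock_mhz * multiplier

print(memory_bandwidth_mb_s(384, 1502))  # GeForce GTX Titan: 288384
print(memory_bandwidth_mb_s(256, 1200))  # Radeon HD 5870:    153600
```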

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second. It is calculated by multiplying the total number of texture units on the card by the core clock speed of the chip. The higher this number, the better the card will be at handling texture filtering (anisotropic filtering - AF). It is measured in millions of texels processed per second.
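The same check works for texel rate - texture units times core clock in MHz gives Mtexels/sec directly. A brief sketch (function name illustrative, specs from the table above):

```python
# Theoretical texel rate = texture mapping units * core clock (MHz), in Mtexels/sec.
def texel_rate_mtexels(tmus, core_clock_mhz):
    return tmus * core_clock_mhz

print(texel_rate_mtexels(224, 837))  # GeForce GTX Titan: 187488
print(texel_rate_mtexels(80, 850))   # Radeon HD 5870:    68000
```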

Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of colour ROPs by the card's core clock speed. ROPs (Raster Operations Pipelines - also sometimes called Render Output Units) are responsible for outputting the final pixels (the image) to the screen. The actual pixel fill rate also depends on many other factors, especially the memory bandwidth - the lower the bandwidth, the harder it is to reach the theoretical maximum fill rate.
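And likewise for pixel rate - ROPs times core clock in MHz gives Mpixels/sec. A brief sketch (function name illustrative, specs from the table above):

```python
# Theoretical pixel fill rate = ROPs * core clock (MHz), in Mpixels/sec.
def pixel_rate_mpixels(rops, core_clock_mhz):
    return rops * core_clock_mhz

print(pixel_rate_mpixels(48, 837))  # GeForce GTX Titan: 40176
print(pixel_rate_mpixels(32, 850))  # Radeon HD 5870:    27200
```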
