
GeForce GTX 1070 vs Radeon HD 5970

Intro

The GeForce GTX 1070 is built on a 16 nm process. nVidia has clocked the core at 1506 MHz, and the GDDR5 memory runs at 2000 MHz (8000 MHz effective) on this specific card. It features 1920 shader processing units (SPUs), 120 texture mapping units (TMUs) and 64 render output units (ROPs).

Compare those specifications to the Radeon HD 5970, a dual-GPU card built on a 40 nm process. AMD has clocked each core at 725 MHz, and the GDDR5 memory runs at 1000 MHz (4000 MHz effective). Each of its two GPUs features 1600 SPUs, 160 TMUs and 64 ROPs.


Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

GeForce GTX 1070 150 Watts
Radeon HD 5970 294 Watts
Difference: 144 Watts (96%)
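
The 'Difference' rows on this page appear to express the gap as a percentage of the lower of the two values. A minimal Python sketch of that calculation (the helper name is illustrative, not from the site):

def spec_difference(a, b):
    # Absolute gap, and the gap as a percentage of the smaller value -
    # which is how the 'Difference' figures on this page seem to be derived.
    gap = abs(a - b)
    return gap, round(100 * gap / min(a, b))

print(spec_difference(150, 294))        # (144, 96)   - Max TDP in watts
print(spec_difference(262144, 256000))  # (6144, 2)   - bandwidth in MB/sec
print(spec_difference(180720, 232000))  # (51280, 28) - texel rate in Mtexels/sec
print(spec_difference(96384, 92800))    # (3584, 4)   - pixel rate in Mpixels/sec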

Memory Bandwidth

Theoretically, the GeForce GTX 1070 should be slightly faster than the Radeon HD 5970 overall.

GeForce GTX 1070 262144 MB/sec
Radeon HD 5970 256000 MB/sec
Difference: 6144 MB/sec (2%)

Texel Rate

The Radeon HD 5970 is considerably (about 28%) faster at texture filtering than the GeForce GTX 1070.

Radeon HD 5970 232000 Mtexels/sec
GeForce GTX 1070 180720 Mtexels/sec
Difference: 51280 Mtexels/sec (28%)

Pixel Rate

The GeForce GTX 1070 will be slightly (about 4%) faster at full-screen anti-aliasing than the Radeon HD 5970, and will handle high screen resolutions a little better.

GeForce GTX 1070 96384 Mpixels/sec
Radeon HD 5970 92800 Mpixels/sec
Difference: 3584 Mpixels/sec (4%)

Please note that the 'benchmarks' above are purely theoretical - the results are calculated from the cards' specifications, and real-world performance may (and probably will) differ at least a little.

The Radeon HD 5970 is a dual-GPU card, so its bandwidth, texel and pixel rates are doubled in the figures above. That does not mean it will actually perform twice as fast as a single GPU - only that, in theory, it could. Actual game benchmarks give a more accurate idea of what it is capable of.


Specifications


Model                     GeForce GTX 1070      Radeon HD 5970
Manufacturer              nVidia                AMD
Release Date              June 2016             November 2009
Code Name                 GP104-200             Hemlock XT
Memory                    8192 MB               1024 MB (x2)
Core Speed                1506 MHz              725 MHz (x2)
Memory Speed (effective)  8000 MHz              4000 MHz (x2)
Power (Max TDP)           150 watts             294 watts
Bandwidth                 262144 MB/sec         256000 MB/sec
Texel Rate                180720 Mtexels/sec    232000 Mtexels/sec
Pixel Rate                96384 Mpixels/sec     92800 Mpixels/sec
Unified Shaders           1920                  1600 (x2)
Texture Mapping Units     120                   160 (x2)
Render Output Units       64                    64 (x2)
Memory Type               GDDR5                 GDDR5
Bus Width                 256-bit               256-bit (x2)
Fab Process               16 nm                 40 nm
Transistors               7200 million          2154 million
Bus Interface             PCIe 3.0 x16          PCIe x16
DirectX Version           DirectX 12.0          DirectX 11
OpenGL Version            OpenGL 4.5            OpenGL 4.1

Memory Bandwidth: Memory bandwidth is the maximum amount of data (measured in MB per second) that can be transferred over the external memory interface. It is worked out by multiplying the memory interface width (in bytes) by the memory clock speed. DDR-type memory transfers data twice per clock cycle, so the result is doubled; GDDR5 transfers four times per clock, so it is doubled again. The higher the card's memory bandwidth, the better it will perform in general - it especially helps with anti-aliasing, High Dynamic Range and high resolutions.
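
As a rough check of that formula, the sketch below recomputes the bandwidth figures from the bus widths and effective memory speeds in the specification table (the DDR/GDDR5 multipliers are already folded into the effective speed, and the dual-GPU HD 5970 is counted twice, as the table does). The slight mismatch for the GTX 1070 suggests the site used a marginally higher effective memory clock than the rounded 8000 MHz listed here.

def memory_bandwidth_mb_s(bus_width_bits, effective_mem_mhz, gpus=1):
    # Bytes transferred per clock times effective transfers per second.
    # The effective memory speed already includes the DDR/GDDR5 doublings.
    return (bus_width_bits // 8) * effective_mem_mhz * gpus

print(memory_bandwidth_mb_s(256, 8000))          # GTX 1070: 256000 MB/sec (table lists 262144)
print(memory_bandwidth_mb_s(256, 4000, gpus=2))  # HD 5970:  256000 MB/sec, matching the table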

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second, measured in millions of texels per second (Mtexels/sec). It is worked out by multiplying the total number of texture mapping units (TMUs) by the core clock speed of the chip. The higher this number, the better the card will be at texture filtering (anisotropic filtering - AF).
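
Applied to the two cards here (a minimal sketch using the TMU counts and core clocks from the table, with the HD 5970's two GPUs counted explicitly):

def texel_rate_mtexels_s(tmus, core_clock_mhz, gpus=1):
    # Peak texel rate: texture mapping units times core clock, per GPU.
    return tmus * core_clock_mhz * gpus

print(texel_rate_mtexels_s(120, 1506))         # GTX 1070: 180720 Mtexels/sec
print(texel_rate_mtexels_s(160, 725, gpus=2))  # HD 5970:  232000 Mtexels/sec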

Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second (Mpixels/sec). It is worked out by multiplying the number of ROPs by the core clock speed of the card. ROPs (Raster Operations Pipelines, also known as Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel rate also depends on quite a few other factors, most notably memory bandwidth - the lower the bandwidth, the lower the ability to reach the maximum fill rate.
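
The same kind of calculation reproduces the pixel rate figures from the ROP counts and core clocks in the table:

def pixel_rate_mpixels_s(rops, core_clock_mhz, gpus=1):
    # Peak pixel fill rate: render output units times core clock, per GPU.
    return rops * core_clock_mhz * gpus

print(pixel_rate_mpixels_s(64, 1506))         # GTX 1070: 96384 Mpixels/sec
print(pixel_rate_mpixels_s(64, 725, gpus=2))  # HD 5970:  92800 Mpixels/sec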
