GeForce GTX 1050 vs Radeon HD 5970

Intro

The GeForce GTX 1050 is built on a 14 nm process. nVidia has set the core clock at 1354 MHz. The GDDR5 memory runs at 1750 MHz (7000 MHz effective) on this particular card. It features 640 SPUs as well as 40 Texture Address Units and 32 Rasterization Operator Units.

Compare all that to the Radeon HD 5970, which is built on a 40 nm process. AMD has set the core clock at 725 MHz. The GDDR5 memory runs at 1000 MHz (4000 MHz effective) on this specific model. It features 1600 SPUs along with 160 Texture Address Units and 64 Rasterization Operator Units per GPU - and the HD 5970 is a dual-GPU card.


Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

GeForce GTX 1050 75 Watts
Radeon HD 5970 294 Watts
Difference: 219 Watts (292%)

Memory Bandwidth

Going by these theoretical numbers, the Radeon HD 5970 should be much faster than the GeForce GTX 1050 overall.

Radeon HD 5970 256000 MB/sec
GeForce GTX 1050 114688 MB/sec
Difference: 141312 (123%)

Texel Rate

The Radeon HD 5970's texel rate is roughly 328% higher, so it should be far more effective at anisotropic filtering than the GeForce GTX 1050.

Radeon HD 5970 232000 Mtexels/sec
GeForce GTX 1050 54160 Mtexels/sec
Difference: 177840 (328%)

Pixel Rate

If running with high levels of anti-aliasing (AA) is important to you, then the Radeon HD 5970 is the clear winner here.

Radeon HD 5970 92800 Mpixels/sec
GeForce GTX 1050 43328 Mpixels/sec
Difference: 49472 (114%)

Please note that the above 'benchmarks' are all theoretical - the results are calculated from each card's specifications, and real-world performance may (and probably will) differ.

One of the cards in this comparison (the Radeon HD 5970) is a dual-GPU card. This means its bandwidth, texel and pixel rates are theoretically doubled - it does not mean the card will actually perform twice as fast, only that in theory it could. Actual game benchmarks will give a more accurate idea of what it's capable of.

Price Comparison


Please note that the price comparisons are based on search keywords - sometimes it might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.

Specifications


Model                  GeForce GTX 1050     Radeon HD 5970
Manufacturer           nVidia               AMD
Year                   October 2016         November 2009
Code Name              GP107-300            Hemlock XT
Memory                 2048 MB              1024 MB (x2)
Core Speed             1354 MHz             725 MHz (x2)
Memory Speed           7000 MHz             4000 MHz (x2)
Power (Max TDP)        75 watts             294 watts
Bandwidth              114688 MB/sec        256000 MB/sec
Texel Rate             54160 Mtexels/sec    232000 Mtexels/sec
Pixel Rate             43328 Mpixels/sec    92800 Mpixels/sec
Unified Shaders        640                  1600 (x2)
Texture Mapping Units  40                   160 (x2)
Render Output Units    32                   64 (x2)
Bus Type               GDDR5                GDDR5
Bus Width              128-bit              256-bit (x2)
Fab Process            14 nm                40 nm
Transistors            3300 million         2154 million
Bus                    PCIe 3.0 x16         PCIe x16
DirectX Version        DirectX 12.0         DirectX 11
OpenGL Version         OpenGL 4.5           OpenGL 4.1

Memory Bandwidth: Bandwidth is the maximum amount of data (in megabytes per second) that can be transferred across the external memory interface in one second. It is worked out by multiplying the card's interface width (in bytes) by its memory clock speed. If the card uses DDR-type memory, the result is multiplied by 2; if it uses GDDR5, multiply by 4 instead, since GDDR5 transfers four times per clock. The higher the card's memory bandwidth, the faster the card will generally be. It especially helps with AA, High Dynamic Range and higher screen resolutions.
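As a sketch of that calculation (the function name and parameters are my own, not from the site), the bandwidth figures in the table can be reproduced like this:

```python
def memory_bandwidth_mb_s(bus_width_bits: int, mem_clock_mhz: int,
                          transfers_per_clock: int, gpus: int = 1) -> int:
    """Theoretical bandwidth in MB/sec: bus width in bytes times the
    effective transfer rate, doubled for a dual-GPU card."""
    bytes_per_transfer = bus_width_bits // 8
    return bytes_per_transfer * mem_clock_mhz * transfers_per_clock * gpus

# Radeon HD 5970: two GPUs, each with a 256-bit bus and GDDR5
# at 1000 MHz (4 transfers per clock -> 4000 MHz effective)
print(memory_bandwidth_mb_s(256, 1000, 4, gpus=2))  # 256000, matching the table
```

For the GTX 1050 (128-bit, 1750 MHz GDDR5) this formula gives 112000 MB/sec rather than the table's 114688; the table figure appears to mix decimal and binary megabytes, so small discrepancies of that sort are expected.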

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed in one second. It is worked out by multiplying the card's total number of texture units by the core clock speed of the chip. The higher the texel rate, the better the video card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels applied per second.
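That multiplication reproduces both texel-rate figures above exactly (function name is my own; the (x2) for the dual-GPU HD 5970 is passed explicitly):

```python
def texel_rate_mtexels(texture_units: int, core_clock_mhz: int, gpus: int = 1) -> int:
    """Theoretical texel rate in Mtexels/sec: texture units times core clock."""
    return texture_units * core_clock_mhz * gpus

print(texel_rate_mtexels(40, 1354))          # GeForce GTX 1050 -> 54160
print(texel_rate_mtexels(160, 725, gpus=2))  # Radeon HD 5970   -> 232000
```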

Pixel Rate: Pixel rate is the maximum number of pixels the graphics chip could possibly write to its local memory per second, measured in millions of pixels per second. The figure is calculated by multiplying the number of Raster Operations Pipelines by the core clock speed of the card. ROPs (Raster Operations Pipelines, also called Render Output Units) are responsible for filling the screen with pixels (the image). The actual pixel fill rate also depends on quite a few other factors, most notably the memory bandwidth - the lower the memory bandwidth, the lower the ability to reach the maximum fill rate.
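The same pattern applies to pixel rate - ROPs times core clock, doubled for the dual-GPU card (again, the function name is my own):

```python
def pixel_rate_mpixels(rops: int, core_clock_mhz: int, gpus: int = 1) -> int:
    """Theoretical pixel fill rate in Mpixels/sec: ROPs times core clock."""
    return rops * core_clock_mhz * gpus

print(pixel_rate_mpixels(32, 1354))         # GeForce GTX 1050 -> 43328
print(pixel_rate_mpixels(64, 725, gpus=2))  # Radeon HD 5970   -> 92800
```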

