
GeForce 8500 GT vs Radeon HD 5970

Intro

The GeForce 8500 GT uses an 80 nm design. nVidia has set the core speed at 450 MHz. The DDR2 memory on this particular card runs at 400 MHz (800 MHz effective). It features 16 SPUs along with 8 TAUs and 4 ROPs.

Compare those specs to the Radeon HD 5970, which has a GPU clock speed of 725 MHz and 1024 MB of GDDR5 memory running at 1000 MHz (4000 MHz effective) through a 256-bit bus. It also has 1600 SPUs, 160 TAUs, and 64 ROPs.


Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

GeForce 8500 GT 45 Watts
Radeon HD 5970 294 Watts
Difference: 249 Watts (553%)

Memory Bandwidth

Theoretically speaking, the Radeon HD 5970 will be 1900% faster than the GeForce 8500 GT in memory-bound workloads, due to its much higher memory bandwidth.

Radeon HD 5970 256000 MB/sec
GeForce 8500 GT 12800 MB/sec
Difference: 243200 MB/sec (1900%)
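The difference percentages quoted throughout this comparison are just the ratio of the two cards' rates, expressed as a percentage gain. A quick sketch (the helper name is hypothetical, not from the original page):

```python
def percent_faster(a, b):
    # How much faster rate a is than rate b, as a percent gain
    # (e.g. a 20x ratio comes out as a 1900% difference).
    return round((a / b - 1) * 100)

# Memory bandwidth: 256000 MB/sec vs 12800 MB/sec
print(percent_faster(256000, 12800))  # 1900
# Texel rate: 232000 vs 3600 Mtexels/sec
print(percent_faster(232000, 3600))   # 6344
# Pixel rate: 92800 vs 1800 Mpixels/sec
print(percent_faster(92800, 1800))    # 5056
```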

Texel Rate

The Radeon HD 5970 is far (approximately 6344%) faster at texture filtering than the GeForce 8500 GT.

Radeon HD 5970 232000 Mtexels/sec
GeForce 8500 GT 3600 Mtexels/sec
Difference: 228400 Mtexels/sec (6344%)

Pixel Rate

The Radeon HD 5970 should be considerably (roughly 5056%) more effective at full-screen anti-aliasing than the GeForce 8500 GT, and better able to handle higher resolutions without losing much performance.

Radeon HD 5970 92800 Mpixels/sec
GeForce 8500 GT 1800 Mpixels/sec
Difference: 91000 Mpixels/sec (5056%)

Please note that the above 'benchmarks' are purely theoretical: the results are calculated from each card's specifications, and real-world performance can (and usually will) differ.

One or more cards in this comparison are dual-GPU. Their bandwidth, texel, and pixel rates are therefore doubled on paper; this does not mean the card will actually perform twice as fast, only that it could in theory. Actual game benchmarks will give a more accurate idea of what it's capable of.

Price Comparison


Please note that the price comparisons are based on search keywords - sometimes it might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.

Specifications


Model | GeForce 8500 GT | Radeon HD 5970
Manufacturer | nVidia | AMD
Year | April 2007 | November 2009
Code Name | G86 | Hemlock XT
Memory | 512 MB | 1024 MB (x2)
Core Speed | 450 MHz | 725 MHz (x2)
Memory Speed | 800 MHz | 4000 MHz (x2)
Power (Max TDP) | 45 watts | 294 watts
Bandwidth | 12800 MB/sec | 256000 MB/sec
Texel Rate | 3600 Mtexels/sec | 232000 Mtexels/sec
Pixel Rate | 1800 Mpixels/sec | 92800 Mpixels/sec
Unified Shaders | 16 | 1600 (x2)
Texture Mapping Units | 8 | 160 (x2)
Render Output Units | 4 | 64 (x2)
Bus Type | DDR2 | GDDR5
Bus Width | 128-bit | 256-bit (x2)
Fab Process | 80 nm | 40 nm
Transistors | 210 million | 2154 million
Bus | PCIe x16, PCI, PCIe x16 2.0 | PCIe x16
DirectX Version | DirectX 10 | DirectX 11
OpenGL Version | OpenGL 3.0 | OpenGL 4.1

Memory Bandwidth: Bandwidth is the maximum amount of data (in megabytes per second) that can be transferred over the external memory interface. It is calculated by multiplying the card's bus width (in bytes) by the effective speed of its memory: for DDR/DDR2 memory the base clock is doubled, and for GDDR5 it is quadrupled. The higher the bandwidth, the better the card will perform in general; it especially helps with AA, HDR, and high resolutions.
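The bandwidth calculation above can be sketched in a few lines; the function name and parameters are illustrative, with the spec-table values plugged in:

```python
def memory_bandwidth_mb_s(bus_width_bits, memory_clock_mhz, pumps, gpus=1):
    """Theoretical bandwidth in MB/sec.

    pumps: data transfers per clock (2 for DDR/DDR2, 4 for GDDR5).
    gpus:  2 for dual-GPU boards like the HD 5970, whose paper
           figures are doubled.
    """
    bytes_per_transfer = bus_width_bits // 8
    return bytes_per_transfer * memory_clock_mhz * pumps * gpus

# GeForce 8500 GT: 128-bit DDR2 at 400 MHz base clock
print(memory_bandwidth_mb_s(128, 400, pumps=2))           # 12800
# Radeon HD 5970: 256-bit GDDR5 at 1000 MHz base clock, two GPUs
print(memory_bandwidth_mb_s(256, 1000, pumps=4, gpus=2))  # 256000
```

Both results match the figures in the bandwidth comparison above.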

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second, measured in millions of texels applied per second. It is calculated by multiplying the total number of texture units by the core clock speed of the chip. The higher this number, the better the video card will be at texture filtering (anisotropic filtering - AF).
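Applying that formula to the two cards' spec-table values (the helper name is illustrative):

```python
def texel_rate_mtexels_s(tmus, core_clock_mhz, gpus=1):
    # Mtexels/sec = texture mapping units x core clock in MHz,
    # doubled on paper for dual-GPU cards.
    return tmus * core_clock_mhz * gpus

# GeForce 8500 GT: 8 TMUs at 450 MHz
print(texel_rate_mtexels_s(8, 450))            # 3600
# Radeon HD 5970: 160 TMUs per GPU at 725 MHz, two GPUs
print(texel_rate_mtexels_s(160, 725, gpus=2))  # 232000
```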

Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of Render Output Units by the core clock speed. ROPs (Raster Operations Pipelines, also called Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel output rate also depends on several other factors, most notably memory bandwidth: the lower the bandwidth, the harder it is to reach the maximum fill rate.
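The pixel-rate formula works the same way, just with ROPs instead of texture units (again, the helper name is illustrative):

```python
def pixel_rate_mpixels_s(rops, core_clock_mhz, gpus=1):
    # Mpixels/sec = render output units x core clock in MHz,
    # doubled on paper for dual-GPU cards.
    return rops * core_clock_mhz * gpus

# GeForce 8500 GT: 4 ROPs at 450 MHz
print(pixel_rate_mpixels_s(4, 450))           # 1800
# Radeon HD 5970: 64 ROPs per GPU at 725 MHz, two GPUs
print(pixel_rate_mpixels_s(64, 725, gpus=2))  # 92800
```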

