GeForce GTX Titan vs Radeon HD 6790

Intro

The GeForce GTX Titan has a core clock of 837 MHz and a GDDR5 memory clock of 1502 MHz. It also features a 384-bit memory bus and is built on a 28 nm process. It is made up of 2688 stream processors (SPUs), 224 texture address units (TAUs), and 48 raster operation units (ROPs).

Compare all of that to the Radeon HD 6790, which has a core clock of 840 MHz and a GDDR5 memory clock of 1050 MHz. It also features a 256-bit memory bus and is built on a 40 nm process. It consists of 800 SPUs, 40 texture address units, and 16 ROPs.


Benchmarks

These are real-world performance benchmarks that were submitted by Hardware Compare users. The scores seen here are the average of all benchmarks submitted for each respective test and hardware.

3DMark Fire Strike Graphics Score

GeForce GTX Titan 10162 points
Radeon HD 6790 2150 points
Difference: 8012 points (373%)

Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

Radeon HD 6790 150 Watts
GeForce GTX Titan 250 Watts
Difference: 100 Watts (67%)

Memory Bandwidth

Theoretically, the GeForce GTX Titan should be about 115% faster than the Radeon HD 6790 overall, thanks to its much higher memory bandwidth.

GeForce GTX Titan 288384 MB/sec
Radeon HD 6790 134400 MB/sec
Difference: 153984 MB/sec (115%)

Texel Rate

The GeForce GTX Titan will be far better (about 458% faster) at texture filtering than the Radeon HD 6790.

GeForce GTX Titan 187488 Mtexels/sec
Radeon HD 6790 33600 Mtexels/sec
Difference: 153888 Mtexels/sec (458%)

Pixel Rate

If using high levels of anti-aliasing (AA) matters to you, then the GeForce GTX Titan is the better choice, by a large margin.

GeForce GTX Titan 40176 Mpixels/sec
Radeon HD 6790 13440 Mpixels/sec
Difference: 26736 Mpixels/sec (199%)

Please note that the 'benchmarks' above are purely theoretical: the results were calculated from each card's specifications, and real-world performance may (and probably will) differ at least a little.
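For illustration, here is a minimal Python sketch (the `difference` helper is my own, not part of the site) showing how the absolute and percentage gaps quoted above can be derived from any two raw figures:

```python
def difference(a, b):
    """Return (absolute gap, percentage gap) of the higher value
    over the lower one, with the percentage rounded to a whole number."""
    high, low = max(a, b), min(a, b)
    return high - low, round((high - low) / low * 100)

# Memory bandwidth in MB/sec: GTX Titan vs HD 6790
print(difference(288384, 134400))  # -> (153984, 115)

# 3DMark Fire Strike graphics score
print(difference(10162, 2150))     # -> (8012, 373)
```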

Price Comparison


Please note that the price comparisons are based on search keywords, so they may sometimes show cards with very similar names that are not exactly the one chosen in the comparison. We do try to filter out the wrong results as best we can.

Specifications


Model                   GeForce GTX Titan     Radeon HD 6790
Manufacturer            NVIDIA                AMD
Release Date            February 2013         April 2011
Code Name               GK110                 Barts LE
Memory                  6144 MB               1024 MB
Core Speed              837 MHz               840 MHz
Memory Speed            6008 MHz (effective)  4200 MHz (effective)
Power (Max TDP)         250 watts             150 watts
Bandwidth               288384 MB/sec         134400 MB/sec
Texel Rate              187488 Mtexels/sec    33600 Mtexels/sec
Pixel Rate              40176 Mpixels/sec     13440 Mpixels/sec
Unified Shaders         2688                  800
Texture Mapping Units   224                   40
Render Output Units     48                    16
Memory Type             GDDR5                 GDDR5
Bus Width               384-bit               256-bit
Fab Process             28 nm                 40 nm
Transistors             7080 million          1700 million
Interface               PCIe 3.0 x16          PCIe 2.1 x16
DirectX Version         DirectX 11.0          DirectX 11
OpenGL Version          OpenGL 4.3            OpenGL 4.1

Memory Bandwidth: Memory bandwidth is the maximum amount of data (in megabytes per second) that can be moved over the external memory interface in one second. It is calculated by multiplying the card's bus width (in bytes) by the memory clock, doubled for DDR memory and quadrupled for GDDR5. The higher the memory bandwidth, the faster the card will generally be; it especially helps with AA, HDR, and high resolutions.
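As a sketch of that calculation in Python (the function name is illustrative; the figures come from the specification table above):

```python
def memory_bandwidth_mb_s(bus_width_bits, base_clock_mhz, pumps):
    """Bandwidth in MB/sec: bytes per transfer times transfers per second.
    'pumps' is the data-rate multiplier: 2 for DDR, 4 for GDDR5."""
    return (bus_width_bits // 8) * base_clock_mhz * pumps

# GeForce GTX Titan: 384-bit bus, 1502 MHz GDDR5
print(memory_bandwidth_mb_s(384, 1502, 4))  # -> 288384

# Radeon HD 6790: 256-bit bus, 1050 MHz GDDR5
print(memory_bandwidth_mb_s(256, 1050, 4))  # -> 134400
```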

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied in one second. It is calculated by multiplying the card's total number of texture units by the core clock speed of the chip. The higher this number, the better the card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels processed per second.

Pixel Rate: Pixel rate is the maximum number of pixels that the graphics chip can write to its local memory in one second, measured in millions of pixels per second. It is calculated by multiplying the number of Render Output Units by the core clock speed of the card. ROPs (Raster Operations Pipelines, sometimes also referred to as Render Output Units) are responsible for outputting the final pixels (image) to the screen. The actual pixel fill rate also depends on many other factors, most notably memory bandwidth: the lower the bandwidth, the less likely the card is to reach its theoretical maximum.
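Both texel rate and pixel rate come down to the same multiplication: unit count times core clock. A minimal Python sketch with the counts from the specification table (the function name is illustrative):

```python
def fill_rate(units, core_clock_mhz):
    """Peak fill rate: pass texture units for Mtexels/sec,
    or ROPs for Mpixels/sec."""
    return units * core_clock_mhz

# GeForce GTX Titan (837 MHz core)
print(fill_rate(224, 837))  # texel rate -> 187488 Mtexels/sec
print(fill_rate(48, 837))   # pixel rate -> 40176 Mpixels/sec

# Radeon HD 6790 (840 MHz core)
print(fill_rate(40, 840))   # texel rate -> 33600 Mtexels/sec
print(fill_rate(16, 840))   # pixel rate -> 13440 Mpixels/sec
```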

