GeForce GTX Titan X vs GeForce RTX 2080

Intro

The GeForce GTX Titan X is built on a 28 nm process. nVidia has clocked the core at 1000 MHz, and the GDDR5 memory runs at 1750 MHz on a 384-bit bus. The card features 3072 SPUs along with 192 TAUs and 96 ROPs.

Compare those specs to the GeForce RTX 2080, which is built on a 12 nm process, has a core clock of 1515 MHz, and pairs GDDR6 memory clocked at 1750 MHz (14 Gbps effective) with a 256-bit bus. It comprises 2944 SPUs, 184 TAUs, and 64 ROPs.

Benchmarks

These are real-world performance benchmarks submitted by Hardware Compare users. The scores shown here are the average of all benchmarks submitted for each test and card.

3DMark Fire Strike Graphics Score

GeForce RTX 2080 26155 points
GeForce GTX Titan X 17879 points
Difference: 8276 (46%)

Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

GeForce RTX 2080 215 Watts
GeForce GTX Titan X 250 Watts
Difference: 35 Watts (16%)

Memory Bandwidth

Theoretically speaking, the GeForce RTX 2080 has about 37% more memory bandwidth than the GeForce GTX Titan X, because its higher memory data rate more than compensates for its narrower 256-bit bus.

GeForce RTX 2080 458752 MB/sec
GeForce GTX Titan X 336000 MB/sec
Difference: 122752 (37%)

Texel Rate

The GeForce RTX 2080 is quite a bit (about 45%) faster at anisotropic texture filtering than the GeForce GTX Titan X.

GeForce RTX 2080 278760 Mtexels/sec
GeForce GTX Titan X 192000 Mtexels/sec
Difference: 86760 (45%)

Pixel Rate

If running with lots of anti-aliasing is important to you, the GeForce RTX 2080 is the better choice, but only just - its pixel fill rate is roughly 1% higher.

GeForce RTX 2080 96960 Mpixels/sec
GeForce GTX Titan X 96000 Mpixels/sec
Difference: 960 (1%)

Please note that the above 'benchmarks' are purely theoretical - the results are calculated from each card's specifications, and real-world performance may (and probably will) differ at least a little.
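As a rough illustration of how the percentage differences above are derived (a minimal sketch in Python with a hypothetical helper name, not the site's own code), each gap is expressed relative to the smaller of the two values:

```python
def percent_difference(value_a: float, value_b: float) -> float:
    """Return the gap between two values as a percentage of the smaller one."""
    low, high = sorted((value_a, value_b))
    return (high - low) / low * 100

# Figures taken from the comparison above.
print(round(percent_difference(458752, 336000)))  # memory bandwidth -> 37
print(round(percent_difference(278760, 192000)))  # texel rate       -> 45
print(round(percent_difference(96960, 96000)))    # pixel rate       -> 1
print(round(percent_difference(215, 250)))        # max TDP          -> 16
```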

Price Comparison

GeForce GTX Titan X

Check prices at: Amazon.com

GeForce RTX 2080

Check prices at: Amazon.com

Please note that the price comparisons are based on search keywords - sometimes it might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.

Specifications

Model GeForce GTX Titan X GeForce RTX 2080
Manufacturer nVidia nVidia
Year March 2015 September 2018
Code Name GM200 TU104-400A-A1
Memory 12288 MB 8192 MB
Core Speed 1000 MHz 1515 MHz
Memory Speed 7000 MHz 14000 MHz
Power (Max TDP) 250 watts 215 watts
Bandwidth 336000 MB/sec 458752 MB/sec
Texel Rate 192000 Mtexels/sec 278760 Mtexels/sec
Pixel Rate 96000 Mpixels/sec 96960 Mpixels/sec
Unified Shaders 3072 2944
Texture Mapping Units 192 184
Render Output Units 96 64
Memory Type GDDR5 GDDR6
Bus Width 384-bit 256-bit
Fab Process 28 nm 12 nm
Transistors 8000 million Unknown
Bus PCIe 3.0 x16 PCIe 3.0 x16
DirectX Version DirectX 12 DirectX 12
OpenGL Version OpenGL 4.5 OpenGL 4.6

Memory Bandwidth: Memory bandwidth is the largest amount of data (in MB per second) that can be moved across the card's external memory interface in one second. It is calculated by multiplying the bus width in bytes (the bus width in bits divided by 8) by the real memory clock speed, and then by the data-rate multiplier: 2 for DDR memory, 4 for GDDR5, and 8 for GDDR6. The higher the card's memory bandwidth, the better the card will generally perform; it especially helps with AA, HDR and higher screen resolutions.
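As a sketch of that formula (assuming the multipliers above; this is not the site's own calculator), the bandwidth figures in this comparison can be reproduced in Python like so:

```python
# bandwidth (MB/s) = bus width in bytes * real memory clock (MHz) * data-rate multiplier
def memory_bandwidth_mb_per_s(bus_width_bits: int, memory_clock_mhz: float, multiplier: int) -> float:
    bus_width_bytes = bus_width_bits / 8
    return bus_width_bytes * memory_clock_mhz * multiplier

print(memory_bandwidth_mb_per_s(384, 1750, 4))  # GTX Titan X (GDDR5): 336000.0
print(memory_bandwidth_mb_per_s(256, 1750, 8))  # RTX 2080 (GDDR6):    448000.0
# The table above lists 458752 MB/sec for the RTX 2080, which appears to convert
# the card's 448 GB/s rating using 1 GB = 1024 MB.
```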

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second, measured in millions of texels per second. It is worked out by multiplying the total number of texture units by the core clock speed of the chip. The higher the texel rate, the better the video card will be at handling texture filtering (anisotropic filtering - AF).
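A quick sketch of that calculation (illustrative only), using the texture unit counts and core clocks from the table above:

```python
# texel rate (Mtexels/sec) = texture units * core clock (MHz)
def texel_rate_mtexels(texture_units: int, core_clock_mhz: int) -> int:
    return texture_units * core_clock_mhz

print(texel_rate_mtexels(192, 1000))  # GTX Titan X -> 192000
print(texel_rate_mtexels(184, 1515))  # RTX 2080    -> 278760
```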

Pixel Rate: Pixel rate is the maximum number of pixels the video card could possibly write to its local memory per second, measured in millions of pixels per second. The figure is worked out by multiplying the number of Raster Operations Pipelines by the card's core clock speed. ROPs (Raster Operations Pipelines, also sometimes called Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel rate also depends on many other factors, most notably memory bandwidth - the lower the bandwidth, the lower the chance of reaching the maximum fill rate.
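The same kind of back-of-the-envelope sketch applies to pixel rate, using the ROP counts and core clocks listed above:

```python
# pixel rate (Mpixels/sec) = ROPs * core clock (MHz)
def pixel_rate_mpixels(rops: int, core_clock_mhz: int) -> int:
    return rops * core_clock_mhz

print(pixel_rate_mpixels(96, 1000))  # GTX Titan X -> 96000
print(pixel_rate_mpixels(64, 1515))  # RTX 2080    -> 96960
```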
