GeForce GTX 750 vs GeForce GTX Titan X

Intro

The GeForce GTX 750 is built on a 28 nm process. nVidia has clocked the core at 1020 MHz, and its GDDR5 memory runs at 1250 MHz (5000 MHz effective) through a 128-bit bus. It features 512 stream processors, 32 texture address units, and 16 raster operation units.

Compare all of that to the GeForce GTX Titan X, which has a GPU core clock of 1000 MHz and 12288 MB of GDDR5 memory running at 1750 MHz (7000 MHz effective) through a 384-bit bus. It also packs 3072 stream processors, 192 texture address units, and 96 raster operation units.


Benchmarks

These are real-world performance benchmarks that were submitted by Hardware Compare users. The scores seen here are the average of all benchmarks submitted for each respective test and hardware.

3DMark Fire Strike Graphics Score

GeForce GTX Titan X 17879 points
GeForce GTX 750 3958 points
Difference: 13921 (352%)

Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

GeForce GTX 750 55 Watts
GeForce GTX Titan X 250 Watts
Difference: 195 Watts (355%)

Memory Bandwidth

In theory, the GeForce GTX Titan X should perform quite a bit faster than the GeForce GTX 750 overall.

GeForce GTX Titan X 336000 MB/sec
GeForce GTX 750 80000 MB/sec
Difference: 256000 (320%)

Texel Rate

The GeForce GTX Titan X will be far more effective (approximately 488% so) at anisotropic filtering than the GeForce GTX 750.

GeForce GTX Titan X 192000 Mtexels/sec
GeForce GTX 750 32640 Mtexels/sec
Difference: 159360 (488%)

Pixel Rate

The GeForce GTX Titan X will be approximately 488% faster at anti-aliasing than the GeForce GTX 750, and better able to handle higher screen resolutions while still performing well.

GeForce GTX Titan X 96000 Mpixels/sec
GeForce GTX 750 16320 Mpixels/sec
Difference: 79680 (488%)

Please note that the 'benchmarks' above are theoretical: the results were calculated from each card's specifications, and real-world performance may (and probably will) vary at least a bit.
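The "Difference" rows above pair an absolute gap with that gap expressed as a percentage of the lower card's figure. A minimal sketch of that arithmetic, using the 3DMark scores from the table (the helper name is ours, not part of any site code):

```python
# Difference = absolute gap between the two figures, plus that gap as a
# rounded percentage of the smaller value.

def difference(a: float, b: float) -> tuple[float, int]:
    """Return (absolute gap, gap as a rounded % of the smaller value)."""
    lo, hi = sorted((a, b))
    return hi - lo, round((hi - lo) / lo * 100)

# 3DMark Fire Strike graphics scores from the benchmark section above:
print(difference(17879, 3958))  # -> (13921, 352)
```

The same formula reproduces the power (195 W, 355%), bandwidth (256000, 320%), and fill-rate (488%) differences quoted above.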

Price Comparison


GeForce GTX 750: check prices at Amazon.com

GeForce GTX Titan X: check prices at Amazon.com

Please note that the price comparisons are based on search keywords, so they may sometimes show cards with very similar names that are not exactly the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.

Specifications


Model                    GeForce GTX 750      GeForce GTX Titan X
Manufacturer             nVidia               nVidia
Released                 February 2014        March 2015
Code Name                GM107                GM200
Memory                   1024 MB              12288 MB
Core Speed               1020 MHz             1000 MHz
Memory Speed (effective) 5000 MHz             7000 MHz
Power (Max TDP)          55 watts             250 watts
Bandwidth                80000 MB/sec         336000 MB/sec
Texel Rate               32640 Mtexels/sec    192000 Mtexels/sec
Pixel Rate               16320 Mpixels/sec    96000 Mpixels/sec
Unified Shaders          512                  3072
Texture Mapping Units    32                   192
Render Output Units      16                   96
Memory Type              GDDR5                GDDR5
Bus Width                128-bit              384-bit
Fab Process              28 nm                28 nm
Transistors              1870 million         8000 million
Bus                      PCIe 3.0 x16         PCIe 3.0 x16
DirectX Version          DirectX 11.0         DirectX 12.0
OpenGL Version           OpenGL 4.4           OpenGL 4.5

Memory Bandwidth: Bandwidth is the maximum amount of data (measured in megabytes per second) that can be transferred over the external memory interface in one second. It is calculated by multiplying the interface width (in bytes) by the memory clock speed. If the card uses DDR memory, the result is multiplied by 2; if it uses GDDR5, it is multiplied by 2 again, for 4x the real clock overall. The higher the bandwidth, the faster the card will generally be. It especially helps with anti-aliasing, High Dynamic Range, and high resolutions.
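As a concrete check of that rule, here is the calculation for both cards, using the bus widths and effective memory clocks from the specification table (the function name is our own):

```python
# Bandwidth in MB/sec = interface width in bytes x effective memory clock.
# For GDDR5 the effective clock is 4x the real clock (DDR doubles it,
# and GDDR5 doubles it again), so e.g. 1250 MHz real -> 5000 MHz effective.

def bandwidth_mb_per_sec(bus_width_bits: int, effective_clock_mhz: float) -> float:
    return bus_width_bits / 8 * effective_clock_mhz

print(bandwidth_mb_per_sec(128, 5000))  # GeForce GTX 750     -> 80000.0
print(bandwidth_mb_per_sec(384, 7000))  # GeForce GTX Titan X -> 336000.0
```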

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed in one second, measured in millions of texels per second. It is calculated by multiplying the card's total number of texture units by the core clock speed of the chip. The higher this number, the better the card will be at texture filtering (anisotropic filtering, or AF).

Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to local memory in one second, measured in millions of pixels per second. It is calculated by multiplying the number of Render Output Units (ROPs, also called Raster Operations Pipelines) by the card's clock speed. ROPs are responsible for outputting the final pixels (image) to the screen. The actual pixel output rate also depends on many other factors, most notably memory bandwidth: the lower the memory bandwidth, the harder it is to reach the maximum fill rate.
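Both fill-rate figures follow the same pattern: a unit count multiplied by the core clock. A quick sketch reproducing the table's numbers from the unit counts and clocks above (the function name is ours):

```python
# Texel rate = texture units x core clock; pixel rate = ROPs x core clock.
# With the clock in MHz, both come out in millions per second.

def fill_rate(units: int, core_clock_mhz: int) -> int:
    return units * core_clock_mhz

# GeForce GTX 750: 32 TMUs, 16 ROPs, 1020 MHz core
print(fill_rate(32, 1020))   # texel rate -> 32640 Mtexels/sec
print(fill_rate(16, 1020))   # pixel rate -> 16320 Mpixels/sec

# GeForce GTX Titan X: 192 TMUs, 96 ROPs, 1000 MHz core
print(fill_rate(192, 1000))  # texel rate -> 192000 Mtexels/sec
print(fill_rate(96, 1000))   # pixel rate -> 96000 Mpixels/sec
```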

