
GeForce GTX 560 vs GeForce GTX 750

Intro

The GeForce GTX 560 features a core clock speed of 810 MHz and a GDDR5 memory frequency of 1001 MHz. It uses a 256-bit bus and is built on a 40 nm process. It is made up of 336 SPUs, 56 Texture Address Units, and 32 Raster Operation Units.

Compare those specs to the GeForce GTX 750, which uses a 28 nm design. nVidia has set the core speed at 1020 MHz. The GDDR5 memory runs at a speed of 1250 MHz on this particular card. It features 512 SPUs along with 32 Texture Address Units and 16 ROPs.


Benchmarks

These are real-world performance benchmarks that were submitted by Hardware Compare users. The scores seen here are the average of all benchmarks submitted for each respective test and hardware.

3DMark Fire Strike Graphics Score

GeForce GTX 750 3958 points
GeForce GTX 560 3030 points
Difference: 928 (31%)

Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

GeForce GTX 750 55 Watts
GeForce GTX 560 150 Watts
Difference: 95 Watts (173%)

Memory Bandwidth

In terms of raw memory bandwidth, the GeForce GTX 560 is 60% faster than the GeForce GTX 750, which should give it an advantage in bandwidth-limited workloads.

GeForce GTX 560 128128 MB/sec
GeForce GTX 750 80000 MB/sec
Difference: 48128 (60%)

Texel Rate

The GeForce GTX 560 should be considerably (approximately 39%) faster at texture filtering than the GeForce GTX 750.

GeForce GTX 560 45360 Mtexels/sec
GeForce GTX 750 32640 Mtexels/sec
Difference: 12720 (39%)

Pixel Rate

The GeForce GTX 560 should be much (about 59%) faster at anti-aliasing (AA) than the GeForce GTX 750, and better able to handle higher resolutions while still performing well.

GeForce GTX 560 25920 Mpixels/sec
GeForce GTX 750 16320 Mpixels/sec
Difference: 9600 (59%)

Please note that the figures above are theoretical - they are calculated from each card's specifications, and real-world performance may (and probably will) differ at least somewhat.
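The "Difference" rows above come from simple arithmetic: the absolute gap between the two values, and that gap expressed as a percentage of the smaller value. A minimal sketch in Python, using figures taken from this page:

```python
def percent_difference(a, b):
    """Absolute gap between two values, plus the gap relative to the smaller one."""
    diff = abs(a - b)
    return diff, round(diff / min(a, b) * 100)

# Memory bandwidth: 128128 vs 80000 MB/sec -> (48128, 60)
print(percent_difference(128128, 80000))
# Max TDP: 150 vs 55 watts -> (95, 173)
print(percent_difference(150, 55))
```

The same function reproduces the 39% texel rate and 59% pixel rate gaps from their respective tables.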

Price Comparison


GeForce GTX 560 - check prices at: Amazon.com

GeForce GTX 750 - check prices at: Amazon.com

Please note that the price comparisons are based on search keywords - sometimes it might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.

Specifications


Model | GeForce GTX 560 | GeForce GTX 750
Manufacturer | nVidia | nVidia
Year | May 2011 | February 2014
Code Name | GF114 | GM107
Memory | 1024 MB | 1024 MB
Core Speed | 810 MHz | 1020 MHz
Memory Speed | 4004 MHz (effective) | 5000 MHz (effective)
Power (Max TDP) | 150 watts | 55 watts
Bandwidth | 128128 MB/sec | 80000 MB/sec
Texel Rate | 45360 Mtexels/sec | 32640 Mtexels/sec
Pixel Rate | 25920 Mpixels/sec | 16320 Mpixels/sec
Unified Shaders | 336 | 512
Texture Mapping Units | 56 | 32
Render Output Units | 32 | 16
Bus Type | GDDR5 | GDDR5
Bus Width | 256-bit | 128-bit
Fab Process | 40 nm | 28 nm
Transistors | 1950 million | 1870 million
Bus | PCIe 2.0 x16 | PCIe 3.0 x16
DirectX Version | DirectX 11.0 | DirectX 11.0
OpenGL Version | OpenGL 4.1 | OpenGL 4.4

Memory Bandwidth: Memory bandwidth is the maximum amount of data (measured in megabytes per second) that can be transferred over the card's external memory interface. It is calculated by multiplying the bus width in bytes (bits divided by 8) by the memory's base clock speed. If the card uses DDR memory, the result is multiplied by 2; if it uses GDDR5, it is multiplied by 4 instead. The higher the memory bandwidth, the faster the card will generally be. It especially helps with anti-aliasing, HDR, and high resolutions.
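Applying this formula to the figures in the specification table reproduces the bandwidth numbers on this page exactly. A small sketch (GDDR5 transfers four times per base clock cycle, so the 560's 1001 MHz base clock gives the 4004 MHz effective speed listed above):

```python
def memory_bandwidth_mb_s(bus_width_bits, base_clock_mhz, transfers_per_clock=4):
    """Bus width in bytes x effective memory clock (base clock x 4 for GDDR5)."""
    return (bus_width_bits // 8) * base_clock_mhz * transfers_per_clock

print(memory_bandwidth_mb_s(256, 1001))  # GTX 560: 128128 MB/sec
print(memory_bandwidth_mb_s(128, 1250))  # GTX 750: 80000 MB/sec
```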

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second, measured in millions of texels per second. It is calculated by multiplying the card's number of texture units by the core clock speed of the chip. The higher the texel rate, the better the card will be at texture filtering (anisotropic filtering - AF).
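With the texture unit counts and core clocks from the specification table, this calculation matches the texel rates given earlier:

```python
def texel_rate_mtexels_s(texture_units, core_clock_mhz):
    """Texture units x core clock in MHz gives Mtexels/sec."""
    return texture_units * core_clock_mhz

print(texel_rate_mtexels_s(56, 810))   # GTX 560: 45360 Mtexels/sec
print(texel_rate_mtexels_s(32, 1020))  # GTX 750: 32640 Mtexels/sec
```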

Pixel Rate: Pixel rate is the maximum number of pixels the graphics chip can write to its local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of ROPs by the core clock speed of the card. ROPs (Raster Operations Pipelines - sometimes also referred to as Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel rate also depends on several other factors, especially the card's memory bandwidth - the lower the memory bandwidth, the lower the chance of reaching the maximum fill rate.
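The same pattern applies here, using the ROP counts and core clocks from the specification table:

```python
def pixel_rate_mpixels_s(rops, core_clock_mhz):
    """ROPs x core clock in MHz gives Mpixels/sec."""
    return rops * core_clock_mhz

print(pixel_rate_mpixels_s(32, 810))   # GTX 560: 25920 Mpixels/sec
print(pixel_rate_mpixels_s(16, 1020))  # GTX 750: 16320 Mpixels/sec
```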
