
Compare any two graphics cards:

GeForce GTX 780 Ti vs Radeon R9 290X


The GeForce GTX 780 Ti has a core clock of 875 MHz and 3072 MB of GDDR5 memory clocked at 1750 MHz. It also features a 384-bit memory bus and is built on a 28 nm process. It is made up of 2880 SPUs, 240 Texture Address Units, and 48 ROPs.

Compare those specs to the Radeon R9 290X, which comes with a GPU core speed of 800 MHz and 4096 MB of GDDR5 memory set to run at 1250 MHz through a 512-bit bus. It also features 2816 Stream Processors, 176 Texture Address Units, and 64 Raster Operation Units.

(No game benchmarks for this combination yet.)

Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

GeForce GTX 780 Ti 250 Watts
Radeon R9 290X 300 Watts
Difference: 50 Watts (20%)

Memory Bandwidth

The GeForce GTX 780 Ti should, in theory, perform slightly faster than the Radeon R9 290X overall.

GeForce GTX 780 Ti 336000 MB/sec
Radeon R9 290X 320000 MB/sec
Difference: 16000 MB/sec (5%)

Texel Rate

The GeForce GTX 780 Ti is approximately 49% faster at texture filtering than the Radeon R9 290X.

GeForce GTX 780 Ti 210000 Mtexels/sec
Radeon R9 290X 140800 Mtexels/sec
Difference: 69200 Mtexels/sec (49%)

Pixel Rate

The Radeon R9 290X should be approximately 22% faster at anti-aliasing than the GeForce GTX 780 Ti, and better able to handle higher screen resolutions.

Radeon R9 290X 51200 Mpixels/sec
GeForce GTX 780 Ti 42000 Mpixels/sec
Difference: 9200 Mpixels/sec (22%)

Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) vary at least a bit.
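The percentage figures quoted in the sections above appear to be the raw difference divided by the slower (or lower) card's value. A minimal sketch of that calculation, using the numbers from this page (the function name is ours, not the site's):

```python
def pct_advantage(faster, slower):
    """Advantage of the higher figure over the lower one, in percent,
    relative to the lower figure."""
    return round(100 * (faster - slower) / slower)

print(pct_advantage(336000, 320000))  # memory bandwidth: 5
print(pct_advantage(210000, 140800))  # texel rate: 49
print(pct_advantage(51200, 42000))    # pixel rate: 22
print(pct_advantage(300, 250))        # max TDP: 20
```

This reproduces each "Difference: X (Y%)" line on the page, including the 20% TDP gap.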

Price Comparison

Please note that the price comparisons are based on search keywords, and might not be the exact same card listed on this page. We have no control over the accuracy of their search results.



Model GeForce GTX 780 Ti Radeon R9 290X
Manufacturer nVidia ATi
Year November 2013 October 2013
Code Name GK110 Hawaii XT
Fab Process 28 nm 28 nm
Bus PCIe 3.0 x16 PCIe 3.0 x16
Memory 3072 MB 4096 MB
Core Speed 875 MHz 800 MHz
Shader Speed N/A N/A
Memory Speed 1750 MHz (7000 MHz effective) 1250 MHz (5000 MHz effective)
Unified Shaders 2880 2816
Texture Mapping Units 240 176
Render Output Units 48 64
Bus Type GDDR5 GDDR5
Bus Width 384-bit 512-bit
DirectX Version DirectX 11.1 DirectX 11.2
OpenGL Version OpenGL 4.4 OpenGL 4.3
Power (Max TDP) 250 watts 300 watts
Shader Model 5.0 5.0
Bandwidth 336000 MB/sec 320000 MB/sec
Texel Rate 210000 Mtexels/sec 140800 Mtexels/sec
Pixel Rate 42000 Mpixels/sec 51200 Mpixels/sec

Memory Bandwidth: Memory bandwidth is the maximum amount of data (measured in megabytes per second) that can be moved across the card's external memory interface in one second. It is calculated by multiplying the bus width (in bytes) by the memory clock speed. For DDR-type memory, multiply the result by 2; for GDDR5, multiply by 2 again (an effective 4 transfers per clock). The higher the bandwidth, the better the card will perform in general - it especially helps with AA, HDR, and high resolutions.
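As a sketch (not part of the original page), the rule above can be written out in Python, using the spec-table figures; the function name and parameters are our own:

```python
def memory_bandwidth_mb_s(bus_width_bits, memory_clock_mhz, transfers_per_clock):
    """Peak memory bandwidth in MB/sec.

    transfers_per_clock: 2 for plain DDR memory, 4 for GDDR5
    (the DDR doubling applied twice, as described above).
    """
    bytes_per_clock = bus_width_bits // 8  # bus width in bytes
    return bytes_per_clock * memory_clock_mhz * transfers_per_clock

# GTX 780 Ti: 384-bit bus, 1750 MHz GDDR5
print(memory_bandwidth_mb_s(384, 1750, 4))  # 336000
# R9 290X: 512-bit bus, 1250 MHz GDDR5
print(memory_bandwidth_mb_s(512, 1250, 4))  # 320000
```

Both results match the Bandwidth row of the spec table above.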

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second, measured in millions of texels applied per second. It is calculated by multiplying the number of texture units by the core clock speed of the chip. The higher this number, the better the video card will be at handling texture filtering (anisotropic filtering - AF).
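The same multiplication, sketched in Python with this page's figures (function name is ours):

```python
def texel_rate_mtexels_s(texture_units, core_clock_mhz):
    """Peak texel fill rate in Mtexels/sec: texture units x core clock."""
    return texture_units * core_clock_mhz

print(texel_rate_mtexels_s(240, 875))  # GTX 780 Ti: 210000
print(texel_rate_mtexels_s(176, 800))  # R9 290X: 140800
```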

Pixel Rate: Pixel rate is the maximum number of pixels the video card can write to its local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of ROPs by the card's core clock speed. ROPs (Raster Operations Pipelines - also sometimes called Render Output Units) are responsible for outputting the final pixels (the image) to the screen. The actual pixel rate also depends on many other factors, especially memory bandwidth - the lower the bandwidth, the harder it is to reach the maximum fill rate.
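And the pixel-rate formula as a quick sketch (again our own function, using the spec-table numbers):

```python
def pixel_rate_mpixels_s(rops, core_clock_mhz):
    """Peak pixel fill rate in Mpixels/sec: ROPs x core clock."""
    return rops * core_clock_mhz

print(pixel_rate_mpixels_s(48, 875))  # GTX 780 Ti: 42000
print(pixel_rate_mpixels_s(64, 800))  # R9 290X: 51200
```

This is the one theoretical metric where the R9 290X's 64 ROPs outweigh its lower clock.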


37 Responses to “GeForce GTX 780 Ti vs Radeon R9 290X”
Yan says:
really? go 780ti vs HD 7990 nab
Joshua A. says:
It's quite funny, to be honest - the 780 Ti beats the AMD Radeon R9 290X with 1 GB less RAM than the AMD card.
junction says:
Ya know whats even funnier? The 250 dollar price difference. If you want to pay out the a$$ for slightly better performance, Nvidia is your goto card.
Pantelos says:
Ya know what's even funnier: the R9 290X costs $549 and the GTX Ti costs $600, so for +$10 I think it's worth the difference.

It's really funny to say there's a $250 difference lol...

Scott says:
Ya know what's even funnier: that is actually a $50 difference, not $10 - but the price varies in different places. I live in Australia, where the R9 290X costs $699 and the 780 Ti costs $849.
chisoi says:
I'm from Romania... GTX 780 Ti = 3,049.99 RON and R9 290X = 2,285.77 RON; R9 290 = 1,683.08 RON. Nice try, NVIDIA!
Gadiel says:
Both cards are close in 4K gaming. In the end it comes down to technologies - both companies have good ones, so you have to choose the right one for you.
gabriel says:
but in games the R9 290X will be better, because the R9 290 is better than a GTX 780 Ti in benchmarks
Mohannad says:
I have done long research on this. The outcome is screaming 290X. Anyway, AMD and Nvidia sit together and set specifications and pricing after consulting with their marketing departments, all toward the mutual goal of squeezing as much money out of us as they can. Don't be a fanboy; be the person who wants the best for himself. Pursue happiness without a blinding agenda!
G Hanson says:
Seriously, the titan ti got a 7000 MHz memory speed...
ya digg says:
LOL! 5000 MHz on such an expensive card... the 7000 MHz titan just eats the 290X alive.
lacrate says:
Honestly, AMD isn't that mercenary... I'd buy an R9 290X over any other card... cost-benefit above everything... I value my money!!!
jimmx says:
Look at the memory bus width:
780 Ti 384-bit, R9 290X 512-bit.
Beat that, Nvidia!
Raspy says:
I see a lot of people going on about memory speed, blah blah blah - that's only part of the complete card and makes little to no difference on its own. OK, so you want a graphics card: what resolution, 1080p/2K/4K?
If you have lots of dosh to waste, get a 290X/780 Ti - but neither of them will run 4K very well; that's dual-card-setup territory at the moment, just look at the reviews.
If you're at 1080p/2K, get a 780/290 at most - they will do the job for years.
Forget synthetic benchmarks and look for real-world game FPS, then choose what you need. Too many fanboys who haven't got a clue posting here. Remember, most screens run at 60 Hz, so they won't display any FPS above that. If you are lucky enough to have a higher-spec 120 Hz screen, chances are you won't be worried about the cost of a graphics card and will just get the best.
Bottom line: don't be a fanboy, just get what makes sense for YOUR OWN use.
Jack says:
The first processor with 64-bit technology. AMD.
The first 2 Core Processor. AMD.
The first 4-core processor. AMD.
The first 6-core processor. AMD.
The first 8-core processor. AMD.
The first Accelerator with GDDR3. ATI.
The first Accelerator with GDDR5. ATI.
The first Accelerator DX10. ATI.
The first Accelerator DX11. ATI.
The first Accelerator with 512 bit memory bus. ATI.
The first dual GPU Accelerator. ATI
The first Accelerator with 4k technology.
History teaches us that AMD and ATI innovate all the time.
Where are Nvidia and Intel?
John says:
Really, all these fanboy fights are funny. Nvidia has always launched expensive top-notch GPUs - the $1000 Titan - but what really matters most is the mid range: 760/770 vs 280X/290, where AMD totally crushes Nvidia by a lot, including on price tag.
Jackson says:
Just get the best bang for your bucks!
Vicious says:
Memory clock isn't important..? It's about the most important thing for GPUs. 5000 MHz is pathetic on a card with such a price tag... the 780 Ti wins all around!
Kitten says:
Memory clock isn't everything; effective bandwidth is. The 290X has a wider bus (512-bit vs 384-bit) than the 780 Ti, so 320 GB/s vs 336 GB/s isn't as massive a difference as the 2 GHz gap in memory clock would lead you to believe.

The 290X is better value, and custom cards will reinforce this. The 780 Ti is a more efficient, cooler-running card that is slightly faster. It all comes down to the price gap between them, which can be as much as £100 here in the UK.

Anonymous says:
@Jack: Half of this is false. And let's just look back to the days when AMD and ATI didn't even exist and only Nvidia and Intel were around...
GDDR3 and GDDR5 technology are both from Intel/Nvidia; ATI just made an accelerator for them, same with DX10 and DX11. Although I admit the 512-bit memory bus is pretty good, it doesn't make a big difference compared to 384-bit.
And why didn't you mention:
"The first Accelerator with a 256-bit memory bus. Nvidia."??
Stop fanboying.

The fact is that the GTX 780 Ti won here,
but Nvidia+Intel is more of a high-performance "heavy tank" style for various tasks, while AMD+ATI (AMD+AMD nowadays) is especially for gaming.
This doesn't mean Nvidia+Intel is not good for gaming.
Also, Intel uses factory strength against the AMD "OC mania": AMDs are better for overclocking, but Intel parts will function for a much longer time.
I mean, an AMD/ATI will work for like 2-3 years, while Nvidia/Intel will still work after 5-6+ years.

So keep that in mind. As for price, there is not much to it; they are in about the same range, both Intel vs AMD and Nvidia vs ATI -
the 8-core CPUs from AMD don't cost much less than the i7s.

P.S.: Choose for your own desires...

anon says:
9300M is better
anon says:
The Zx Spectrum is better
Anonymous says:
you are wrong.
C64 is better!!!
Márcio Monteiro says:
It's amazing - the performance of a card that costs almost half of a GTX 780 Ti, yet has almost the same performance. I've always had Nvidias, since the Riva TNT (16-32), but the truth is that AMD is currently by far the best choice. On the processor side, though, I still think they are not at Intel's level.
broman says:
For all AMD fans out there:
nVidia could crush AMD. They have probably already designed the chips for the 8xx and 9xx series and could release them even a month from now. Proof? The 780 Ti had been in gold status for months, but they waited for AMD to respond to the Titan; when that happened, all of a sudden they introduced it. Maxwell (the 8xx-series architecture) was announced in 2010 - four years before they will have used it in GPUs. They are just waiting and watching AMD struggle to beat their 8-month-old card, which AMD finally achieves by running at 95C constantly and sounding like a vacuum cleaner. Conclusion? AMD is a nooby tryhard, nVidia is a lazy pro. You have to decide which one of them suits you.
shoot-|-here says:
Watercooling the 290X shows what it is truly capable of (as with any card, really).
It really is a matter of taste these days... I have used both, and when it comes down to it, I choose to save 20% and miss out on 5-10% of the performance.
That is OK with me. Others will disagree; that is their opinion and they are entitled to it. :)
When it comes to crypto-currency mining, AMD has it all over Nvidia... if that matters.
m says:
Hello everyone, if you look closely at how the R9 290X would do with its memory at 7000 MHz - just seeing that, it's obvious the R9 290X would beat the GTX 780 Ti, since even at 5000 MHz against 7000 MHz the R9 290X is only one step behind the GTX 780 Ti...
wmp007 says:
@Anonymous says:
January 11, 2014 at 8:59 pm

"I mean an AMD/ATI will work for like 2-3 years while Nvidia/Intel will still work after 5-6+ years"

Really? I got a 10 year old AMD system with an ATI GPU and it still works.

It makes no sense that you would post that. Hardware, no matter where it is produced or which company makes it, can still fail.

Go look at reviews on newegg and see how many people get DOA hardware, from both Nvidia and AMD.

LOLZ says:
@Jack, ATI the first dual GPU??? LOLZ, get your facts straight. The 3DFX Voodoo 5500 was the first to bring you dual GPU, and dual video card with SLI. Nvidia bought 3DFX's tech when they went bankrupt.
yucatan says:
Nvidia sucks! You Nvidia fanboys probably don't know about the Mantle API yet. Please - the world's two big gaming consoles run on AMD chips (PS4 & Xbox One), and they are still able to render those graphics buttery smooth. That's because AMD's GCN architecture is far more efficient than whatever Nvidia is using.
Nope says:
The Mantle API is not faster than DX12 plus good drivers. It is, in fact, a huge waste of time. Next.

NVidia is stupidly overpriced, AMD is stupidly underpowered. I wish this baby-steps bullshit between the GPU manufacturers would stop so we can get a palpable increase in graphical power.

Papa Zane says:
Holy shit, this is hilarious. Guys, they're both good cards, and frankly the choice comes down to preference; fanboying over either is ridiculous. Yes, Nvidia ridiculously overprices their products - we've known this for years - and yes, AMD doesn't hold the same benchmark power. But the thing is, they'll both hold up in any game for years to come, so just pick which one ya like.
Sam says:
Z80 is better!!!!!!!
