Compare any two graphics cards:
GeForce 9800 GTX+ vs Radeon HD 5750 1GB
Intro
The GeForce 9800 GTX+ uses a 55 nm design. NVIDIA has clocked the core at 738 MHz. The GDDR3 memory runs at 1100 MHz on this particular card. It features 128 SPUs along with 64 texture address units (TAUs) and 16 raster operation units (ROPs).
Compare those specifications to the Radeon HD 5750 1GB, which makes use of a 40 nm design. AMD has set the core clock at 700 MHz. The GDDR5 memory runs at 1150 MHz on this particular card. It features 720 (144×5) SPUs as well as 36 TAUs and 16 ROPs.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Memory Bandwidth
In theory, the Radeon HD 5750 1GB should perform slightly faster than the GeForce 9800 GTX+ in general.
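That claim can be checked with a quick sketch of the bandwidth formula described later on this page (interface width × effective memory clock). The bus widths used here (256-bit for the 9800 GTX+, 128-bit for the HD 5750) are not stated in this comparison and are taken from the cards' published specifications:

```python
def bandwidth_gb_s(bus_bits, mem_clock_mhz, data_rate_multiplier):
    """Theoretical bandwidth in GB/s: bytes per transfer * clock * transfers per clock."""
    return (bus_bits / 8) * mem_clock_mhz * data_rate_multiplier / 1000

# GDDR3 transfers data twice per clock (x2); GDDR5 transfers four times (x4).
gtx_plus = bandwidth_gb_s(256, 1100, 2)  # 70.4 GB/s
hd_5750 = bandwidth_gb_s(128, 1150, 4)   # 73.6 GB/s
```

Despite its narrower bus, the HD 5750's quad-pumped GDDR5 gives it a slight edge, which is why it comes out marginally ahead here.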
Texel Rate
The GeForce 9800 GTX+ is much faster (roughly 87%) at anisotropic filtering than the Radeon HD 5750 1GB.
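The 87% figure follows directly from the texel-rate formula (texture units × core clock) using the specs listed in the intro:

```python
# Texel rate = texture address units * core clock, in MTexels/s.
gtx_texel = 64 * 738   # 47,232 MTexels/s
hd_texel = 36 * 700    # 25,200 MTexels/s

advantage = gtx_texel / hd_texel - 1  # ~0.87, i.e. about 87% faster
```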
Pixel Rate
If using high levels of anti-aliasing is important to you, then the GeForce 9800 GTX+ is the winner, though not by a very large margin.
Please note that the 'benchmarks' above are purely theoretical - the results are calculated from the cards' specifications, and real-world performance may (and probably will) differ at least a little.
Price Comparison
Please note that the price comparisons are based on search keywords - sometimes they may show cards with very similar names that are not exactly the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Specifications
Memory Bandwidth: Bandwidth is the maximum amount of data (in MB per second) that can be moved across the external memory interface. The number is calculated by multiplying the interface width by the speed of its memory. If the card uses DDR memory, the result is multiplied by 2; if it uses GDDR5, it is multiplied by another 2 (4 in total), since GDDR5 transfers four times per clock. The higher the bandwidth, the better the card will generally perform. It especially helps with anti-aliasing, HDR and higher screen resolutions.
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second, measured in millions of texels per second. It is calculated by multiplying the total number of texture units by the core clock of the chip. The higher this number, the better the card will be at texture filtering (anisotropic filtering - AF).
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second. The figure is calculated by multiplying the number of ROPs by the clock speed of the card. ROPs (Raster Operations Pipelines, also known as Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel output rate also depends on many other factors, most notably the memory bandwidth - the lower the memory bandwidth, the lower the ability to reach the maximum fill rate.
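The three definitions above can be tied together in one small sketch. The card specs are the two from this comparison; the bus widths and the per-memory-type multipliers are assumptions based on the cards' published specifications rather than numbers stated on this page:

```python
# Transfers per clock for each memory type: DDR/GDDR3 are double-pumped,
# GDDR5 is quad-pumped (the "multiply by 2 again" in the text above).
MEM_MULTIPLIER = {"DDR": 2, "GDDR3": 2, "GDDR5": 4}

def theoretical_rates(core_mhz, mem_mhz, mem_type, bus_bits, tmus, rops):
    """All three theoretical figures for one card, from its raw specs."""
    return {
        "bandwidth_gb_s": (bus_bits / 8) * mem_mhz * MEM_MULTIPLIER[mem_type] / 1000,
        "texel_rate_mtexels": tmus * core_mhz,
        "pixel_rate_mpixels": rops * core_mhz,
    }

geforce_9800_gtx_plus = theoretical_rates(738, 1100, "GDDR3", 256, 64, 16)
radeon_hd_5750 = theoretical_rates(700, 1150, "GDDR5", 128, 36, 16)
```

Running this reproduces the conclusions drawn earlier: the HD 5750 edges ahead on bandwidth, while the 9800 GTX+ leads clearly on texel rate and slightly on pixel rate.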
Comments
One Response to “GeForce 9800 GTX+ vs Radeon HD 5750 1GB”
[...] http://www.hwcompare.com/1351/geforc...n-hd-5750-1gb/ 9800 GTX+ Comes close but uses way more power. GREAT price for it though I use this in my HTPC and game on it no probs just don't expect max settings. __________________ I build computers to play games on, and spend so much time tweaking I end up not playing games. Helpppp [...]