GeForce 9800 GTX+ vs Radeon HD 5770
Intro
The GeForce 9800 GTX+ is built on a 55 nm design. NVIDIA has set the core speed at 738 MHz, and the GDDR3 memory runs at 1100 MHz on this particular model. It features 128 SPUs, along with 64 Texture Address Units and 16 ROPs.
Compare those specs to the Radeon HD 5770, which has a GPU core speed of 850 MHz and 1024 MB of GDDR5 memory running at 1200 MHz through a 128-bit bus. It is made up of 800 (160x5) SPUs, 40 TAUs and 16 Raster Operation Units.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Memory Bandwidth
Theoretically speaking, the Radeon HD 5770 should be about 9% faster than the GeForce 9800 GTX+ here, thanks to the higher effective data rate of its GDDR5 memory. (The calculations behind these percentages are sketched below.)
Texel Rate
The GeForce 9800 GTX+ should be noticeably (roughly 39%) better at anisotropic filtering than the Radeon HD 5770.
Pixel Rate
If running with lots of anti-aliasing is important to you, then the Radeon HD 5770 is the better choice, though not by a wide margin.
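For readers who want to see where those percentages come from, here is a minimal Python sketch that applies the formulas described in the Specifications section below to the numbers quoted in the intro. The 256-bit memory bus width of the 9800 GTX+ is not listed on this page, so treat that value as an assumption of the sketch (it is the card's published interface width); everything else comes from the specs above.

```python
# Theoretical figures derived from the specifications listed in the intro.
# Assumption: a 256-bit memory bus for the 9800 GTX+ (the bus width is only
# stated for the HD 5770 above; 256-bit is the GTX+'s published spec).

CARDS = {
    "GeForce 9800 GTX+": {"core_mhz": 738, "mem_mhz": 1100, "mem_mult": 2,  # GDDR3 -> x2
                          "bus_bits": 256, "taus": 64, "rops": 16},
    "Radeon HD 5770":    {"core_mhz": 850, "mem_mhz": 1200, "mem_mult": 4,  # GDDR5 -> x4
                          "bus_bits": 128, "taus": 40, "rops": 16},
}

def bandwidth_gb_s(c):
    # interface width in bytes x effective memory clock (MHz), converted to GB/s
    return c["bus_bits"] / 8 * c["mem_mhz"] * c["mem_mult"] / 1000

def texel_rate(c):
    # texture address units x core clock -> millions of texels per second
    return c["taus"] * c["core_mhz"]

def pixel_rate(c):
    # ROPs x core clock -> millions of pixels per second
    return c["rops"] * c["core_mhz"]

for name, c in CARDS.items():
    print(f"{name}: {bandwidth_gb_s(c):.1f} GB/s, "
          f"{texel_rate(c)} MTexels/s, {pixel_rate(c)} MPixels/s")

# GeForce 9800 GTX+: 70.4 GB/s, 47232 MTexels/s, 11808 MPixels/s
# Radeon HD 5770:    76.8 GB/s, 34000 MTexels/s, 13600 MPixels/s
# -> roughly 9% more bandwidth and 15% more pixel rate for the 5770,
#    and roughly 39% more texel rate for the 9800 GTX+.
```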
Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) differ at least a little.
Price Comparison
Please note that the price comparisons are based on search keywords, so they can sometimes include cards with very similar names that are not exactly the same model as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Specifications
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in megabytes per second) that can be transferred across the external memory interface in one second. It is calculated by multiplying the interface width by its memory clock speed. If the card uses DDR memory, the result is multiplied by 2 again; if it uses GDDR5, multiply by 4 instead. The higher the memory bandwidth, the better the card will perform in general. It especially helps with anti-aliasing, High Dynamic Range and higher screen resolutions.

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second. This figure is calculated by multiplying the total number of texture units by the core clock speed of the chip. The higher this number, the better the graphics card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels per second.

Pixel Rate: Pixel rate is the maximum number of pixels the graphics chip can write to its local memory per second, measured in millions of pixels per second. The figure is worked out by multiplying the number of colour ROPs by the core clock speed. ROPs (Raster Operations Pipelines - also called Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel fill rate also depends on many other factors, especially memory bandwidth - the lower the memory bandwidth, the harder it is to reach the maximum fill rate.
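As a quick worked example of the bandwidth formula using the figures from this comparison: the Radeon HD 5770's 128-bit interface is 128 / 8 = 16 bytes wide, multiplied by its 1200 MHz memory clock and then by 4 for GDDR5, giving 16 x 1200 x 4 = 76,800 MB/s, or roughly 76.8 GB/s. The 9800 GTX+'s GDDR3 would use the x2 multiplier in the same calculation.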
Comments
2 Responses to “GeForce 9800 GTX+ vs Radeon HD 5770”
[...] twice as good. It performs similarly (5% better overall) to a 5770, which is overall about 15% better than a 9800GTX. The 550 is a pretty low end card - better, yes; twice as good - definitely no. [...]
Actually, according to texel rate the 9800 GTX+ beats the 5770. If the two cards were tested at a lower resolution like 1280x1024, the 9800 GTX+ would take it in fps in almost all games above. And if you have a 17-inch monitor, that's the highest res you can go anyway.