Compare any two graphics cards:
GeForce 9800 GTX+ vs Radeon HD 5770
Intro

The GeForce 9800 GTX+ is built on a 55 nm process. nVidia has set the core speed at 738 MHz, and the GDDR3 memory runs at 1100 MHz on this particular model. It features 128 SPUs, 64 Texture Address Units, and 16 Rasterization Operator Units.

Compare all of that to the Radeon HD 5770, which has a core clock of 850 MHz on the GPU and 1200 MHz on its 1024 MB of GDDR5 RAM. It features 800 (160x5) SPUs along with 40 TAUs and 16 Rasterization Operator Units.
Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)
Memory Bandwidth

In theory, the Radeon HD 5770 will be 9% faster than the GeForce 9800 GTX+ in general, because of its greater memory bandwidth.

Texel Rate

The GeForce 9800 GTX+ should be quite a bit (approximately 39%) faster at anisotropic filtering than the Radeon HD 5770.

Pixel Rate

If using lots of anti-aliasing is important to you, then the Radeon HD 5770 is superior to the GeForce 9800 GTX+, but it probably won't make a huge difference.
Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) vary at least a bit.

Price Comparison
Please note that the price comparisons are based on search keywords - sometimes they might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.

Specifications
Memory Bandwidth: Memory bandwidth is the largest amount of data (in megabytes per second) that can be transferred across the external memory interface in one second. It is calculated by multiplying the card's interface width by its memory speed; if the card uses DDR memory, the result is multiplied by 2, and if it uses GDDR5, by 4 instead. The higher the bandwidth, the better the card will perform in general. It especially helps with anti-aliasing, HDR and high resolutions.

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied in one second, measured in millions of texels per second. It is worked out by multiplying the card's total texture units by the core speed of the chip. The higher the texel rate, the better the video card will be at texture filtering (anisotropic filtering - AF).

Pixel Rate: Pixel rate is the maximum number of pixels the video card could possibly write to its local memory in one second, measured in millions of pixels per second. It is worked out by multiplying the number of Render Output Units (ROPs - also known as Raster Operations Pipelines), which are responsible for drawing the pixels on the screen, by the core speed of the card. The actual pixel fill rate is also dependent on quite a few other factors, most notably the memory bandwidth of the card - the lower the bandwidth, the lower the potential to reach the maximum fill rate.
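As a rough sketch, the three formulas above can be applied to these two cards' specifications. The bus widths (256-bit for the GeForce 9800 GTX+, 128-bit for the Radeon HD 5770) are assumed from the cards' spec sheets rather than stated in the text; the other figures come from the intro.

```python
# Sketch of the three theoretical figures described above.
# Assumed (not in the text): 256-bit bus on the 9800 GTX+, 128-bit on the HD 5770.

def memory_bandwidth_mb_s(bus_width_bits, mem_clock_mhz, multiplier):
    """Bus width (bytes) x memory clock x 2 for DDR, or x 4 for GDDR5."""
    return (bus_width_bits // 8) * mem_clock_mhz * multiplier

def texel_rate_mtexels(texture_units, core_clock_mhz):
    """Texture units x core clock, in millions of texels per second."""
    return texture_units * core_clock_mhz

def pixel_rate_mpixels(rops, core_clock_mhz):
    """ROPs x core clock, in millions of pixels per second."""
    return rops * core_clock_mhz

# GeForce 9800 GTX+: 738 MHz core, 1100 MHz GDDR3, 64 TAUs, 16 ROPs
gtx_bw  = memory_bandwidth_mb_s(256, 1100, 2)   # 70400 MB/s
gtx_tex = texel_rate_mtexels(64, 738)           # 47232 Mtexels/s
gtx_pix = pixel_rate_mpixels(16, 738)           # 11808 Mpixels/s

# Radeon HD 5770: 850 MHz core, 1200 MHz GDDR5, 40 TAUs, 16 ROPs
hd_bw  = memory_bandwidth_mb_s(128, 1200, 4)    # 76800 MB/s
hd_tex = texel_rate_mtexels(40, 850)            # 34000 Mtexels/s
hd_pix = pixel_rate_mpixels(16, 850)            # 13600 Mpixels/s

print(f"Bandwidth:  HD 5770 ahead by {hd_bw / gtx_bw - 1:.0%}")      # ~9%
print(f"Texel rate: 9800 GTX+ ahead by {gtx_tex / hd_tex - 1:.0%}")  # ~39%
print(f"Pixel rate: HD 5770 ahead by {hd_pix / gtx_pix - 1:.0%}")    # ~15%
```

These figures reproduce the roughly 9%, 39% and 15% gaps quoted in the theoretical-benchmark section above.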
Comments
2 Responses to “GeForce 9800 GTX+ vs Radeon HD 5770”

[...] twice as good. It performs similarly (5% better overall) to a 5770, which is overall about 15% better than a 9800GTX. The 550 is a pretty low end card - better, yes; twice as good - definitely no. [...]
Actually, according to texel rate the 9800 GTX+ beats the 5770. If the two cards were tested at a lower resolution like 1280x1024, the 9800 GTX+ would take it in fps in almost all of the games above. And if you have a 17-inch monitor, that's the highest resolution you can go anyway.