Compare any two graphics cards:
GeForce GT 320 vs Radeon HD 5550
Intro
The GeForce GT 320 is built on a 40 nm process. NVIDIA has set the core clock at 540 MHz, and the GDDR3 memory on this particular card runs at 790 MHz. It features 72 SPUs along with 24 Texture Address Units and 8 Rasterization Operator Units.
Compare those specs to the Radeon HD 5550, which has a core clock of 550 MHz and 512 MB of DDR2 memory running at 400 MHz. It features 320 (64x5) SPUs along with 16 Texture Address Units and 8 ROPs.
Power Usage and Theoretical Benchmarks
Memory Bandwidth
The GeForce GT 320 should theoretically perform quite a bit faster than the Radeon HD 5550 in general, as its 790 MHz GDDR3 memory runs at nearly double the effective speed of the HD 5550's 400 MHz DDR2, which works out to roughly twice the bandwidth (assuming comparable bus widths).
Texel Rate
The GeForce GT 320 is a lot (about 47%) faster at anisotropic filtering than the Radeon HD 5550, since its 24 texture units at 540 MHz deliver a considerably higher texel rate than the HD 5550's 16 units at 550 MHz.
Pixel Rate
If using high levels of anti-aliasing (AA) is important to you, then the Radeon HD 5550 is superior to the GeForce GT 320, though only by a very small margin: both cards have 8 ROPs, so the HD 5550's edge comes entirely from its slightly higher core clock.
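The texel and pixel rate figures above fall straight out of the specs quoted in the intro. A quick sketch of the arithmetic (the helper names are just for illustration):

```python
# Theoretical rates from the listed specs:
# texel rate = texture units x core clock; pixel rate = ROPs x core clock.

def texel_rate_mtexels(tmus, core_mhz):
    """Maximum texels applied per second, in millions."""
    return tmus * core_mhz

def pixel_rate_mpixels(rops, core_mhz):
    """Maximum pixels written per second, in millions."""
    return rops * core_mhz

gt320_texel = texel_rate_mtexels(24, 540)    # GeForce GT 320
hd5550_texel = texel_rate_mtexels(16, 550)   # Radeon HD 5550
gt320_pixel = pixel_rate_mpixels(8, 540)
hd5550_pixel = pixel_rate_mpixels(8, 550)

# The GT 320 leads on texel rate by about 47%; the HD 5550's pixel
# rate lead is only its 550 vs 540 MHz clock, since ROP counts match.
print(round(gt320_texel / hd5550_texel - 1, 2))
```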
Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) vary at least a bit.
Price Comparison
Please note that the price comparisons are based on search keywords - sometimes they might show cards with very similar names that are not exactly the same as the ones chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Specifications
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in GB per second) that can be transported over the external memory interface. It's calculated by multiplying the interface width by the memory clock speed. For DDR-type memory (including GDDR3), the result must be multiplied by 2; for GDDR5, which transfers four times per clock, multiply by 4 instead. The higher the card's memory bandwidth, the better the card will generally perform. It especially helps with anti-aliasing, High Dynamic Range rendering and higher screen resolutions.
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second, measured in millions of texels per second. It is worked out by multiplying the total number of texture units of the card by the core clock speed of the chip. The higher the texel rate, the better the card will be at handling texture filtering (anisotropic filtering - AF).
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of Render Output Units by the core clock speed. ROPs (Raster Operations Pipelines - also sometimes called Render Output Units) are responsible for filling the screen with pixels (the image). The actual pixel output rate also depends on many other factors, especially memory bandwidth - the lower the memory bandwidth, the lower the potential to reach the maximum fill rate.
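The bandwidth formula above can be sketched as follows. Note that the comparison does not list the memory interface widths for these two cards, so the 128-bit figures below are an assumption for illustration only:

```python
# Memory bandwidth = (bus width in bits / 8) x memory clock x data-rate multiplier.
# Multiplier: 2 for DDR-type memory (DDR2, GDDR3), 4 for GDDR5.

def bandwidth_gb_s(bus_bits, mem_clock_mhz, ddr_multiplier):
    """Theoretical memory bandwidth in GB/s."""
    return (bus_bits / 8) * mem_clock_mhz * ddr_multiplier / 1000

# ASSUMPTION: 128-bit buses for both cards (not stated in this comparison).
gt320 = bandwidth_gb_s(128, 790, 2)   # GDDR3 at 790 MHz
hd5550 = bandwidth_gb_s(128, 400, 2)  # DDR2 at 400 MHz
print(gt320, hd5550)
```

Under that assumption the GT 320 lands near twice the HD 5550's bandwidth, matching the memory bandwidth verdict above.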