Compare any two graphics cards:
GeForce GT 340 1GB vs Radeon HD 5570
Intro
The GeForce GT 340 1GB runs its GPU core at 550 MHz and its 1024 MB of GDDR5 memory at 850 MHz. It features 96 SPUs, 32 Texture Address Units, and 8 Raster Operation Units. Compare those specifications to the Radeon HD 5570, which has a core clock speed of 650 MHz and a DDR3 memory speed of 900 MHz. It uses a 128-bit bus and is built on a 40 nm process. It features 400 (80x5) SPUs, 20 Texture Address Units, and 8 Raster Operation Units.
Display Graphs
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Memory Bandwidth
Theoretically, the GeForce GT 340 1GB should be much faster than the Radeon HD 5570 overall.
Texel Rate
The GeForce GT 340 1GB should be significantly (about 35%) faster at anisotropic filtering than the Radeon HD 5570.
Pixel Rate
If running at a high resolution is important to you, then the Radeon HD 5570 is the better choice, but only just.
Please note that the above 'benchmarks' are all just theoretical - the results were calculated based on the cards' specifications, and real-world performance may (and probably will) vary at least a bit.
Price Comparison
Display Prices
Please note that the price comparisons are based on search keywords - sometimes they might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Specifications
Display Specifications
Memory Bandwidth: Memory bandwidth is the maximum amount of data (measured in MB per second) that can be moved over the external memory interface in one second. It is worked out by multiplying the interface width by the speed of the memory. If the card uses DDR memory, the result should be multiplied by 2; if it uses GDDR5, it should be multiplied by 4. The higher the card's memory bandwidth, the better the card will perform in general. It especially helps with anti-aliasing, High Dynamic Range, and higher screen resolutions.
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second, measured in millions of texels per second. It is calculated by multiplying the total number of texture units by the core clock speed of the chip. The higher the texel rate, the better the video card will handle texture filtering (anisotropic filtering - AF).
Pixel Rate: Pixel rate is the maximum number of pixels that the graphics card can write to its local memory per second, measured in millions of pixels per second. It is worked out by multiplying the number of Render Output Units by the core clock speed of the card. ROPs (Raster Operations Pipelines - also sometimes called Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel output rate also depends on many other factors, most notably the memory bandwidth of the card - the lower the bandwidth, the lower the ability to reach the maximum fill rate.
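As a sketch, the three formulas above can be applied to the specs quoted in the intro. Note one assumption: the article only states a 128-bit bus for the Radeon HD 5570, so the same width is assumed here for the GeForce GT 340 1GB.

```python
# Recompute the theoretical figures from the specs quoted in the intro.
# DDR memory transfers data 2x per clock; GDDR5 transfers 4x per clock.

def memory_bandwidth_gb_s(mem_clock_mhz, bus_width_bits, transfers_per_clock):
    """Bandwidth = memory clock * bus width (in bytes) * transfers per clock."""
    return mem_clock_mhz * 1e6 * (bus_width_bits // 8) * transfers_per_clock / 1e9

def texel_rate_gtexel_s(core_clock_mhz, texture_units):
    """Texel rate = core clock * texture address units."""
    return core_clock_mhz * texture_units / 1000

def pixel_rate_gpixel_s(core_clock_mhz, rops):
    """Pixel rate = core clock * render output units (ROPs)."""
    return core_clock_mhz * rops / 1000

# GeForce GT 340 1GB: 550 MHz core, 850 MHz GDDR5, 32 TAUs, 8 ROPs
# (128-bit bus is an assumption - the article does not state it for this card)
gt340_bw  = memory_bandwidth_gb_s(850, 128, 4)   # 54.4 GB/s
gt340_tex = texel_rate_gtexel_s(550, 32)         # 17.6 GTexel/s
gt340_pix = pixel_rate_gpixel_s(550, 8)          # 4.4 GPixel/s

# Radeon HD 5570: 650 MHz core, 900 MHz DDR3, 128-bit bus, 20 TAUs, 8 ROPs
hd5570_bw  = memory_bandwidth_gb_s(900, 128, 2)  # 28.8 GB/s
hd5570_tex = texel_rate_gtexel_s(650, 20)        # 13.0 GTexel/s
hd5570_pix = pixel_rate_gpixel_s(650, 8)         # 5.2 GPixel/s

print(f"Texel-rate advantage: {gt340_tex / hd5570_tex - 1:.0%}")  # ~35%
```

These numbers line up with the comparisons above: the GT 340's bandwidth (thanks to GDDR5's 4x multiplier) is nearly double the HD 5570's, its texel rate is about 35% higher, while the HD 5570 holds a slim pixel-rate edge.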