Compare any two graphics cards:
GeForce GTS 450 1GB vs Radeon HD 5570
Intro
The GeForce GTS 450 1GB has a core clock speed of 783 MHz and 1 GB of GDDR5 memory clocked at 902 MHz. It uses a 128-bit memory bus and is built on a 40 nm process. The chip contains 192 SPUs, 32 Texture Address Units, and 16 Raster Operation Units.
Compare those specs to the Radeon HD 5570, which runs its GPU at 650 MHz and its 512 MB of DDR3 memory at 900 MHz. It features 400 (80x5) SPUs along with 20 Texture Address Units and 8 Raster Operation Units.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
Memory Bandwidth
In theory, the GeForce GTS 450 1GB should be about 100% faster than the Radeon HD 5570 here, thanks to the much higher effective data rate of its GDDR5 memory.
Texel Rate
The GeForce GTS 450 1GB should be much better (roughly 93%) at anisotropic filtering (AF) than the Radeon HD 5570.
Pixel Rate
If running games at high resolutions is important to you, the GeForce GTS 450 1GB is by far the better choice: its theoretical pixel fill rate is roughly 141% higher.
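The three advantages quoted above can be reproduced directly from the specifications listed in the intro. The sketch below uses the standard formulas (bus width x memory clock x data pumps for bandwidth; unit count x core clock for texel and pixel rates); the function names are just illustrative.

```python
# Theoretical throughput figures derived from the specifications listed above.

def memory_bandwidth_gbs(bus_bits, mem_clock_mhz, pumps):
    """Bandwidth in GB/s: bus width (bytes) * memory clock * data transfers per cycle."""
    return bus_bits / 8 * mem_clock_mhz * 1e6 * pumps / 1e9

def texel_rate_mtexels(texture_units, core_clock_mhz):
    """Texel rate in millions of texels/s: texture units * core clock (MHz)."""
    return texture_units * core_clock_mhz

def pixel_rate_mpixels(rops, core_clock_mhz):
    """Pixel rate in millions of pixels/s: ROPs * core clock (MHz)."""
    return rops * core_clock_mhz

# GeForce GTS 450 1GB: 128-bit GDDR5 @ 902 MHz (quad-pumped), 32 TMUs, 16 ROPs, 783 MHz core
gts450_bw  = memory_bandwidth_gbs(128, 902, 4)  # ~57.7 GB/s
gts450_tex = texel_rate_mtexels(32, 783)        # 25,056 Mtexels/s
gts450_pix = pixel_rate_mpixels(16, 783)        # 12,528 Mpixels/s

# Radeon HD 5570: 128-bit DDR3 @ 900 MHz (double-pumped), 20 TMUs, 8 ROPs, 650 MHz core
hd5570_bw  = memory_bandwidth_gbs(128, 900, 2)  # ~28.8 GB/s
hd5570_tex = texel_rate_mtexels(20, 650)        # 13,000 Mtexels/s
hd5570_pix = pixel_rate_mpixels(8, 650)         # 5,200 Mpixels/s

print(f"Bandwidth advantage:  {gts450_bw  / hd5570_bw  - 1:.0%}")  # ~100%
print(f"Texel rate advantage: {gts450_tex / hd5570_tex - 1:.0%}")  # ~93%
print(f"Pixel rate advantage: {gts450_pix / hd5570_pix - 1:.0%}")  # ~141%
```

Note that these are ceiling figures: real games rarely hit all three limits at once, which is why the page calls them theoretical.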
Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) vary at least a bit.
Price Comparison
Please note that the price comparisons are based on search keywords - sometimes they might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.
Specifications
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in megabytes per second) that can be transferred across the external memory interface in one second. It is worked out by multiplying the bus width by the memory clock speed. If the card uses DDR memory, the result is multiplied by 2; if it uses GDDR5, it is multiplied by 2 again (4x in total). The higher the card's memory bandwidth, the faster the card will be in general. It especially helps with anti-aliasing (AA), HDR and high resolutions.
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second, measured in millions of texels per second. It is worked out by multiplying the total number of texture units by the core clock speed of the chip. The higher this number, the better the video card will be at texture filtering (anisotropic filtering - AF).
Pixel Rate: Pixel rate is the maximum number of pixels the graphics chip could possibly write to local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of Render Output Units (ROPs - Raster Operations Pipelines, the units responsible for drawing the pixels on the screen) by the core clock speed. The actual pixel output rate also depends on many other factors, most notably the card's memory bandwidth - the lower the memory bandwidth, the harder it is to reach the maximum fill rate.
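The data-rate multiplier described in the bandwidth definition can be captured in a small helper. This is a sketch; the memory-type table and function name are illustrative, not part of any real API.

```python
# Data transfers per clock cycle for common graphics memory types:
# SDR transfers once per cycle, DDR/DDR2/DDR3 twice, GDDR5 four times
# (double data rate, doubled again - the "another 2x" described above).
PUMPS = {"SDR": 1, "DDR": 2, "DDR2": 2, "DDR3": 2, "GDDR5": 4}

def bandwidth_mb_s(bus_width_bits, mem_clock_mhz, mem_type):
    """Peak bandwidth in MB/s: bus width in bytes * clock in MHz * pumps."""
    bytes_per_cycle = bus_width_bits // 8
    return bytes_per_cycle * mem_clock_mhz * PUMPS[mem_type]

print(bandwidth_mb_s(128, 900, "DDR3"))   # 28800 MB/s ~ 28.8 GB/s (HD 5570)
print(bandwidth_mb_s(128, 902, "GDDR5"))  # 57728 MB/s ~ 57.7 GB/s (GTS 450 1GB)
```

Applied to the two cards in this comparison, the GDDR5 card's 4x multiplier is what doubles its bandwidth despite the nearly identical memory clocks.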