Compare any two graphics cards:
GeForce GT 340 vs Radeon HD 4770
Intro

The GeForce GT 340 is built on a 40 nm process. nVidia has clocked the core at 550 MHz, and the GDDR5 memory on this particular model runs at 850 MHz. It features 96 SPUs along with 32 Texture Address Units (TAUs) and 8 Raster Operation Units (ROPs).

Compare those specifications to the Radeon HD 4770, which comes with a core clock of 750 MHz and 512 MB of GDDR5 memory clocked at 800 MHz. It features 640 (128x5) SPUs as well as 32 TAUs and 16 ROPs.
Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)
Memory Bandwidth

As far as memory bandwidth goes, the GeForce GT 340 should theoretically be just a bit ahead of the Radeon HD 4770.
Texel Rate

The Radeon HD 4770 should be considerably (about 36%) better at texture filtering than the GeForce GT 340.
Pixel Rate

If running with high levels of anti-aliasing (AA) is important to you, then the Radeon HD 4770 is by far the better choice - its theoretical pixel fill rate is nearly three times that of the GeForce GT 340.
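The texel- and pixel-rate figures above can be reproduced directly from the clocks and unit counts listed in the intro. A minimal sketch (values taken from this page):

```python
# Texel rate = core clock (MHz) x texture address units -> millions of texels/s
# Pixel rate = core clock (MHz) x ROPs                  -> millions of pixels/s

def texel_rate(core_mhz, taus):
    return core_mhz * taus

def pixel_rate(core_mhz, rops):
    return core_mhz * rops

# GeForce GT 340: 550 MHz core, 32 TAUs, 8 ROPs
# Radeon HD 4770: 750 MHz core, 32 TAUs, 16 ROPs
gt340_texel, hd4770_texel = texel_rate(550, 32), texel_rate(750, 32)
gt340_pixel, hd4770_pixel = pixel_rate(550, 8), pixel_rate(750, 16)

print(hd4770_texel / gt340_texel)  # ~1.36 -> the "about 36%" texture-filtering edge
print(hd4770_pixel / gt340_pixel)  # ~2.73 -> why the HD 4770 pulls ahead at high AA
```

The 36% figure quoted above is simply the ratio 24000/17600 of the two theoretical texel rates.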
Please note that the above 'benchmarks' are all just theoretical - the results were calculated from each card's specifications, and real-world performance may (and probably will) vary at least a bit.

Price Comparison
Please note that the price comparisons are based on search keywords - sometimes they might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.

Specifications
Memory Bandwidth: Memory bandwidth is the maximum rate at which data can be moved across the card's external memory interface, measured in MB per second. It is calculated by multiplying the interface width (in bytes) by the memory clock speed; for DDR memory, multiply the result by 2, and for GDDR5, multiply by 4 instead. The higher the bandwidth, the better the card will generally perform. It especially helps with AA, High Dynamic Range rendering and higher screen resolutions.

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second, measured in millions of texels per second. It is calculated by multiplying the total number of texture units by the core clock speed of the chip. The higher the texel rate, the better the card will be at handling texture filtering (anisotropic filtering - AF).

Pixel Rate: Pixel rate is the maximum number of pixels the graphics chip can write to its local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of ROPs by the clock speed of the card. ROPs (Raster Operations Pipelines - aka Render Output Units) are responsible for drawing the pixels (image) on the screen. The actual pixel rate also depends on many other factors, most notably memory bandwidth - the lower the bandwidth, the harder it is to reach the maximum fill rate.
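The memory-bandwidth rule above can be sketched as a short calculation. Note that the 128-bit bus width is an assumption - it is not listed on this page, though it matches both cards' reference designs; the memory clocks are from the intro:

```python
# Memory bandwidth = interface width (bytes) x memory clock x transfers per clock.
# GDDR5 moves four transfers per clock; plain DDR moves two.
TRANSFERS_PER_CLOCK = {"SDR": 1, "DDR": 2, "GDDR5": 4}

def mem_bandwidth_gbs(bus_bits, clock_mhz, mem_type):
    """Theoretical memory bandwidth in GB/s."""
    return (bus_bits / 8) * clock_mhz * 1e6 * TRANSFERS_PER_CLOCK[mem_type] / 1e9

# Assumed 128-bit buses for both cards.
print(mem_bandwidth_gbs(128, 850, "GDDR5"))  # GeForce GT 340: ~54.4 GB/s
print(mem_bandwidth_gbs(128, 800, "GDDR5"))  # Radeon HD 4770: ~51.2 GB/s
```

Under these assumptions, the GT 340's slightly higher memory clock is what gives it the small bandwidth lead noted earlier.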