Compare any two graphics cards:
GeForce GTX 1050 vs GeForce GTX 275
Intro
The GeForce GTX 1050 comes with a core clock speed of 1354 MHz on the GPU and a 1750 MHz clock on its 2048 MB of GDDR5 memory. It features 640 SPUs along with 40 texture address units (TAUs) and 32 ROPs.
Compare those specs to the GeForce GTX 275, which is built on a 55 nm design. nVidia has clocked its core speed at 633 MHz, and the GDDR3 RAM runs at a frequency of 1134 MHz on this model. It features 240 SPUs along with 80 texture address units (TAUs) and 28 ROPs.
Power Usage and Theoretical Benchmarks
Power Consumption (Max TDP)
As far as overall performance goes, the GeForce GTX 275 should theoretically be a little bit better than the GeForce GTX 1050 on paper.
Texel Rate
The GeForce GTX 1050 should be a bit (about 7%) faster at texture filtering than the GeForce GTX 275.
Pixel Rate
The GeForce GTX 1050 should be much (roughly 144%) more effective at anti-aliasing than the GeForce GTX 275, and should also handle higher screen resolutions better.
Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) vary at least a bit.
Memory Bandwidth: Bandwidth is the maximum amount of data (measured in MB per second) that can be transferred over the external memory interface. It is calculated by multiplying the interface width by the memory clock speed; for DDR-type memory the result is doubled, and for GDDR5 it is multiplied by 4 instead. The higher the bandwidth, the faster the card will generally be. It especially helps with anti-aliasing, High Dynamic Range rendering and higher screen resolutions.
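As a sketch, the bandwidth arithmetic above can be checked in a few lines of Python. Note that the bus widths used here (128-bit for the GTX 1050, 448-bit for the GTX 275) come from the cards' reference specs and are not stated in the text above:

```python
def memory_bandwidth_gb_s(mem_clock_mhz: float, bus_width_bits: int, rate_multiplier: int) -> float:
    """Theoretical bandwidth: memory clock (MHz) x data-rate multiplier
    x interface width (bits) / 8 bits-per-byte -> MB/s; /1000 -> GB/s."""
    return mem_clock_mhz * rate_multiplier * bus_width_bits / 8 / 1000

# GTX 1050: GDDR5 (x4 multiplier), 128-bit bus (reference spec, assumed here)
print(memory_bandwidth_gb_s(1750, 128, 4))  # -> 112.0 GB/s

# GTX 275: GDDR3 (DDR-type, x2 multiplier), 448-bit bus (reference spec, assumed here)
print(memory_bandwidth_gb_s(1134, 448, 2))  # -> ~127.0 GB/s
```

This is also why the GTX 275 can come out slightly ahead of the GTX 1050 in a purely theoretical bandwidth comparison despite being a much older card.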
Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be processed per second. It is calculated by multiplying the card's total number of texture units by the core clock speed of the chip, and is measured in millions of texels per second. The higher this number, the better the card will be at texture filtering (anisotropic filtering - AF).
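Plugging in the specs quoted earlier (40 TAUs at 1354 MHz for the GTX 1050, 80 TAUs at 633 MHz for the GTX 275), a minimal sketch of this calculation reproduces the roughly 7% gap claimed above:

```python
def texel_rate_mtexels(texture_units: int, core_clock_mhz: float) -> float:
    # texture units x core clock (MHz) -> millions of texels per second
    return texture_units * core_clock_mhz

gtx_1050 = texel_rate_mtexels(40, 1354)  # 54160 Mtexels/s
gtx_275 = texel_rate_mtexels(80, 633)    # 50640 Mtexels/s
print(round((gtx_1050 / gtx_275 - 1) * 100))  # -> 7 (% advantage for the GTX 1050)
```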
Pixel Rate: Pixel rate is the maximum number of pixels the graphics card can write to its local memory per second, measured in millions of pixels per second. It is calculated by multiplying the number of Render Output Units by the card's clock speed. ROPs (Raster Operations Pipelines - also called Render Output Units) are responsible for outputting the pixels (image) to the screen. The actual pixel output rate also depends on quite a few other factors, most notably the card's memory bandwidth - the lower the memory bandwidth, the harder it is to reach the maximum fill rate.
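The same kind of sketch, using the ROP counts and core clocks quoted earlier (32 ROPs at 1354 MHz versus 28 ROPs at 633 MHz), reproduces the roughly 144% pixel-rate advantage claimed above:

```python
def pixel_rate_mpixels(rop_count: int, core_clock_mhz: float) -> float:
    # ROPs x core clock (MHz) -> millions of pixels per second
    return rop_count * core_clock_mhz

gtx_1050 = pixel_rate_mpixels(32, 1354)  # 43328 Mpixels/s
gtx_275 = pixel_rate_mpixels(28, 633)    # 17724 Mpixels/s
print(round((gtx_1050 / gtx_275 - 1) * 100))  # -> 144 (% advantage for the GTX 1050)
```

As the text notes, this is a theoretical ceiling: a card with limited memory bandwidth may never sustain its peak fill rate in practice.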