This is an attempt to replace some of the common myths, urban legends and misunderstandings with some facts.
(last updated 20181103)
There are at least five physical issues and one biological one that are interesting:
1) Rayleigh’s criterion and sensor width.
F * √N < W , (N number of pixels in Mpixels, W in mm, red light, F aperture number)
2) The eye’s real life resolution
Resolution beyond 1/12000 of viewing distance is pointless for most
3) Thermal noise.
Relative noise is proportional to 1/d, (+)almost √N and √T, (N number of pixels in Mpixels, d sensor diameter, T in kelvin, i.e. °C+273.15)
4) Effective pixel count
The effective pixel count falls as the amount of light falls; the smaller the pixels, the worse.
5) Stacking images
Stacking can compensate, but cannot perform miracles.
6) Thickness of device versus sensor size
The angle at which light can reach the sensor edges ties the maximum sensor size to the device thickness.
DISCLAIMER on the language: this is a rough translation/rewrite and enhancement based on a Google-translation of a Danish text.
I’ve tried to collect some formulas and numbers on what is really wrong with this megapixel-craze, especially with tiny lenses.
I’m looking at PHYSICAL and BIOLOGICAL limits, no matter how ideal electronics, optics and light conditions are.
I will continually seek to edit this post as I get more information, or to make something easier to read/understand.
Please let me know in the comments if something is incorrect or unclear, so we can gather some facts that people can use.
on 1) Rayleigh’s criterion and sensor width.
I have tried, as best I can, to do a little calculation based on the Rayleigh criterion.
From e.g. http://en.wikipedia….ular_resolution we have a limit on the minimum pixel size of the sensor:
1.220 * λ * f / D < Δl
where f is the focal length, λ the wavelength and D is the aperture diameter. The 1.220 is an empirical constant related to the eye’s perception of the Airy disc; this figure can probably be reduced somewhat by electronic detection.
Calling the number of pixels in the wide direction Nw, there must be a minimum image sensor width W given by:
Nw*Δl < W
If we combine the two and use that f/D is the aperture number F of the lens, we get
Nw * 1.220 * λ * F < W
Assuming red light 700nm (green and blue gives weaker requirements), we have:
Nw * 854nm * F < W
or
F < dg / 0.854μm , with dg the grid distance centre to centre of the pixels (almost equal to the pixel width in modern sensors)
And due to a numerical coincidence [*] it can be written as the following (that is QUITE a bit easier to remember) for the theoretically smallest possible sensor width:
F * √N < W , (N number of pixels in Mpixels, W in mm, red light and F aperture number)
within +1.4% (4:3) and -4.6% (3:2), respectively. And that is plenty accurate!
[*]For the nerds: Nw*854nm ≈ 1mm*√(N/Mpixels) since N=Nw*Nw/AspRat and 0.854*√AspRat≈1 since √(3/4) / 0.854 = 1.014 and √(2/3) / 0.854 = 0.956
A few examples:
F * 2.2mm < W , for 5 Mpixel
F * 3.5mm < W , for 12 Mpixel
F * 4.0mm < W , for 16 Mpixel
F * 4.5mm < W , for 20 Mpixel
a sensor with 1.4μm pixels should have a lens with F < 1.4/0.854 = 1.64 (and with 1.2μm pixels, F < 1.41)
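For those who like to check the arithmetic, here is a minimal Python sketch of the two rules of thumb above (the function names are my own, not from any library):

```python
# Quick checks of the Rayleigh-based rules of thumb above (red light, 700 nm).

RAYLEIGH_FACTOR = 1.220          # empirical Airy-disc factor from the text
RED_WAVELENGTH_UM = 0.700        # 700 nm in micrometres

def max_f_number(pixel_pitch_um):
    """Largest aperture number F that still resolves detail at pixel level."""
    return pixel_pitch_um / (RAYLEIGH_FACTOR * RED_WAVELENGTH_UM)  # = dg / 0.854 um

def min_sensor_width_mm(megapixels, f_number):
    """Theoretical minimum sensor width in mm via the F * sqrt(N) < W rule."""
    return f_number * megapixels ** 0.5

print(max_f_number(1.4))             # ~1.64, as in the example above
print(min_sensor_width_mm(20, 1.0))  # ~4.5 mm per unit of F, cf. the list above
```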
If we now look at the megapixel craze, specifically in mobile phones with their ridiculously small lenses, and at the now widespread 5 Mpixel sensors (they have even started with 12 Mpixel), I am not saying that this is impossible, but it requires a REALLY, REALLY fast lens for the dimensions involved.
An interesting claim from here:
This is under ideal conditions, but if we do the calculation from the 275 Mpixel [***] we get an upper limit.
Based on this table of sensor sizes http://www.dpreview….sensorsizes.asp I have added a column with the ratio to the full frame sensor area, and a scaling of the 275 Mpix at F2.8. I have also added some intermediate rows (data via wiki) and removed some columns to make it less confusing.
(Note that the actual sensor diagonal is approximately 2/3 (63%-72%) of the advertised figure!! A 1/2″ sensor thus actually measures only about 1/3″ or 8mm)
Thus we get:
Sensor   Asp.   Width(mm)  Height(mm)  x*Full    MPixel      MPix @ F2.8
1/3.6"   4:3      4.000      3.000     1/72.0      29.9/F²       3.8
1/3.2"   4:3      4.536      3.416     1/55.8      38.7/F²       4.9
1/3"     4:3      4.800      3.600     1/50.0      43.1/F²       5.5
1/2.7"   4:3      5.270      3.960     1/41.4      52.1/F²       6.6
1/2.5"   4:3      5.760      4.290     1/35.0      61.7/F²       7.9
1/2.3"   4:3      6.160      4.620     1/30.4      71.0/F²       9.1
1/2"     4:3      6.400      4.800     1/28.1      76.7/F²       9.7
1/1.8"   4:3      7.176      5.319     1/22.6      95.1/F²      12.1
1/1.7"   4:3      7.600      5.700     1/19.9     108.1/F²      13.8
1/1.6"   4:3      8.080      6.010     1/17.6     122.2/F²      15.6
2/3"     4:3      8.800      6.600     1/14.9     144.9/F²      18.2
1/1.2"   4:3     11.000      8.250     1/9.53     226.4/F²      28.9
CX       3:2     13.200      8.800     1/7.44     289.9/F²      37.0
1"       4:3     12.800      9.600     1/7.03     306.6/F²      39.1
μ4/3     4:3     17.300     13.000     1/3.84     561.2/F²      71.6
4/3"     4:3     18.000     13.500     1/3.56     606.4/F²      77.4
APS-C    3:2     25.100     16.700     1/2.06    1046.0/F²     133.4
APS-H   13:9     29.200     20.200     1/1.46    1471.9/F²     187.7
35 mm    3:2     36.000     24.000     1.0000    2156.0/F²     275.0  ◄ Starting point
645      4:3     56.000     41.500     2.6898    5799.2/F²     739.7
From this you can see that, based on the 275 Mpixel for full frame under ideal conditions and F2.8, the corresponding limit for a
1/2.5″ sensor is 7.9 Mpixel
and for the now very widespread
1/1.8″ sensor is 12 Mpixel
and
1/1.7″ sensor is 14 Mpixel
But remember that these figures assume ideal conditions and F2.8, so in reality the limits for a meaningful megapixel count would be lower.
Note that the third-to-last column “x*Full” is approximately (within a few percent for the 4:3 sensors, due to the varying aspect ratios) one over the square of the crop factor. A 1/3″ sensor with area factor 1/50 thus corresponds to a crop factor of about 7.
[***] We can easily verify that these numbers match the above formula from the Rayleigh criterion
Nw * 1.220 * λ * F < W
using green light λ=520nm and F2.8 that gives
Nw * 1776nm < W
and W=36mm thus matches Nw < 20267, and with 3:2 this means 274 Mpixels (= Nw·Nw·2/3), so this is probably the formula that has been used.
As the factor 1.220 is related to the eye’s perception of the Airy disc, this constant might be reduced slightly by electronic detection; on the other hand the calculation uses green light, whereas red light is a harder limit, so the numbers in the table should hold fairly well anyway, but should certainly not be taken as exact (as the decimals might suggest).
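As a sanity check, here is a small Python sketch that reproduces the 275 Mpixel full-frame starting point and a couple of rows of the table above directly from the Rayleigh criterion (green light 520nm, F2.8; the function name is my own):

```python
# Rayleigh-limited megapixel count for a sensor of given dimensions.

def mpixel_limit(width_mm, height_mm, f_number, wavelength_nm=520):
    """Megapixels at which the pixel pitch reaches the Rayleigh limit."""
    min_pitch_mm = 1.220 * wavelength_nm * 1e-6 * f_number   # minimum pixel pitch in mm
    return (width_mm / min_pitch_mm) * (height_mm / min_pitch_mm) / 1e6

print(mpixel_limit(36.0, 24.0, 2.8))   # ~274, matching the ~275 Mpixel starting point
print(mpixel_limit(5.76, 4.29, 2.8))   # ~7.8, cf. 7.9 Mpixel for 1/2.5" in the table
print(mpixel_limit(6.16, 4.62, 2.8))   # ~9.0, cf. 9.1 Mpixel for 1/2.3" in the table
```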
Empirically, take the Ricoh CX1 as an example, which allegedly has F3.5 and a small 1/2.3″ sensor with 9.3 Mpixel. By the table that is 1.6 times the upper limit of 71.0/3.5² = 5.8 Mpixel, yet zoomed in I can see that each pixel holds sensible information. As an example, zoom into this macro crop where the individual pixels around the micro-writing are distinct (one can even see that the hologram foil is bent up by paper fibres and distorts the image).
But we are close! Look at this crop with even smaller micro-writing, where there are pixels you would expect to have a different colour.
The CX1 is even just an ordinary “consumer camera” and not an “enthusiast camera”.
But we are getting very close: the CX3 SHOULD, by pure numbers, be able to show finer details, as it can both shoot macro of a smaller area and at higher resolution (see the macro section of my review of the CX3), but it is only marginally better.
Wanted: Does anyone have a camera that, even under the best lighting conditions and with a sharp image, does not show distinct information in each pixel, even in the best part of the image?
A good guess could be the “Nikon Coolpix S6100”, with F3.7 and a 1/2.3″ sensor of 16 Mpixel. That is 3.1 times the limit of 71.0/3.7² = 5.2 Mpixel from the table.
Or maybe a phone ??
on 2) The eye’s real life resolution
There is a widespread urban legend, among others pushed by aPple, that there is a limit on the absolute resolution a human can see.
This is utter nonsense, except at a specific distance, as the limit is on the resolution relative to the viewing distance.
For most people it lies around 1/12000 of the viewing distance. For my aged eyes it is about 1/7000, and a few have reported vision as good as 1/26000.
See this eye resolution test here
Please note that this is NOT a limit on what is NEEDED, but a limit beyond which it is pointless to go. On a mobile device you would most likely accept a coarser resolution to save some battery.
The requirements for stills are higher than for video, as the eye/brain is very good at smoothing out moving content. Remember old TV images.
The 1/12000 limit matches
6000dpi distance at least 2 inch or 5cm (few can probably focus so close …)
1200dpi distance at least 10 inches or 25cm
600dpi distance at least 20 inches or 50cm [**], suitable for ordinary A4 letters
150 dpi distance at least 80 inches or 2m, suitable for large posters, small banners
10 dpi distance of 1200 inches, or 30m, suitable for large outdoor banners
(Or by reverse lookup: Minimum resolution for a given viewing distance)
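A minimal Python sketch of the 1/12000 rule in both directions (the helper names are my own):

```python
# The 1/12000-of-viewing-distance rule, in both directions.

ACUITY = 12000   # ~1/12000 of the viewing distance for most people

def max_useful_dpi(distance_inches):
    """Resolution beyond this is pointless at the given viewing distance."""
    return ACUITY / distance_inches

def min_viewing_distance_inches(dpi):
    """Closest distance at which the given dpi still looks free of pixels."""
    return ACUITY / dpi

print(max_useful_dpi(20))               # 600 dpi at ~50 cm reading distance
print(min_viewing_distance_inches(150)) # 80 inches (~2 m) for a large poster
```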
[**] This is confirmed by the fact that most people can tell the difference in quality between a 300dpi and a 600dpi laser print held at a normal reading distance of ½m.
So please help replace the 300dpi urban legend with
1/12000 of the viewing distance
This also explains why the images are so MISERABLE in digital cinema, where they blow a mere 2048 pixels onto large broad canvases. We should all sit six (12000/2048) times the screen width from the screen in order not to be able to see individual pixels!!!! Digital 2K film should thus be seen from the rear ‘kissing rows’ only (max 10° in width = 2·atan(1/12)), while we as film geeks must await 4K (which only gets us ‘up’ to about 19° of the visual field), and hopefully soon 8K (about 37-38°)!!
👿 Or rejoice that as we get old(er), we can not really see the misery any more 👿
(2K film on a 5m wide canvas is 10dpi corresponding to the bottom of the list)
To fill the whole ‘regular’ field of view (approximately 30° to each side, i.e. a little less distance to the subject than the width of the subject) and not be able to see individual pixels, the limit is about 1/12000 of the width and somewhat less in height. But again, this is meant as an extreme ‘requirement’, beyond which it is pointless to go. So there is not much point in going beyond 8K for projectors. And for TVs, which fill a lesser portion of the field of view, there will not be much point in going beyond 4K.
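For the cinema example above, here is a small sketch (my own helper, using the same 1/12000 assumption) that computes how far away, in screen widths, you must sit before the pixels disappear, and how wide a viewing angle that leaves for 2K, 4K and 8K:

```python
import math

def pixel_free_geometry(horizontal_pixels, acuity=12000):
    """Distance (in screen widths) where the pixels just stop being visible,
    and the horizontal viewing angle the screen then covers."""
    distance_in_widths = acuity / horizontal_pixels
    angle_deg = math.degrees(2 * math.atan(0.5 / distance_in_widths))
    return distance_in_widths, angle_deg

for pixels in (2048, 4096, 8192):
    d, a = pixel_free_geometry(pixels)
    print(f"{pixels} px: sit at {d:.1f} screen widths -> about {a:.0f} degrees wide")
# 2048 px: ~5.9 widths -> ~10 deg; 4096 px: ~2.9 -> ~19 deg; 8192 px: ~1.5 -> ~38 deg
```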
Examples and more on Rayleigh, see here
on 3) Thermal noise
For a given sensor type and temperature (as far as I understand) the absolute value of the thermal noise is proportional to the square root of each pixel sensor’s area, while the signal is proportional to the pixel sensor’s area. Therefore the relative noise is inversely proportional to the square root of the pixel sensor’s area (and thus inversely proportional to the width of the pixel sensor). And since the area of each pixel sensor is inversely proportional to the number of pixels on an image sensor of a given size, we get: the relative noise is proportional to the square root of the number of pixels on an image sensor of a given size.
Or in plain words: every time an image sensor of the same size and technology gets four times as many pixels, the relative noise doubles.
As each pixel sensor’s area is of course proportional to the area of the entire sensor for fixed N, the relative noise will also be inversely proportional to the total sensor diameter.
The noise also increases in proportion to the square root of the absolute temperature (i.e. measured in kelvin, Celsius temperature + 273.15).
e.g. at -40°C the thermal noise will be 11% less than at +20°C. And at 100°C it is 13% greater than at 20°C. For example √ ((273.15+100)/(273.15+20))≈1.13
As can be seen the temperature dependency is quite moderate, even in intense frost or heavy use where the camera is about to overheat. (There are probably some plastic parts that succumb before the 100°C is reached …)
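A minimal Python sketch of the combined scaling relations in this section (the 12 Mpix / 7.7 mm reference sensor and the example values are arbitrary choices of my own, used only to illustrate relative changes):

```python
import math

def relative_noise_scale(megapixels, sensor_diag_mm, temp_c,
                         ref_megapixels=12.0, ref_diag_mm=7.7, ref_temp_c=20.0):
    """Relative noise compared to a reference sensor (here 12 Mpix on a
    7.7 mm diagonal, roughly a 1/2.3" sensor, at 20 degrees C)."""
    return (math.sqrt(megapixels / ref_megapixels)   # more pixels, same size
            * (ref_diag_mm / sensor_diag_mm)         # smaller sensor, same pixel count
            * math.sqrt((temp_c + 273.15) / (ref_temp_c + 273.15)))  # temperature

print(relative_noise_scale(48, 7.7, 20))   # 4x the pixels on the same sensor -> 2.0
print(relative_noise_scale(12, 15.4, 20))  # double the sensor diameter -> 0.5
print(relative_noise_scale(12, 7.7, 100))  # same sensor at 100 C -> ~1.13
```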
ADD: (+)
I found these very interesting articles from dpreview.com on the subject: The effect of sensor and pixel sizes on noise, with page two on the pixel size linked.
The interesting thing they add is that noise tends to grow a bit slower than √N, as smaller pixels seem to produce less noise per area than larger ones – though it does not cancel out.
No explanation on the why is offered though. “read noise tends to be a little lower as you scale pixels down”
Also see these two other interesting dpreview.com articles on the different sources of noise, part 1 and part 2.
on 4) The effective pixel count falls with the amount of light; the smaller the pixels, the worse.
From this paper from Stanford we get some interesting graphs on how a small pixel size degrades the low-light ability of the sensor, and surprisingly quickly too. They have done the calculation for a 1/3.2″ and a very small 1/8″ sensor (it was 2009).
(I took the liberty of changing the labelling of fig. 2 to 10⁶ too, to avoid confusion, and because megapixels is the usual unit)
And here is what they would look like if drawn to the same scale (click to enlarge)
Unfortunately they have not done it for the somewhat less tiny sensors we have in the better devices of 2015, nor with pixels as small as today’s (they only go down to 1.4µm), but the tendency is very clear. The current 1.2µm, or even smaller, will just make the fall steeper and the requirement for very much light even worse, even for outdoor usage…
The X axis goes from ‘direct sunlight’ at 10⁵ lux down to ‘very dark overcast day’ at 10² lux. A typical indoor living room would be even further down, around 0.5×10² lux!! (See e.g. Wiki)
Here is a scale from here (mirrored from original to match direction of above)
on 5) Stacking images.
Many of the top flagship phablets in 2018 shoot a number of frames in rapid succession and stack these to compensate for the tiny pixels. This works wonderfully for a still subject, and they are getting better at tracking moving subjects too, but of course it cannot produce miracles. If something is moving relatively slowly in the frame it can work.
But of course you can apply the EXACT same technique to larger sensors and get even better pictures, so image stacking is not an argument for tiny pixels, but a new technique to get better images in low light from any sensor.
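A minimal sketch (simulated data, assuming numpy is available) of why stacking n aligned frames of a still subject reduces random noise by roughly √n, regardless of sensor size:

```python
import numpy as np

rng = np.random.default_rng(0)
true_image = np.full((100, 100), 100.0)                  # a flat grey "scene"
n_frames = 16
frames = [true_image + rng.normal(0.0, 10.0, true_image.shape)
          for _ in range(n_frames)]                      # 16 noisy exposures

noise_single = np.std(frames[0] - true_image)            # noise of one frame
noise_stacked = np.std(np.mean(frames, axis=0) - true_image)  # noise after averaging
print(noise_single / noise_stacked)                      # ~4, i.e. ~sqrt(16)
```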
on 6) Thickness of device versus sensor size.
Current sensor technology puts a limit on the angle at which light from the lens can hit the sensor and still be registered, and registered in the correct pixel. Some newer sensors have better separation between the sub-pixels, but the photon still has to be detected, and it will require new technology such as a severely curved sensor, or special micro-lenses that change the further towards the edges and corners the pixels sit, where the angle gets shallower.
…And this is not even taking into consideration the ability of the lens to focus on the full sensor surface.
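A very rough Python sketch of the geometry (the 30° maximum chief-ray angle and the one-thickness lens height are assumed example values of my own, not properties of any specific sensor or phone):

```python
import math

def max_sensor_diagonal_mm(device_thickness_mm, max_chief_ray_angle_deg=30.0):
    """Upper bound on the sensor diagonal if the exit pupil sits roughly one
    device thickness above the sensor and the pixels only accept light up to
    the given chief-ray angle (30 degrees is just an assumed example)."""
    return 2.0 * device_thickness_mm * math.tan(math.radians(max_chief_ray_angle_deg))

print(max_sensor_diagonal_mm(5.0))   # ~5.8 mm diagonal for a 5 mm thin phone body
```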
__________________________________
The original was posted here, back in 2008. It has been enhanced and corrected since.