Currently, the limit on how many photons a sensor cell can capture, relative to the noise floor, sets a ceiling on the dynamic range.
But what if the sensor output could never be over-saturated?
The ideas are as follows.
Record time for saturation
Imagine if each subpixel could EITHER register the number n of incoming photons during the shutter time T, OR the time t it took to reach the saturation level N, and could signal either n or t back with a flag indicating which it is.
That way all non-saturated cells would report n ≤ N as usual, while for all over-saturated ones we calculate n = N × T/t; note that n > N since t < T.
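This dual readout can be sketched in a few lines of Python. The values of N and T, and the readout format itself, are illustrative assumptions, not a real sensor interface:

```python
# Hypothetical dual-mode readout: a subpixel reports either a photon
# count n (not saturated) or the time t it took to fill up (saturated).
N = 4096         # assumed full-well capacity (saturation level)
T = 1.0 / 125    # assumed shutter time in seconds

def effective_count(value, saturated):
    """Linearized photon count for one subpixel readout."""
    if not saturated:
        return value           # ordinary case: n <= N
    t = value                  # time to reach saturation, t < T
    return N * T / t           # extrapolated count: n = N * T / t > N

# A pixel that saturated halfway through the exposure caught ~2N photons:
print(effective_count(T / 2, saturated=True))  # -> 8192.0
```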
This should be great for HDR, or for adjusting the levels in post.
I imagine something like this: for a shutter time T, an easily invertible signal ƒ(t) is supplied to all subpixels. Each subpixel has electronics that detect overrun (a transistor?) and a way to store the height of ƒ at that moment (a micro capacitor?). A readout would then consist of one of the two values, plus a flag indicating which. For the saturated case the calculated count is n = N × T/ƒ⁻¹(ƒ(t)).
A simple choice could be ƒ(t) = K × t/T, which gives n = N × K/ƒ.
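As a sketch of decoding the linear ramp (the constants N and K, and the function names, are made-up assumptions):

```python
N = 4096     # assumed saturation level
K = 1024     # assumed full-scale value of the ramp

def f(t, T):
    """The ramp f(t) = K * t / T supplied to all subpixels."""
    return K * t / T

def decode(f_stored):
    """Extrapolated count from the ramp height stored at overrun."""
    return N * K / f_stored    # n = N * K / f = N * T / t

T = 1.0
print(decode(f(T / 4, T)))     # saturated a quarter in -> 16384.0
```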
Of course this would be more complex (and thus more expensive) than a traditional sensor, but if it could be done we would get WAY more dynamic range, with no reduction in signal quality for anything still below the saturation limit. No additional noise is introduced, as the normal signal path is unchanged.
Sure, there is a limit on this as well, depending on the granularity/accuracy of ƒ(t). If the stored value is close to zero, or if ƒ is discretized into a few step sizes, the calculation will be inaccurate; but that is still way better than the current situation, where we have no idea where between N and ∞ the output should have been.
If even more dynamic range than the linear function offers is wanted, another ƒ(t) could be used. If it grows quickly for small t, the risk of dividing by a small or inaccurate value is reduced.
E.g. a scaled square-root function could be used: ƒ(t) = K × √(t/T), giving n = N × K²/ƒ². But I believe that would be overkill.
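A quick numeric comparison (all constants are illustrative) of how much a coarsely quantized readout hurts the two ramps for an early saturation:

```python
import math

N, K, T = 4096, 256, 1.0       # illustrative constants

def n_linear(fv):
    return N * K / fv          # decode for f(t) = K * t / T

def n_sqrt(fv):
    return N * K**2 / fv**2    # decode for f(t) = K * sqrt(t / T)

t = T / 100                    # very early saturation
f_lin = K * t / T              # small stored value: 2.56
f_sq = K * math.sqrt(t / T)    # much larger: 25.6

# Round f to whole steps to mimic a coarse readout; the square-root
# ramp keeps the stored value further from zero, so it loses less:
print(n_linear(round(f_lin)), n_linear(f_lin))  # quantized vs exact
print(n_sqrt(round(f_sq)), n_sqrt(f_sq))
```

With these numbers the quantized linear estimate is off by roughly 15 %, while the square-root variant is off by only about 3 %.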
Different Bayer filter
Why not replace the standard passing R, G, G, B filters with filters that block R, G, B and infrared (Ir), respectively?
The recorded signals would then be (IrGB, IrRB, IrRG, RGB).
The sum of these is 3 × IrRGB. Subtracting each readout from a third of this sum gives (R, G, B, Ir).
(The factor to use is most likely a tad over 3 due to non-sharp filter cutoffs, but that is a calibration issue; poor filters would, however, reduce the SNR.)
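The subtraction step can be sketched directly; the sample values below are made up for illustration:

```python
def decode_cell(ir_gb, ir_rb, ir_rg, rgb):
    """Recover (R, G, B, Ir) from the four blocking-filter readouts."""
    s = ir_gb + ir_rb + ir_rg + rgb   # = 3 * (Ir + R + G + B)
    third = s / 3
    return (third - ir_gb,            # (Ir+R+G+B) - (Ir+G+B) = R
            third - ir_rb,            # = G
            third - ir_rg,            # = B
            third - rgb)              # = Ir

# With true values R=10, G=20, B=30, Ir=5 the readouts would be:
print(decode_cell(5+20+30, 5+10+30, 5+10+20, 10+20+30))
# -> (10.0, 20.0, 30.0, 5.0)
```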
The same interpolation patterns over 4 cells as in the usual Bayer layout could be used.
Though of course subtraction is a bad thing for SNR, so this may render the whole idea problematic.
Apart from numeric rounding issues, it should keep the same dynamic range, as everything stays the same except that a shutter speed twice as fast catches the same information.