There are several LED layouts in use, but how do they differ and does it matter?
You might be aware that I have a simple eye resolution test over here; I have used it to take a closer look at how it appears on two non-plain-grid screens.
On all images below, click to enlarge
By a plain grid I mean a regular RGB stripe, e.g. like on this monitor:
(Almost all macro images in this post are taken with a Ricoh CX5, an old compact camera from a line that excels at macros. And sorry, I have done nothing to select a good white balance, so do not read anything into the different colour casts of the images.)
Note that as soon as the image gets a little away from the centre line, the pixels look like they are out of grid, simply because they sit at different depths (on any layout).
The first image above (full) is from the new F(x) Pro¹, with a display from Boe, and it uses a PenTile layout I have not seen before. Here is a blow-up from the above, with helper lines indicating the logical pixels.
Here is one from a microscope image of a Nexus One I got from here:
Samsung S8- (x8), Diamond PenTile
BlackBerry Priv (x8), Diamond PenTile
And now to the slightly more interesting: What does it matter?
First it should be noted that the PenTile producers seem to have agreed on cheating, without the competitors or the market crying out too loudly.
To make white we need R+G+B of course, and on a classic display we consider one pixel to be the combination of those three subpixels.
So a classic display has three times as many subpixels as it has pixels.
But on PenTile they only count the green ones, sharing a red and a blue between two pixels. So here G+R+B+G is called TWO pixels!
This means that a PenTile display only has twice the number of subpixels that a classic one with the same alleged resolution has. They are simply overselling their displays' pixel count by a factor of 150%!
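The counting above is just arithmetic, so here is a minimal sketch of it (the panel resolution is only an example figure I picked for illustration, not from any spec sheet):

```python
# Classic RGB stripe: 3 subpixels per claimed pixel.
# PenTile (as described above): one pixel per green, with R and B
# shared between two pixels, i.e. only 2 subpixels per claimed pixel.

def subpixel_count(width, height, layout):
    """Subpixels actually present for a claimed width x height resolution."""
    per_pixel = {"classic": 3, "pentile": 2}[layout]
    return width * height * per_pixel

claimed = (2960, 1440)  # example "WQHD+" phone resolution
classic = subpixel_count(*claimed, "classic")
pentile = subpixel_count(*claimed, "pentile")
print(classic, pentile, classic / pentile)  # classic has 1.5x the subpixels
```

The 150% in the text is exactly that 1.5 ratio: same claimed pixel count, two thirds of the subpixels.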
Let me use the above-mentioned eye resolution test as an example:
See the original post for a classic display.
Let's start with the Samsung S8-
First the combination producing white, which looks simple until you click it and see how complex its composition is…
And then the similar figure for R, G and B
Note how the two separated lines work fine in one direction for all three basic colours, though with holes.
But in the other direction, things get a bit hairy for red and blue…
The two separated lines ought to be
1 0 1
but as the subpixels are not dense enough, they are output as alternating rows of
½ 0 ½ 0
0 ½ 0 ½
— the same output we would get where we ought to have
½ 0 ½
0 1 0
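This loss of information can be shown with a toy model (my own illustration, not the panel's actual rendering algorithm): if a red/blue subpixel only exists for every other column and the renderer preserves total energy by averaging neighbouring columns, a sharp line pattern and a blurred one become indistinguishable.

```python
# Toy model: average each pair of target columns onto one real
# red (or blue) subpixel, since only half the columns have one.

def drive(target):
    """Map per-column target values onto half as many subpixels."""
    return [(target[i] + target[i + 1]) / 2 for i in range(0, len(target), 2)]

print(drive([1, 0, 1, 0]))          # sharp lines  -> [0.5, 0.5]
print(drive([0.5, 0.5, 0.5, 0.5]))  # uniform grey -> [0.5, 0.5]
```

Both inputs drive the subpixels identically, which is exactly why the red and blue lines cannot be rendered as sharply as the green ones.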
So figures in red and blue are not quite as sharp as in green, due to the lack of subpixels.
This means that the lack of resolution requires you to be careful about which direction you use when doing the test, as the subpixels in one direction are not as sharp as they ought to be for the test.
For the Boe display of the Pro¹ we get something somewhat similar
That looks almost the same.
What is the point of the Boe layout then?
If you look at the Samsung layout, the greens can be said to sit in a standard rectangular grid, but red and blue in a 41% larger diamond grid.
But if you look at Boe’s two greens as one unit, it can be seen that they all sit in grids of the same size. My GUESS is that this can help against some digital artefacts, by allowing better subpixel rendering.
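The 41% figure for the Samsung layout is just the diagonal factor: if the greens sit on a square grid of pitch p, a diamond grid rotated 45° over the same sites has pitch p·√2, i.e. about 41% larger.

```python
import math

# Pitch ratio between a 45-degree-rotated diamond grid and the
# underlying square grid: sqrt(2), i.e. ~41% larger.
diamond_factor = math.sqrt(2)
print(f"{(diamond_factor - 1) * 100:.0f}% larger")  # -> 41% larger
```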
I (accidentally) moved the image a non-integral number of pixels, and the green looked like this odd image:
This was on the Pro¹ display (I can produce something similar on the S8-).
Here they could clearly have done some much more clever sub-pixel rendering than just drawing four ‘full pixel lines’ at half brightness. But it might be smoothing done at a higher level, unaware of the subpixel layout.
…But at least the creative counting is not as bad as on ALL camera colour sensors, where each R, G and B subpixel is oversold as a full RGB pixel! (The info for each is guessed/interpolated from its neighbouring subpixels.)
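Counted the same way as displays, the sensor numbers look like this (the 12 MP figure is just an example I chose, not a specific camera):

```python
# A Bayer-type sensor has one colour filter per photosite, yet every
# photosite is marketed as a full pixel. Using the display convention
# (3 subpixels = 1 RGB pixel), a "12 MP" sensor has measured data for
# only a third as many full RGB pixels; the rest is interpolated
# during demosaicing.

photosites = 12_000_000          # marketed as "12 MP"
true_rgb_pixels = photosites // 3
print(true_rgb_pixels)           # 4 million fully measured RGB pixels
```

So by the author's counting, cameras oversell by a factor of 3, against the PenTile displays' 1.5.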