Comment 3 for bug 1774119

Bernd Späth (bspaeth) wrote :

Fiddling around, I was able to gather some further information.

Good news first.

Only the OpenGL video output module seems to be affected by the bug.
As a workaround, switching 'Output' under 'Tools->Preferences->Video' from 'Automatic' to 'XVideo (XCB)' restores video output to the full frame image.

Not using the OpenGL output module with any source delivering single-plane data, such as the mentioned stk1160, seems advisable anyway.

Digging around in the sources revealed that there is not only an issue with the way textures are uploaded to the GPU, leading to the "only upper half showing" syndrome.
The algorithm implemented in the OpenGL output module also discards half of the luminance samples, effectively cutting the luminance resolution in half.

@Colin: I hope this information is enough to get video output running with VLC version 3.0.2 for you again.

Just in case the package maintainer or anybody involved in VLC development happens to come across this bug report, I will add more detailed information about the problem below.

Using the official VLC development git repository, I was able to track the problem down to commit 2599e2f7394972ec595b2f14d03bf0fb8f808c46.

The file affected is modules/video_output/opengl/converter_sw.c.

Here the GL_UNPACK_ROW_LENGTH of textures transferred to the GPU was changed, replacing the calculation "i_pitch / i_pixel_pitch" with "i_pitch * tex_width / i_visible_pitch".

I strongly doubt the new version ever worked correctly except for the corner case in which i_pitch and i_visible_pitch are equal: then they essentially cancel each other out in the calculation, leaving just tex_width, which keeps multi-plane mode working.

However, for hardware delivering single-plane data, such as my stk1160 with its UYVY 4:2:2 encoding, the calculated row length will be off by a factor of i_pixel_pitch.
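
For anyone less familiar with the pixel-store semantics involved: GL_UNPACK_ROW_LENGTH is counted in texels of the format being uploaded, not in bytes and not in video pixels. A minimal stand-alone sketch of a correct upload of one padded UYVY frame into a GL_RGBA texture (names like upload_uyvy_as_rgba, buf and pitch are just placeholders, not identifiers from converter_sw.c) could look like this:

---
/* Minimal sketch (not VLC code, names are made up): uploading one
 * padded UYVY frame into a GL_RGBA texture.  One 4-byte U Y V Y group
 * becomes one RGBA texel, so the texture is width/2 texels wide and the
 * row length must be given in those 4-byte texels, i.e. pitch / 4.
 * Assumes pitch is a multiple of 4, which holds for any even-width
 * UYVY line.
 */
#include <GL/gl.h>

static void upload_uyvy_as_rgba(GLuint tex, const unsigned char *buf,
                                int width, int height, int pitch)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Row length is measured in texels of the *upload* format (RGBA,
     * 4 bytes each), not in video pixels (UYVY, 2 bytes each). */
    glPixelStorei(GL_UNPACK_ROW_LENGTH, pitch / 4);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    width / 2, height,   /* width/2 RGBA texels per row */
                    GL_RGBA, GL_UNSIGNED_BYTE, buf);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);   /* restore the default */
}
---

The crucial detail is that the row length has to be expressed in texels of the upload format; one RGBA texel covers two UYVY pixels, which is exactly where a packed single-plane format differs from the planar case.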

Yet this doesn't seem to be the only issue with the code.
Again talking about single-plane 4:2:2 encoded data, the buffer data gets uploaded to an RGBA texture.
As the corresponding comment in fragment_shaders.c correctly states, "Y1 U Y2 V fits in R G B A".
Unfortunately the generated fragment shader never makes use of the Y2 value stored in the alpha component, reducing the luminance resolution by a factor of two.

IMHO it could be a good idea to simply store the buffer handed over by the hardware twice: once in a 4-component RGBA texture and once in a 2-component (say RG) texture.

This way we could get both Y values from our RG texture and the corresponding Cb and Cr values from our RGBA texture.
Our fragment shader would then look something like this:

---
// Texture0: the 2-component (RG) view of the buffer, .g holds Y for every pixel
// Texture1: the 4-component (RGBA) view of the buffer, .r holds Cb, .b holds Cr
uniform sampler2D Texture0;
uniform sampler2D Texture1;

varying vec2 TexCoord0;

void main() {
    float y  = texture2D(Texture0, TexCoord0).g * 255.0;

    float cb = texture2D(Texture1, TexCoord0).r * 255.0;
    float cr = texture2D(Texture1, TexCoord0).b * 255.0;

    // full-range BT.601 YCbCr -> RGB
    float r = y + 1.402 * (cr - 128.0);
    float g = y - 0.344 * (cb - 128.0) - 0.714 * (cr - 128.0);
    float b = y + 1.772 * (cb - 128.0);

    gl_FragColor = vec4(r / 255.0, g / 255.0, b / 255.0, 1.0);
}
---
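
To complement the shader, the C-side setup for the "upload twice" idea could look roughly like the sketch below. Again this is not VLC code; the identifiers are made up, it assumes tex_y was created beforehand as a GL_RG8 texture of width x height texels and tex_c as a GL_RGBA8 texture of (width/2) x height texels, and GL_RG needs OpenGL 3.0 or ARB_texture_rg. tex_y would be bound to the shader's Texture0 sampler, tex_c to Texture1.

---
/* Sketch only: upload the very same UYVY buffer twice.
 *  - tex_y: GL_RG view,   width   texels wide, .g = Y of every pixel
 *  - tex_c: GL_RGBA view, width/2 texels wide, .r = Cb, .b = Cr
 * Assumes pitch is a multiple of 4 and the textures already exist with
 * matching sizes.
 */
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_RG on older headers */

static void upload_uyvy_twice(GLuint tex_y, GLuint tex_c,
                              const unsigned char *buf,
                              int width, int height, int pitch)
{
    /* Luma view: one 2-byte RG texel per video pixel. */
    glBindTexture(GL_TEXTURE_2D, tex_y);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, pitch / 2);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RG, GL_UNSIGNED_BYTE, buf);

    /* Chroma view: one 4-byte RGBA texel per pair of video pixels. */
    glBindTexture(GL_TEXTURE_2D, tex_c);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, pitch / 4);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, buf);

    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);   /* restore the default */
}
---

Sampling both textures with the same normalized coordinate works out because the chroma texture is simply half as wide, so each of its texels spans two luma pixels, which is exactly the 4:2:2 layout.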

I quick-hacked together a stand-alone program using this strategy to verify that the mentioned algorithm shows the expected results.
But alas, I got totally stuck in ifdef mazes, fragment shader code generation, etc. when trying to integrate any of it into the current VLC source code.

Don't hesitate to contact me if anybody with enough knowledge of the OpenGL video output module is willing to give me a hint on how to integrate the mentioned strategy without breaking other things, or if more information is needed.

Best regards
  Bernd.