Hi Jesse,
Thanks for responding, and sorry for the late reply. I've been trying to find a time to test out what you said.
What you said is true. Configs with stencil != 0 have a -1000 (UNACCEPTABLE_COMPONENT_PENALTY), and configs with stencil == 0 receive 32 (MAXIMUM_COMPONENT_SCORE). Unfortunately, just changing the stencil scale from 0 to 1 as you suggested doesn't affect these two score values.
localhost ~ # glmark2-es2 --validate --debug -b build:use-vbo=false --visual-config 'a=0'
Info: target rgba: {1 1 1 0} depth: 1 stencil: 0 buffer: 1
Info: config rgba: {8 8 8 0} depth: 24 stencil: 8 buffer: 24 => score: = {28 28 28 32} 23 -1000 23 == -838
Info: config rgba: {8 8 8 0} depth: 24 stencil: 8 buffer: 24 => score: = {28 28 28 32} 23 -1000 23 == -838
Info: config rgba: {8 8 8 0} depth: 24 stencil: 8 buffer: 24 => score: = {28 28 28 32} 23 -1000 23 == -838
Info: config rgba: {8 8 8 8} depth: 24 stencil: 0 buffer: 32 => score: = {28 28 28 -1000} 23 32 31 == -830
Info: config rgba: {8 8 8 8} depth: 24 stencil: 8 buffer: 32 => score: = {28 28 28 -1000} 23 -1000 31 == -1862
Info: config rgba: {8 8 8 8} depth: 24 stencil: 8 buffer: 32 => score: = {28 28 28 -1000} 23 -1000 31 == -1862
Info: config rgba: {8 8 8 8} depth: 24 stencil: 8 buffer: 32 => score: = {28 28 28 -1000} 23 -1000 31 == -1862
For my depth != 0 configs, all of the RGB ones have stencil=8. Only the first RGBA config has a 0 stencil size.
So, all the RGB configs take the -1000 stencil penalty. However, the RGBA configs also take a -1000 penalty, on the alpha component, since I am requesting a=0.
So, the tie-breaker is the "buffer size" field.
For the RGB, buffer size is 24, which gives a score of (24-1) = 23.
For the RGBA, buffer size is 32, which gives a score of (32-1) = 31.
This is where I think the bug is. The EGL spec says that the configs should be sorted in *increasing* order of buffer size, so 24-bit RGB buffers should be preferred over the larger 32-bit buffers. But the scoring for buffer size produces the opposite sort order.
By the way, why is glmark2 asking for an RGBA X visual by default, anyway?
Does it actually use the alpha channel for any of the tests?