Comment 3 for bug 1941854

Jeff Lane  (bladernr) wrote :

So thinking back on this, here are a few comments:

1: This test has existed for a long, long time. It was (and is) intended to check that the amount of memory the kernel sees is reasonably close to what is physically installed in the system (per lshw). Unfortunately, "reasonably close" is difficult to define and difficult to check for.

2: A 10% variance was, at least back then, a reasonable allowance for physical memory reallocated to things like embedded graphics, which the kernel never sees. Perhaps newer embedded GPUs are occasionally reserving more shared memory.

3: Using a percentage was the best way at the time to accomplish this, because the amount of shared RAM varies from system to system and GPU to GPU. A hard limit like 256MB, for example, may be perfectly valid for 50% of systems, but the other 50% may use 384MB or 512MB (those are arbitrary numbers just for example; they do not reflect actual amounts of shared RAM). A rough sketch of that kind of percentage check is below.
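
To make that concrete, here is a rough sketch (not the actual checkbox test) of that kind of percentage check: read MemTotal from /proc/meminfo and compare it against the installed total reported by lshw. The 10% threshold, the JSON parsing, and the "System Memory" filter are all assumptions for illustration; lshw needs root, and older lshw releases don't emit a valid JSON array for -json.

#!/usr/bin/env python3
# Rough sketch only: compare kernel-visible RAM with installed RAM per lshw,
# allowing a percentage of slack. Threshold and parsing details are assumptions.
import json
import subprocess

THRESHOLD_PCT = 10  # assumed tolerance, mirroring the 10% discussed above

def kernel_memory_bytes():
    # MemTotal is what the kernel has addressed, reported in kB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024
    raise RuntimeError("MemTotal not found in /proc/meminfo")

def installed_memory_bytes():
    # Needs root; assumes a recent lshw whose -json output is a JSON array.
    out = subprocess.check_output(["lshw", "-json", "-class", "memory"], text=True)
    total = 0
    for entry in json.loads(out):
        # Heuristic: only the "System Memory" node(s) count; skip caches/firmware.
        if "System" in entry.get("description", "") and "size" in entry:
            total += entry["size"]
    return total

if __name__ == "__main__":
    kernel = kernel_memory_bytes()
    installed = installed_memory_bytes()
    diff_pct = (installed - kernel) / installed * 100
    print("installed=%d kernel=%d diff=%.1f%%" % (installed, kernel, diff_pct))
    if diff_pct > THRESHOLD_PCT:
        raise SystemExit("FAIL: kernel sees %.1f%% less than installed RAM" % diff_pct)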

I sometimes think about this test and wonder if there is a better way to do it, because the problem with percentages (and this bugs me with the ethernet testing too) is, as you've observed, that the larger the total, the larger the absolute slack that percentage allows (10% of 1GB is a lot smaller than 10% of 10GB).

As a thought, at least for this case: is there a way to probe how much RAM is being consumed outside the OS by graphics or other system overhead? If so, that could be a good improvement: subtract the amount of shared/reserved RAM from what lshw says is installed before comparing it to what the kernel has addressed.
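
One possible way to probe that (just an idea, not something the test does today) would be to sum the top-level regions that /proc/iomem labels "System RAM" and subtract that from the lshw total; whatever is left over is roughly what firmware, graphics, and other reservations have carved out. Reading real addresses from /proc/iomem requires root, and the region naming is an assumption about how the kernel exposes them, so treat this as an approximation only.

#!/usr/bin/env python3
# Rough sketch: estimate RAM reserved outside the OS by summing the
# top-level "System RAM" regions in /proc/iomem (needs root; non-root
# reads show zeroed address ranges).

def system_ram_bytes(iomem_path="/proc/iomem"):
    total = 0
    with open(iomem_path) as f:
        for line in f:
            # Top-level entries look like "00100000-bfffffff : System RAM";
            # nested (indented) entries are children and are skipped.
            if line.startswith(" "):
                continue
            region, _, name = line.partition(" : ")
            if name.strip() == "System RAM":
                start, end = (int(x, 16) for x in region.split("-"))
                total += end - start + 1
    return total

if __name__ == "__main__":
    usable = system_ram_bytes()
    print("System RAM visible to the kernel: %.0f MiB" % (usable / 2**20))
    # The reserved estimate would then be:
    #   installed_memory_bytes() - usable
    # using the lshw helper from the earlier sketch.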

Anyway, just some thoughts. This is more of an issue on client systems than servers, since my stuff generally has very little shared RAM, so this test never fails for me.