Comment 5 for bug 1507921

Christian Ehrhardt  (paelzer) wrote :

The comment dates back to 2004-04-01: http://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/commit/?id=56d93842e4840f371cb9acc8e5a628496b615a96

I doubt that anybody thought about 1G hugepages back then.
Re-reading the referenced doc, I also realized that it refers to 2*allocations, not 2*#hugepages.
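
To illustrate my reading (this is my own sketch, not taken from the doc or from DPDK): the map count of a process scales with the number of mappings/allocations it creates, not with the hugepage count as such. Counting the entries in /proc/self/maps before and after a handful of anonymous mmaps shows roughly one additional map per mapping:

/* Sketch only: shows that the map count grows per mapping/allocation. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

/* count lines in /proc/self/maps = current number of memory mappings */
static int count_maps(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    int c, lines = 0;
    if (!f)
        return -1;
    while ((c = fgetc(f)) != EOF)
        if (c == '\n')
            lines++;
    fclose(f);
    return lines;
}

int main(void)
{
    printf("maps before: %d\n", count_maps());
    /* alternate the protection so adjacent anonymous mappings are not
     * merged into a single VMA and each mmap really adds a map entry */
    for (int i = 0; i < 8; i++)
        (void)mmap(NULL, 4096,
                   (i % 2) ? PROT_READ : PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    printf("maps after 8 anonymous mmaps: %d\n", count_maps());
    return 0;
}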

The only other references I found were:
- some forums and howtos that set it to a very high number for high-memory systems ("high memory" depending on the time of the post, e.g. 64G in one example, which is normal for servers today)
- the hugepage.py charmhelper, which got it from this bug
- a DPDK issue with a lot of hugepages: http://dpdk.org/ml/archives/dev/2014-September/005397.html

The latter is the only source close to what we are discussing here.

Around rte_eal_hugepage_init/map_all_hugepages in the DPDK source, one finds that every hugepage may end up being mapped twice.
In fact the amount of memory used can be limited via -m / --socket-mem or whatever EAL parameter you prefer.
But let's assume it uses up to #hugepages.
There it creates a mapping of size hpi->hugepage_sz per hugepage.
So it creates up to 2 mappings for each hugepage, no matter what the page size is (see the sketch below).
And the padding is there to add the normal system limit on top, since the application and DPDK do more than just handle the hugepages.
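
A minimal sketch of that effect (my own illustration, not the actual DPDK code; NR_HUGEPAGES and HUGEPAGE_SZ are assumed example values): each hugepage gets one mapping for the discovery pass and possibly a second one for the final layout, so the process can hold up to 2 * #hugepages hugepage mappings, plus its normal mappings on top (the padding):

/* Illustration only (not DPDK code): each hugepage is mapped once for a
 * discovery pass and possibly a second time for the final layout. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define NR_HUGEPAGES 4            /* assumed example value */
#define HUGEPAGE_SZ  (2UL << 20)  /* assumed 2M hugepages */

int main(void)
{
    for (unsigned i = 0; i < NR_HUGEPAGES; i++) {
        /* first mapping, comparable to the discovery pass */
        void *a = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        /* second mapping, comparable to the remap into the final layout */
        void *b = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (a == MAP_FAILED || b == MAP_FAILED) {
            perror("mmap (are enough hugepages reserved?)");
            return 1;
        }
        printf("hugepage %u mapped at %p and %p\n", i, a, b);
    }
    /* At this point the process holds up to 2 * NR_HUGEPAGES hugepage
     * mappings, in addition to its normal mappings (the padding). */
    return 0;
}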

OK, with that summarized, it makes sense to me now.
I hope this helps the next person who comes by to understand it as well.