------- Comment From <email address hidden> 2019-01-04 06:12 EDT-------
I have tried the following test in order to reproduce the bug:
## mm/transparent_hugepage/enabled
root@localhost:~# uname -a
Linux localhost 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:14:44 UTC 2018 ppc64le ppc64le ppc64le GNU/Linux
root@localhost:~# cat /sys/kernel/
[always] madvise never
##
dd if=/dev/urandom of=/dev/shm/img bs=2M count=2000
md5sum /dev/shm/img > test.md5
After the migration, I did:
md5sum -c test.md5
The result was OK (memory not corrupted).
I also modified the above test to allocate 2M chunks, one file per chunk, this way:
for i in {0001..2000} ; do dd if=/dev/urandom of=/dev/shm/img_${i} bs=2M count=1 ; done
md5sum /dev/shm/* > test.md5
After the migration, I did:
md5sum -c test.md5
The result was OK for every file (memory not corrupted).
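For reference, the per-chunk test above can be wrapped in one small script. This is a minimal sketch, scaled down to 8 chunks in a scratch directory so it runs quickly anywhere (the actual run used 2000 chunks in /dev/shm):

```shell
#!/bin/sh
set -e

# Scratch directory and chunk count; the real test used /dev/shm and n=2000.
dir=$(mktemp -d)
n=8

# Fill the directory with 2M chunks of random data, one file per chunk.
for i in $(seq -w 1 $n); do
    dd if=/dev/urandom of="$dir/img_$i" bs=2M count=1 status=none
done

# Record the checksums before the migration...
md5sum "$dir"/img_* > "$dir/test.md5"

# ...and after the migration, verify every chunk.
md5sum -c --quiet "$dir/test.md5" && echo "memory not corrupted"
```

In the real test the checksum file is written before the live migration and verified afterwards; here both steps run back to back only to show the mechanics.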
Conclusion:
- I have found no difference between the patched and unpatched kernels during the tests.
- The memory after the migration seems fine: md5sum reports the same checksum for every block.
Is there any other suggestion about how to reproduce the bug?
Thanks!