I tried some settings in devstack:
[mysqld]
innodb_lru_scan_depth = 256
innodb_buffer_pool_size = 1G
innodb_buffer_pool_instances = 2
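As a side note, a quick way to sanity check that overrides like these actually took effect on the devstack mysqld is to query the running server. A minimal sketch in Python, assuming PyMySQL is installed; the host, user, and password below are placeholders, not values from the job:

import pymysql

# Placeholders for the devstack mysqld credentials, adjust to your node.
conn = pymysql.connect(host="127.0.0.1", user="root", password="stackdb")
wanted = {
    "innodb_lru_scan_depth",
    "innodb_buffer_pool_size",
    "innodb_buffer_pool_instances",
}
try:
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'innodb_%'")
        for name, value in cur.fetchall():
            if name in wanted:
                print(f"{name} = {value}")
finally:
    conn.close()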
but still got a nova-grenade-multinode failure [1] after some rechecks. This time there is no warning message about the page cleaner being behind, so that message is not a reliable fingerprint for this bug.
Recall that I am running tests on a patch that has enabled db connection debug logging via:
[database]
use_db_reconnect = True
connection_debug = 100
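For context, connection_debug is the oslo.db knob that controls SQLAlchemy logging verbosity: at 100, the sqlalchemy.engine logger runs at DEBUG, which logs each statement and the rows it returns. A rough standalone illustration of that effect, using plain SQLAlchemy against an in-memory SQLite database as a stand-in for the nova DB (this is not the oslo.db code itself):

import logging

import sqlalchemy

logging.basicConfig()
# connection_debug >= 100 roughly corresponds to DEBUG here
# (statements plus returned rows); 50-99 would be INFO (statements only).
logging.getLogger("sqlalchemy.engine").setLevel(logging.DEBUG)

engine = sqlalchemy.create_engine("sqlite://")  # stand-in for the nova DB
with engine.connect() as conn:
    conn.execute(sqlalchemy.text("CREATE TABLE compute_nodes (id INTEGER)"))
    conn.execute(sqlalchemy.text("INSERT INTO compute_nodes VALUES (1)"))
    rows = conn.execute(
        sqlalchemy.text("SELECT * FROM compute_nodes")).fetchall()
    print(rows)  # the DEBUG log above also shows this row coming back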
With that in mind, looking at the screen-n-sch.txt log on the failed nova-grenade-multinode run [1], I see one query result for compute_nodes [2], but after that I never see another compute_nodes query result logged.
Contrast that with a passing grenade-py3 run [3], where a total of 22 query results for compute_nodes are logged in screen-n-sch.txt.
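To make that comparison repeatable, here is a hypothetical little counter for a downloaded screen-n-sch.txt. The substrings it matches are assumptions about the SQLAlchemy debug output format, so adjust them to the actual log lines:

import sys

def count_compute_nodes_results(path):
    """Heuristic count of compute_nodes SELECTs and logged result rows."""
    selects = rows = 0
    with open(path, errors="replace") as f:
        for line in f:
            if "compute_nodes" not in line:
                continue
            if "SELECT" in line:
                selects += 1
            elif "Row" in line:  # row contents logged at DEBUG level
                rows += 1
    return selects, rows

if __name__ == "__main__":
    selects, rows = count_compute_nodes_results(sys.argv[1])
    print(f"compute_nodes SELECTs: {selects}, result rows logged: {rows}")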
That discrepancy shows that we are consistently not receiving rows back from the database in the failure case.
[1] https://zuul.opendev.org/t/openstack/build/8c91fd21815148d9894ac2bf60893a9e
[2] http://paste.openstack.org/show/790679
[3] https://zuul.opendev.org/t/openstack/build/9de69db4ffba4ceeba73b959653618e2/log/controller/logs/screen-n-sch.txt