Actually, looking at logstash, the failures are mostly in the non-Ironic jobs using keystone v3. Those are failing because we create the compute_nodes record in the database from the resource tracker in the nova-compute service, but then fail to update the compute node record, which is what sets the free_disk_gb value:
http://logs.openstack.org/65/416765/1/check/gate-tempest-dsvm-neutron-identity-v3-only-full-ubuntu-xenial-nv/faf1363/logs/screen-n-cpu.txt.gz#_2017-01-04_23_15_06_674
And it looks like that's happening because of some keystone v3 auth issue when talking to the placement service.
As for the 1/2 date: ~1/3 is when we made the placement-api run in all jobs in devstack:
https://review.openstack.org/#/c/412537/
Which would include the keystone v3 API job.
Why we're hitting the auth issue with keystone v3 I don't know, but it happens when we're trying to talk to the placement API for the compute node's resource provider, which should be handled by the safe_connect decorator...
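For context, the idea behind safe_connect is that a failure to reach the placement service should be logged and tolerated rather than crash the resource tracker. This is only a minimal sketch of that pattern, not nova's actual implementation: the class, method, and exception names here are simplified stand-ins (the real decorator catches keystoneauth1 exceptions):

```python
import functools
import logging

LOG = logging.getLogger(__name__)


def safe_connect(f):
    """Swallow connection/auth errors when calling an external service.

    Sketch of the pattern only: the real nova decorator catches
    keystoneauth1 exceptions (EndpointNotFound, ConnectFailure, ...)
    and logs a warning instead of letting the error propagate.
    """
    @functools.wraps(f)
    def wrapper(self, *args, **kwargs):
        try:
            return f(self, *args, **kwargs)
        except ConnectionError:  # stand-in for keystoneauth1 exceptions
            LOG.warning('Unable to reach the placement API; skipping %s',
                        f.__name__)
            return None
    return wrapper


class ReportClient:
    """Hypothetical stand-in for nova's placement report client."""

    @safe_connect
    def get_resource_provider(self, uuid):
        # Stand-in for the real HTTP call to the placement API;
        # here we simulate the service being unreachable.
        raise ConnectionError('placement unreachable')


client = ReportClient()
result = client.get_resource_provider('fake-uuid')  # returns None, no crash
```

The point of the hypothesis in the comment above is that if the keystone v3 auth failure raised an exception type the decorator doesn't catch, it would escape the wrapper and abort the compute node update instead of being swallowed like this.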