Adding a note here because I was looking to understand why we didn't catch the ResourceProviderSyncFailed bug [1] where we weren't sending the needed microversion 1.26 in update_from_provider_tree.
If I'm understanding correctly, there are two different issues captured in this launchpad bug:
1. ResourceProviderSyncFailed because of not sending microversion 1.26 in all places it was needed
2. TripleO CI was running with core/ram/disk filters enabled and that made it fail
The CI failure was caused by use of the core/ram/disk filters, and the only way to fix it was to discontinue use of those filters. The ResourceProviderSyncFailed issue was caught by chance at the same time because the error was noticed in the logs; on its own, ResourceProviderSyncFailed did not cause any CI to fail.
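For context on issue 1: placement selects API behavior per request via the OpenStack-API-Version header, so a client call that omits (or pins too low) the microversion silently falls back to older semantics. A minimal sketch of building headers that pin the microversion the bug says was required (1.26) — the helper name here is hypothetical, not the actual nova code:

```python
def placement_headers(microversion="1.26"):
    """Hypothetical helper: build request headers pinning the placement
    microversion. If the header is omitted, placement serves the base
    microversion, which is how the 1.26-dependent calls in
    update_from_provider_tree could fail with ResourceProviderSyncFailed.
    """
    return {
        "OpenStack-API-Version": "placement %s" % microversion,
        "Accept": "application/json",
    }

# Every placement request that depends on 1.26 semantics would need to
# send these headers; missing one call site is enough to hit the bug.
headers = placement_headers()
```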
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135494.html