It seems we broke scenario001 and scenario002 with https://review.openstack.org/#/c/614827 - the cinder-backup image isn't found when we try to tag it for Pacemaker:
fatal: [centos-7-inap-mtl01-0000334308]: FAILED! => {"changed": true, "cmd": "docker tag 192.168.24.1:8787/tripleomaster/centos-binary-cinder-backup:current-tripleo-updated-20181105150724 192.168.24.1:8787/tripleomaster/centos-binary-cinder-backup:pcmklatest", "delta": "0:00:00.031334", "end": "2018-11-05 16:45:13.277874", "msg": "non-zero return code", "rc": 1, "start": "2018-11-05 16:45:13.246540", "stderr": "Error response from daemon: no such id: 192.168.24.1:8787/tripleomaster/centos-binary-cinder-backup:current-tripleo-updated-20181105150724", "stderr_lines": ["Error response from daemon: no such id: 192.168.24.1:8787/tripleomaster/centos-binary-cinder-backup:current-tripleo-updated-20181105150724"], "stdout": "", "stdout_lines": []}
http://logs.openstack.org/27/614827/8/check/tripleo-ci-centos-7-scenario002-multinode-oooq-container/460d4fa/logs/undercloud/var/log/extra/logstash.txt#_2018-11-05_16_45_20
One idea: I don't believe the new "Ansible tag" role does a pull before it tries to tag things. As such we are inherently relying on docker-puppet.py to have pulled the images for us ahead of time, since it runs early. This works fine for most services, I think, but note that docker-puppet.py optimizes the config step and runs the config generation for some services collectively, so some services' individual images may never get pulled at that stage. I believe Cinder is one of them.
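For what it's worth, "docker tag" never contacts the registry; it only adds a name to an image already present in the local daemon. So if docker-puppet.py never pulled the cinder-backup image, the tag task fails exactly as in the log above (command and error copied from the failed task, just replayed by hand):

  $ docker tag 192.168.24.1:8787/tripleomaster/centos-binary-cinder-backup:current-tripleo-updated-20181105150724 \
        192.168.24.1:8787/tripleomaster/centos-binary-cinder-backup:pcmklatest
  Error response from daemon: no such id: 192.168.24.1:8787/tripleomaster/centos-binary-cinder-backup:current-tripleo-updated-20181105150724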
So the fix here might be to have the tagging operation pull the image ahead of time itself, rather than relying on it already being present?
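As a rough sketch (image names taken from the failed task above; whether the role shells out to docker like this is my assumption), the tag operation would become a pull followed by the existing tag:

  # make sure the image exists locally first; "docker tag" won't fetch it
  docker pull 192.168.24.1:8787/tripleomaster/centos-binary-cinder-backup:current-tripleo-updated-20181105150724
  # then apply the Pacemaker alias as today
  docker tag 192.168.24.1:8787/tripleomaster/centos-binary-cinder-backup:current-tripleo-updated-20181105150724 \
      192.168.24.1:8787/tripleomaster/centos-binary-cinder-backup:pcmklatest

That would keep the role self-contained instead of depending on docker-puppet.py's pull behavior, at the cost of an extra registry round-trip per image.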