This is interesting because we sync code from upstream master branches to the corresponding master branches on review.fuel-infra.org regularly (every ~15 minutes), but we *do not* rebuild the packages, as the sync is done by means of "git reset --hard" directly in Git, rather than by proposing a change request via Gerrit (which is what we do for stable branches in downstream, e.g. for 9.0).
I quickly checked Keystone packages versions on our mirror:
keystone-doc_9.0.0~b3-2~u14.04+mos59_all.deb     14-Mar-2016 16:10   211428
keystone_9.0.0~b3-2~u14.04+mos59.debian.tar.gz   14-Mar-2016 16:10    40837
keystone_9.0.0~b3-2~u14.04+mos59.dsc             14-Mar-2016 16:10     2712
keystone_9.0.0~b3-2~u14.04+mos59_all.deb         14-Mar-2016 16:10    84790
keystone_9.0.0~b3.orig.tar.gz                    14-Mar-2016 16:10  1159991
python-keystone_9.0.0~b3-2~u14.04+mos59_all.deb  14-Mar-2016 16:10   626170
^ and they haven't been updated since March, so it's not Keystone that triggered this problem.
As was pointed out in https://bugs.launchpad.net/fuel/+bug/1577839/comments/14, it's the oslo.db requirements which can't be satisfied:
ContextualVersionConflict: (alembic 0.8.2.dev0 (/usr/lib/python2.7/dist-packages), Requirement.parse('alembic>=0.8.4'), set(['oslo.db']))
Keystone just happens to trigger this requirements check on start.
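For illustration, the version comparison that fails here can be reproduced with pkg_resources alone. This is a standalone sketch reusing the versions from the traceback above, not the actual Keystone startup code:

```python
# Sketch of the check pkg_resources performs when a distribution is loaded:
# the installed version is matched against each declared requirement.
# 'alembic>=0.8.4' is the requirement oslo.db declares; 0.8.2.dev0 is the
# alembic version pkg_resources found on disk in the traceback above.
import pkg_resources

req = pkg_resources.Requirement.parse("alembic>=0.8.4")

# Membership tests a version string against the requirement's specifier.
print("0.8.2.dev0" in req)  # → False: the installed version doesn't satisfy it
print("0.8.4" in req)       # → True
```

When the installed version is outside the specifier, `pkg_resources` raises the `ContextualVersionConflict` shown above at import time, which is why Keystone dies on start even though its own packages are unchanged.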
As it turns out, oslo.db was actually rebuilt recently:
python-oslo-db-doc_4.6.0-3~u14.04+mos20_all.deb    05-May-2016 14:44     3536
python-oslo-db_4.6.0-3~u14.04+mos20_all.deb        05-May-2016 14:44     3530
python-oslo.db-doc_4.6.0-3~u14.04+mos20_all.deb    05-May-2016 14:44    42910
python-oslo.db_4.6.0-3~u14.04+mos20.debian.tar.gz  05-May-2016 14:44     4821
python-oslo.db_4.6.0-3~u14.04+mos20.dsc            05-May-2016 14:44     2851
python-oslo.db_4.6.0-3~u14.04+mos20_all.deb        05-May-2016 14:44    95778
python-oslo.db_4.6.0.orig.tar.gz                   05-May-2016 14:44   138961
python3-oslo.db_4.6.0-3~u14.04+mos20_all.deb       05-May-2016 14:44    95832
https://review.fuel-infra.org/#/c/19713/ is the corresponding change request. As you can see, `master-pkg-systest-ubuntu` actually detected the problem and *failed*, but it's non-voting, so it was ignored.
We desperately need to make the job stable and voting again, otherwise people will keep ignoring these failures.
I suggest we downgrade the oslo.db package by reverting the commit in question, if possible. Rebuilding the dependencies (alembic, oslo.i18n, etc.) one by one is not a viable solution, IMO.