ceph_client: on Ubuntu 16.04 the default Ceph version (hammer) causes conflicts

Bug #1661948 reported by Pas
This bug affects 3 people
Affects: OpenStack-Ansible
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

As the summary states, osa-ceph_client adds the /etc/apt/sources.list.d/download_ceph_com_debian_hammer.list file (containing "deb http://download.ceph.com/debian-hammer/ xenial main") and a preference pinning the packages with o=RedHat at priority 1001. The playbook then proceeds to install the Ceph client packages, but those conflict with librados2 (and some other package) from the base xenial repos.
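
For reference, a rough sketch of what ends up on disk (reconstructed from the description above; the preferences file name is an assumption, only the contents are taken from what I saw):

  # repo file added by the role
  cat /etc/apt/sources.list.d/download_ceph_com_debian_hammer.list
  deb http://download.ceph.com/debian-hammer/ xenial main

  # pin preference (file name assumed) raising the Ceph.com (o=RedHat) packages above Ubuntu/UCA
  cat /etc/apt/preferences.d/ceph_client
  Package: *
  Pin: release o=RedHat
  Pin-Priority: 1001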

Putting "ceph_stable_release: jewel" into user_variables.yml seems to be a workaround.

So, changing the default Ceph release to something less archaic (sorry for the hyperbole) would likely solve this, especially since Hammer is nearing EOL: https://github.com/ceph/ceph/pull/13069

edit: oh, I forgot to add, this is using stable/newton

Pas (pasthelod)
description: updated
Revision history for this message
Logan V (loganv) wrote :

Hi, I was doing some debugging on Xenial tonight and I'm having a hard time replicating this. I took a clean Xenial install, added the hammer apt key, repo, and RedHat pin, then installed the packages from the ceph_client role: ceph ceph-common python-ceph.
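
Roughly what I ran, reconstructed from memory rather than copy-pasted, so treat the exact commands as approximate:

  # add the Ceph release key and the hammer repo for xenial
  wget -qO- https://download.ceph.com/keys/release.asc | apt-key add -
  echo 'deb http://download.ceph.com/debian-hammer/ xenial main' \
      > /etc/apt/sources.list.d/download_ceph_com_debian_hammer.list

  # pin the Ceph.com (o=RedHat) packages above the Ubuntu/UCA ones
  printf 'Package: *\nPin: release o=RedHat\nPin-Priority: 1001\n' \
      > /etc/apt/preferences.d/ceph_client

  # refresh and install the same packages the ceph_client role installs
  apt-get update
  apt-get install -y ceph ceph-common python-ceph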

I wonder if one of our other roles is pulling in ceph packages prior to the ceph_client role running, which would cause this conflict since the unpinned packages from UCA/xenial main would be installed already.

Could you post some debugging output from apt-cache, apt-get install, dpkg -l, etc. showing the broken setup when you hit the conflicts? An Ansible run log would be handy too, so we can make sure the apt cache is being refreshed after the repo & pins are added. I was not able to find where we would be pulling in librados from in our other roles, and if it is being installed in ceph_client, we should have everything pinned to hammer before we do any installs.
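
Something along these lines would show where librados2 is coming from and which pin wins (the package names beyond the obvious ones are just a guess):

  # show candidate versions and which repo/pin wins for the relevant packages
  apt-cache policy librados2 ceph ceph-common python-ceph

  # list any ceph-related packages already installed before the role runs
  dpkg -l | grep -Ei 'ceph|rados|rbd'

  # simulate the install to surface the dependency conflict without changing anything
  apt-get install -s ceph ceph-common python-ceph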

Revision history for this message
Pas (pasthelod) wrote :

Hello,

Sorry to start with bad news, but I have neither logs nor an easy way to reproduce the problem.

But I think the problem could be that the host was in an unclean state: I had only purged the LXC containers and the apt-cacher proxy preference file, so some packages may already have been installed. (I had tried to run master, but encountered too many problems, so I reverted back to stable/newton.)

Changed in openstack-ansible:
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for openstack-ansible because there has been no activity for 60 days.]

Changed in openstack-ansible:
status: Incomplete → Expired