Please add support to allow booting instances into Ceph rbd

Bug #1286762 reported by Ramon Acedo
Affects: nova-compute (Juju Charms Collection)
Status: Fix Released
Importance: Medium
Assigned to: Edward Hope-Morley
Milestone: 15.01

Bug Description

Starting with Havana, Ceph can be set up as the default storage backend for booting instances.

https://ceph.com/docs/master/rbd/rbd-openstack/#configuring-nova

Currently the Ceph charms don't support this.
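For reference, the Nova side of the linked guide amounts to settings along these lines (a sketch only; these are the [libvirt]-section option names used from Icehouse onwards, and the pool, user and secret UUID shown here are placeholders, not values from this bug):

[libvirt]
images_type = rbd
images_rbd_pool = nova
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova-compute
rbd_secret_uuid = <libvirt secret UUID>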

Tags: openstack cts

Revision history for this message
Edward Hope-Morley (hopem) wrote :

Ramon, if you mean support for booting instances from cinder (rbd) bootable volumes, this is already supported. Nova compute is a ceph client, so when you do 'add-relation cinder ceph' and 'add-relation nova-compute ceph' you should have what you need. There is also support in Nova itself for downloading images directly from Ceph using the libvirt RBD image driver, but I'm not sure that is supported yet.
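Spelled out as commands, the relations referred to above would be (assuming the default service names from the charm store):

juju add-relation cinder ceph
juju add-relation nova-compute ceph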

Revision history for this message
Ramon Acedo (ramon-linux-labs) wrote :

We would like support for booting all instances directly into RBD without Cinder (as an alternative, we have been using boot-from-volume backed by Ceph since Grizzly). Quoting the Ceph documentation, this is what we need:

"Guest Disks: Guest disks are guest operating system disks. By default, when you boot a virtual machine, its disk appears as a file on the filesystem of the hypervisor (usually under /var/lib/nova/instances/<uuid>/). Prior OpenStack Havana, the only way to boot a VM in Ceph was to use the boot from volume functionality from Cinder. However, now it is possible to directly boot every virtual machine inside Ceph without using Cinder. This is really handy because it allows us to easily perform maintenance operation with the live-migration process. On the other hand, if your hypervisor dies it is also really convenient to trigger nova evacuate and almost seamlessly run the virtual machine somewhere else."

I hope it makes sense.

Revision history for this message
Taylor Bertie (nightkhaos) wrote :

Okay, I believe I have a solution. However, no modifications are required for the ceph or ceph-osd charms; all the necessary changes were made to the nova-compute charm.

I have yet to test the patch to make sure it doesn't break other functionality (specifically live-migration), but it can be used as a starting point. If any further modifications are required, I'll update this bug.

The problem is that, as currently written, the nova-compute charm does not configure libvirt-bin to use a remote block device. This was a fairly trivial modification once I established that the ceph relationship hook was incomplete: specifically, the libvirt process did not have access to the key because of a permissions issue, and no pool was created on Ceph for Nova to use.
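For context, the usual way to give the libvirt process access to a Ceph key is to register it as a virsh secret, roughly as follows (a sketch; the client name 'nova-compute' and the generated UUID are assumptions, not values from this bug):

UUID=$(uuidgen)
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.nova-compute secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret $UUID \
    --base64 $(ceph auth get-key client.nova-compute)

The UUID registered here is what nova.conf's rbd_secret_uuid must then reference.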

This patch should be applied with: patch -p1 < nova-compute-ceph-using-rbd-libvirt.patch

Revision history for this message
Ubuntu Foundations Team Bug Bot (crichton) wrote :

The attachment "Patch for nova-compute." seems to be a patch. If it isn't, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are a member of the ~ubuntu-reviewers, unsubscribe the team.

[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]

tags: added: patch
Changed in nova (Ubuntu):
assignee: nobody → Taylor Bertie (taylor-bertie)
affects: nova (Ubuntu) → nova-compute (Juju Charms Collection)
Changed in nova-compute (Juju Charms Collection):
assignee: Taylor Bertie (taylor-bertie) → nobody
assignee: nobody → Taylor Bertie (taylor-bertie)
Revision history for this message
John McEleney (launchpa6) wrote :

I've tried this patch out, but I'm hitting a problem when creating a relationship between my nova-compute charm and the ceph charm (on trusty). This is in my unit-nova-compute-0.log file:
2014-07-07 09:38:16 INFO ceph-relation-changed raise CalledProcessError(retcode, cmd, output=output)
2014-07-07 09:38:16 INFO ceph-relation-changed subprocess.CalledProcessError: Command '['ceph', '--id', 'nova-compute', 'osd', 'ls', '--format=json']' returned non-zero exit status 1

Running that command from bash:
root@compute1:/var/log/juju# ceph --id nova-compute osd ls --format=json
no monitors specified to connect to.
Error connecting to cluster: ObjectNotFound

Looking in /etc/ceph/ceph.conf, I see only this:
###############################################################################
# [ WARNING ]
# cinder configuration file maintained by Juju
# local changes may be overwritten.
###############################################################################
[global]
log to syslog =
 err to syslog =
 clog to syslog =

Have I missed a step somewhere?
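For comparison, a populated client-side ceph.conf would normally carry at least the monitor addresses and auth settings, along these lines (the addresses are placeholders, not values from this deployment):

[global]
auth supported = cephx
keyring = /etc/ceph/$cluster.$name.keyring
mon host = 10.0.0.1:6789 10.0.0.2:6789 10.0.0.3:6789

The empty [global] section above is consistent with the "no monitors specified to connect to" error.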

Revision history for this message
Taylor Bertie (nightkhaos) wrote :

Thanks John, I'll have a look into this tomorrow when I have access to enough resources to run up a mini-cloud. To help me identify why my patch didn't work for you, can you provide me with the output of 'juju status'? Also, if you're using any non-standard charms (apart from mine), please specify which ones and, if applicable, provide the charm source in an attachment.

It could just be as simple as an upgrade to nova-compute since I wrote the patch that makes it incompatible. :)

James Page (james-page)
Changed in nova-compute (Juju Charms Collection):
status: New → Triaged
importance: Undecided → Low
importance: Low → Wishlist
Changed in ceph-osd (Juju Charms Collection):
status: New → Invalid
Changed in ceph (Juju Charms Collection):
status: New → Invalid
tags: added: openstack
removed: patch
tags: added: cts
Revision history for this message
Edward Hope-Morley (hopem) wrote :

I actually worked on this a while back but never linked my patch. Adding RBD support to the nova-compute charm is somewhat trickier than for other charms such as cinder and glance, because the current ceph-client model requires a 'leader' to handle the initialisation of the pool required by that client. This is an operation that must be performed by a single unit, so the units have to work out a leader among themselves, which in the case of nova-compute cannot be inferred from cluster status since we do not run nova-compute in HA. So, I'm posting the patch I originally used as a proof of concept, since it contains the changes necessary to configure Nova to enable the RBD image backend. When I get around to it, I will post a new patch to try out a means of allowing nova-compute units to request resources from the Ceph cluster (as opposed to creating them themselves).
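A rough sketch of the leader-gated pool initialisation described above, in charm-hook-style Python (the helper names and the election rule are illustrative assumptions, not the charm's actual code):

import subprocess

def is_leader(peer_units, local_unit):
    # Deterministic election: the lowest-sorted unit name wins.
    # Illustrative only - as noted above, nova-compute cannot infer
    # a leader from cluster status, which is the crux of the problem.
    return local_unit == sorted(peer_units + [local_unit])[0]

def ensure_pool(pool, service='nova-compute'):
    # Create the Ceph pool for this client only if it does not exist.
    pools = subprocess.check_output(
        ['rados', '--id', service, 'lspools']).decode().split()
    if pool not in pools:
        # 128 placement groups is an arbitrary example value.
        subprocess.check_call(
            ['ceph', '--id', service, 'osd', 'pool', 'create', pool, '128'])

# In the ceph-relation-changed hook, only the elected unit would create
# the pool; the other units simply write out their config and key:
#     if is_leader(peers, my_unit):
#         ensure_pool('nova')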

Changed in nova-compute (Juju Charms Collection):
assignee: Taylor Bertie (taylor-bertie) → Edward Hope-Morley (hopem)
importance: Wishlist → Medium
status: Triaged → In Progress
no longer affects: ceph (Juju Charms Collection)
no longer affects: ceph-osd (Juju Charms Collection)
Changed in nova-compute (Juju Charms Collection):
status: In Progress → Fix Committed
James Page (james-page)
Changed in nova-compute (Juju Charms Collection):
milestone: none → 15.01
James Page (james-page)
Changed in nova-compute (Juju Charms Collection):
status: Fix Committed → Fix Released