init initializes the backend vol service, except when it doesn't

Bug #1555370 reported by John Griffith
Affects: Cinder
Status: Fix Released
Importance: Undecided
Assigned to: John Griffith
Milestone: (none)

Bug Description

In the replication code we update the service entry during manager init, setting the capabilities based on the info returned by the driver. It turns out we don't really initialize the backend in one place like I thought; instead we now do it in bits and pieces: an __init__ routine, followed by an init_host routine, which then goes to an init_host_with_rpc routine.

And the service module only creates the service entry for a "new" backend when it doesn't already exist. So we need to move the replication initialization out of the init method and make sure it runs *after* service.py checks for/creates the service, OR make init_host return the data we want updated in the service model.
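
To make the ordering problem concrete, here is a minimal Python sketch. It is purely illustrative: FakeDB, service_create, and service_update are stand-ins, not the real Cinder DB API. It shows that an update issued from init_host on a fresh deploy has no Service row to touch, while the same update succeeds once service.py has created the row:

    # Illustrative sketch only -- simplified stand-ins, not actual Cinder code.
    class FakeDB:
        """Minimal stand-in for the services table."""
        def __init__(self):
            self.services = {}                      # host -> row dict

        def service_create(self, host):
            self.services[host] = {'host': host, 'replication_status': 'disabled'}

        def service_update(self, host, values):
            self.services[host].update(values)      # KeyError if the row is missing

    class ManagerSketch:
        def __init__(self, db, host):
            self.db = db
            self.host = host

        def init_host(self):
            # Runs *before* service.py creates the Service row on a fresh deploy.
            self.db.service_update(self.host, {'replication_status': 'enabled'})

        def init_host_with_rpc(self):
            # Runs *after* the Service row exists, so the update is safe here.
            self.db.service_update(self.host, {'replication_status': 'enabled'})

    db = FakeDB()
    mgr = ManagerSketch(db, 'backend-1')
    try:
        mgr.init_host()                             # fresh deploy: no row yet
    except KeyError:
        print('init_host: no Service row to update yet')
    db.service_create('backend-1')                  # what service.py does later
    mgr.init_host_with_rpc()                        # now the update succeeds
    print(db.services['backend-1'])

On an existing deployment the row is already there, which is why the bug only shows up on a fresh install.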

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/290917

Changed in cinder:
assignee: nobody → John Griffith (john-griffith)
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/290917
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=af941066b32a0aa875739556f1d6360f6f058405
Submitter: Jenkins
Branch: master

commit af941066b32a0aa875739556f1d6360f6f058405
Author: John Griffith <email address hidden>
Date: Wed Mar 9 16:43:14 2016 -0700

    Move replication_status update to init_with_rpc

    We were using init_host to read replication_status from the
    driver and update the service entry in the DB. It turns out
    that on a fresh install this doesn't actually work: while we
    have multiple init methods for the backend, the Service entry
    isn't created in a fresh deploy until AFTER init_host. The
    result was that in some cases we were trying to update a
    column on a non-existent Service row in the database.

    This patch moves the replication_status updates for the
    service into the init_with_rpc method. That method was
    just a noop stub in the parent manager class, so we just
    implement it in cinder.volume.manager and do what we need
    with the replication update info.

    Change-Id: I18b2658e2f93959f74377ccb86ce8b01b6970c60
    Closes-Bug: #1555370
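
The shape of the fix, very roughly, is to leave the parent manager's RPC-init hook as a no-op and override it in the volume manager, where the Service row is guaranteed to exist by the time it runs. The sketch below is only an outline under that assumption; FakeDriver, get_replication_status, and service_update are hypothetical names, not the real driver/DB calls or the actual patch:

    # Rough outline of the approach; names are illustrative stand-ins.
    class FakeDriver:
        def do_setup(self):
            pass

        def get_replication_status(self):
            return 'enabled'                        # pretend the backend supports it

    class Manager:
        """Parent manager: the RPC-init hook is just a no-op stub."""
        def init_host(self):
            pass

        def init_host_with_rpc(self):
            pass

    class VolumeManagerSketch(Manager):
        def __init__(self, db, driver, host):
            self.db = db
            self.driver = driver
            self.host = host

        def init_host(self):
            # No service-table writes here any more: on a fresh install the
            # Service row does not exist yet at this point.
            self.driver.do_setup()

        def init_host_with_rpc(self):
            # Called after service.py has checked for / created the Service row,
            # so updating replication_status here is safe on fresh deploys too.
            status = self.driver.get_replication_status()
            self.db.service_update(self.host, {'replication_status': status})

Paired with the FakeDB stand-in from the earlier sketch, calling init_host_with_rpc() after service_create() writes the driver's replication status into the existing row instead of failing.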

Changed in cinder:
status: In Progress → Fix Released
Revision history for this message
Thierry Carrez (ttx) wrote : Fix included in openstack/cinder 8.0.0.0rc1

This issue was fixed in the openstack/cinder 8.0.0.0rc1 release candidate.
