Add SIGHUP handlers to reset RPC version pins

Bug #1548308 reported by OpenStack Infra
Affects: openstack-manuals
Status: Won't Fix
Importance: Medium
Assigned to: Unassigned

Bug Description

https://review.openstack.org/279039
Dear bug triager. This bug was created because a commit was marked with DOCIMPACT.
Your project "openstack/cinder" is set up so that documentation bugs are reported directly against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to.

commit c9a55d852e3f56a955039e99b628ce0b1c1e95af
Author: Michał Dulko <email address hidden>
Date: Fri Feb 5 15:41:01 2016 +0100

    Add SIGHUP handlers to reset RPC version pins

    Adding SIGHUP handlers (implementing reset from oslo.service) to
    cinder-scheduler, cinder-backup and cinder-volume that reset the cached
    RPC version pins. This avoids the need to restart all the services once
    an upgrade of the deployment is completed.

    Some changes go a little deep into the stack, because to reload all the
    pins we need to recreate the <service>.rpcapi.<service>API objects that
    are stored in memory.

    Please note that the SIGHUP signal is handled by oslo.service only when
    the service runs in daemon mode (without a tty attached). To test this
    commit in DevStack you need to add "&" to the end of the command that
    starts the service.

    The situation is more complicated with the API service, so for now it
    still requires a restart. In HA deployments cinder-api typically sits
    behind a load balancer, so restarting individual nodes one-by-one
    should be easy.

    DocImpact: Add information on rolling upgrades procedures to the docs.
    Implements: blueprint rpc-object-compatibility

    Change-Id: I03ed74e17dc9a4b9aa2ddcfbeb36a106a0f035f8
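
The reset pattern the commit describes can be sketched with the standard library alone. This is a minimal, hypothetical model: the names `query_min_rpc_version`, `VolumeAPI` and `VolumeManager` are illustrative stand-ins, and a plain `signal.signal` call stands in for oslo.service's SIGHUP handling (which, as noted above, only fires in daemon mode):

```python
import signal

# Hypothetical stand-in for the DB call against the `services` table
# that determines which RPC version to pin to.
def query_min_rpc_version():
    return "1.40"

class VolumeAPI:
    """Sketch of a <service>.rpcapi.<service>API object caching its pin."""
    def __init__(self):
        # Determined once by a DB call; cached to limit the performance hit.
        self.version_cap = query_min_rpc_version()

class VolumeManager:
    """Sketch of a service manager holding the rpcapi object in memory."""
    def __init__(self):
        self.rpcapi = VolumeAPI()

    def reset(self):
        # Recreate the rpcapi object so the version pin is re-read from
        # the DB; oslo.service invokes reset() when SIGHUP is received.
        self.rpcapi = VolumeAPI()

manager = VolumeManager()
# Stand-in for oslo.service's SIGHUP wiring: re-read the pin on SIGHUP
# instead of restarting the whole service.
signal.signal(signal.SIGHUP, lambda signum, frame: manager.reset())
```

Sending `SIGHUP` to such a process replaces the cached rpcapi object in place, which is the effect the commit achieves for cinder-scheduler, cinder-backup and cinder-volume.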

Revision history for this message
Sean McGinnis (sean-mcginnis) wrote :

Michal, please add notes for what the documentation impact is and then we can reassign to openstack-manuals.

Changed in cinder:
assignee: nobody → Michal Dulko (michal-dulko-f)
Revision history for this message
Michal Dulko (michal-dulko-f) wrote :

Sure!

In general we should document how to execute a rolling upgrade of Cinder, along with a few Cinder behaviors that operators should be aware of and that did not exist before Mitaka.

There's this page [1] for Nova. In Cinder's case the RPC version is not specified manually; the correct version is detected automatically (like Nova with upgrade_level set to auto). The exact version number is determined by a DB call, and the response is cached to limit the performance impact. To invalidate the cache, an operator may now send a SIGHUP signal to the process instead of restarting it (limitation: the cinder-api service doesn't support this).

This also means operators need to be more careful when maintaining the `services` table in the DB. A stale record there may pin the RPC API version, which can block some functionality. Old service records can be removed with the "cinder-manage service remove" command.
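
The effect of a stale record can be illustrated with a toy model (the row layout and version numbers here are hypothetical; in Cinder the pin is derived from the real `services` table):

```python
# Hypothetical rows from the `services` table: (host, reported RPC version).
rows = [
    ("node1", "2.6"),
    ("node2", "2.6"),
    ("node3-decommissioned", "1.40"),  # stale record of a removed node
]

def min_pin(rows):
    # The pin is the lowest version any service record reports, so a
    # single stale row holds the whole deployment back.
    versions = [tuple(int(x) for x in v.split(".")) for _, v in rows]
    return ".".join(str(x) for x in min(versions))

print(min_pin(rows))  # the stale row pins the RPC API to 1.40

# Removing the stale record (what "cinder-manage service remove" does)
# lets the pin advance to the real minimum.
rows = [r for r in rows if r[0] != "node3-decommissioned"]
print(min_pin(rows))  # now 2.6
```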

Please feel free to hit me with any questions.

[1] http://docs.openstack.org/openstack-ops/content/ops_upgrades_upgrade_levels.html

Changed in cinder:
status: New → Triaged
Changed in openstack-manuals:
status: New → Confirmed
importance: Undecided → Medium
no longer affects: cinder
tags: added: ops-guide
removed: cinder doc
Changed in openstack-manuals:
assignee: nobody → Alexandra Settle (alexandra-settle)
milestone: none → ocata
tags: added: low-hanging-fruit
Changed in openstack-manuals:
assignee: Alexandra Settle (alexandra-settle) → nobody
Revision history for this message
Darren Chan (dazzachan) wrote :

Cinder rolling upgrade information is located in the developer documentation: http://docs.openstack.org/developer/cinder/upgrade.html

This link has been added to the Ops Guide.

Closing this bug.

Changed in openstack-manuals:
status: Confirmed → Won't Fix