Bypass the dirty BDM entry no matter how it is produced

Bug #1681998 reported by Hua Zhang
Affects: OpenStack Compute (nova)
Status: In Progress
Importance: Medium
Assigned to: Unassigned
Milestone: None

Bug Description

Sometimes a dirty BDM entry (row 1 below) can be seen in the database, i.e. multiple BDMs with the same volume_id and instance_uuid.

mysql> select * from block_device_mapping where volume_id='153bcab4-1f88-440c-9782-3c661a7502a8' \G
*************************** 1. row ***************************
           created_at: 2017-02-02 02:28:45
           updated_at: NULL
           deleted_at: NULL
                   id: 9754
          device_name: /dev/vdb
delete_on_termination: 0
          snapshot_id: NULL
            volume_id: 153bcab4-1f88-440c-9782-3c661a7502a8
          volume_size: NULL
            no_device: NULL
      connection_info: NULL
        instance_uuid: b52f9264-d8b3-406a-bf9b-d7d7471b13fc
              deleted: 0
          source_type: volume
     destination_type: volume
         guest_format: NULL
          device_type: NULL
             disk_bus: NULL
           boot_index: NULL
             image_id: NULL
*************************** 2. row ***************************
           created_at: 2017-02-02 02:29:31
           updated_at: 2017-02-27 10:59:42
           deleted_at: NULL
                   id: 9757
          device_name: /dev/vdc
delete_on_termination: 0
          snapshot_id: NULL
            volume_id: 153bcab4-1f88-440c-9782-3c661a7502a8
          volume_size: NULL
            no_device: NULL
      connection_info: {"driver_volume_type": "rbd", "serial": "153bcab4-1f88-440c-9782-3c661a7502a8", "data": {"secret_type": "ceph", "name": "cinder-ceph/volume-153bcab4-1f88-440c-9782-3c661a7502a8", "secret_uuid": null, "qos_specs": null, "hosts": ["10.7.1.202", "10.7.1.203", "10.7.1.204"], "auth_enabled": true, "access_mode": "rw", "auth_username": "cinder-ceph", "ports": ["6789", "6789", "6789"]}}
        instance_uuid: b52f9264-d8b3-406a-bf9b-d7d7471b13fc
              deleted: 0
          source_type: volume
     destination_type: volume
         guest_format: NULL
          device_type: disk
             disk_bus: virtio
           boot_index: NULL
             image_id: NULL

This then causes the volume detach to fail with the following error, since connection_info of row 1 is NULL.

2017-03-23 13:28:05.360 1865733 TRACE oslo_messaging.rpc.dispatcher self._detach_volume(context, instance, bdm)
2017-03-23 13:28:05.360 1865733 TRACE oslo_messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4801, in _detach_volume
2017-03-23 13:28:05.360 1865733 TRACE oslo_messaging.rpc.dispatcher connection_info = jsonutils.loads(bdm.connection_info)
2017-03-23 13:28:05.360 1865733 TRACE oslo_messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py", line 215, in loads
2017-03-23 13:28:05.360 1865733 TRACE oslo_messaging.rpc.dispatcher return json.loads(encodeutils.safe_decode(s, encoding), **kwargs)
2017-03-23 13:28:05.360 1865733 TRACE oslo_messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo_utils/encodeutils.py", line 33, in safe_decode
2017-03-23 13:28:05.360 1865733 TRACE oslo_messaging.rpc.dispatcher raise TypeError("%s can't be decoded" % type(text))
2017-03-23 13:28:05.360 1865733 TRACE oslo_messaging.rpc.dispatcher TypeError: <type 'NoneType'> can't be decoded
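
For illustration, here is a minimal sketch (not taken from the bug report) of the failure mode: bdm.connection_info is NULL, i.e. None in Python, so decoding it as JSON raises the TypeError shown above.

from oslo_serialization import jsonutils

connection_info = None            # what the dirty row 1 carries in connection_info
jsonutils.loads(connection_info)  # raises TypeError: <type 'NoneType'> can't be decoded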

This kind of dirty data can be produced when the call to volume_bdm.destroy() in _attach_volume() [1] happens to fail. I think these conditions may cause it to happen:
1, losing the database connection during the volume_bdm.destroy() operation
2, losing the MQ connection, or an RPC timeout, during the volume_bdm.destroy() operation

If you lose the database during any operation, things are going to be bad, so in general I'm not sure how realistic guarding against that case is. Losing the MQ connection or an RPC timeout is probably more realistic. It seems the fix [2] is trying to solve point 2.
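
To make the failure window concrete, here is a rough sketch (hypothetical names, not the actual nova code referenced in [1]) of how the cleanup path can leave the dirty row behind:

# The BDM row is created before the attach work starts, with
# connection_info still NULL. If the attach fails and the cleanup call
# volume_bdm.destroy() also fails (DB loss, MQ loss, RPC timeout), the
# half-initialized row survives in the database.
def attach_volume_sketch(context, instance, volume_id, bdm_cls):
    volume_bdm = bdm_cls.create(context, instance, volume_id)
    try:
        do_attach(context, instance, volume_bdm)   # hypothetical attach step
    except Exception:
        volume_bdm.destroy()                       # the cleanup line referenced as [1]
        raise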

However, I'm wondering whether we can bypass the dirty BDM entry based on the condition that connection_info is NULL, no matter how it is produced.

[1] https://github.com/openstack/nova/blob/master/nova/compute/api.py#L3724
[2] https://review.openstack.org/#/c/290793

Lee Yarwood (lyarwood) wrote :

Yeah, we did attempt to fix this with [2] but couldn't find a reasonable way to handle >1 BDM with the same instance_uuid and volume_id.

I don't think connection_info being NULL is the correct way to avoid this, as all newly created BDMs would meet this criterion, making it impossible for us to find the BDM later when calling initialize_connection etc.

Should we mark this as a duplicate of bug#1427060 and continue there?

Changed in nova:
status: New → Confirmed
importance: Undecided → Medium
Hua Zhang (zhhuabj) wrote :

Hi Lee,

Thank you for your response. To illustrate my intention to bypass the dirty BDM entry based on the condition that connection_info is NULL, I drafted the patch below:

diff --git a/nova/compute/manager.py b/nova/compute/manager.py
index d6efd18..efbb225 100644
--- a/nova/compute/manager.py
+++ b/nova/compute/manager.py
@@ -4937,10 +4937,10 @@ class ComputeManager(manager.Manager):
                                                     wr_req, wr_bytes,
                                                     instance,
                                                     update_totals=True)
-
-        self._driver_detach_volume(context, instance, bdm)
-        connector = self.driver.get_volume_connector(instance)
-        self.volume_api.terminate_connection(context, volume_id, connector)
+        if bdm.connection_info:
+            self._driver_detach_volume(context, instance, bdm)
+            connector = self.driver.get_volume_connector(instance)
+            self.volume_api.terminate_connection(context, volume_id, connector)

         if destroy_bdm:
             bdm.destroy()

When bdm.connection_info is NULL (meaning it is dirty data), we bypass _driver_detach_volume() and terminate_connection() and just invoke bdm.destroy() to delete the dirty data. So:

1, If there are N (N > 1) BDMs with the same instance_uuid and volume_id, we can run the 'nova detach' operation N times to delete the N dirty BDMs.

2, Because the above logic is only added to the detach operation, it will not affect newly created BDMs, even though they also meet this criterion.

If you think I'm on the right track, I can spend time testing the above patch and then submit it for code review. Hope to hear from you, thanks.
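
As a minimal sketch (hypothetical helper names, not the actual nova code) of the behaviour the patch aims for, each 'nova detach' call would handle one BDM like this:

# A dirty BDM (connection_info is NULL) skips the hypervisor/Cinder
# teardown but is still destroyed, so repeating the detach clears the
# stale rows one per call; a normal BDM keeps the existing full path.
def detach_one_bdm(bdm, driver_detach, terminate_connection, destroy_bdm=True):
    if bdm.connection_info:
        driver_detach(bdm)
        terminate_connection(bdm.volume_id)
    if destroy_bdm:
        bdm.destroy()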

Sean Dague (sdague)
Changed in nova:
status: Confirmed → In Progress