Activity log for bug #1657097

Date Who What changed Old value New value Message
2017-01-17 11:59:43 xianming mao bug added bug
2017-01-17 12:00:02 xianming mao cinder: assignee xianming mao (mars0618)
2017-01-17 12:01:59 xianming mao description Initial description:

    When a live migration of a volume fails, the volume status is 'in-use' and we only want to clear the migration status by running the following CLI:

        cinder reset-state [--reset-migration-status] <volume> [<volume> ...]

    But that command not only clears the migration status, it also changes the volume status to 'available'. This is not what we expected, so I am reporting this bug so it can be fixed.

    This change adds that the status is reset to 'available' even though the volume is still attached to a server. The accompanying 'cinder show' output is identical to the property table shown under the final description below.
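For context, 'cinder reset-state' maps to the Block Storage "os-reset_status" volume admin action, which carries separate 'status', 'attach_status' and 'migration_status' fields. A minimal sketch of that action follows; the endpoint and token variables are placeholders, and both the 'none' value for migration_status and the acceptance of a body containing only that field are assumptions about the deployed API, not something stated in this report:

    # Hypothetical direct call to the os-reset_status admin action for the
    # volume from this report; $OS_TOKEN and $CINDER_ENDPOINT are placeholders.
    curl -X POST \
      -H "X-Auth-Token: $OS_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"os-reset_status": {"migration_status": "none"}}' \
      "$CINDER_ENDPOINT/volumes/91cc6b0b-2bc8-437c-bca4-b654de620d47/action"

The report's complaint is that the CLI form ends up resetting the volume status as well, not just the migration status.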
2017-01-17 12:11:26 xianming mao description Minor wording change only; the description text and property table are otherwise unchanged.
2017-01-17 12:12:01 xianming mao description Added that the migration status was 'error' after the failed live migration; otherwise unchanged.
2017-01-17 12:12:50 xianming mao description Minor wording change only; otherwise unchanged.
2017-01-17 12:16:22 xianming mao cinder: status New In Progress
2017-01-19 02:20:38 xianming mao description Added a second paragraph: the volume's real status is 'in-use' because it is still attached to a server, yet after the reset-state command its status is reported as 'available'; property table unchanged.
2017-01-19 02:20:52 xianming mao description Minor wording change only; otherwise unchanged.
2017-01-19 02:28:17 xianming mao description Final description:

    When a live migration of a volume fails, the volume status is 'in-use' and its migration status is 'error'. We only want to clear the migration status by running the following CLI:

        cinder reset-state [--reset-migration-status] <volume> [<volume> ...]

    But that command not only clears the migration status, it also changes the volume status to 'available' even though the volume is still attached to a server. This is not what we expected, so I am reporting this bug so it can be fixed.

    As shown below, the volume's real status should be 'in-use', because it was attached to a server before we ran:

        cinder reset-state [--reset-migration-status] <volume>

    After that command completed, however, the volume status was changed to 'available'. That is wrong, because the volume has not been detached from the server yet.

+---------------------------------------+--------------------------------------------------------------+
| Property                              | Value                                                        |
+---------------------------------------+--------------------------------------------------------------+
| attachments                           | [{u'server_id': u'25e82035-9238-4413-8791-6add0455ab3d',     |
|                                       |   u'attachment_id': u'0f415605-9be2-4b06-8b31-ab3170503bc9', |
|                                       |   u'host_name': None,                                        |
|                                       |   u'volume_id': u'91cc6b0b-2bc8-437c-bca4-b654de620d47',     |
|                                       |   u'device': u'/dev/vdb',                                    |
|                                       |   u'id': u'91cc6b0b-2bc8-437c-bca4-b654de620d47'}]           |
| availability_zone                     | nova                                                         |
| bootable                              | false                                                        |
| consistencygroup_id                   | None                                                         |
| created_at                            | 2017-01-16T09:11:49.000000                                   |
| description                           | None                                                         |
| encrypted                             | False                                                        |
| id                                    | 91cc6b0b-2bc8-437c-bca4-b654de620d47                         |
| metadata                              | {u'readonly': u'False', u'attached_mode': u'rw'}             |
| migration_status                      | None                                                         |
| multiattach                           | False                                                        |
| name                                  | vol                                                          |
| os-vol-host-attr:host                 | node-2@lvm#lvm                                               |
| os-vol-mig-status-attr:migstat        | None                                                         |
| os-vol-mig-status-attr:name_id        | None                                                         |
| os-vol-tenant-attr:tenant_id          | 813cb32a89d540199e412dfcc1319576                             |
| os-volume-replication:driver_data     | None                                                         |
| os-volume-replication:extended_status | None                                                         |
| replication_status                    | disabled                                                     |
| size                                  | 1                                                            |
| snapshot_id                           | None                                                         |
| source_volid                          | None                                                         |
| status                                | available                                                    |
| user_id                               | 0af9a7fb75934600a1d24185eb8acda6                             |
| volume_type                           | lvm                                                          |
+---------------------------------------+--------------------------------------------------------------+
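Reproduction sketch of the behaviour described above, using only the commands from the report and the volume ID from the property table; the inline comments restate the reporter's observations:

    # Before the reset: the volume is attached to server
    # 25e82035-9238-4413-8791-6add0455ab3d, status 'in-use', migration status 'error'.
    cinder show 91cc6b0b-2bc8-437c-bca4-b654de620d47

    # Intent: clear only the migration status.
    cinder reset-state --reset-migration-status 91cc6b0b-2bc8-437c-bca4-b654de620d47

    # After the reset: the migration status is cleared, but the volume status now
    # reads 'available' even though the volume is still attached.
    cinder show 91cc6b0b-2bc8-437c-bca4-b654de620d47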
2017-09-26 22:49:33 Sean McGinnis cinder: status In Progress New
2017-09-26 22:49:33 Sean McGinnis cinder: assignee xianming mao (mars0618)