Comment 9 for bug 1672624

melanie witt (melwitt) wrote :

Confirmed that the default cleanup periodic task in the compute service will destroy the local domains of deleted instances when nova-compute is brought back up. The default action is "reap", which powers down and deletes the orphaned domains, and the periodic task runs every 30 minutes by default. It doesn't run immediately when nova-compute starts, so it can take up to 30 minutes after nova-compute comes back up for the local deleted instance to be reaped.

Afterward, the volume can be deleted from cinder.
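
For reference, the cleanup above is an ordinary oslo.service periodic task. The following is only a minimal sketch (the class and method body are illustrative, not nova's actual implementation) of how such a task is declared, which also shows why it doesn't fire at startup:

    from oslo_service import periodic_task

    class CleanupTasks(periodic_task.PeriodicTasks):
        # run_immediately defaults to False, so the first run happens only
        # after the spacing interval has elapsed following service startup.
        @periodic_task.periodic_task(spacing=1800)
        def _cleanup_running_deleted_instances(self, context):
            # Look up instances marked deleted in the database but still
            # present on the hypervisor, then apply the configured
            # running_deleted_instance_action to each of them.
            pass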

The config options that control cleanup of these local deleted instances are:

    cfg.StrOpt("running_deleted_instance_action",
        default="reap",
        choices=('noop', 'log', 'shutdown', 'reap'),
        help="""
The compute service periodically checks for instances that have been
deleted in the database but remain running on the compute node. The
above option enables action to be taken when such instances are
identified.

Possible values:

* reap: Powers down the instances and deletes them (default)
* log: Logs warning message about deletion of the resource
* shutdown: Powers down instances and marks them as non-
  bootable which can be later used for debugging/analysis
* noop: Takes no action
"""),

    cfg.IntOpt("running_deleted_instance_poll_interval",
        default=1800,
        help="""
Time interval in seconds to wait between runs for the clean up action.
If set to 0, above check will be disabled. If
"running_deleted_instance_action" is set to "log" or "reap", a value
greater than 0 must be set.

Possible values:

* Any positive integer in seconds enables the option.
* 0: Disables the option.
* 1800: Default value.
"""),

    cfg.IntOpt("running_deleted_instance_timeout",
        default=0,
        help="""
Time interval in seconds to wait for the instances that have
been marked as deleted in database to be eligible for cleanup.

Possible values:

* Any positive integer in seconds (default is 0).
"""),
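
For example, to have local deleted instances reaped sooner after nova-compute comes back, these options can be tuned on the compute node (illustrative values; the options are assumed to live in the [DEFAULT] section of nova.conf):

    [DEFAULT]
    # Power off and delete domains whose instances are already deleted in the DB.
    running_deleted_instance_action = reap
    # Check every 5 minutes instead of the default 30 minutes (1800 seconds).
    running_deleted_instance_poll_interval = 300
    # Deleted instances become eligible for cleanup immediately (the default).
    running_deleted_instance_timeout = 0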

n-cpu.log:

2017-03-15 16:54:44.939 DEBUG oslo_service.periodic_task [req-24c2664b-fe22-44a8-8e72-0b809adf50f1 None None] Running periodic task ComputeManager._cleanup_running_deleted_instances from (pid=14524) run_periodic_tasks /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215
2017-03-15 16:54:44.947 DEBUG oslo_messaging._drivers.amqpdriver [req-24c2664b-fe22-44a8-8e72-0b809adf50f1 None None] CALL msg_id: b7adb6d039274979a8c970f83bbef78f exchange 'nova' topic 'conductor' from (pid=14524) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:442
2017-03-15 16:54:44.991 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: b7adb6d039274979a8c970f83bbef78f from (pid=14524) __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:299
2017-03-15 16:54:44.995 INFO nova.compute.manager [req-24c2664b-fe22-44a8-8e72-0b809adf50f1 None None] [instance: 67244c6e-7e04-4bf1-8dc2-06d6dcfb9a89] Destroying instance with name label 'instance-00000002' which is marked as DELETED but still present on host.
2017-03-15 16:54:44.996 DEBUG oslo_messaging._drivers.amqpdriver [req-24c2664b-fe22-44a8-8e72-0b809adf50f1 None None] CALL msg_id: 8094e4320a41471b9e8d928c1fd23ef8 exchange 'nova' topic 'conductor' from (pid=14524) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:442
2017-03-15 16:54:45.013 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 8094e4320a41471b9e8d928c1fd23ef8 from (pid=14524) __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:299
2017-03-15 16:54:45.015 DEBUG oslo_concurrency.lockutils [req-24c2664b-fe22-44a8-8e72-0b809adf50f1 None None] Lock "67244c6e-7e04-4bf1-8dc2-06d6dcfb9a89-events" acquired by "nova.compute.manager._clear_events" :: waited 0.000s from (pid=14524) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2017-03-15 16:54:45.016 DEBUG oslo_concurrency.lockutils [req-24c2664b-fe22-44a8-8e72-0b809adf50f1 None None] Lock "67244c6e-7e04-4bf1-8dc2-06d6dcfb9a89-events" released by "nova.compute.manager._clear_events" :: held 0.001s from (pid=14524) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2017-03-15 16:54:45.017 INFO nova.compute.manager [req-24c2664b-fe22-44a8-8e72-0b809adf50f1 None None] [instance: 67244c6e-7e04-4bf1-8dc2-06d6dcfb9a89] Terminating instance
2017-03-15 16:54:45.017 DEBUG nova.objects.instance [req-24c2664b-fe22-44a8-8e72-0b809adf50f1 None None] Lazy-loading 'info_cache' on Instance uuid 67244c6e-7e04-4bf1-8dc2-06d6dcfb9a89 from (pid=14524) obj_load_attr /opt/stack/nova/nova/objects/instance.py:1034
2017-03-15 16:54:45.019 DEBUG oslo_messaging._drivers.amqpdriver [req-24c2664b-fe22-44a8-8e72-0b809adf50f1 None None] CALL msg_id: 19a63225ce49479c8583fef3b219c5c8 exchange 'nova' topic 'conductor' from (pid=14524) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:442
2017-03-15 16:54:45.073 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 19a63225ce49479c8583fef3b219c5c8 from (pid=14524) __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:299
2017-03-15 16:54:45.077 DEBUG nova.compute.manager [req-24c2664b-fe22-44a8-8e72-0b809adf50f1 None None] [instance: 67244c6e-7e04-4bf1-8dc2-06d6dcfb9a89] Start destroying the instance on the hypervisor. from (pid=14524) _shutdown_instance /opt/stack/nova/nova/compute/manager.py:2242
2017-03-15 16:54:45.546 INFO nova.virt.libvirt.driver [-] [instance: 67244c6e-7e04-4bf1-8dc2-06d6dcfb9a89] Instance destroyed successfully.

$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| f19c27c5-596f-4d30-9a3f-bd7df7b43e12 | available |      | 1    | ceph        | true     |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

$ cinder delete f19c27c5-596f-4d30-9a3f-bd7df7b43e12
Request to delete volume f19c27c5-596f-4d30-9a3f-bd7df7b43e12 has been accepted.

$ cinder list
+----+--------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+----+--------+------+------+-------------+----------+-------------+
+----+--------+------+------+-------------+----------+-------------+