Comment 3 for bug 1814595

The general steps are as follows; they are then repeated for several hours.

Step 1: Clear the local storage cache on compute-0
rm -rf /var/lib/nova/instances/_base/*
sync; echo 3 > /proc/sys/vm/drop_caches

Step 2: Create a flavor with 2 vCPUs, dedicated CPU policy, and local_image storage
nova --os-username 'admin' --os-password '<password>' --os-project-name admin --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-endpoint-type internalURL --os-region-name RegionOne flavor-create local_image auto 1024 2 2

Step 3: Tenant2 launches instance1
tenant2 boots the instance, e.g. from local_image, on compute-0 (collect startup-time info)
Send 'nova --os-username 'tenant2' --os-password '<password>' --os-project-name tenant2 --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-endpoint-type internalURL --os-region-name RegionOne boot --key-name keypair-tenant2 --flavor <flavorid> --image <tis-centos-guest image> --nic net-id=28348793-baef-4987-ae23-0baf02036819 --nic net-id=9267348a-c940-41e4-8a57-ce1046946f42 --nic net-id=3de977f7-94d2-45ff-907d-977a715c7782 tenant2-local_image-1 --poll'
Send 'ping -c 3 <ip of mgmt network for tenant2>'

Step 4: Tenant1 launches instance2 (the observer instance)
Send 'nova --os-username 'tenant1' --os-password '<password>' --os-project-name tenant1 --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-endpoint-type internalURL --os-region-name RegionOne boot --key-name keypair-tenant1 --flavor <flavorid> --image <tis-centos-guest image> --nic net-id=f6ac6f3d-45e2-4e1f-88ad-ac10bbe8ccf2 --nic net-id=0d3b246c-f0b5-4ec4-bbd4-c76eb39d1b9b --nic net-id=3de977f7-94d2-45ff-907d-977a715c7782 tenant1-observer-2 --poll'
Send 'ping -c 3 <ip of mgmt network for tenant1>'

Step 5: Collect the live migration KPI for the VM booted from local_image
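The live migration in Step 5 can be triggered with the standard nova CLI `live-migration` command. A minimal sketch for timing it, assuming admin credentials are already exported as OS_* environment variables and `tenant2-local_image-1` is the instance from Step 3 (the polling loop and KPI arithmetic are illustrative, not part of the original test):

```shell
# Record start time, trigger the live migration, then poll nova show
# until the instance leaves the MIGRATING state.
start=$(date +%s)
nova live-migration tenant2-local_image-1
while nova show tenant2-local_image-1 | grep -q MIGRATING; do
    sleep 1
done
end=$(date +%s)
echo "live migration KPI: $((end - start)) seconds"
```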
Step 6: Set up traffic between the pair instance1 and instance2

Step 7: Cold migrate the instance and confirm the cold migration (collect a KPI for each)
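The cold migration plus confirmation in Step 7 maps onto the standard `nova migrate` / `nova resize-confirm` pair. A sketch under the same assumed admin credentials (the wait loop is illustrative; a cold-migrated instance sits in VERIFY_RESIZE until confirmed):

```shell
# Cold-migrate the instance to another host, wait until it reaches
# VERIFY_RESIZE, then confirm to complete the migration.
nova migrate tenant2-local_image-1
until nova show tenant2-local_image-1 | grep -q VERIFY_RESIZE; do
    sleep 1
done
nova resize-confirm tenant2-local_image-1
```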
Step 8: Set up traffic between the instance pair (the instance launched in Step 3 and the observer instance launched in Step 4)

Step 9/10: Ping instance1 from the NatBox on a new thread (collect KPI for rebuild)
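One way to collect the rebuild KPI from the NatBox is to leave a timestamped ping running in the background while the rebuild proceeds, then measure the gap in the log. A sketch (the log path is a placeholder; `-D` is the iputils option that prefixes each reply with a Unix timestamp):

```shell
# Background ping with per-reply timestamps; the gap in the log
# brackets the window in which instance1 is unreachable.
ping -D -i 1 <instance1 mgmt ip> > /tmp/rebuild_ping.log 2>&1 &
PING_PID=$!
# ... perform the rebuild, then stop the ping:
kill $PING_PID
```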
Step 11: Rebuild instance1 again and ensure it is reachable afterwards (ping the tenant2 mgmt network)
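The rebuild itself uses the standard `nova rebuild` command. A hedged sketch, reusing the tenant2 credentials and guest-image placeholder from Step 3:

```shell
# Rebuild instance1 from the same guest image, wait for completion,
# then verify reachability over the tenant2 management network.
nova rebuild tenant2-local_image-1 <tis-centos-guest image> --poll
ping -c 3 <ip of mgmt network for tenant2>
```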

Teardown