Comment 16 for bug 1477475

Dmitriy Kruglov (dkruglov) wrote :

The initial issue is still reproducible intermittently when the procedure described in comment #14 is followed (stopping the VM, without migration):
1 Deploy a cluster (3 controllers + 1 compute, Neutron VLAN)
2 Create an OpenStack instance and assign security group rules that allow network connectivity to it (e.g. ICMP)
3 Verify that the instance responds to ping
4 Stop the instance
     $ nova stop <INSTANCE_ID>
5 Disable the nova-compute service on the node that hosts the instance
     $ nova service-disable <HOSTNAME> nova-compute
6 Enable partition preservation (excluding the OS partition) on the compute node that hosts the test instance:
    - download disk.yaml:
         $ fuel node --node <NODE_ID> --disk --download
    - edit the downloaded disk.yaml, setting the 'keep_data' flag to true for every partition except the OS partition
    - upload the modified disk.yaml:
         $ fuel node --node <NODE_ID> --disk --upload
7 Reinstall the compute node:
    $ fuel node --node-id <NODE_ID> --provision
    $ fuel node --node-id <NODE_ID> --deploy
8 Re-enable the nova-compute service
     $ nova service-enable <HOSTNAME> nova-compute
9 Start the instance
     $ nova start <INSTANCE_ID>
10 Ping the instance
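
The disk.yaml edit in step 6 can be sketched programmatically. The field names below (disks with a "volumes" list, a "name" per volume, and a "keep_data" flag) are an assumption modeled on Fuel's downloaded disk layout file; adjust them to match the actual file, which would typically be loaded and dumped with a YAML parser such as yaml.safe_load/yaml.safe_dump.

```python
# Sketch of step 6: mark every partition except the OS partition for
# preservation by setting its 'keep_data' flag. The disk/volume schema
# here is hypothetical and should be checked against the real disk.yaml.

def preserve_non_os_partitions(disks):
    """Set keep_data on each volume: True unless the volume is 'os'."""
    for disk in disks:
        for volume in disk.get("volumes", []):
            volume["keep_data"] = volume.get("name") != "os"
    return disks

# Minimal, hypothetical example layout (not taken from a real node):
disks = [
    {"id": "sda", "volumes": [
        {"name": "os", "size": 50},
        {"name": "vm", "size": 200},
    ]},
]
preserve_non_os_partitions(disks)
```

After this transformation, the OS partition keeps keep_data false (so it is reinstalled), while all other partitions are preserved across the reprovisioning in step 7.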

Expected result: the instance is accessible via ICMP
Actual result: the instance is up and running, but does not respond to ICMP

MOS 8.0, build 548.