Live migration of guests created with config-drive using non-standard "migration" binding fails
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Nova Cloud Controller Charm | Invalid | Undecided | Unassigned |
OpenStack Nova Compute Charm | Fix Committed | Undecided | Unassigned |
Bug Description
[Environment]
Focal/Ussuri, latest stable charms
juju show-application nova-compute-dpdk: https:/
[Description]
There are multiple Nova applications in this deployment (three, specifically: "generic", "sriov" and "dpdk" landscapes). Live migration worked without problems on the generic hosts; however, when trying to live migrate a machine on the DPDK hosts, we got the following error in the Nova log:
in pre_live_migration
    migrate_data = self.driver.
File "/usr/lib/
    self._
File "/usr/lib/
    self.driver.
File "/usr/lib/
    processutils.
File "/usr/lib/
    raise ProcessExecutio
"oslo_concurren
Command: scp -r u0400s2entcomp0
Exit code: 1
Stdout: ''
Stderr: 'Host key verification failed.\r\n'
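The failing command comes out of Nova's pre_live_migration path: when the instance was built with a config drive, its instance directory is copied to the destination hypervisor over scp before the migration starts, addressed by FQDN. A hedged sketch of how such an invocation looks (the function name and paths are illustrative, not Nova's actual API):

```python
def scp_command(dest_host: str, instance_dir: str) -> list[str]:
    """Build an scp invocation in the style of the traceback above.

    dest_host is the remote hypervisor's FQDN; because scp runs
    non-interactively here, an unknown host key aborts with exit
    code 1 ('Host key verification failed') instead of prompting.
    """
    return ["scp", "-r", instance_dir, f"{dest_host}:{instance_dir}"]

# Illustrative instance path, hostname from the report:
cmd = scp_command("u0400s2entcomp02", "/var/lib/nova/instances/uuid")
# -> ['scp', '-r', '/var/lib/nova/instances/uuid',
#     'u0400s2entcomp02:/var/lib/nova/instances/uuid']
```

Note that the remote end is the bare FQDN, not the address of any particular binding; which network the copy traverses is decided entirely by DNS.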
[Analysis]
This issue occurs because Nova makes an outgoing SSH connection to the remote hypervisor using its FQDN, which resolves to a different IP address than the ingress-address of the "migration" binding.
https:/
if (configdrive.
This scenario applies to any config-drive guest.
j run --unit nova-compute-
bind-addresses:
- mac-address: f4:a4:d6:f3:68:a1
interface-name: bond1.811
addresses:
- hostname: ""
address: 10.35.174.1 ##### looks correct
cidr: 10.35.174.0/25
macaddress: f4:a4:d6:f3:68:a1
interfacename: bond1.811
egress-subnets:
- 10.35.174.1/32
ingress-addresses:
- 10.35.174.1
$ j run --unit nova-compute-dpdk/0 'relation-ids cloud-compute'
cloud-compute:204
$ j run --unit nova-compute-
availability_zone: default
egress-subnets: 10.35.174.1/32
hostname: u0400s2entcomp02
ingress-address: 10.35.174.1 ### looks also good (as expected)
But SSH from the hypervisor to the remote host's FQDN prompts for an unknown host key, because the FQDN resolves to the internal IP rather than the binding's ingress-address:
ubuntu@
nova@u0400s2ent
nova@u0400s2ent
/var/lib/nova
nova@u0400s2ent
|1|uwlHwgyd2RN3
nova@u0400s2ent
The authenticity of host 'u0400s2entcomp02 (10.35.81.249)' can't be established.
# same host, but using internal IP
nova@u0400s2ent
nova
u0400s2entcomp02
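The session above condenses into a small model: OpenSSH matches known_hosts entries against the literal name or address passed to ssh/scp, with no cross-mapping between a hostname and what it resolves to. A sketch using the names and addresses from this report (DNS is faked; real known_hosts entries may additionally be hashed):

```python
# known_hosts entries generated from the "migration" binding's
# ingress-address:
known_hosts = {"10.35.174.1"}

# What the FQDN actually resolves to on this host (internal network):
fake_dns = {"u0400s2entcomp02": "10.35.81.249"}

def host_key_known(target: str) -> bool:
    """OpenSSH keys its known_hosts lookup on the literal connect
    target, not on the resolved address, so an entry for the binding
    IP does not cover the hostname (or vice versa)."""
    return target in known_hosts

assert host_key_known("10.35.174.1")           # direct IP: key found
assert not host_key_known("u0400s2entcomp02")  # FQDN: verification fails
# ...and even the address the FQDN resolves to was never scanned:
assert fake_dns["u0400s2entcomp02"] not in known_hosts
```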
[Expected result]
Config-drive guests should migrate properly, honouring the binding (or using the IP address instead of the hypervisor FQDN).
[Available workarounds]
1. Suppress the warning by adding the following to the nova user's SSH client config under /var/lib/:
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
2. Use the oam-space for the migration binding (so the known_hosts file gets generated properly).
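A middle ground between the two workarounds would be to make the generated known_hosts cover every literal target Nova might pass to ssh/scp. A hypothetical helper (names illustrative, not part of the charm) that lists the targets such a charm would need to key-scan:

```python
def known_hosts_targets(fqdn: str, addresses: list[str]) -> list[str]:
    """Each literal connect target needs its own known_hosts entry,
    so include the FQDN alongside all candidate addresses,
    deduplicated in order."""
    seen: set[str] = set()
    out: list[str] = []
    for target in [fqdn, *addresses]:
        if target not in seen:
            seen.add(target)
            out.append(target)
    return out

# Binding ingress-address plus the internal address the FQDN resolves to:
print(known_hosts_targets("u0400s2entcomp02",
                          ["10.35.174.1", "10.35.81.249"]))
# -> ['u0400s2entcomp02', '10.35.174.1', '10.35.81.249']
```

Each resulting target could then be fed to e.g. `ssh-keyscan -H` when building known_hosts.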
tags: removed: field-high
Changed in charm-nova-cloud-controller:
status: New → Invalid
+ field-high, since that impacts ongoing delivery (and this functionality is expected to work).