libvirt: Use VIR_MIGRATE_TLS to get QEMU's native TLS support for migration and NBD

Bug #1798796 reported by Kashyap Chamarthy on 2018-10-19
Affects: OpenStack Compute (nova) · Assignee: Kashyap Chamarthy

Bug Description

Make Nova's libvirt driver use libvirt's VIR_MIGRATE_TLS, which will
transport a Nova instance's migration and NBD data streams via QEMU's
native TLS.


From a downstream bug description by Dan Berrangé:

    "The default QEMU migration transport runs a clear text TCP connection
    between the two QEMU servers. It is possible to tunnel the migration
    connection over libvirtd's secure connection but this imposes a
    significant performance penalty. It is also not possible to tunnel the
    NBD connection used for block migration at all.

    "As a step towards securing the management network we need to have Nova
    configure QEMU to use native TLS support on its migration and NBD data
    transports, without any tunnelling."

Minimum version requirements for this feature to work:

    QEMU >= 2.9
    libvirt >= 4.4.0
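As a rough illustration of what the driver change amounts to, here is a sketch (not Nova's actual code) of how the libvirt migration flags would be composed; the numeric values are mirrored from libvirt's `virDomainMigrateFlags` enum, and `VIR_MIGRATE_TLS` is the flag this bug asks Nova to set:

```python
# Sketch: composing libvirt migration flags for native-TLS migration.
# Values mirror libvirt's virDomainMigrateFlags enum; real code would
# use the constants from the libvirt-python module instead.
VIR_MIGRATE_LIVE = 1 << 0
VIR_MIGRATE_PEER2PEER = 1 << 1
VIR_MIGRATE_TUNNELLED = 1 << 2       # "tunnelled via libvirtd" (the old option)
VIR_MIGRATE_NON_SHARED_INC = 1 << 7  # block migration (disks over NBD)
VIR_MIGRATE_TLS = 1 << 16            # QEMU native TLS for all migration streams

def migration_flags(block_migration: bool, native_tls: bool) -> int:
    """Compose flags roughly the way the proposed driver change would."""
    flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER
    if block_migration:
        flags |= VIR_MIGRATE_NON_SHARED_INC
    if native_tls:
        # Native TLS covers guest RAM, device state *and* the NBD stream,
        # so tunnelling must not be set at the same time.
        flags |= VIR_MIGRATE_TLS
        flags &= ~VIR_MIGRATE_TUNNELLED
    return flags
```

The resulting flags value would then be handed to `virDomainMigrateToURI3()`; the key point is that `VIR_MIGRATE_TLS` replaces, rather than complements, `VIR_MIGRATE_TUNNELLED`.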

                * * *

Broader context and background here:
    RFC: Universal encryption on QEMU I/O channels

tags: added: libvirt
Changed in nova:
importance: Undecided → Medium
assignee: nobody → Kashyap Chamarthy (kashyapc)

Fix proposed to branch: master

Changed in nova:
status: New → In Progress
melanie witt (melwitt) wrote :

I think this is more accurately characterized as an enhancement, rather than a bug, as we discussed in #openstack-nova [1] today. We will use a blueprint + include docs in the patch OR use a spec to capture details and relevant reference material.


Changed in nova:
importance: Medium → Wishlist
Kashyap Chamarthy (kashyapc) wrote :

Yep, agreed on the Blueprint; here we go:

But a small comment: naming it as "Wishlist" can be misleading; probably you didn't intend it that way, and are just doing the necessary "bug metadata work". Because migrating disks over an encrypted channel is a strong requirement for many IT Orgs. FWIW, quoting from DanPB's RFC[*] on qemu-devel (from Feb 2015):

    "We have a broad goal in OpenStack that every network channel in use
    must have encryption and authentication capabilities. Currently all
    the communication channels between the end user and the cloud
    infrastructure edge servers are secured, but internally a number of
    the cloud infrastructure components are unsecured. For example, we
    recommend to tunnel migration via libvirt, though that excludes use
    of the NBD for block migration since libvirt can't currently tunnel
    that. [...]

    "Essentially the project considers that it is no longer sufficient
    to consider the private management LAN (on which the cloud
    infrastructure is deployed) to be fully trusted; it must be
    considered hostile."


Martin Schuppert (mschuppert) wrote :

While working on the TripleO integration for this, some results with the patch from t:

SRC compute:

2019-01-09 07:50:58.430+0000: 238166: info : qemuMonitorIOWrite:551 : QEMU_MONITOR_IO_WRITE: mon=0x7fa21001bc80 buf={"execute":"blockdev-add","arguments":{"driver":"nbd","server":{"type":"inet","host":"","port":"61153"},"export":"drive-virtio-disk0","tls-creds":"objlibvirt_migrate_tls0","node-name":"migration-vda-storage","read-only":false,"discard":"unmap"},"id":"libvirt-26"} len=309 ret=309 errno=0
2019-01-09 07:50:58.562+0000: 238166: debug : qemuMonitorJSONIOProcessLine:197 : Line [{"return": {}, "id": "libvirt-26"}]
2019-01-09 07:50:58.562+0000: 238166: info : qemuMonitorJSONIOProcessLine:217 : QEMU_MONITOR_RECV_REPLY: mon=0x7fa21001bc80 reply={"return": {}, "id": "libvirt-26"}
2019-01-09 07:50:58.569+0000: 238183: debug : qemuMonitorJSONCommandWithFd:310 : Receive command reply ret=0 rxObject=0x560388a64150
2019-01-09 07:50:58.569+0000: 238183: debug : qemuMonitorBlockdevAdd:4336 : props=0x7fa2080138b0 (node-name=migration-vda-format)
2019-01-09 07:50:58.569+0000: 238183: debug : qemuMonitorBlockdevAdd:4338 : mon:0x7fa21001bc80 vm:0x7fa204003440 json:1 fd:29
2019-01-09 07:50:58.569+0000: 238183: debug : qemuMonitorJSONCommandWithFd:305 : Send command '{"execute":"blockdev-add","arguments":{"node-name":"migration-vda-format","read-only":false,"driver":"raw","file":"migration-vda-storage"},"id":"libvirt-27"}' for write with FD -1
2019-01-09 07:50:58.569+0000: 238183: info : qemuMonitorSend:1083 : QEMU_MONITOR_SEND_MSG: mon=0x7fa21001bc80 msg={"execute":"blockdev-add","arguments":{"node-name":"migration-vda-format","read-only":false,"driver":"raw","file":"migration-vda-storage"},"id":"libvirt-27"} fd=-1
2019-01-09 07:50:58.569+0000: 238166: info : qemuMonitorIOWrite:551 : QEMU_MONITOR_IO_WRITE: mon=0x7fa21001bc80 buf={"execute":"blockdev-add","arguments":{"node-name":"migration-vda-format","read-only":false,"driver":"raw","file":"migration-vda-storage"},"id":"libvirt-27"} len=159 ret=159 errno=0

2019-01-09 07:50:58.576+0000: 238183: debug : qemuMonitorJSONCommandWithFd:305 : Send command '{"execute":"blockdev-mirror","arguments":{"device":"drive-virtio-disk0","target":"migration-vda-format","speed":9223372036853727232,"sync":"top"},"id":"libvirt-28"}' for write with FD -1
2019-01-09 07:50:58.576+0000: 238183: info : qemuMonitorSend:1083 : QEMU_MONITOR_SEND_MSG: mon=0x7fa21001bc80 msg={"execute":"blockdev-mirror","arguments":{"device":"drive-virtio-disk0","target":"migration-vda-format","speed":9223372036853727232,"sync":"top"},"id":"libvirt-28"} fd=-1
2019-01-09 07:50:58.576+0000: 238166: info : qemuMonitorIOWrite:551 : QEMU_MONITOR_IO_WRITE: mon=0x7fa21001bc80 buf={"execute":"blockdev-mirror","arguments":{"device":"drive-virtio-disk0","target":"migration-vda-format","speed":9223372036853727232,"sync":"top"},"id":"libvirt-28"} len=166 ret=166 errno=0
2019-01-09 07:50:58.580+0000: 238166: debug : qemuMonitorJSONIOProcessLine:197 : Line [{"timestamp": {"seconds": 1547020258, "microseconds": 579521}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id...

Kashyap Chamarthy (kashyapc) wrote :

Thanks for confirming, Martin.

For others reading comment #4 and wondering what it is: Martin just posted test evidence of the commands libvirt is sending to QEMU.

I have reviewed the log content, and it looks correct — as in: "native TLS" is being used correctly for live migration without shared storage (i.e. "live block migration").
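Concretely, what makes the log in comment #4 look correct is the `tls-creds` key in the `blockdev-add` QMP command: the NBD node is wired to a TLS credentials object. A minimal sketch of that check (the JSON below is abbreviated from the log above; `nbd_uses_tls` is just an illustrative helper, not libvirt or Nova code):

```python
import json

# Abbreviated from the qemuMonitorIOWrite line in comment #4.
qmp_cmd = json.loads(
    '{"execute": "blockdev-add", "arguments": {"driver": "nbd",'
    ' "server": {"type": "inet", "host": "", "port": "61153"},'
    ' "export": "drive-virtio-disk0",'
    ' "tls-creds": "objlibvirt_migrate_tls0",'
    ' "node-name": "migration-vda-storage"}, "id": "libvirt-26"}'
)

def nbd_uses_tls(cmd: dict) -> bool:
    """True if a blockdev-add of an NBD node references a TLS creds object."""
    args = cmd.get("arguments", {})
    return (cmd.get("execute") == "blockdev-add"
            and args.get("driver") == "nbd"
            and bool(args.get("tls-creds")))
```

Here `nbd_uses_tls(qmp_cmd)` is True; without native TLS the `blockdev-add` command would carry no `tls-creds` key at all.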

Compared to the test evidence I posted here:

Submitter: Zuul
Branch: master

commit 9160fe50987131feda9429c4e95d573e176916b6
Author: Kashyap Chamarthy <email address hidden>
Date: Wed Dec 12 16:51:52 2018 +0100

    libvirt: Support native TLS for migration and disks over NBD

    The encryption offered by Nova (via `live_migration_tunnelled`, i.e.
    "tunnelling via libvirtd") today secures only two migration streams:
    guest RAM and device state; but it does _not_ encrypt the NBD (Network
    Block Device) transport—which is used to migrate disks that are on
    non-shared storage setup (also called: "block migration"). Further, the
    "tunnelling via libvirtd" has a huge performance penalty and latency,
    because it burns more CPU and memory bandwidth due to increased number
    of data copies on both source and destination hosts.

    To solve this existing limitation, introduce a new config option
    `live_migration_with_native_tls`, which will take advantage of "native
    TLS" (i.e. TLS built into QEMU, and relevant support in libvirt). The
    native TLS transport will encrypt all migration streams, *including*
    disks that are not on shared storage — all of this without incurring the
    limitations of the "tunnelled via libvirtd" transport.

    Closes-Bug: #1798796
    Blueprint: support-qemu-native-tls-for-live-migration

    Change-Id: I78f5fef41b6fbf118880cc8aa4036d904626b342
    Signed-off-by: Kashyap Chamarthy <email address hidden>
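For reference, enabling the merged option looks like the following on both source and destination compute nodes (a sketch; it assumes the libvirt/QEMU TLS environment, i.e. a CA plus server and client certificates, is already configured on the hosts, which this bug does not cover):

```ini
# /etc/nova/nova.conf on each compute node (sketch)
[libvirt]
# Use QEMU's native TLS for the migration and NBD streams
# (mutually exclusive with live_migration_tunnelled).
live_migration_with_native_tls = true
```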

Changed in nova:
status: In Progress → Fix Released
Changed in nova:
importance: Wishlist → Medium

This issue was fixed in the openstack/nova release candidate.
