emulator_threads_policy=isolate doesn't work with multi numa node

Bug #1746674 reported by Tetsuro Nakamura
Affects                   Status         Importance  Assigned to       Milestone
OpenStack Compute (nova)  Fix Released   Medium      Tetsuro Nakamura
  Pike                    Fix Committed  Medium      Tetsuro Nakamura
  Queens                  Fix Committed  Medium      Tetsuro Nakamura

Bug Description

Description
===========

As described in test_multi_nodes_isolate() in https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/test_hardware.py#L3006-L3024,
the numa_fit_instance_to_host() function returns None for cpuset_reserved on cells with id > 0.

--------
    def test_multi_nodes_isolate(self):
        host_topo = self._host_topology()
        inst_topo = objects.InstanceNUMATopology(
            emulator_threads_policy=(
                fields.CPUEmulatorThreadsPolicy.ISOLATE),
            cells=[objects.InstanceNUMACell(
                id=0,
                cpuset=set([0]), memory=2048,
                cpu_policy=fields.CPUAllocationPolicy.DEDICATED),
            objects.InstanceNUMACell(
                id=1,
                cpuset=set([1]), memory=2048,
                cpu_policy=fields.CPUAllocationPolicy.DEDICATED)])

        inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo)
        self.assertEqual({0: 0}, inst_topo.cells[0].cpu_pinning)
        self.assertEqual(set([1]), inst_topo.cells[0].cpuset_reserved)
        self.assertEqual({1: 2}, inst_topo.cells[1].cpu_pinning)
        self.assertIsNone(inst_topo.cells[1].cpuset_reserved)
--------

However, the libvirt driver is tested with a non-None cpuset_reserved value in
https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/test_driver.py#L3052.

--------
    def test_get_guest_config_numa_host_instance_isolated_emulator_threads(
            self):
        instance_topology = objects.InstanceNUMATopology(
            emulator_threads_policy=(
                fields.CPUEmulatorThreadsPolicy.ISOLATE),
            cells=[
                objects.InstanceNUMACell(
                    id=0, cpuset=set([0, 1]),
                    memory=1024, pagesize=2048,
                    cpu_policy=fields.CPUAllocationPolicy.DEDICATED,
                    cpu_pinning={0: 4, 1: 5},
                    cpuset_reserved=set([6])),
                objects.InstanceNUMACell(
                    id=1, cpuset=set([2, 3]),
                    memory=1024, pagesize=2048,
                    cpu_policy=fields.CPUAllocationPolicy.DEDICATED,
                    cpu_pinning={2: 7, 3: 8},
                    cpuset_reserved=set([]))]) # <- this should be None!!

...(snip)...
--------

In practice, this causes errors when deploying VMs that span multiple NUMA nodes with `emulator_threads_policy=isolate`.
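The failure mode is easy to see in isolation: the libvirt driver iterates over each cell's cpuset_reserved while building the guest configuration, and iterating None raises exactly the TypeError seen in the logs below. A minimal sketch with illustrative names, not the actual nova code:

```python
# Minimal sketch of the failure mode (illustrative data shapes, not
# the real InstanceNUMACell objects): only the first cell carries the
# reserved pCPU set; later cells carry None instead of an empty set.
cells = [
    {"id": 0, "cpuset_reserved": {1}},   # emulator-thread pCPU reserved here
    {"id": 1, "cpuset_reserved": None},  # cells with id > 0 get None
]

emulator_pcpus = []
try:
    for cell in cells:
        for pcpu in cell["cpuset_reserved"]:  # iterating None raises here
            emulator_pcpus.append(pcpu)
except TypeError as exc:
    print(exc)  # prints: 'NoneType' object is not iterable
```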

Environment & Steps to reproduce
================================

1. Use devstack to build an all-in-one OpenStack with the libvirt/KVM driver on a VM whose `lscpu` output looks like this:

--------
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
NUMA node(s): 2
NUMA node0 CPU(s): 0-3
NUMA node1 CPU(s): 4-7
--------

2. Try to build a VM with multiple NUMA nodes and emulator_threads_policy=isolate.

--------
$ openstack flavor create c2r1024d1 --id 6 --ram 1024 --disk 1 --vcpu 2
$ openstack flavor set c2r1024d1 --property hw:cpu_policy=dedicated --property hw:emulator_threads_policy=isolate --property hw:numa_nodes=2
$ openstack server create test1 --image cirros-0.3.5-x86_64-disk --flavor c2r1024d1 --network private
--------
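Given the `lscpu` topology above, host node 0 owns pCPUs 0-3 and node 1 owns pCPUs 4-7. With hw:numa_nodes=2 and two dedicated vCPUs, each instance cell lands on one host node, and isolate reserves one extra pCPU only on the node hosting the first cell, which matches the scheduler log lines further down. A small sanity sketch of that placement (assumed data shapes, mirroring the log output):

```python
# Host topology from the lscpu output (assumption: CPU ids map to
# NUMA nodes exactly as lscpu reports).
host_numa = {0: {0, 1, 2, 3}, 1: {4, 5, 6, 7}}

# Placement reported by the scheduler logs: vCPU 0 -> pCPU 0 plus
# reserved pCPU 1 on node 0; vCPU 1 -> pCPU 4 on node 1, nothing
# reserved there.
cell0_pinning = {0: 0}
cell0_reserved = {1}
cell1_pinning = {1: 4}

# Each cell's pinned and reserved pCPUs must fall within its host node.
assert set(cell0_pinning.values()) | cell0_reserved <= host_numa[0]
assert set(cell1_pinning.values()) <= host_numa[1]
```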

Expected & Actual result
========================

Expected: the VM is built without error.
Actual: the VM goes into an ERROR state with the following message.

--------
$ openstack server show test1
...(snip)...
| fault | {u'message': u'Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance e8d7537f-9140-4ae6-bb3f-b98458a862ac.', u'code': 500, u'details': u' File "/opt/stack/nova/nova/conductor/manager.py", line 578, in build_instances\\n raise exception.MaxRetriesExceeded(reason=msg)\\n', u'created': u'2018-02-07T09:48:43Z'} |
...(snip)...
--------

Logs & Configs
==============

--------
$ sudo journalctl -a --unit devstack@n-* | tail -1000 | grep -e ERROR -e INFO
--------

...(snip)...

Feb 07 04:48:38 ubuntu01 nova-scheduler[5195]: INFO nova.virt.hardware [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] Computed NUMA topology CPU pinning: usable pCPUs: [[0], [1], [2], [3]], vCPUs mapping: [(0, 0)]

Feb 07 04:48:38 ubuntu01 nova-scheduler[5195]: INFO nova.virt.hardware [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] Computed NUMA topology reserved pCPUs: usable pCPUs: [[], [1], [2], [3]], reserved pCPUs: set([1])

Feb 07 04:48:38 ubuntu01 nova-scheduler[5195]: INFO nova.virt.hardware [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] Computed NUMA topology CPU pinning: usable pCPUs: [[4], [5], [6], [7]], vCPUs mapping: [(1, 4)]

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.compute.claims [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Attempting claim on node ubuntu01: memory 1024 MB, disk 1 GB, vcpus 2 CPU

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.compute.claims [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Total memory: 5824 MB, used: 512.00 MB

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.compute.claims [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] memory limit not specified, defaulting to unlimited

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.compute.claims [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Total disk: 90 GB, used: 0.00 GB

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.compute.claims [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] disk limit not specified, defaulting to unlimited

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.compute.claims [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Total vcpu: 8 VCPU, used: 0.00 VCPU

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.compute.claims [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] vcpu limit not specified, defaulting to unlimited

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.virt.hardware [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] Computed NUMA topology CPU pinning: usable pCPUs: [[0], [1], [2], [3]], vCPUs mapping: [(0, 0)]

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.virt.hardware [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] Computed NUMA topology reserved pCPUs: usable pCPUs: [[], [1], [2], [3]], reserved pCPUs: set([1])

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.virt.hardware [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] Computed NUMA topology CPU pinning: usable pCPUs: [[4], [5], [6], [7]], vCPUs mapping: [(1, 4)]

Feb 07 04:48:39 ubuntu01 nova-compute[9951]: INFO nova.compute.claims [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Claim successful on node ubuntu01

Feb 07 04:48:40 ubuntu01 nova-compute[9951]: INFO nova.virt.libvirt.driver [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Creating image

Feb 07 04:48:41 ubuntu01 <email address hidden>[28408]: INFO nova.api.openstack.compute.server_external_events [None req-a9935589-09fd-45aa-b172-0e51ef79ca88 service nova] Creating event network-changed:a67cd6cd-1553-4f36-bbcc-eea893f55941 for instance e8d7537f-9140-4ae6-bb3f-b98458a862ac on ubuntu01

Feb 07 04:48:41 ubuntu01 <email address hidden>[28408]: INFO nova.api.openstack.requestlog [None req-a9935589-09fd-45aa-b172-0e51ef79ca88 service nova] 192.168.122.101 "POST /compute/v2.1/os-server-external-events" status: 200 len: 179 microversion: 2.1 time: 0.080556

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [None req-a9935589-09fd-45aa-b172-0e51ef79ca88 service nova] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Instance failed to spawn: TypeError: 'NoneType' object is not iterable

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Traceback (most recent call last):

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/compute/manager.py", line 2249, in _build_resources

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] yield resources

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/compute/manager.py", line 2033, in _build_and_run_instance

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] block_device_info=block_device_info)

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3005, in spawn

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] mdevs=mdevs)

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5253, in _get_guest_xml

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] context, mdevs)

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4987, in _get_guest_config

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] instance.numa_topology, flavor, allowed_cpus, image_meta)

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4326, in _get_guest_numa_config

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] object_numa_cell.cpuset_reserved)

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] TypeError: 'NoneType' object is not iterable

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: ERROR nova.compute.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac]

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: INFO nova.compute.manager [None req-a9935589-09fd-45aa-b172-0e51ef79ca88 service nova] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Terminating instance

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: INFO nova.virt.libvirt.driver [None req-a9935589-09fd-45aa-b172-0e51ef79ca88 service nova] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Instance destroyed successfully.

Feb 07 04:48:42 ubuntu01 ovs-vsctl[18497]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=120 -- --if-exists del-port br-int qvoa67cd6cd-15

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: INFO os_vif [None req-a9935589-09fd-45aa-b172-0e51ef79ca88 service nova] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:77:cf:80,bridge_name='qbra67cd6cd-15',has_traffic_filtering=True,id=a67cd6cd-1553-4f36-bbcc-eea893f55941,network=Network(f17bbb1a-c562-4722-a95a-5c117af6ee19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa67cd6cd-15')

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: INFO nova.virt.libvirt.driver [None req-a9935589-09fd-45aa-b172-0e51ef79ca88 service nova] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Deleting instance files /opt/stack/data/nova/instances/e8d7537f-9140-4ae6-bb3f-b98458a862ac_del

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: INFO nova.virt.libvirt.driver [None req-a9935589-09fd-45aa-b172-0e51ef79ca88 service nova] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Deletion of /opt/stack/data/nova/instances/e8d7537f-9140-4ae6-bb3f-b98458a862ac_del complete

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: INFO nova.compute.manager [None req-a9935589-09fd-45aa-b172-0e51ef79ca88 service nova] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Took 0.19 seconds to destroy the instance on the hypervisor.

Feb 07 04:48:42 ubuntu01 nova-compute[9951]: INFO nova.scheduler.client.report [None req-a9935589-09fd-45aa-b172-0e51ef79ca88 service nova] Deleted allocation for instance e8d7537f-9140-4ae6-bb3f-b98458a862ac

Feb 07 04:48:42 ubuntu01 nova-conductor[9025]: ERROR nova.scheduler.utils [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Error from last host: ubuntu01 (node ubuntu01): [u'Traceback (most recent call last):\n', u' File "/opt/stack/nova/nova/compute/manager.py", line 1851, in _do_build_and_run_instance\n filter_properties, request_spec)\n', u' File "/opt/stack/nova/nova/compute/manager.py", line 2119, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance e8d7537f-9140-4ae6-bb3f-b98458a862ac was re-scheduled: 'NoneType' object is not iterable\n"]

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: WARNING nova.scheduler.utils [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Setting instance to ERROR state.: MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance e8d7537f-9140-4ae6-bb3f-b98458a862ac.

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [None req-ec367a39-e9b3-4403-ab28-00519b5760ce demo admin] [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Failed to deallocate networks: EndpointNotFound: ['internal', 'public'] endpoint for network service not found

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] Traceback (most recent call last):

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/conductor/manager.py", line 364, in _cleanup_allocated_networks

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] context, instance, requested_networks=requested_networks)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/network/neutronv2/api.py", line 1253, in deallocate_for_instance

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] data = neutron.list_ports(**search_opts)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/network/neutronv2/api.py", line 114, in wrapper

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] ret = obj(*args, **kwargs)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 786, in list_ports

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] **_params)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/network/neutronv2/api.py", line 114, in wrapper

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] ret = obj(*args, **kwargs)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 369, in list

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] for r in self._pagination(collection, path, **params):

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 384, in _pagination

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] res = self.get(path, params=params)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/network/neutronv2/api.py", line 114, in wrapper

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] ret = obj(*args, **kwargs)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 354, in get

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] headers=headers, params=params)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/network/neutronv2/api.py", line 114, in wrapper

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] ret = obj(*args, **kwargs)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 331, in retry_request

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] headers=headers, params=params)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/network/neutronv2/api.py", line 114, in wrapper

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] ret = obj(*args, **kwargs)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 282, in do_request

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] headers=headers)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/neutronclient/client.py", line 342, in do_request

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] self._check_uri_length(url)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/neutronclient/client.py", line 335, in _check_uri_length

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] uri_len = len(self.endpoint_url) + len(url)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/neutronclient/client.py", line 349, in endpoint_url

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] return self.get_endpoint()

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 223, in get_endpoint

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] return self.session.get_endpoint(auth or self.auth, **kwargs)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 942, in get_endpoint

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] return auth.get_endpoint(self, **kwargs)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/opt/stack/nova/nova/context.py", line 78, in get_endpoint

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] region_name=region_name)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/access/service_catalog.py", line 338, in url_for

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] endpoint_id=endpoint_id).url

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/access/service_catalog.py", line 400, in endpoint_data_for

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] raise exceptions.EndpointNotFound(msg)

Feb 07 04:48:43 ubuntu01 nova-conductor[9025]: ERROR nova.conductor.manager [instance: e8d7537f-9140-4ae6-bb3f-b98458a862ac] EndpointNotFound: ['internal', 'public'] endpoint for network service not found

...(snip)...

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/539865

Changed in nova:
assignee: nobody → Tetsuro Nakamura (tetsuro0907)
status: New → In Progress
Revision history for this message
Stephen Finucane (stephenfinucane) wrote : Re: isolated cpu thread policy doesn't work with multi numa node

> This causes errors when deploying VMs with multi numa nodes with `cpu_thread_policy=isolate`.

Any chance we could get a sample traceback?

Revision history for this message
Tetsuro Nakamura (tetsuro0907) wrote :

> Any chance we could get a sample traceback?

That is on my ToDo list.

summary: - isolated cpu thread policy doesn't work with multi numa node
+ emulator_threads_policy=isolate doesn't work with multi numa node
Revision history for this message
Tetsuro Nakamura (tetsuro0907) wrote :

@stephen
I updated the bug description, adding the traceback.

description: updated
Matt Riedemann (mriedem)
tags: added: libvirt numa
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/539865
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=24d9e06ec68dfd0ead9bc56ad186e7b4e0acd8fb
Submitter: Zuul
Branch: master

commit 24d9e06ec68dfd0ead9bc56ad186e7b4e0acd8fb
Author: Tetsuro Nakamura <email address hidden>
Date: Thu Feb 1 04:32:41 2018 +0900

    add check before adding cpus to cpuset_reserved

    There are some cases where None value is set to cpuset_reserved in
    InstanceNUMATopology at _numa_fit_instance_cell() function in
    hardware.py. However, libvirt driver treat cpuset_reserved value
    as an iterate object when it constructs xml configuration.

    To avoid a risk to get an error in libvirt driver, this patch adds
    a check to see if the value is not None before adding the cpus
    for emulator threads.

    Change-Id: Iab3d950c4f4138118ac6a9fd98407eaadcb24d9e
    Closes-Bug: #1746674
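
The guarded pattern the commit message describes can be sketched as follows (an illustrative helper under assumed data shapes, not the literal nova diff):

```python
def collect_emulator_pcpus(cells):
    """Gather pCPUs reserved for emulator threads, tolerating None.

    Sketch of the None-check the fix applies before adding cpus for
    emulator threads (hypothetical helper, not the actual nova code).
    """
    pcpus = set()
    for cell in cells:
        reserved = cell.get("cpuset_reserved")
        if reserved:  # skips both None and an empty set
            pcpus.update(reserved)
    return pcpus

print(sorted(collect_emulator_pcpus(
    [{"id": 0, "cpuset_reserved": {1}},
     {"id": 1, "cpuset_reserved": None}])))  # prints: [1]
```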

Changed in nova:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.openstack.org/557621

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/pike)

Fix proposed to branch: stable/pike
Review: https://review.openstack.org/557622

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/queens)

Reviewed: https://review.openstack.org/557621
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=2dc4d7a366efe11acb1099eca13bd3f3c2af9012
Submitter: Zuul
Branch: stable/queens

commit 2dc4d7a366efe11acb1099eca13bd3f3c2af9012
Author: Tetsuro Nakamura <email address hidden>
Date: Thu Feb 1 04:32:41 2018 +0900

    add check before adding cpus to cpuset_reserved

    There are some cases where None value is set to cpuset_reserved in
    InstanceNUMATopology at _numa_fit_instance_cell() function in
    hardware.py. However, libvirt driver treat cpuset_reserved value
    as an iterate object when it constructs xml configuration.

    To avoid a risk to get an error in libvirt driver, this patch adds
    a check to see if the value is not None before adding the cpus
    for emulator threads.

    Conflicts:
            nova/virt/libvirt/driver.py

    The conflict is due to f0620a01dee1cd98685ec04b3bdef09765cdff61

    Change-Id: Iab3d950c4f4138118ac6a9fd98407eaadcb24d9e
    Closes-Bug: #1746674
    (cherry picked from commit 24d9e06ec68dfd0ead9bc56ad186e7b4e0acd8fb)

tags: added: in-stable-queens
Matt Riedemann (mriedem)
Changed in nova:
importance: Undecided → Medium
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/pike)

Reviewed: https://review.openstack.org/557622
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=461e42dcc4831083c46f4ff9763807862aab9b4a
Submitter: Zuul
Branch: stable/pike

commit 461e42dcc4831083c46f4ff9763807862aab9b4a
Author: Tetsuro Nakamura <email address hidden>
Date: Thu Feb 1 04:32:41 2018 +0900

    add check before adding cpus to cpuset_reserved

    There are some cases where None value is set to cpuset_reserved in
    InstanceNUMATopology at _numa_fit_instance_cell() function in
    hardware.py. However, libvirt driver treat cpuset_reserved value
    as an iterate object when it constructs xml configuration.

    To avoid a risk to get an error in libvirt driver, this patch adds
    a check to see if the value is not None before adding the cpus
    for emulator threads.

    Change-Id: Iab3d950c4f4138118ac6a9fd98407eaadcb24d9e
    Closes-Bug: #1746674
    (cherry picked from commit 24d9e06ec68dfd0ead9bc56ad186e7b4e0acd8fb)
    (cherry picked from commit 2dc4d7a366efe11acb1099eca13bd3f3c2af9012)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 17.0.2

This issue was fixed in the openstack/nova 17.0.2 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 16.1.1

This issue was fixed in the openstack/nova 16.1.1 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 18.0.0.0b1

This issue was fixed in the openstack/nova 18.0.0.0b1 development milestone.
