Multi node setup doesn't show list of hypervisors

Bug #1557663 reported by Prithiv
This bug affects 1 person
Affects  Status   Importance  Assigned to  Milestone
kolla    Invalid  Critical    Prithiv
Mitaka   Invalid  Critical    Prithiv

Bug Description

A multi-node setup doesn't show the list of hypervisors, with or without the fake libvirt driver. This is not the case with a single-node deployment.

+----+---------------------+-------+--------+
| ID | Hypervisor hostname | State | Status |
+----+---------------------+-------+--------+
+----+---------------------+-------+--------+
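For reference, that empty table is what the hypervisor listing command prints when no compute hosts have registered with the controller; a minimal way to reproduce and cross-check it, assuming the standard nova CLI with admin credentials sourced:

# List registered hypervisors (empty output means no nova-compute
# service has reported in to the controller).
nova hypervisor-list

# An empty hypervisor list usually goes hand in hand with missing or
# "down" nova-compute services, so check those as well:
nova service-list --binary nova-compute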

[control]
# These hostnames must be resolvable from your deployment host
controller ansible_ssh_user=root
# The above can also be specified as follows:
#control[01:03] ansible_ssh_user=kolla

# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
network ansible_ssh_user=root

[compute]
silpixa00394079 ansible_ssh_user=root
# When compute nodes and control nodes use different interfaces,
# you can specify "api_interface" and other interfaces as below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1

[storage]
storage ansible_ssh_user=root

Prithiv (prithiv)
Changed in kolla:
assignee: nobody → Prithiv (prithiv)
Steven Dake (sdake) wrote :

This needs confirmation, and I lack a multi-node cluster at present to verify. Can someone with a 3-node setup please confirm?

TIA.

Changed in kolla:
status: New → Triaged
importance: Undecided → Critical
milestone: none → mitaka-rc1
Steven Dake (sdake)
Changed in kolla:
milestone: mitaka-rc1 → mitaka-rc2
Sam Yaple (s8m) wrote :

root@ubuntu1:~/kolla# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 4  | ubuntu2             | up    | enabled |
| 7  | ubuntu1             | up    | enabled |
| 10 | ubuntu3             | up    | enabled |
+----+---------------------+-------+---------+

Works fine over here. This is a deploy from ~8 hours ago. I suspect there is something up with libvirt on your host.
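If libvirt is the suspect, a quick first pass is to confirm that nova-compute is up on each compute node and that libvirt responds inside its container; a rough sketch, assuming kolla's default container name nova_libvirt:

# From the deployment/controller host, with admin credentials sourced:
nova service-list --binary nova-compute

# On each compute node, poke libvirt inside its kolla container:
docker exec -it nova_libvirt virsh version
docker exec -it nova_libvirt virsh list --all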

Prithiv (prithiv) wrote :

Below is my multi node inventory file

# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
[control]
# These hostnames must be resolvable from your deployment host
control ansible_ssh_user=root
# The above can also be specified as follows:
#control[01:03] ansible_ssh_user=kolla

# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
network ansible_ssh_user=root

[compute]
compute ansible_ssh_user=root
# When compute nodes and control nodes use different interfaces,
# you can specify "api_interface" and other interfaces as below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1

[storage]
silpixa00394079 ansible_ssh_user=root

# You can explicitly specify which hosts run each project by updating the
# groups in the sections below. Common services are grouped together.
[kibana:children]
control

[elasticsearch:children]
control

[haproxy:children]
network

[mariadb:children]
control

[rabbitmq:children]
control

[mongodb:children]
control

[keystone:children]
control

[glance:children]
control

[nova:children]
control

[neutron:children]
network

[cinder:children]
control

[memcached:children]
control

[horizon:children]
control

[swift:children]
control

[heat:children]
control

[murano:children]
control

[ironic:children]
control

#[ceph-mon:children]
#control

#[ceph-rgw:children]
#control

#[ceph-osd:children]
#storage

[magnum:children]
control

[mistral:children]
control

[manila:children]
control

# Additional control implemented here. These groups allow you to control which
# services run on which hosts at a per-service level.
#
# Word of caution: Some services are required to run on the same host to
# function appropriately. For example, neutron-metadata-agent must run on the
# same host as the l3-agent and (depending on configuration) the dhcp-agent.

# Glance
[glance-api:children]
glance

[glance-registry:children]
glance

# Nova
[nova-api:children]
nova

[nova-conductor:children]
nova

[nova-consoleauth:children]
nova

[nova-novncproxy:children]
nova

[nova-scheduler:children]
nova

[nova-spicehtml5proxy:children]
nova

[nova-compute-ironic:children]
nova

# Neutron
[neutron-server:children]
control

[neutron-dhcp-agent:children]
neutron

[neutron-l3-agent:children]
neutron

[neutron-metadata-agent:children]
neutron

# Cinder
[cinder-api:children]
cinder

[cinder-backup:children]
storage

[cinder-scheduler:children]
cinder

[cinder-volume:children]
storage

# Manila
[manila-api:children]
manila

[manila-scheduler:children]
manila

[manila-share:children]
storage

# Swift
[swift-proxy-server:children]
swift

[swift-account-server:children]
storage

[swift-container-server:children]
storage

[swift-object-server:children]
storage

# Heat
[heat-api:children]
heat

[heat-api-cfn:children]
heat

[heat-engine:children]
heat

# Murano
[murano-api:children]
murano

[murano-engine:children]
murano

# Ironic
[ironic-api:children]
ironic

[ironic-conductor:children]
ironic

[ironic-inspector:children]
ironic

[ironic-...

Prithiv (prithiv) wrote :

Sam, what branch are you trying? Also, can you post config details about libvirt or your inventory?
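For reference, one way to report the branch in use on the deploy host (a generic git sketch, assuming kolla was cloned from git):

cd kolla
git rev-parse --abbrev-ref HEAD   # current branch, e.g. master or stable/mitaka
git log -1 --oneline              # exact commit deployed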

Carlos Cesario (ccesario) wrote :

Are your nodes physical or virtual machines?

Could you post the nova-* logs?
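For anyone collecting those logs under kolla, the nova services run in containers, so their output can be pulled per container; an illustrative sketch, assuming kolla's default container names (exact log locations vary by kolla release and logging setup):

# On the controller node:
docker logs nova_api 2>&1 | tail -n 200
docker logs nova_scheduler 2>&1 | tail -n 200
docker logs nova_conductor 2>&1 | tail -n 200

# On each compute node:
docker logs nova_compute 2>&1 | tail -n 200
docker logs nova_libvirt 2>&1 | tail -n 200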

Steven Dake (sdake)
Changed in kolla:
milestone: mitaka-rc2 → mitaka-rc3
milestone: mitaka-rc3 → newton-1
status: Triaged → Incomplete
Varsha (varsha-jayaraj94) wrote :

Did you add the hostnames of each of your nodes to the /etc/hosts file?
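For context, the point of that question is that every node name used in the inventory must resolve from the deployment host and from the other nodes; a minimal illustration with placeholder addresses and the hostnames from the inventory above:

# /etc/hosts (addresses are placeholders, adjust to your network)
10.0.0.10   control
10.0.0.11   network
10.0.0.12   compute
10.0.0.13   silpixa00394079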

Matthew Taylor (matthew-taylor-f) wrote :

No problems in Mitaka either; multi-node setup here.

[root@controller01 ~]$ nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 4  | kvsrv02.<removed>   | up    | enabled |
| 7  | kvsrv01.<removed>   | up    | enabled |
| 10 | kvsrv03.<removed>   | up    | enabled |
+----+---------------------+-------+---------+

bjolo (bjorn-lofdahl) wrote :

Prithiv, can you please verify KVM is OK on all nodes?

docker exec nova_libvirt virt-host-validate

I had another issue with VMs getting multiple IPs, but noticed that the hypervisor list was not working either. I found that one node did not have virtualization (VT-x) enabled in the BIOS. Once it was enabled, both issues were resolved.
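If virt-host-validate complains about hardware virtualization, a quick host-side check (generic, not kolla-specific) is:

# Zero here usually means VT-x/AMD-V is disabled in the BIOS
# (or the CPU simply lacks it); non-zero means it is exposed.
egrep -c '(vmx|svm)' /proc/cpuinfo

# The kvm kernel modules should also be loaded on a working compute host:
lsmod | grep kvm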

shake.chen (shake-chen)
Changed in kolla:
status: Incomplete → Invalid