[DVS] Error occurs when creating neutron subnet with vmware nsx plugin
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
vmware-nsx | Invalid | Undecided | Unassigned |
Bug Description
Summary: [DVS] Error occurs when creating neutron subnet with vmware nsx plugin

Description
===========
An error occurs when creating a Neutron subnet with the VMware NSX plugin.
The VM then does not get an IP address from the Neutron DHCP agent.
Environment
===========
OpenStack Controller & Compute: 1 host (CentOS Linux release 7.3.1611 / 3.10.0-
VMware ESXi 6.5: 2 hosts
OpenStack Version (Newton release)
- openstack-
- openvswitch-
- python-
Configuration summary
$ cat /etc/neutron/
[DEFAULT]
...
core_plugin = vmware_
...
$ cat /etc/neutron/
[dvs]
host_ip = <vcenter ip>
host_username = <email address hidden>
host_password = <vcenter password>
dvs_name = DSwitch
insecure = true
$ cat /etc/neutron/
[DEFAULT]
...
ovs_integration
enable_
enable_
...
$ cat /etc/neutron/
[ovs]
...
integration_bridge = br-dvs
bridge_mappings =
...
$ cat /etc/nova/nova.conf
[DEFAULT]
...
compute_driver = vmwareapi.
...
[vmware]
host_ip = <vcenter ip>
host_username = <email address hidden>
host_password = <vcenter password>
cluster_name = TestCluster
datastore_regex = "VMFS.*"
insecure = true
Reproduce
=========
$ neutron agent-list
+------
| id | agent_type | host | alive | admin_state_up | binary |
+------
| 20a968ac-
| b2d78dad-
| f4318806-
+------
$ neutron net-create --tenant-id 17e3939cff16474
--provider:
--provider:
--provider:
test-network75
Created a new network:
+------
| Field | Value |
+------
| admin_state_up | True |
| created_at | 2018-01-
| description | |
| id | 168469fc-
| name | test-network75 |
| port_security_
| project_id | 17e3939cff16474
| provider:
| provider:
| provider:
| revision_number | 2 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 17e3939cff16474
| updated_at | 2018-01-
+------
$ neutron subnet-create --tenant-id 17e3939cff16474
--name test-subnet75 \
--gateway 10.168.75.1 \
--allocation-pool start=10.
--enable-dhcp \
test-network75 10.168.75.0/24
Created a new subnet:
+------
| Field | Value |
+------
| allocation_pools | {"start": "10.168.75.11", "end": "10.168.75.254"} |
| cidr | 10.168.75.0/24 |
| created_at | 2018-01-
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.168.75.1 |
| host_routes | |
| id | caa4a01b-
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | test-subnet75 |
| network_id | 168469fc-
| project_id | 17e3939cff16474
| revision_number | 2 |
| subnetpool_id | |
| tenant_id | 17e3939cff16474
| updated_at | 2018-01-
+------
$ ovs-vsctl show
ba18cda3-
Manager "ptcp:6640:
Bridge br-dvs
fail_mode: secure
Port "bond0"
Port br-dvs
Port "tap0898fb4e-9d"
ovs_version: "2.6.1"
$ ip netns exec qdhcp-f2c516f0-
1: lo: <LOOPBACK,
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
49: tap0898fb4e-9d: <BROADCAST,
link/ether fa:16:3e:c1:68:5a brd ff:ff:ff:ff:ff:ff
inet 10.168.75.11/24 brd 10.168.75.255 scope global tap0898fb4e-9d
valid_lft forever preferred_lft forever
inet 169.254.169.254/16 brd 169.254.255.255 scope global tap0898fb4e-9d
valid_lft forever preferred_lft forever
inet6 fe80::f816:
valid_lft forever preferred_lft forever
$ ip netns exec qdhcp-f2c516f0-
PING 10.168.75.11 (10.168.75.11) 56(84) bytes of data.
64 bytes from 10.168.75.11: icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from 10.168.75.11: icmp_seq=2 ttl=64 time=0.021 ms
^C
--- 10.168.75.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.021/0.
$ ip netns exec qdhcp-f2c516f0-
PING 10.168.75.1 (10.168.75.1) 56(84) bytes of data.
From 10.168.75.11 icmp_seq=1 Destination Host Unreachable
From 10.168.75.11 icmp_seq=2 Destination Host Unreachable
From 10.168.75.11 icmp_seq=3 Destination Host Unreachable
From 10.168.75.11 icmp_seq=4 Destination Host Unreachable
^C
--- 10.168.75.1 ping statistics ---
5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 4000ms
pipe 4
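The unreachable gateway above suggests the DHCP port was never actually wired through to the DVS. A diagnostic sketch (hypothetical commands, reusing the tap name and namespace from this report; to be run on the network node):

```shell
# Check which bridge the DHCP tap is actually attached to
# (in this setup it should be br-dvs, not br-int):
ovs-vsctl iface-to-br tap0898fb4e-9d

# Watch for DHCP requests from the VM; if nothing arrives here,
# the break is between the DVS uplink and the OVS bridge:
ip netns exec qdhcp-<network-id> tcpdump -ni tap0898fb4e-9d port 67 or port 68
```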
$ tail /var/log/
2018-01-26 13:46:29.877 22666 DEBUG neutron.
2018-01-26 13:46:29.878 22666 DEBUG oslo_messaging.
2018-01-26 13:46:29.884 22666 DEBUG oslo_messaging.
2018-01-26 13:46:29.884 22666 ERROR neutron.
2018-01-26 13:46:29.885 22666 DEBUG neutron.
Changed in vmware-nsx:
status: New → Invalid
I solved it.
I changed dhcp_driver from neutron.agent.linux.dhcp.Dnsmasq to vmware_nsx.plugins.dvs.dhcp.Dnsmasq.
And I changed from ovs_integration_bridge to dvs_integration_bridge. Neutron-openvswitch-agent is not used.
$ cat /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = vmware_nsx.plugins.dvs.dhcp.Dnsmasq
dvs_integration_bridge = br-dvs
enable_isolated_metadata = true
enable_metadata_network = true
ovs_use_veth = true
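A quick way to confirm the fix took effect (hypothetical commands, assuming the bridge and service names used in this report):

```shell
# Restart the DHCP agent so it rebuilds its port on the DVS integration bridge:
systemctl restart neutron-dhcp-agent

# The dnsmasq tap interface should now appear as a port on br-dvs:
ovs-vsctl list-ports br-dvs
```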