When booting many VMs with quantum, nova sometimes allocates two quantum ports rather than one
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Compute (nova) | Fix Released | Medium | Unassigned | |
| neutron | Fix Released | High | Jakub Libosvar | |
| Grizzly | Fix Released | High | Gary Kotton | |
| oslo-incubator | Fix Released | Medium | Gary Kotton | |
| Ubuntu | Confirmed | Undecided | Unassigned | |
Bug Description
I have installed grizzly-g3, but quantum does not work well. When I boot 128 instances, I find that one of the instances gets more than one fixed IP; however, when I boot 64 instances, it never happens. Besides that, sometimes I cannot ping a VM via its floating IP. I did not find any error message in my quantum logs (all the files in /var/log/quantum). Below are the relevant output and configurations:
| 97a93600-
| 99aeb6b8-
| 9aa82a35-
| 9b6b1289-
| 9e0d3aa5-
| 9ea62124-
My setup: one db host (db service), one glance host (glance service), one api host (keystone,
I used a VLAN-type network and the openvswitch plugin.
My quantum.conf:
[DEFAULT]
# Default log level is INFO
# verbose and debug have the same result.
# One of them will set DEBUG log level output
debug = True
# Address to bind the API server
bind_host = 0.0.0.0
# Port to bind the API server to
bind_port = 9696
# Quantum plugin provider module
# core_plugin =
core_plugin = quantum.
# Advanced service modules
# service_plugins =
# Paste configuration file
api_paste_config = /etc/quantum/
# The strategy to be used for auth.
# Supported values are 'keystone'
auth_strategy = keystone
# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call.
# allowed_
# AMQP exchange to connect to if using RabbitMQ or QPID
control_exchange = quantum
# RPC driver. DHCP agents need it.
notification_driver = quantum.
# default_
default_
# Defined in rpc_notifier, can be comma separated values.
# The actual topic names will be %s.%(default_
notification_topics = notifications
[QUOTAS]
# resource name(s) that are supported in quota features
# quota_items = network,subnet,port
# default number of resource allowed per tenant, minus for unlimited
# default_quota = -1
# number of networks allowed per tenant, and minus means unlimited
# quota_network = 10
# number of subnets allowed per tenant, and minus means unlimited
# quota_subnet = 10
# number of ports allowed per tenant, and minus means unlimited
quota_port = 5000
quota_floatingip = 5000
# default driver to use for quota checks
# quota_driver = quantum.
# =========== items for agent management extension =============
# Seconds to regard the agent as down.
# agent_down_time = 5
# =========== end of items for agent management extension =====
[DEFAULT_
# Description of the default service type (optional)
# description = "default service type"
# Enter a service definition line for each advanced service provided
# by the default service type.
# Each service definition should be in the following format:
# <service>
[SECURITYGROUP]
# If set to true this allows quantum to receive proxied security group calls from nova
# proxy_mode = False
[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum/
# =========== items for agent management extension =============
# seconds between nodes reporting state to server, should be less than
# agent_down_time
# report_interval = 4
# =========== end of items for agent management extension =====
[keystone_
auth_host = host-keystone
auth_port = 35357
auth_protocol = http
admin_tenant_name = demoTenant
admin_user = test
admin_password = 123456
signing_dir = /var/lib/
My dhcp_agent.ini:
[DEFAULT]
# Where to store dnsmasq state files. This directory must be writable by the
# user executing the agent.
state_path = /var/lib/quantum
# OVS based plugins(OVS, Ryu, NEC, NVP, BigSwitch/
interface_driver = quantum.
# The agent can use other DHCP drivers. Dnsmasq is the simplest and requires
# no additional setup of the DHCP server.
dhcp_driver = quantum.
My ovs_quantum_
[DATABASE]
# This line MUST be changed to actually run the plugin.
sql_connection = mysql:/
# Database reconnection interval in seconds - if the initial connection to the
# database fails
reconnect_interval = 2
[OVS]
# (StrOpt) Type of network to allocate for tenant networks. The
# default value 'local' is useful only for single-box testing and
# provides no connectivity between hosts. You MUST either change this
# to 'vlan' and configure network_vlan_ranges below or change this to
# 'gre' and configure tunnel_id_ranges below in order for tenant
# networks to provide connectivity between hosts. Set to 'none' to
# disable creation of tenant networks.
tenant_
network_vlan_ranges = DemoNet:1:4094
bridge_mappings = DemoNet:DemoBridge
[AGENT]
# Agent's polling interval in seconds
polling_interval = 2
[SECURITYGROUP]
When I execute "quantum router-gateway-set" to set the external network as the router gateway, I find that the status of the router's port on the external network is DOWN. Does that matter, and if it does, how can I fix it?
This has blocked me for several days. Can someone help me solve it? Any help will be appreciated.
Changed in nova:
status: New → Triaged
importance: Undecided → Medium

Changed in quantum:
importance: Undecided → High

tags: removed: grizzly-backport-potential

Changed in neutron:
status: Fix Committed → Fix Released

Changed in oslo:
milestone: none → havana-2
status: Fix Committed → Fix Released

tags: removed: in-stable-grizzly

Changed in oslo:
milestone: havana-2 → 2013.2

Changed in neutron:
milestone: havana-2 → 2013.2

Changed in neutron:
assignee: Gary Kotton (garyk) → Maru Newby (maru)

Changed in neutron:
assignee: Maru Newby (maru) → Jakub Libosvar (libosvar)
Note: most of the interesting details are in the thread of: https://answers.launchpad.net/quantum/+question/225158
Basically, when a large number of VMs are booted at once, each of which should have a single NIC (and thus a single quantum port), every once in a while one of the VMs gets two quantum ports allocated. This is then visible to the tenant as two fixed IPs on the VM (since both quantum ports have a device_id of the same VM).
It may be that the allocate_for_instance logic is running twice for the same VM id. It is unclear whether this is happening on the same nova-compute host or on multiple hosts. Since the ports are created back-to-back, my best guess is that it's on the same nova-compute.
xinxin-shu, can you also provide the quantum-server log covering the time period when the two ports were created? I'm trying to understand whether there is an error on the quantum side that prompts nova to try to create the port again.