[DPDK] could not open network device dpdk0 (Cannot allocate memory) error

Bug #1595970 reported by Kristina Berezovskaia on 2016-06-24
This bug affects 1 person
Affects: Mirantis OpenStack
Importance: Medium
Assigned to: Kristina Berezovskaia

Bug Description

Detailed bug description:
 After booting and deleting several VMs on an environment with DPDK, new VMs end up in the ERROR state

Steps to reproduce:
 1) Deploy env with DPDK
 2) Create flavor for using Huge pages
nova flavor-create hpgs1 auto 512 1 2
nova flavor-key hpgs1 set hw:mem_page_size=2048
 3) Boot a VM and delete it, several times
Expected results:
 All VMs reach the ACTIVE state
Actual result:
 After several create-delete cycles, all new VMs end up in the ERROR state
We can see:
ovs-vsctl show
48ff216b-9471-4476-969e-3d3d0a4bf546
    Bridge br-int
        fail_mode: secure
        Port int-br-prv
            Interface int-br-prv
                type: patch
                options: {peer=phy-br-prv}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-prv
        Port br-prv
            Interface br-prv
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                error: "could not open network device dpdk0 (Cannot allocate memory)"
        Port phy-br-prv
            Interface phy-br-prv
                type: patch
                options: {peer=int-br-prv}
    ovs_version: "2.4.1"
In nova-compute we can see: "BuildAbortException: Build of instance d3d57aff-a0dc-418c-9b70-5e74f7e317d8 aborted: Failed to allocate the network(s), not rescheduling"
In neutron-all log on compute: "Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--columns=type', 'list', 'Interface', 'int-br-floating']. Exception: Exit code: 1; Stdin: ; Stdout: ; Stderr: ovs-vsctl: no row "int-br-floating" in table Interface"
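The "Cannot allocate memory" error on an OVS dpdk port is commonly a symptom of hugepage exhaustion on the host. A quick diagnostic (not part of the original report, and assuming a Linux compute node) is to check the remaining hugepage pool:

```python
# Read the hugepage counters from /proc/meminfo; if HugePages_Free has
# dropped to zero, OVS-DPDK cannot map memory for a new dpdk port and
# reports "Cannot allocate memory". Diagnostic sketch, not from the report.
def hugepage_counters(path="/proc/meminfo"):
    counters = {}
    with open(path) as f:
        for line in f:
            if line.startswith("HugePages"):
                key, value = line.split(":")
                counters[key] = int(value.split()[0])
    return counters

print(hugepage_counters())
```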

Description of the environment:
neutron+vlan, 1 controller, 1 cinder, 3 computes: 2 with dpdk and 1 without
on computes with dpdk:
Nova CPU pinning 14
DPDK CPU pinning 6
Nova Huge pages (size - count):
 2.0 MB - 16000
 1.0 GB - 10
DPDK Huge Pages: 1024
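For reference, the hugepage figures above work out to the following per-node memory budget (a back-of-the-envelope sketch; reading the DPDK "1024" figure as megabytes is an assumption, not stated in the report):

```python
# Hugepage memory reserved on the DPDK computes, per the settings above.
nova_2m_mb = 2 * 16000   # 16000 x 2 MB pages = 32000 MB
nova_1g_mb = 1024 * 10   # 10 x 1 GB pages    = 10240 MB
dpdk_mb = 1024           # DPDK hugepage pool, assumed to be in MB

print(nova_2m_mb + nova_1g_mb)  # 42240 MB (~41.25 GB) for Nova guests
print(dpdk_mb)                  # 1024 MB for the DPDK datapath
```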

iso:
cat /etc/fuel_build_id:
 495
cat /etc/fuel_build_number:
 495
cat /etc/fuel_release:
 9.0
cat /etc/fuel_openstack_version:
 mitaka-9.0
rpm -qa | egrep 'fuel|astute|network-checker|nailgun|packetary|shotgun':
 fuel-release-9.0.0-1.mos6349.noarch
 fuel-misc-9.0.0-1.mos8460.noarch
 python-packetary-9.0.0-1.mos140.noarch
 fuel-bootstrap-cli-9.0.0-1.mos285.noarch
 fuel-migrate-9.0.0-1.mos8460.noarch
 rubygem-astute-9.0.0-1.mos750.noarch
 fuel-mirror-9.0.0-1.mos140.noarch
 shotgun-9.0.0-1.mos90.noarch
 fuel-openstack-metadata-9.0.0-1.mos8743.noarch
 fuel-notify-9.0.0-1.mos8460.noarch
 nailgun-mcagents-9.0.0-1.mos750.noarch
 python-fuelclient-9.0.0-1.mos325.noarch
 fuel-9.0.0-1.mos6349.noarch
 fuel-utils-9.0.0-1.mos8460.noarch
 fuel-setup-9.0.0-1.mos6349.noarch
 fuel-provisioning-scripts-9.0.0-1.mos8743.noarch
 fuel-library9.0-9.0.0-1.mos8460.noarch
 network-checker-9.0.0-1.mos74.x86_64
 fuel-agent-9.0.0-1.mos285.noarch
 fuel-ui-9.0.0-1.mos2717.noarch
 fuel-ostf-9.0.0-1.mos936.noarch
 fuelmenu-9.0.0-1.mos274.noarch
 fuel-nailgun-9.0.0-1.mos8743.noarch

Changed in mos:
importance: Undecided → High
status: New → Confirmed
Dmitry Klenov (dklenov) on 2016-06-27
tags: added: area-library
Sergey Matov (smatov) wrote :

After several attempts we were not able to reproduce this issue.
However, the following behavior is seen when a DPDK-based application is running on the guest VM. This out-of-scope issue is described in https://bugzilla.redhat.com/show_bug.cgi?id=1293495.

Detailed description is mailed to dpdk-users list.

Switching priority to Medium since its reproducibility is not 100%.

Changed in mos:
importance: High → Medium
Sergey Matov (smatov) wrote :

Bug moved to the Incomplete state until it reappears.

Changed in mos:
status: Confirmed → Incomplete
Changed in mos:
assignee: Sergey Matov (smatov) → Kristina Kuznetsova (kkuznetsova)