Host - Ubuntu 16.04
devstack - stable/newton
which installs DPDK 16.07 and OVS 2.6
with the networking-ovs-dpdk plugin and the following DPDK configuration:
Grub changes
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash default_hugepagesz=1G hugepagesz=1G hugepages=8 iommu=pt intel_iommu=on"
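To sanity-check that the grub reservation took effect, the hugepage count can be parsed out of the kernel command line. A minimal sketch, using a sample string copied from the grub line above (on a live host, read `/proc/cmdline` instead):

```shell
# Sample kernel command line copied from GRUB_CMDLINE_LINUX_DEFAULT above;
# on a live host use: cmdline=$(cat /proc/cmdline)
cmdline="quiet splash default_hugepagesz=1G hugepagesz=1G hugepages=8 iommu=pt intel_iommu=on"

# Split on spaces and pick the value of the "hugepages" parameter
pages=$(echo "$cmdline" | tr ' ' '\n' | awk -F= '$1 == "hugepages" {print $2}')
echo "reserved 1G hugepages: ${pages}"
# prints: reserved 1G hugepages: 8
```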
local.conf - changes for DPDK
enable_plugin networking-ovs-dpdk https://git.openstack.org/openstack/networking-ovs-dpdk master
OVS_DPDK_MODE=controller_ovs_dpdk
OVS_NUM_HUGEPAGES=8
OVS_CORE_MASK=2
OVS_PMD_CORE_MASK=4
OVS_DPDK_BIND_PORT=False
OVS_SOCKET_MEM=2048
OVS_DPDK_VHOST_USER_DEBUG=n
OVS_ALLOCATE_HUGEPAGES=True
OVS_HUGEPAGE_MOUNT_PAGESIZE=1G
MULTI_HOST=1
OVS_DATAPATH_TYPE=netdev
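For reference, in OVS 2.6 the DPDK settings live in the Open_vSwitch table's other_config column, so the devstack variables above should end up there and can be checked (or set directly) with ovs-vsctl. This is an assumption about how the plugin applies the settings, not something verified on this setup:

```shell
# Hypothetical ovs-vsctl equivalents of the local.conf settings above
# (OVS 2.6 database-driven DPDK configuration; requires root on the host):
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048"  # OVS_SOCKET_MEM
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x2     # OVS_CORE_MASK
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x4        # OVS_PMD_CORE_MASK

# Inspect what is actually configured:
sudo ovs-vsctl get Open_vSwitch . other_config
```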
Before VM creation:
#nova flavor-key m1.small set hw:mem_page_size=1048576
Able to create two Ubuntu instances with flavor m1.small.
Achieved iperf3 TCP throughput of ~7.5 Gbps between the two instances.
Verified that the vhost-user port is created and that hugepages are consumed once both VMs are up: 2 GB each (4 GB for the VMs) plus 2 GB for socket memory, 6 GB in total.
tel@tel-ProLiant-ML150-Gen9:~$ sudo cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 8
HugePages_Free: 2
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
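The arithmetic above checks out: 8 pages reserved, 2 free, so 6 x 1G pages consumed (4G for the two VMs plus 2G of OVS_SOCKET_MEM). A small sketch of that check, using the HugePages values quoted above (on a live host, point awk at /proc/meminfo instead of the here-doc):

```shell
# Values copied from the /proc/meminfo output above; on a live host:
#   consumed=$(awk '/^HugePages_Total/ {t=$2} /^HugePages_Free/ {f=$2} END {print t-f}' /proc/meminfo)
consumed=$(awk '/^HugePages_Total/ {t=$2} /^HugePages_Free/ {f=$2} END {print t-f}' <<'EOF'
HugePages_Total:       8
HugePages_Free:        2
EOF
)
echo "consumed 1G pages: ${consumed}"
# prints: consumed 1G pages: 6  (4 for the VMs + 2 for socket memory)
```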
The same scenario was run without DPDK (vanilla OpenStack OVS) and achieved a higher throughput of ~19 Gbps, which contradicts the expected result. Kindly suggest what additional DPDK configuration is needed for higher throughput. I also tried CPU pinning and multiqueue for OpenStack DPDK, but with no improvement in the result.
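For completeness, these are the knobs I'd expect the pinning and multiqueue attempts above to use (an assumption about what was tried; the image ID is a placeholder, and multiqueue also needs matching queue setup inside the guest):

```shell
# CPU pinning: request dedicated host cores via a flavor extra spec
nova flavor-key m1.small set hw:cpu_policy=dedicated

# vhost-user multiqueue: enable it as an image property (<image-id> is a placeholder)
glance image-update --property hw_vif_multiqueue_enabled=true <image-id>

# Inside the guest, match the queue count to the number of vCPUs (N):
#   sudo ethtool -L eth0 combined N
```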