I am not able to reproduce this bug in my setup. I am using almost the same network configuration as the one posted in this bug report, but on stable/grizzly with only one compute node. I am able to reach up to 200 VMs with no problems at all (excerpt of `nova list` output):

+--------------------------------------+--------+--------+------------+-------------+--------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks           |
+--------------------------------------+--------+--------+------------+-------------+--------------------+
| 284a318b-84df-4008-a386-d4a7b0510dee | vm-178 | ACTIVE | None       | Running     | private=10.0.0.178 |
| b2e1bbec-e93c-4b48-9843-b45a45a33b87 | vm-179 | ACTIVE | None       | Running     | private=10.0.0.182 |
| c8f5d908-79c3-4020-adca-6348f76f44cf | vm-18  | ACTIVE | None       | Running     | private=10.0.0.22  |
| 2f452943-92f5-41c1-a294-f5e52ef97052 | vm-180 | ACTIVE | None       | Running     | private=10.0.0.184 |
| 43314b6c-7f6f-4da3-8054-5a8158a48ef9 | vm-181 | ACTIVE | None       | Running     | private=10.0.0.183 |
| 3821636d-c94b-4030-95d4-47c183893227 | vm-182 | ACTIVE | None       | Running     | private=10.0.0.187 |
| f6b27836-acf3-49a9-a960-a2a6835df534 | vm-183 | ACTIVE | None       | Running     | private=10.0.0.185 |
| c798b778-b471-4a1f-b43c-f9d69873354d | vm-184 | ACTIVE | None       | Running     | private=10.0.0.186 |
| 99a023ec-aeff-4451-9e3c-27316d1ffd78 | vm-185 | ACTIVE | None       | Running     | private=10.0.0.188 |
| 4475818f-801a-4a59-9a85-1113e54e0c18 | vm-186 | ACTIVE | None       | Running     | private=10.0.0.192 |
| 56145c8c-a645-4bdf-84b8-5078c19c39a4 | vm-187 | ACTIVE | None       | Running     | private=10.0.0.189 |
| f2c3074a-43aa-4506-b3a9-43f2ff3f461c | vm-188 | ACTIVE | None       | Running     | private=10.0.0.190 |
| f12c882a-c25b-46f2-8550-7ec807d3b0b7 | vm-189 | ACTIVE | None       | Running     | private=10.0.0.191 |
| 9a40b604-0f9e-44d0-a080-59bfb9adedf5 | vm-19  | ACTIVE | None       | Running     | private=10.0.0.20  |
| 952a25a6-501b-44de-9149-0cb25b217d96 | vm-190 | ACTIVE | None       | Running     | private=10.0.0.194 |
| 5012700f-83a7-4f9d-85a7-796a65043aed | vm-191 | ACTIVE | None       | Running     | private=10.0.0.193 |
| 9d7bbc67-55e1-49e8-8c7b-6f6bd5b2183a | vm-192 | ACTIVE | None       | Running     | private=10.0.0.195 |
| 13192f4b-541f-4e92-b3f5-51e51d28408a | vm-193 | ACTIVE | None       | Running     | private=10.0.0.198 |
| 4ff125dc-8f1d-40b1-9db6-738665cdb8e7 | vm-194 | ACTIVE | None       | Running     | private=10.0.0.197 |
| d8f366e2-5867-421c-ab86-c15c06802f48 | vm-195 | ACTIVE | None       | Running     | private=10.0.0.196 |
| 5ea852aa-02a8-409a-9dc0-8abd5dcd6dc0 | vm-196 | ACTIVE | None       | Running     | private=10.0.0.201 |
| 2c60b14a-d60a-4fbc-9f1c-509c5fb5ed8c | vm-197 | ACTIVE | None       | Running     | private=10.0.0.199 |
| eacddd5d-0e3d-4c7d-9bdb-2f9010ff83e3 | vm-198 | ACTIVE | None       | Running     | private=10.0.0.202 |
| 5dc1f542-abef-4d28-946c-e87b8f5cd48f | vm-199 | ACTIVE | None       | Running     | private=10.0.0.200 |
| bba098a3-de0a-40b0-b09e-4b6916287a97 | vm-2   | ACTIVE | None       | Running     | private=10.0.0.4   |
| 5192cdf0-d4ce-4c02-948e-c5659271da80 | vm-20  | ACTIVE | None       | Running     | private=10.0.0.23  |
| c13cccd3-9d43-4511-93ea-d023e03f8f36 | vm-200 | ACTIVE | None       | Running     | private=10.0.0.203 |
+--------------------------------------+--------+--------+------------+-------------+--------------------+

I am using a very simple script to boot all these VMs:

emagana@rock:~$ more performance.sh
#!/bin/bash
COUNTER=0
for i in {1..200}
do
    echo "Deploy VM vm-$i"
    nova boot --flavor 42 --image tty-quantum --nic net-id=fced80ba-100f-43f0-ab2e-a2bf657d15ab vm-$i
    let COUNTER=$COUNTER+1
    # Note: with a 200-iteration loop COUNTER never reaches 300, so this
    # batch pause never actually triggers and boots are issued back to back.
    if [ $COUNTER -eq 300 ]
    then
        sleep 5
        COUNTER=0
    fi
done
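As a quick sanity check after a run like this, counting the rows in ACTIVE state confirms how many VMs came up. The sketch below uses three hypothetical sample rows in place of real output; in practice you would pipe `nova list` into the same `grep`:

```shell
# Count rows in ACTIVE state from nova-list-style output.
# The sample lines are hypothetical; replace the printf with `nova list`.
printf '%s\n' \
  '| 284a318b | vm-178 | ACTIVE | None | Running | private=10.0.0.178 |' \
  '| deadbeef | vm-999 | ERROR  | None | NOSTATE | private=10.0.0.250 |' \
  '| b2e1bbec | vm-179 | ACTIVE | None | Running | private=10.0.0.182 |' \
  | grep -c '| ACTIVE'
# prints 2 for this sample
```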