Here comes the steps to reproduce the issue:
Create 2 slave nodes, each with 4 NICs, with the following network connectivity:
Slave1:
eth0, eth1 - same model, connected to the public network
eth2, eth3 - same model, connected to the admin network
Slave2:
eth0, eth1 - same model, connected to the admin network
eth2, eth3 - same model, connected to the public network
This way the NIC ordering differs between the two nodes (this step is easy to set up on VMs): one node has its first two NICs connected to the public network, while the other has its first two NICs connected to the admin network.
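On a libvirt-based setup, the wiring above can be sketched with `<interface>` definitions in each domain's XML. The network names `public` and `admin` and the `e1000` model are placeholders, not taken from the report; substitute whatever networks and NIC model exist locally:

```xml
<!-- Slave1: eth0/eth1 on the public network, eth2/eth3 on the admin network.
     "public", "admin", and the e1000 model are assumed local names. -->
<interface type='network'><source network='public'/><model type='e1000'/></interface>
<interface type='network'><source network='public'/><model type='e1000'/></interface>
<interface type='network'><source network='admin'/><model type='e1000'/></interface>
<interface type='network'><source network='admin'/><model type='e1000'/></interface>

<!-- Slave2: reversed order, eth0/eth1 on admin, eth2/eth3 on public. -->
<interface type='network'><source network='admin'/><model type='e1000'/></interface>
<interface type='network'><source network='admin'/><model type='e1000'/></interface>
<interface type='network'><source network='public'/><model type='e1000'/></interface>
<interface type='network'><source network='public'/><model type='e1000'/></interface>
```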
Create a cluster with those 2 slave nodes and assign the controller role to both of them.
Go to the network settings and create a bond for one pair of interfaces (it doesn't matter which NICs end up in the bond). Leave the other pair untouched.
Start deployment. One node will fail to provision with the same traceback as the one reported.
It looks like it's not a provisioning issue. The root cause may be hidden somewhere between nailgun and the UI interactions. I bet it's nailgun.
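A toy sketch of the kind of bug suspected here (this is NOT actual nailgun code, just an illustration): if network-to-NIC assignment is done by interface name order rather than by the network each NIC is actually wired to, the node whose NIC ordering is reversed gets a wrong mapping while the other node deploys fine.

```python
# Toy illustration of an ordering-dependent assignment bug.
# All names here are hypothetical; nailgun's real logic is not shown.

def assign_by_index(wiring, networks):
    """Naive assignment: pair networks with NICs in name order."""
    return dict(zip(sorted(wiring), networks))

# Slave1: first two NICs on public, last two on admin.
slave1 = {"eth0": "public", "eth1": "public", "eth2": "admin", "eth3": "admin"}
# Slave2: the reverse ordering.
slave2 = {"eth0": "admin", "eth1": "admin", "eth2": "public", "eth3": "public"}

# An index-based assignment built from Slave1's layout...
assumed_order = ["public", "public", "admin", "admin"]

for name, wiring in [("slave1", slave1), ("slave2", slave2)]:
    guessed = assign_by_index(wiring, assumed_order)
    # ...matches Slave1's real wiring but not Slave2's.
    print(name, "mismatch:", guessed != wiring)
```

Run against both layouts, only the node with the reversed NIC order shows a mismatch, which is consistent with exactly one of the two nodes failing.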