While doing scale testing, I ended up getting:
ERROR cannot add unit 53/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 53/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 53/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 52/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 52/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 53/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 52/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 52/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 53/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot assign unit "ubuntu/8589" to machine 14: write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 50/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 52/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 53/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 53/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 55/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
ERROR cannot add unit 55/100 to service "ubuntu": cannot add unit to service "ubuntu": write tcp 127.0.0.1:37017: i/o timeout
It appears that if you overload Mongo enough that it cannot respond to requests as fast as they are being made, the requests fail with an i/o timeout rather than a clearer error.
This might be considered a bug in 'mgo': it doesn't notice that Mongo is overloaded and tell the caller to back off. At the least, juju should be able to recover gracefully from this state.
We have actually seen this exact problem when deploying 4 charms on trusty using juju-core 1.18.x.
The charms deployed were:
nova-cloud-controller
glance
keystone
mysql
All in the local provider, with an LXC container for each.