I was able to reproduce this bug in a very slow environment, where a typical Keystone operation takes more than 30 seconds:
root@node-2:~# time openstack endpoint list
+----------------------------------+-----------+----------------+-----------------+
| ID | Region | Service Name | Service Type |
+----------------------------------+-----------+----------------+-----------------+
| 181ca598b76c4fa1a8cc010413a57f31 | RegionOne | neutron | network |
...
| 06acaa28cc914a5ab9a7fa6393363e9c | RegionOne | keystone | identity |
+----------------------------------+-----------+----------------+-----------------+
real 0m33.146s
user 0m0.791s
sys 0m0.227s
The VM has a single core, which is loaded at 100%.
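One way to confirm the single core is saturated is to compare the core count against the load averages; this is a generic sketch, not taken from the original report:

```shell
# Number of CPU cores available to the VM
nproc

# 1/5/15-minute load averages; values well above the core count
# (here, well above 1) mean the CPU is saturated
uptime

# Top CPU consumers, batch mode, to see what is eating the core
top -b -n 1 | head -n 20
```

On a one-core VM, a sustained load average above roughly 1.0 indicates the overload described above.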
It also doesn't make sense to raise the timeouts again: they are already reasonable,
and we can't keep raising timeouts for every slow environment.