When polling a juju environment, MAAS frequently returns 401
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MAAS | New | Undecided | Unassigned |
Bug Description
This is really breaking our ability to use juju-deployer for continuous testing of juju/MAAS environments. deployer polls juju at various phases, and this polling frequently fails with a 401 UNAUTHORIZED error returned by the juju client. I can reproduce the failure reliably with the following simple test:
#!/bin/bash
i=1
while true ; do
    juju status >/dev/null
    [[ $? != 0 ]] && echo "FAIL" && exit 1
    echo "OK: $i"
    i=$((i+1))
done
The test sometimes fails in the first ~10 requests. Other times after ~75. The errors returned by Juju vary, but I've yet to be able to succeed in making 100 requests.
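One hypothesis consistent with sporadic failures under rapid polling is server-side OAuth replay protection: if the server remembers recently seen (timestamp, nonce) pairs and the client's nonce generation can occasionally repeat, a burst of requests will intermittently trip the check. The following is a toy model only (all names are hypothetical, not MAAS internals), in Python for self-containment:

```python
import random

class ReplayGuard:
    """Toy model of server-side OAuth replay protection: a
    (timestamp, nonce) pair seen twice is rejected as a replay."""
    def __init__(self):
        self.seen = set()

    def check(self, timestamp, nonce):
        key = (timestamp, nonce)
        if key in self.seen:
            return False  # would surface to the client as 401 UNAUTHORIZED
        self.seen.add(key)
        return True

# A client drawing nonces from a small space will eventually collide
# within the same timestamp second, mimicking sporadic 401s.
random.seed(0)
guard = ReplayGuard()
failures = [i for i in range(500) if not guard.check(0, random.randrange(1000))]
print(f"{len(failures)} of 500 requests rejected as replays")
```

This is only meant to illustrate why the failure point would vary from run to run (sometimes ~10 requests in, sometimes ~75), as observed above.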
gomaasapi: got error back from server: 401 UNAUTHORIZED
could not access file 'e2fa8553-
could not access file 'e2fa8553-
In the maas.log I see OAuth errors:
ERROR 2014-01-15 18:18:28,633 maasserver #######
ERROR 2014-01-15 18:18:28,633 maasserver Traceback (most recent call last):
File "/usr/lib/
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/
response = func(*args, **kwargs)
File "/usr/lib/
actor, anonymous = self.authentica
File "/usr/lib/
RestrictedR
File "/usr/lib/
if not authenticator.
File "/usr/lib/
raise OAuthUnauthoriz
OAuthUnauthorized
This is reproducible across 2 different MAAS+juju environments, set up slightly differently:
A) juju client connecting to a MAAS server running on localhost.
B) juju client connecting to a MAAS server running externally.
It was mentioned that syncing the clocks should help. In both cases I've ensured the clocks are synced between the MAAS API node and the environment's bootstrap node (machine 0). For environment B, I've also ensured the juju client node, the MAAS API node, and machine 0 are all in sync. The problem still persists.
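Since clock sync was the suggested fix, here is a minimal sketch of how OAuth timestamp validation typically behaves under skew: the server accepts a request only if its oauth_timestamp falls within some window of the server's own clock. The window size and names below are illustrative assumptions, not MAAS's actual values:

```python
import time

MAX_SKEW = 300  # assumed acceptance window in seconds (illustrative)

def timestamp_ok(request_ts, server_now, max_skew=MAX_SKEW):
    """Accept a request only if its oauth_timestamp is within
    max_skew seconds of the server's clock."""
    return abs(server_now - request_ts) <= max_skew

now = int(time.time())
print(timestamp_ok(now, now))        # synced clocks: accepted (True)
print(timestamp_ok(now - 600, now))  # client 10 minutes behind: rejected (False)
```

Given that both environments still fail after syncing all the clocks involved, skew alone does not appear to explain the 401s here.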
On both clusters:
maas 1.4+bzr1693+dfsg-0ubuntu2.2
juju-core 1.16.5-0ubuntu1~ubuntu13.04.1~juju1