cannot get replset config: not authorized for query on local.system.replset
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| juju | | High | Unassigned | |
| juju-core | | High | Unassigned | |
| juju-core 1.24 | | High | Unassigned | |
| juju-core 1.25 | | High | Unassigned | |
Bug Description
Long log, paste here: http://
2014-07-21 22:05:19 DEBUG juju.worker.
2014-07-21 22:05:19 DEBUG juju.mongo open.go:87 connection failed, will retry: dial tcp 127.0.0.1:37017: connection refused
2014-07-21 22:05:19 DEBUG juju.mongo open.go:87 connection failed, will retry: dial tcp 127.0.0.1:37017: connection refused
2014-07-21 22:05:19 INFO juju.mongo open.go:95 dialled mongo successfully
2014-07-21 22:05:19 INFO juju.worker.
2014-07-21 22:05:19 ERROR juju.cmd supercommand.go:323 cannot get replica set configuration: cannot get replset config: not authorized for query on local.system.replset
ERROR bootstrap failed: subprocess encountered error code 1
Stopping instance...
Bootstrap failed, destroying environment
ERROR subprocess encountered error code 1
Will add the rest to a log file.
| David Britton (davidpbritton) wrote : | #1 |
| tags: | added: oil |
| Ryan Harper (raharper) wrote : | #3 |
What's the other bug?
| David Britton (davidpbritton) wrote : | #4 |
Ryan -- IIRC, either this one, or one like it (we added tags to a couple of these types of bugs):
https:/
Basically, our system was left on, even though the power script had run from the previous attempt to shut them down. The juju bootstrap came around and grabbed a machine that was already booted as a juju bootstrap node, leading to this error (mongo was up, but the credentials were all wrong).
As for why the power down command didn't work. I don't remember specifically, but if I didn't file a bug it was probably a network glitch.
| Jason Hobbs (jason-hobbs) wrote : | #5 |
This is showing up in OIL with juju-core 1.22-beta4.
| summary: | mongo timeout? cannot get replica set configuration → cannot get replset config: not authorized for query on local.system.replset |
| Changed in juju-core: | |
| status: | Invalid → New |
| Jason Hobbs (jason-hobbs) wrote : | #6 |
Some more logs for it:
| Changed in juju-core: | |
| status: | New → Triaged |
| importance: | Undecided → Medium |
| importance: | Medium → High |
| milestone: | none → 1.23 |
| Changed in juju-core: | |
| assignee: | nobody → Wayne Witzel III (wwitzel3) |
| Wayne Witzel III (wwitzel3) wrote : | #7 |
Working with the OIL installation scripts to determine the point during the deployment process when the error starts to occur. I was unable to reproduce it on juju-1.22 trunk using just a simple HA deployment on MAAS.
| Wayne Witzel III (wwitzel3) wrote : | #8 |
Still not able to successfully reproduce this error using my local MAAS. Can you provide the exact commands that were issued for the bootstrap and adding units?
| Changed in juju-core: | |
| assignee: | Wayne Witzel III (wwitzel3) → nobody |
| Curtis Hovey (sinzui) wrote : | #9 |
My reading of the log from comment 6 (https:/
| Curtis Hovey (sinzui) wrote : | #10 |
We are removing this bug from the 1.22-beta5 milestone to unblock its release.
| no longer affects: | juju-core/1.22 |
| Changed in juju-core: | |
| milestone: | 1.23 → 1.24-alpha1 |
| Tyson Malchow (tyson-malchow) wrote : | #11 |
I get this from taking the following steps:
- launch vagrant machine using trusty-
- destroy-environment
- try to bootstrap again
I literally cannot get juju to work again on that machine after this point. Uninstalling everything via apt-get purge was to no avail; I have to start a new machine entirely. :'(
| Alexis Bruemmer (alexis-bruemmer) wrote : | #12 |
There need to be either valid reproduction steps or access to an environment that can reproduce this issue.
| Changed in juju-core: | |
| status: | Triaged → Incomplete |
| Changed in juju-core: | |
| milestone: | 1.24-alpha1 → 1.24.0 |
| Changed in juju-core: | |
| milestone: | 1.24.0 → 1.25.0 |
| Changed in juju-core: | |
| milestone: | 1.25.0 → 1.25.1 |
| Launchpad Janitor (janitor) wrote : | #13 |
[Expired for juju-core 1.24 because there has been no activity for 60 days.]
| Changed in juju-core: | |
| milestone: | 1.25.1 → 1.26.0 |
| milestone: | 1.26.0 → none |
| James Tunnicliffe (dooferlad) wrote : | #14 |
Not sure if we want to count this as the same issue, but if you set up MAAS 1.9 (maybe other versions) to give out addresses over DHCP and try to bootstrap a node with 1.25.x (not tested with 2.0) the bootstrap will fail:
2016-05-25 11:21:42 WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.replset
2016-05-25 11:21:42 WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.replset
2016-05-25 11:21:43 WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.replset
2016-05-25 11:21:43 WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.replset
2016-05-25 11:21:44 INFO juju.worker.
2016-05-25 11:21:44 ERROR juju.cmd supercommand.go:429 cannot initiate replica set: cannot get replica set status: can't get local.system.replset
2016-05-25 11:21:55 ERROR juju.cmd supercommand.go:429 failed to bootstrap environment: subprocess encountered error code 1
The same set up with addressing changed to auto assign works fine.
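A quick way to confirm this failure mode is to check, from the bootstrap node itself, whether the address mongod advertises in the replica set config is actually reachable. A minimal sketch (the function name is illustrative, not part of juju; 37017 is the mongod port from the logs above):

```python
import socket

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If the DHCP-assigned address recorded in the replset config is not
# routable from the node, reachable(addr, 37017) returns False and
# bootstrap fails exactly as in the log above, even though
# reachable("127.0.0.1", 37017) succeeds.
```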
| Changed in juju-core: | |
| status: | Incomplete → Confirmed |
| status: | Confirmed → Triaged |
| Changed in juju-core: | |
| milestone: | none → 2.0.0 |
| Anastasia (anastasia-macmood) wrote : | #15 |
We have not seen this in 2.0. We need a reproducible scenario to tackle it.
| Changed in juju-core: | |
| status: | Triaged → Incomplete |
| Changed in juju-core: | |
| status: | Incomplete → Invalid |
| Changed in juju-core: | |
| status: | Invalid → Incomplete |
| affects: | juju-core → juju |
| Changed in juju: | |
| milestone: | 2.0.0 → none |
| milestone: | none → 2.0.0 |
| Changed in juju-core: | |
| importance: | Undecided → High |
| status: | New → Won't Fix |
| Nate Finch (natefinch) wrote : | #16 |
I'm getting this when I bring up a GCE machine to use as a bootstrap machine for manual provider:
WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.
Notably, when I'm ssh'd into that machine, I can telnet to localhost 37017 (the port mongod is listening on), but I can't telnet to <externalIP> 37017, which is the address stored in the replicaset config. (You can check the replicaset config easily with `grep replSetInitiate /var/log/syslog`.)
Mine says:
command admin.$cmd command: { replSetInitiate: { _id: "juju", version: 1, members: [ { _id: 1, host: "104.155.
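The syslog line above is cut off, but the member addresses can be pulled out of a full replSetInitiate line with a small regex. A sketch (the sample line in the comment is hypothetical, patterned on the output above):

```python
import re

def replset_hosts(syslog_line: str) -> list[str]:
    """Extract the host:port values from a replSetInitiate syslog line."""
    return re.findall(r'host:\s*"([^"]+)"', syslog_line)

# e.g. replset_hosts('... members: [ { _id: 1, host: "203.0.113.5:37017" } ] ...')
# yields ["203.0.113.5:37017"]
```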
The way replica sets work is that juju connects to mongo on localhost, and mongo tells it to connect to one of the machines listed in the replicaset config (in this case, just the one with the local machine's external IP address). Juju then drops the original connection and tries to connect to that machine, which in this case fails. The resulting error is that it can't get the replicaset config from any members of the replica set (because it can't connect to them), even though it already read that very config in order to get the list of addresses to try :/
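That redirect behaviour can be sketched as a toy model (the names here are illustrative, not juju or driver code): the config was already fetched over a localhost seed connection, but every advertised member address must also be dialable, or the whole connection fails.

```python
def connect_replica_set(config: dict, dial) -> str:
    """Toy model of replica-set discovery: redial mongo using the
    addresses advertised *in* the config, not the seed address."""
    for member in config["members"]:
        if dial(member["host"]):
            return member["host"]
    # Mirrors the failure in this bug: the seed connection worked,
    # but no advertised member address is reachable.
    raise ConnectionError(
        "cannot get replset config from any members: "
        + ", ".join(m["host"] for m in config["members"])
    )
```

With a single member whose host is the machine's unreachable external IP, `dial` fails for every entry and the error above is all the client can report.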
| Nate Finch (natefinch) wrote : | #17 |
So, it's likely to be a firewall issue with GCE - there's a firewall external to the machine that is configurable via the web GCE console. I haven't yet twiddled with it enough to get it to work, though.
| Changed in juju: | |
| milestone: | 2.0.0 → none |
| Launchpad Janitor (janitor) wrote : | #18 |
[Expired for juju because there has been no activity for 60 days.]
| Changed in juju: | |
| status: | Incomplete → Expired |


The MAAS environment was dirty due to another bug that we are tracking down and will file. If anyone else gets this on MAAS, make sure your machines were in fact *off* before the bootstrap was allocated one of them.