[2.5] FATAL: remaining connection slots are reserved for non-replication superuser connections
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MAAS | Expired | Critical | Blake Rouse |
Bug Description
I have a MAAS deployment with 2 region/rack controllers + 2 rack controllers: 2 physical machines, 1 deployed as a pod. All of a sudden I started seeing this issue.

After I noticed it, I also found that my secondary region/rack controller was dead; the logs below are from the primary region/rack controller.

Lastly, I manually restarted regiond on the primary region/rack controller and things resolved themselves.

Also, max_connections is set to 200.
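For context on the error message: PostgreSQL reserves `superuser_reserved_connections` slots (3 by default) out of `max_connections` for superusers, so an ordinary client such as regiond hits this FATAL error once `max_connections - superuser_reserved_connections` backends exist. A minimal sketch of that arithmetic, assuming the default reserve of 3 (only max_connections = 200 is stated in this report):

```python
def nonsuperuser_slots_left(max_connections, superuser_reserved, active_backends):
    """Connection slots still available to ordinary (non-superuser) clients.

    PostgreSQL keeps `superuser_reserved` of the `max_connections` slots
    for superusers, so ordinary clients are refused with the FATAL error
    above once active_backends reaches max_connections - superuser_reserved.
    """
    return max(0, (max_connections - superuser_reserved) - active_backends)

# With max_connections = 200 and the default reserve of 3, ordinary
# clients are refused once 197 backends exist:
print(nonsuperuser_slots_left(200, 3, 197))  # → 0
print(nonsuperuser_slots_left(200, 3, 150))  # → 47
```

So the error does not require all 200 slots to be in use, only the 197 non-reserved ones.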
```
2018-10-25 03:55:05 provisioningser
Traceback (most recent call last):
  File "/usr/lib/
    self.
  File "/usr/lib/
    f(*args, **kwargs)
  File "/usr/lib/
    self.
  File "/usr/lib/
    self.
--- <exception caught here> ---
  File "/usr/lib/
    current.result = callback(
  File "/usr/lib/
    key = error.trap(
  File "/usr/lib/
    self.
  File "/usr/lib/
    raise self.value.
  File "/usr/lib/
    result = inContext.theWork()
  File "/usr/lib/
    inContext.
  File "/usr/lib/
    return self.currentCon
  File "/usr/lib/
    return func(*args,**kw)
  File "/usr/lib/
    return func(*args, **kwargs)
  File "/usr/lib/
    result = func(*args, **kwargs)
  File "/usr/lib/
    with connected(), post_commit_hooks:
  File "/usr/lib/
    return next(self.gen)
  File "/usr/lib/
    connection
  File "/usr/lib/
    self.connect()
  File "/usr/lib/
    six.
  File "/usr/lib/
    raise value.with_
  File "/usr/lib/
    self.connect()
  File "/usr/lib/
    self.
  File "/usr/lib/
    connection = Database.
  File "/usr/lib/
    conn = _connect(dsn, connection_
django.
```
Changed in maas:
importance: Undecided → Critical
status: New → Triaged
assignee: nobody → Blake Rouse (blake-rouse)
milestone: none → 2.5.0rc1
description: updated
description: updated
description: updated

Changed in maas:
milestone: 2.5.0rc1 → 2.5.0

Changed in maas:
milestone: 2.5.0 → 2.5.x
Further information I found:
1. The primary regiond/rackd is running.
2. The secondary regiond/rackd is dead.
3. Deployed Ubuntu: it worked.
4. Deployed a custom OS (ESXi): it worked.
5. After the machine had been idle for a while, it started displaying these issues again.
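Since restarting regiond freed the slots, it looks like connections were being held open rather than max_connections being genuinely too low. One way to see which service is holding them is the standard pg_stat_activity view; this is a sketch assuming local access to the MAAS database over any DB-API connection (the helper name is hypothetical):

```python
# Standard pg_stat_activity query: count backends grouped by the
# application_name each client reported, to see what holds the slots.
CONNECTIONS_BY_APP = """
SELECT application_name, state, count(*) AS backends
FROM pg_stat_activity
GROUP BY application_name, state
ORDER BY backends DESC;
"""

def dump_connection_usage(conn):
    """Run the query on an open DB-API connection and print each group."""
    with conn.cursor() as cur:
        cur.execute(CONNECTIONS_BY_APP)
        for app, state, backends in cur.fetchall():
            print(f"{app or '<unnamed>'} [{state}]: {backends}")
```

Run against the cluster while the error is occurring, this would show whether the ~197 backends belong to regiond workers (a leak or pool misconfiguration) or to something else.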