juju controllers hit open file limit
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Incomplete | Undecided | Unassigned | —
Bug Description
Juju controllers use so many socket connections that they frequently hit the open-files limit of 64000.
```
2023-06-07 11:35:41 WARNING juju.worker.
2023-06-07 11:35:41 WARNING juju.worker.
2023-06-07 11:35:42 WARNING juju.worker.
2023-06-07 11:35:43 ERROR juju.worker.
2023-06-07 11:35:43 WARNING juju.worker.
2023-06-07 11:35:44 WARNING juju.worker.
2023-06-07 11:35:45 WARNING juju.worker.
2023-06-07 11:35:46 WARNING juju.worker.
2023-06-07 11:35:47 ERROR juju.apiserver.
github.
2023-06-07 11:35:47 WARNING juju.worker.
2023-06-07 11:35:48 WARNING juju.worker.
2023-06-07 11:35:48 WARNING juju.apiserver.
2023-06-07 11:35:49 WARNING juju.worker.
```
At the time this was observed the controllers were at 2.9.42. Since then they've been upgraded to 2.9.43. I doubt this matters - just noting it.
After restarting the controllers, fd usage started gradually climbing again - in about 2 days the fd usage of one of the controllers reached ~15000 and was still growing.
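For context, fd growth like this can be tracked by sampling `/proc/<pid>/fd` over time. A minimal sketch - `jujud` as the controller process name is an assumption, and the script falls back to its own shell PID so it stays runnable outside a controller machine:

```shell
#!/bin/sh
# Record the controller's open-fd count once per interval to watch growth.
# "jujud" is an assumed process name; the fallback to $$ (this shell)
# just keeps the sketch runnable for demonstration.
PID=$(pgrep -o jujud 2>/dev/null || echo $$)
for i in 1 2 3; do
  printf '%s fds=%s\n' "$(date -u +%FT%TZ)" "$(ls "/proc/$PID/fd" | wc -l)"
  sleep 1
done
```

Left running (e.g. under cron or a longer loop), the timestamps make it easy to see whether the growth is steady or tied to specific events.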
This looks similar to https:/
description: updated
description: updated
summary:
- juju controllers git open file limit
+ juju controllers hit open file limit
Changed in juju:
status: Expired → New
Changed in juju:
status: New → Incomplete
Can you get the lsof output similarly to the bug you've linked, so we can see the source/destination of the connections?
Is this controller involved in cross-model relations (CMRs) like that one?