Logrotate config missed for ceph
Bug #1607117 reported by Max Lvov. This bug affects 4 people.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Invalid | High | Fuel Sustaining | |
8.0.x | Fix Released | High | Rodion Tikunov | |
Mitaka | Invalid | High | Fuel Sustaining | |
Bug Description
MOS 8.0 MU1
3 controllers 5 computes 4 ceph 3 mongodb
Ceph logs are stored both on the Ceph nodes and on the controllers, but there is no logrotate config for Ceph. During data redistribution (e.g. after a new node is added), Ceph generates an overwhelming amount of logs, which causes the disk to run out of free space (5 GB in 3 days).
Changed in fuel:
  milestone: none → 8.0-updates
  assignee: nobody → MOS Maintenance (mos-maintenance)
  tags: added: customer-found
Changed in fuel:
  assignee: MOS Maintenance (mos-maintenance) → Anton Chevychalov (achevychalov)
Changed in fuel:
  status: New → Confirmed
Changed in fuel:
  importance: Undecided → High
  tags: added: area-library
no longer affects: fuel/newton
My proposal is to add /var/log/ceph/ to the logrotate system and reduce the Ceph log level to WARN.
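A minimal sketch of what such a logrotate config could look like, modeled on the rotation configs shipped for other services on these nodes (the exact rotation period, retention count, and daemon names are assumptions, not the fix that was released):

```
# /etc/logrotate.d/ceph (hypothetical example)
/var/log/ceph/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        # Ask the Ceph daemons to reopen their log files after rotation
        killall -q -1 ceph-mon ceph-osd ceph-mds radosgw || true
    endscript
}
```

Note that Ceph does not have a single WARN log level; verbosity is tuned per subsystem via `debug <subsystem> = <level>` settings in ceph.conf, so "reduce loglevel to WARN" would translate to lowering those debug levels on the affected daemons.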