'debug-log' fails with db-log when logs are large
Bug #1524135 reported by John A Meinel
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| juju-core | Fix Released | Critical | John A Meinel | 1.26-alpha3 |
Bug Description
While doing a scale test I ran into this failure:
2015-12-09 00:29:21 ERROR juju.apiserver debuglog.go:96 debug-log handler error: tailer stopped: too much data for sort() with no index. add an index or specify a smaller limit
I have 1500 ubuntu units, so there is a significant amount of logging going on. Apparently the collection that tracks the log content has grown large enough that Mongo refuses to do the time sort for us: with no index supporting the sort, Mongo sorts in memory and aborts once the data exceeds its in-memory sort limit.
This is in an environment bootstrapped with "JUJU_DEV_…
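
For context, the tailer's query is roughly of this shape (a minimal sketch using the mgo driver; the collection and field names logs, e, and t are assumptions based on the index discussion in the comments below):

    // Assumed shape of the problematic query: filter on the environment
    // UUID and sort on (time, document id). The collection is indexed on
    // (e, t), but Mongo does not match the (t, _id) sort against that
    // index, so it falls back to an in-memory sort, which fails once the
    // log data outgrows the sort memory limit.
    package logtail

    import (
        "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
    )

    func tailLogs(logs *mgo.Collection, envUUID string) *mgo.Iter {
        return logs.Find(bson.M{"e": envUUID}).Sort("t", "_id").Iter()
    }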
Changed in juju-core:
  milestone: none → 2.0-alpha1
  assignee: nobody → John A Meinel (jameinel)
  importance: High → Critical
  status: Triaged → In Progress
Changed in juju-core:
  status: In Progress → Fix Committed
Changed in juju-core:
  milestone: 2.0-alpha1 → 1.26-alpha3
Changed in juju-core:
  status: Fix Committed → Fix Released
tags: added: 2.0-count
It turns out we have an index on environment UUID and time (e, t), but we filter on e and sort on (t, _id), and Mongo doesn't seem to do a good job of noticing that it could use that index. So we have to explicitly match our index in order for the sort to use it.
One option is to sort on (e, t), or maybe add an index so we could sort on (e, t, _id).
We're still discussing whether we need to sort on "_id" at all.
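
A minimal sketch of the fix being discussed, under the same assumed names as the sketch above: add an index that covers the sort and make the sort keys match the index prefix, so Mongo can walk the index instead of sorting in memory (whether "_id" belongs in the sort is the open question):

    package logtail

    import (
        "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
    )

    // ensureLogIndex adds a compound index the sort can use; "_id" is
    // included only to illustrate the (e, t, _id) option mentioned above.
    func ensureLogIndex(logs *mgo.Collection) error {
        return logs.EnsureIndex(mgo.Index{Key: []string{"e", "t", "_id"}})
    }

    // tailLogsIndexed sorts on the same keys as the index, in the same
    // order, so Mongo recognizes that the index satisfies the sort.
    func tailLogsIndexed(logs *mgo.Collection, envUUID string) *mgo.Iter {
        return logs.Find(bson.M{"e": envUUID}).Sort("e", "t", "_id").Iter()
    }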