Comment 3 for bug 1344940

Kapil Thangavelu (hazmat) wrote:

So the actual issue is that we're configuring a replica set with one member, and mongodb stores the replication oplog in a very large capped collection. Combined with constant database writes, we end up with a very large oplog on disk.
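
To confirm where the space is going, the capped collection backing the oplog can be inspected directly (not from the original session; just a quick way to reproduce the diagnosis, output omitted):

# hypothetical check of the oplog's capped-collection stats, runnable from any db
juju:PRIMARY> db.getSiblingDB("local")["oplog.rs"].stats()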

So the solution would be:

a) stop configuring a replica set before we actually have one, and tune the oplog size setting (currently 51200MB, which is where the 51GB number comes from); a sketch of such a restart follows this list
b) have juju stop writing to the database when there are no changes to actually write, since those writes are what inflate the oplog
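
For (a), the oplog size is fixed when mongod starts, so juju would need to pass a saner value on the command line; the 1024 below is purely illustrative, not a recommended value, and an already-allocated oversized oplog has to be dropped and recreated before a new size takes effect:

# hypothetical invocation -- the real juju mongod args are elided; --oplogSize is in MB
mongod --replSet juju --port 37017 --oplogSize 1024 ...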

Even at steady state this system is producing 15-50 write ops a second, and it saturated the disk I/O during actual activity.
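
That write rate can be reproduced by sampling the server's opcounters a few seconds apart (a sketch, roughly equivalent to watching mongostat; not the original measurement):

juju:PRIMARY> var a = db.serverStatus().opcounters; sleep(10000); var b = db.serverStatus().opcounters;
juju:PRIMARY> ((b.insert + b.update + b['delete']) - (a.insert + a.update + a['delete'])) / 10  // write ops/sec over the 10s window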

juju:PRIMARY> show dbs;
admin 0.03125GB
juju 0.03125GB
local 50.515380859375GB
presence 0.03125GB
test (empty)

# on local, back-to-back count() calls show the growth due to spurious writes.

juju:PRIMARY> db['oplog.rs'].count()
55033
juju:PRIMARY> db['oplog.rs'].count()
55041
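
The same two counts can be taken a fixed interval apart to put the growth on a per-second basis (a variant of the above, not from the original session):

juju:PRIMARY> var c1 = db['oplog.rs'].count(); sleep(60000); var c2 = db['oplog.rs'].count();
juju:PRIMARY> (c2 - c1) / 60  // oplog entries appended per second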

# on juju

rs.status()
{
        "set" : "juju",
        "date" : ISODate("2014-07-19T16:19:49Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.9.74:37017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 10732,
                        "optime" : Timestamp(1405786789, 2),
                        "optimeDate" : ISODate("2014-07-19T16:19:49Z"),
                        "self" : true
                }
        ],
        "ok" : 1
}

juju:PRIMARY> db.printReplicationInfo()
configured oplog size: 51200MB
log length start to end: 11119secs (3.09hrs)
oplog first event time: Sat Jul 19 2014 13:20:46 GMT+0000 (UTC)
oplog last event time: Sat Jul 19 2014 16:26:05 GMT+0000 (UTC)
now: Sat Jul 19 2014 16:26:05 GMT+0000 (UTC)
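
The same numbers are available programmatically via the shell's db.getReplicationInfo() helper, which makes the oplog fill rate easy to estimate (a sketch using the helper's standard fields):

juju:PRIMARY> var ri = db.getReplicationInfo();
juju:PRIMARY> ri.usedMB / ri.timeDiffHours  // MB of oplog written per hour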