Comment 5 for bug 1494359

Caleb Tennis (ctennis) wrote:

This is what the ring looked like at the time:

swiftstack@platform:/opt/ss/builder_configs/c57c2ecf-cf91-4db5-ad7b-669a8265b124$ swift-ring-builder object-1.builder
object-1.builder, build version 47
1024 partitions, 14.000000 replicas, 1 regions, 1 zones, 24 devices, 999.99 balance, 86.04 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 10000000.00% (100000.000000)
Devices:    id  region  zone      ip address  port  replication ip  replication port  name  weight  partitions  balance  meta
             0       1     1     172.30.3.43  6000     172.30.3.43              6003    d0    0.00           0     0.00
             1       1     1     172.30.3.43  6000     172.30.3.43              6003    d1    0.00           0     0.00
             2       1     1     172.30.3.43  6000     172.30.3.43              6003    d2    0.00         149   999.99
             3       1     1     172.30.3.43  6000     172.30.3.43              6003    d3    0.00         482   999.99
             4       1     1     172.30.3.43  6000     172.30.3.43              6003    d4    0.00         731   999.99
             5       1     1     172.30.3.43  6000     172.30.3.43              6003    d5    8.59         911    20.74
             6       1     1     172.30.3.43  6000     172.30.3.43              6003    d6    8.59         911    20.74
             7       1     1     172.30.3.43  6000     172.30.3.43              6003    d7    8.59         912    20.87
             8       1     1     172.30.3.44  6000     172.30.3.44              6003    d8    8.59         640   -15.18
             9       1     1     172.30.3.44  6000     172.30.3.44              6003    d9    8.59         640   -15.18
            10       1     1     172.30.3.44  6000     172.30.3.44              6003   d10    8.59         640   -15.18
            11       1     1     172.30.3.44  6000     172.30.3.44              6003   d11    8.59         640   -15.18
            12       1     1     172.30.3.44  6000     172.30.3.44              6003   d12    8.59         639   -15.31
            13       1     1     172.30.3.44  6000     172.30.3.44              6003   d13    8.59         641   -15.05
            14       1     1     172.30.3.44  6000     172.30.3.44              6003   d14    8.59         640   -15.18
            15       1     1     172.30.3.44  6000     172.30.3.44              6003   d15    8.59         640   -15.18
            16       1     1     172.30.3.45  6000     172.30.3.45              6003   d16    8.59         640   -15.18
            17       1     1     172.30.3.45  6000     172.30.3.45              6003   d17    8.59         640   -15.18
            18       1     1     172.30.3.45  6000     172.30.3.45              6003   d18    8.59         640   -15.18
            19       1     1     172.30.3.45  6000     172.30.3.45              6003   d19    8.59         640   -15.18
            20       1     1     172.30.3.45  6000     172.30.3.45              6003   d20    8.59         640   -15.18
            21       1     1     172.30.3.45  6000     172.30.3.45              6003   d21    8.59         640   -15.18
            22       1     1     172.30.3.45  6000     172.30.3.45              6003   d22    8.59         640   -15.18
            23       1     1     172.30.3.45  6000     172.30.3.45              6003   d23    8.59         640   -15.18
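For reference, the 999.99 entries are the builder's balance cap rather than a computed value. A minimal Python sketch of how these per-device balances fall out of the numbers above, assuming balance is the percent deviation of assigned partitions from the weight-proportional share and that swift caps it at MAX_BALANCE = 999.99 for zero-weight devices still holding partitions (illustrative, not swift's actual implementation):

    # Illustrative sketch of per-device balance, assuming:
    #   balance = 100 * assigned / wanted - 100, capped at MAX_BALANCE
    # where "wanted" is the device's weight-proportional share of all
    # partition-replica assignments.
    MAX_BALANCE = 999.99

    def device_balance(dev_parts, dev_weight, total_parts, total_weight, replicas):
        # Desired share: fraction of (partitions * replicas) proportional to weight.
        wanted = total_parts * replicas * dev_weight / total_weight
        if wanted <= 0:
            # A zero-weight device should hold nothing; any partitions it
            # still holds (d2-d4 above, mid-drain) pin its balance at the cap.
            return MAX_BALANCE if dev_parts else 0.0
        return round(100.0 * dev_parts / wanted - 100.0, 2)

    # 1024 partitions * 14 replicas, spread over 19 devices of weight 8.59:
    print(device_balance(911, 8.59, 1024, 19 * 8.59, 14))  # 20.74, matches d5
    print(device_balance(149, 0.00, 1024, 19 * 8.59, 14))  # 999.99, matches d2

The printed values line up with the table (20.74 for d5, -15.18 for the 640-partition devices, 999.99 for d2), consistent with d2-d4 being zero-weight devices that haven't finished draining.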

No EC settings have changed; this is the same policy I've been running the whole time. I've just been bringing disks in and out of the policy to test fullness issues (sketched below). None of the disks are near full in this case.
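For concreteness, the in-and-out cycle is just weight changes plus rebalances. A minimal sketch using swift's RingBuilder API (swift.common.ring); the builder filename and device id are taken from the output above, and the exact method behavior should be checked against the swift version in use:

    from swift.common.ring import RingBuilder

    BUILDER = 'object-1.builder'  # filename as in the output above

    # "Bring a disk out": zero its weight so subsequent rebalances drain it.
    builder = RingBuilder.load(BUILDER)
    builder.set_dev_weight(2, 0.0)  # device id 2 ("d2" in the table above)
    builder.rebalance()
    builder.save(BUILDER)

    # "Bring it back in": restore the weight and rebalance again.
    # min_part_hours (1 here) throttles partition moves between rebalances,
    # so zero-weight devices can take several passes to drain fully.
    builder = RingBuilder.load(BUILDER)
    builder.set_dev_weight(2, 8.59)
    builder.rebalance()
    builder.save(BUILDER)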