Read_affinity per storage policy doesn't work

Bug #1737932 reported by Velychkovsky
This bug affects 1 person
Affects: OpenStack Object Storage (swift)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

Hello.
I've set up my Swift cluster with the default storage policy and one additional policy, gold.

swift.conf

[storage-policy:0]
name = Policy-0
default = yes
aliases = yellow, orange

[storage-policy:1]
name = gold
policy_type = replication
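
(Side note: each storage policy needs its own object ring. By convention the ring for policy index 1 is named object-1.ring.gz, so with this swift.conf the proxy expects something like the following; this is only a sketch, assuming the rings live in /etc/swift:)

/etc/swift/object.ring.gz      # Policy-0 (default), built from object.builder
/etc/swift/object-1.ring.gz    # gold, policy index 1, built from object-1.builder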

The gold policy includes 3 zones: 2 zones with HDD nodes and one with an SSD node. I need the proxy server to read from the SSD zone (r1z3).

This is my proxy-server.conf section
-------------
[app:proxy-server]
use = egg:swift#proxy
log_level = DEBUG
allow_account_management = true
account_autocreate = true

[proxy-server:policy:1]
sorting_method = affinity
write_affinity = r1z3
read_affinity = r1z3=10, r1z2=200, r1z1=300
write_affinity_node_count = 3
write_affinity_handoff_delete_count = 2
log_level = DEBUG
-------------
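
For readers unfamiliar with the affinity syntax: read_affinity takes comma-separated r<region>z<zone>=<priority> entries, and lower priority values sort first, so the intent of the setting above is that the SSD zone is always tried first. An annotated sketch of the same value:

read_affinity = r1z3=10, r1z2=200, r1z1=300
# r1z3=10  -> region 1, zone 3 (the SSD zone): lowest value, read first
# r1z2=200 -> tried next
# r1z1=300 -> tried last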

And it doesn't work.
When I GET an object from a gold bucket and then look at my access log, the object is fetched randomly from z1, z2, or z3.
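
For reference, a minimal sketch of one way to reproduce this check (assuming the python-swiftclient CLI with auth environment variables already exported; the container name, file name, and log path are only examples):

# create a container on the gold policy, upload and then fetch a test object
swift post gold_bucket --header "X-Storage-Policy: gold"
swift upload gold_bucket testfile
swift download gold_bucket testfile -o /dev/null
# on each storage node, see which object server handled the GET
grep "GET .*testfile" /var/log/swift/object-server.log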

But when I set read_affinity in the global section, it works well and the object is fetched from z3 each time:

------------
[app:proxy-server]
use = egg:swift#proxy
sorting_method = affinity
write_affinity = r1z3
read_affinity = r1z3=10, r1z2=200, r1z1=300
log_level = DEBUG
allow_account_management = true
account_autocreate = true

[proxy-server:policy:1]
sorting_method = affinity
write_affinity = r1z3
read_affinity = r1z3=10, r1z2=200, r1z1=300
write_affinity_node_count = 3
write_affinity_handoff_delete_count = 2
log_level = DEBUG
----------------

So it looks like read_affinity per policy does not work.
I'm using OpenStack Ocata Swift.

I have 4 storage nodes in my cluster, 3 of them HDD and 1 SSD.
These are my object rings:

1. Default policy
# swift-ring-builder object.builder
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
            0 1 1 1.1.1.1:6200 1.1.1.1:6200 1 2000.00 21846 0.00
            1 1 1 1.1.1.1:6200 1.1.1.1:6200 2 2000.00 21845 -0.00
            2 1 1 1.1.1.1:6200 1.1.1.1:6200 3 2000.00 21845 -0.00
            3 1 2 2.2.2.2:6200 2.2.2.2:6200 1 2000.00 21846 0.00
            4 1 2 2.2.2.2:6200 2.2.2.2:6200 2 2000.00 21845 -0.00
            5 1 2 2.2.2.2:6200 2.2.2.2:6200 3 2000.00 21845 -0.00
            6 1 3 3.3.3.3:6200 3.3.3.3:6200 1 2000.00 21846 0.00
            7 1 3 3.3.3.3:6200 3.3.3.3:6200 2 2000.00 21845 -0.00
            8 1 3 3.3.3.3:6200 3.3.3.3:6200 3 2000.00 21845 -0.00

2. Gold policy
# swift-ring-builder object-1.builder
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
            0 1 1 2.2.2.2:6200 2.2.2.2:6200 1 2000.00 21846 0.00
            1 1 1 2.2.2.2:6200 2.2.2.2:6200 2 2000.00 21845 -0.00
            2 1 1 2.2.2.2:6200 2.2.2.2:6200 3 2000.00 21845 -0.00
            3 1 2 3.3.3.3:6200 3.3.3.3:6200 1 2000.00 21846 0.00
            4 1 2 3.3.3.3:6200 3.3.3.3:6200 2 2000.00 21845 -0.00
            5 1 2 3.3.3.3:6200 3.3.3.3:6200 3 2000.00 21845 -0.00
            6 1 3 4.4.4.4:6200 4.4.4.4:6200 1 2000.00 21846 0.00
            7 1 3 4.4.4.4:6200 4.4.4.4:6200 2 2000.00 21845 -0.00
            8 1 3 4.4.4.4:6200 4.4.4.4:6200 3 2000.00 21845 -0.00
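
For completeness, a ring like object-1 above would typically be built with swift-ring-builder along these lines. This is only a sketch: the part power of 16 is inferred from the partition counts, and one add command is shown per node although each node actually has three devices:

swift-ring-builder object-1.builder create 16 3 1
swift-ring-builder object-1.builder add r1z1-2.2.2.2:6200/1 2000    # HDD node, zone 1
swift-ring-builder object-1.builder add r1z2-3.3.3.3:6200/1 2000    # HDD node, zone 2
swift-ring-builder object-1.builder add r1z3-4.4.4.4:6200/1 2000    # SSD node, zone 3
swift-ring-builder object-1.builder rebalance                       # writes object-1.ring.gz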

Velychkovsky (ahvizl)
description: updated
Revision history for this message
Alistair Coles (alistair-coles) wrote :

Per-policy affinity settings were added in the Pike release series [1], so this will not work with Ocata. Please confirm whether you see this behaviour with swift 2.15.0 or higher.

[1] https://docs.openstack.org/releasenotes/swift/pike.html
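
(A couple of ways to confirm the running Swift version, as a sketch; the proxy address and port are placeholders and assume the /info endpoint has not been disabled:)

curl http://<proxy-ip>:8080/info                      # the "swift" section includes the version
python -c 'import swift; print(swift.__version__)'    # check the installed package directly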

Changed in swift:
status: New → Incomplete
Revision history for this message
Velychkovsky (ahvizl) wrote :

Alistair Coles thanks!
Will test that

Revision history for this message
Velychkovsky (ahvizl) wrote :

It's working!
Many thanks for the assistance.

Changed in swift:
status: Incomplete → Invalid