Support for Sigv4-streaming

Bug #1810026 reported by Bhaskar Singhal
This bug affects 4 people
Affects: OpenStack Object Storage (swift)
Status: In Progress

Bug Description

I am trying to upload an object (100+ MB) to swift3/swift using the AWS .NET SDK multipart upload.
But the upload keeps failing with 'Amazon.Runtime.AmazonClientException': Expected hash not equal to calculated hash.

This is happening because the ETag calculated in obj/server covers the entire chunk as received, but with sigv4-streaming each chunk still carries its chunk-signature framing. We need to decode the chunk and then calculate the ETag over the decoded payload.

                    for chunk in iter(timeout_reader, ''):
                        start_time = time.time()
                        if start_time > upload_expiration:
                            return HTTPRequestTimeout(request=request)
                        # this chunk still includes the chunk-signature framing
                        etag.update(chunk)
                        upload_size = writer.write(chunk)
                        elapsed_time += time.time() - start_time
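For illustration, stripping the aws-chunked framing before updating the ETag might look like the sketch below. This is my own minimal parser, not Swift's code; it ignores signature verification and only recovers the payload bytes so an ETag can be computed over the decoded data:

```python
import hashlib

def decode_aws_chunked(raw):
    """Strip aws-chunked (STREAMING-AWS4-HMAC-SHA256-PAYLOAD) framing.

    Each chunk looks like:
        <hex-size>;chunk-signature=<64 hex chars>\r\n<data>\r\n
    and the stream ends with a zero-length chunk. Signature
    verification is deliberately omitted in this sketch.
    """
    decoded = b''
    pos = 0
    while pos < len(raw):
        header_end = raw.index(b'\r\n', pos)
        size_hex = raw[pos:header_end].split(b';', 1)[0]
        size = int(size_hex, 16)
        if size == 0:          # zero-length chunk terminates the stream
            break
        data_start = header_end + 2
        decoded += raw[data_start:data_start + size]
        pos = data_start + size + 2  # skip the trailing \r\n
    return decoded

# A tiny two-chunk example body (signatures are dummy zeros)
body = (b'5;chunk-signature=' + b'0' * 64 + b'\r\nhello\r\n'
        b'0;chunk-signature=' + b'0' * 64 + b'\r\n\r\n')
payload = decode_aws_chunked(body)
etag = hashlib.md5(payload).hexdigest()  # ETag over the *decoded* bytes
```

This also explains the header dump below: Content-Length (5246566) exceeds x-amz-decoded-content-length (5242880) by exactly the framing overhead.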

Various headers set in the request:

{'HTTP_AUTHORIZATION': 'AWS4-HMAC-SHA256 Credential=4a203e8a63cc4fdfa4bbd3ea6c208bac/20181228/us-east-1/s3/aws4_request, SignedHeaders=content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=e7d32398ec602bd2fbfbbcd613ba26835d2036f6215862255a04ee91dadb242e', 'SCRIPT_NAME': '', 'keystone.token_auth': <keystonemiddleware.auth_token._user_plugin.UserAuthPlugin object at 0x7f00e68d5b90>, 'webob.adhoc_attrs': {'response': <_AuthTokenResponse at 0x7f00e68d5f50 200 OK>}, 'REQUEST_METHOD': 'PUT', 'HTTP_X_AMZ_DATE': '20181228T181903Z', 'PATH_INFO': '/bucket1/BBBunny123', 'SERVER_PROTOCOL': 'HTTP/1.0', 'QUERY_STRING': 'partNumber=4&uploadId=YzQ0Yjk2MzMtMzdhYy00NDkyLWEwZjItZjBhYjA0M2FjYjMx', 'REMOTE_ADDR': '', 'CONTENT_LENGTH': '5246566', 'HTTP_X_AMZ_DECODED_CONTENT_LENGTH': '5242880', 'HTTP_USER_AGENT': 'aws-sdk-dotnet-45/ aws-sdk-dotnet-core/ .NET_Runtime/4.0 .NET_Framework/4.0 OS/Microsoft_Windows_NT_6.2.9200.0 ClientAsync TransferManager/MultipartUploadCommand', 'eventlet.posthooks': [], 'RAW_PATH_INFO': '/bucket1/BBBunny123', 'REMOTE_PORT': '58993', 'eventlet.input': <eventlet.wsgi.Input object at 0x7f00df266250>, 'HTTP_X_IDENTITY_STATUS': 'Invalid', 'wsgi.url_scheme': 'http', 'SERVER_PORT': '80', 'HTTP_X_AMZ_CONTENT_SHA256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD', 'wsgi.input': <eventlet.wsgi.Input object at 0x7f00df266250>, 'HTTP_HOST': '', 'wsgi.multithread': True, 'HTTP_EXPECT': '100-continue', 'wsgi.version': (1, 0), 'SERVER_NAME': '', 'GATEWAY_INTERFACE': 'CGI/1.1', 'wsgi.run_once': False, 'wsgi.errors': <swift.common.utils.LoggerFileObject object at 0x7f00df2665d0>, 'wsgi.multiprocess': False, 'swift.trans_id': 'tx8f4b1c9093d1422389664-005c266917', 'CONTENT_TYPE': 'text/plain', 'swift.cache': <swift.common.memcached.MemcacheRing object at 0x7f00df2667d0>}


description: updated
description: updated
Revision history for this message
Tim Burke (1-tim-z) wrote :

Good news: as of a recent change, you'll no longer store bad data (!), and you'll no longer have the client raising exceptions about mismatched hashes.

Bad news: you'll get 501s instead :-(

We definitely *should* support sigv4-streaming, though -- I think how high we can prioritize it will depend on how many clients are interested in using it. (Or if someone already has a patch in hand...)

Changed in swift:
status: New → Confirmed
importance: Undecided → Low
tags: removed: object-server slo
Revision history for this message
Bhaskar Singhal (bhaskarsinghal) wrote :

Thanks, Tim.

Bad news: the Content-Encoding header is missing (refer to the PUT request dump in the description), hence the commit doesn't help and I end up with a few bad chunks getting stored.

Here is the issue regarding missing Content-Encoding header:

Revision history for this message
Tim Burke (1-tim-z) wrote :

Bah, forgot about that. Apparently that's not uncommon -- I tried fixing it in (for example), but apparently S3 might start responding with an *empty* Content-Encoding header... bleh.

FWIW, there's also a proposed patch, but it hasn't landed yet...

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to swift (master)

Submitter: Zuul
Branch: master

commit 1a51604b26e7cd1b3f3e7d30176b251501070e07
Author: Tim Burke <email address hidden>
Date: Thu Nov 29 17:55:55 2018 -0800

    s3api: Look for more indications of aws-chunked uploads

    Change-Id: I7dda8a25c9e13b0d81293f0a966c34713c93f6ad
    Related-Bug: 1810026
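The merged change above looks for additional indications that an upload is aws-chunked. A minimal detection sketch over a WSGI environ like the one in the bug description (the helper name and exact checks are my assumptions, not Swift's actual implementation):

```python
STREAMING_SHA256 = 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD'

def looks_aws_chunked(environ):
    """Heuristically detect a sigv4-streaming (aws-chunked) PUT."""
    # The canonical indicator: the signed payload-hash placeholder.
    if environ.get('HTTP_X_AMZ_CONTENT_SHA256') == STREAMING_SHA256:
        return True
    # Clients *should* also send Content-Encoding: aws-chunked, but as
    # this bug shows, some SDKs (e.g. the .NET SDK here) omit it.
    encodings = environ.get('HTTP_CONTENT_ENCODING', '')
    if 'aws-chunked' in (e.strip() for e in encodings.split(',')):
        return True
    # A decoded-content-length header is another strong hint.
    return 'HTTP_X_AMZ_DECODED_CONTENT_LENGTH' in environ

# Keys taken from the request dump in the bug description
env = {'HTTP_X_AMZ_CONTENT_SHA256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD',
       'HTTP_X_AMZ_DECODED_CONTENT_LENGTH': '5242880'}
```

Checking several signals rather than Content-Encoding alone is what lets the middleware reject (or eventually decode) such uploads even when the client under-advertises them.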

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to swift (feature/losf)

Related fix proposed to branch: feature/losf

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to swift (feature/losf)

Submitter: Zuul
Branch: feature/losf

commit 926a024135d380999d9f8494b19b59bb87a7f5b6
Author: Tim Burke <email address hidden>
Date: Thu Feb 14 21:02:01 2019 +0000

    Fix up flakey TestContainer.test_PUT_bad_metadata

    Change-Id: I7489f2bb95c27d1ddd5e8fa7e5786904100fb567

commit 002d21991e100ee6199e79679ae990c96ea05730
Author: Tim Burke <email address hidden>
Date: Wed Feb 13 17:02:08 2019 +0000

    Make get_data/async/tmp_dir explicit

    functools.partial is all well and good in code, but apparently it
    doesn't play real well with docs.

    Change-Id: Ia460473af9038d890346502784e3cf4d0e1d1c40

commit ac01d186b44856385a13fa77ecf527238c803443
Author: Pete Zaitcev <email address hidden>
Date: Mon Feb 11 21:42:34 2019 -0600

    Leave less garbage in /var/tmp

    All our tests that invoked broker.set_sharding_state() created
    /var/tmp/tmp, when it called DatabaseBroker.get_device_path(),
    then added "tmp" to it. We provided one less level, so it walked
    up outside of the test's temporary directory.

    The case of "cleanUp" instead of "tearDown" didn't break out of
    jail, but left trash in /var/tmp all the same.

    Change-Id: I8030ea49e2a977ebb7048e1d5dcf17338c1616df

commit bb1a2d45685a3b2230f21f7f6ff0e998e666723e
Author: Tim Burke <email address hidden>
Date: Fri Jul 27 20:03:36 2018 +0000

    Display crypto data/metadata details in swift-object-info

    Change-Id: If577c69670a10decdbbf5331b1a38d9392d12711

commit ea8e545a27f06868323ff91c1584d18ab9ac6cda
Author: Clay Gerrard <email address hidden>
Date: Mon Feb 4 15:46:40 2019 -0600

    Rebuild frags for unmounted disks

    Change the behavior of the EC reconstructor to perform a fragment
    rebuild to a handoff node when a primary peer responds with 507 to the
    REPLICATE request.

    Each primary node in an EC ring will sync with exactly three primary
    peers; in addition to the left & right nodes we now select a third node
    from the far side of the ring. If any of these partners respond
    unmounted, the reconstructor will rebuild its fragments to a handoff
    node with the appropriate index.

    To prevent ssync (which is uninterruptible) from receiving a 409
    (Conflict), we must give the remote handoff node the correct
    backend_index for the fragments it will receive. In the common case we
    will use deterministically different handoffs for each fragment index
    to prevent multiple unmounted primary disks from forcing a single
    handoff node to hold more than one rebuilt fragment.

    Handoff nodes will continue to attempt to revert rebuilt handoff
    fragments to the appropriate primary until it is remounted or
    rebalanced. After a rebalance of EC rings (potentially removing
    unmounted/failed devices), it's most IO efficient to run in
    handoffs_only mode to avoid unnecessary rebuilds.

    Closes-Bug: #1510342

    Change-Id: Ief44ed39d97f65e4270bf73051da9a2dd0ddbaec

commit 8a6159f67b6a3e7917e68310e4c24aae81...


tags: added: in-feature-losf
Revision history for this message
fengli (fengli-ostorage) wrote :

We are also struggling with this issue. We back up Elasticsearch snapshots to a Swift cluster using the ES S3 plugin, but it fails because the plugin only supports chunked-encoding uploads (in ES 6.x; chunked encoding can be disabled in the ES 7.5 S3 plugin) and Swift + s3api doesn't support them. Moreover, some applications rely on chunked encoding for video/media streaming uploads. We are investigating and trying to resolve it. Any hint is welcome.

Revision history for this message
clayg (clay-gerrard) wrote :

Some Hadoop connector also apparently expects this; I'm not sure there are many workarounds when the client is a 3rd-party plugin that assumes it's talking to S3 and wants this feature.

Changed in swift:
importance: Low → Medium
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to swift (master)

Fix proposed to branch: master

Changed in swift:
status: Confirmed → In Progress