Comment 0 for bug 2044852

Tobias Heider (tobhe) wrote:

[ Impact ]

SHA3 in libgcrypt produces wrong digests for inputs larger than 4 GiB.

[ Test Plan ]

Calculate the SHA3 hash of a large (> 4 GiB) input and compare the
result with the output of another implementation such as OpenSSL. Per
the upstream report, the data must be passed to libgcrypt in a single
write; feeding it in smaller chunks does not trigger the bug. A
reproducer sketch follows below.

Expected behavior: both implementations produce the same digest
Actual behavior: the digests differ
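
The following is a minimal reproducer sketch, assuming a 64-bit system
with enough RAM for the buffer; the 5 GiB size and zero-filled contents
are arbitrary illustrative choices, not part of the original report. It
hashes the buffer with a single gcry_md_write() call, the code path the
upstream report identifies as affected:

    /* repro.c - hash a 5 GiB buffer in one gcry_md_write() call.
     * Requires libgcrypt >= 1.7 for SHA3 support. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <gcrypt.h>

    int main(void)
    {
        const size_t len = 5ULL * 1024 * 1024 * 1024;  /* > 4 GiB */
        unsigned char *buf = calloc(1, len);
        if (!buf) { perror("calloc"); return 1; }

        if (!gcry_check_version(GCRYPT_VERSION)) {
            fprintf(stderr, "libgcrypt version mismatch\n");
            return 1;
        }
        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

        gcry_md_hd_t hd;
        if (gcry_md_open(&hd, GCRY_MD_SHA3_256, 0)) {
            fprintf(stderr, "gcry_md_open failed\n");
            return 1;
        }
        gcry_md_write(hd, buf, len);  /* one big write hits the bug */

        const unsigned char *digest = gcry_md_read(hd, GCRY_MD_SHA3_256);
        for (unsigned int i = 0;
             i < gcry_md_get_algo_dlen(GCRY_MD_SHA3_256); i++)
            printf("%02x", digest[i]);
        putchar('\n');

        gcry_md_close(hd);
        free(buf);
        return 0;
    }

Build with "gcc repro.c -o repro $(libgcrypt-config --cflags --libs)"
and compare against the same data hashed by OpenSSL, e.g.
"head -c 5368709120 /dev/zero | openssl dgst -sha3-256". On an affected
libgcrypt the two digests differ; on a fixed one they match.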

[ Where problems could occur ]

Users relying on the old, broken digests might be surprised by the new,
fixed results. The impact should be low, since SHA3 from libgcrypt is
not widely used, especially not with inputs of this size.

[ Other Info ]

From the upstream bug report:

The SHA3 functions give wrong results for inputs larger than 4 GB because the length argument, originally a size_t, is handled as an unsigned int in keccak_write, which leads to integer overflows. This does not happen if the data is fed into md_write in smaller chunks. More information and reproducers are available from Clemens in the attached bug.
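
To make the failure mode concrete, here is a self-contained sketch of
the truncation; the function is a hypothetical stand-in, not actual
libgcrypt code:

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical stand-in for an internal write routine that takes
     * its length as unsigned int instead of size_t. */
    static void write_truncated(unsigned int nbytes)
    {
        printf("length seen internally: %u bytes\n", nbytes);
    }

    int main(void)
    {
        size_t inlen = 5ULL * 1024 * 1024 * 1024;  /* 5 GiB input */
        /* The implicit size_t -> unsigned int conversion keeps only
         * the low 32 bits: 5 GiB (0x140000000) becomes 1 GiB
         * (0x40000000), so 4 GiB of input is silently never hashed. */
        write_truncated(inlen);
        return 0;
    }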

The fix that should solve the problem (use of size_t) is now available on GitLab: https://gitlab.com/redhat-crypto/libgcrypt/libgcrypt-mirror/-/merge_requests/6 (comments welcome).
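
Until a fixed package is installed, the upstream description implies a
workaround: pass the data to gcry_md_write() in chunks smaller than
4 GiB. A minimal sketch; the helper name and the 1 GiB chunk size are
arbitrary choices, not a libgcrypt requirement:

    #include <stddef.h>
    #include <gcrypt.h>

    /* Split one large write into sub-4-GiB pieces, which the upstream
     * report says are unaffected by the overflow. */
    static void md_write_chunked(gcry_md_hd_t hd,
                                 const unsigned char *buf, size_t len)
    {
        const size_t chunk = 1024UL * 1024 * 1024;  /* 1 GiB per write */
        while (len > 0) {
            size_t n = len < chunk ? len : chunk;
            gcry_md_write(hd, buf, n);
            buf += n;
            len -= n;
        }
    }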

I considered updating some of the hash tests to capture this issue, but have not yet found a simple way to do so; I will leave it to you to decide whether a regression test is needed here.

Upstream Bug: https://dev.gnupg.org/T6217
Upstream Fix: https://dev.gnupg.org/rC9c828129b2058c3f36e07634637929a54e8377ee