Colin, I hope you'll reconsider this change and revert it.
I understand that there are buggy servers which fail when clients offer them too many ciphers, but those servers *always* failed; that's nothing new. In trying to expand the library's use cases, this change has caused a regression. It's much worse to take correctly-working server/client pairs and deliberately break them than to fail to support incorrectly-working server/client pairs.
It's not just us; Jordon Bedwell above had the same problem. It's going to break a *lot* of people.
Moreover, it is really an important security issue as well as an interoperability one. I have a right to expect that I will get the most secure cipher from the set formed by the intersection of the client's and the server's supported sets; with this change, I do not, because the client has artificially eliminated some of its supported set.
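To make the downgrade concrete, here is a minimal sketch of how cipher negotiation normally selects the strongest shared cipher, and how artificially trimming the client's offer silently weakens the result. The cipher names and preference order are hypothetical stand-ins, not the library's actual lists:

```python
# Hypothetical server preference list, strongest first (illustrative names only).
SERVER_SUPPORTED = ["AES256-GCM", "AES128-GCM", "3DES"]

def negotiate(client_offered):
    """Return the first server-preferred cipher that the client also offered,
    i.e. the strongest member of the intersection of the two sets."""
    offered = set(client_offered)
    for cipher in SERVER_SUPPORTED:
        if cipher in offered:
            return cipher
    raise ValueError("no shared cipher: handshake fails")

# With the client's full list, the strongest shared cipher wins.
full = negotiate(["3DES", "AES128-GCM", "AES256-GCM"])

# With an artificially trimmed client list, negotiation still succeeds,
# but silently settles on a weaker cipher -- the regression described above.
trimmed = negotiate(["3DES", "AES128-GCM"])

print(full, trimmed)
```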
This is a serious, serious regression, both in security and in interoperability.