fix(middleware-flexible-checksums): buffer stream chunks to minimum required size #6882
helps with #6859
Buffers input stream chunks to the minimum size required by S3, as described in https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html.
This applies to input (upload) streams, e.g. S3 PutObject or UploadPart.
With checksums enabled, user streams pass through an aws-chunked encoding step. This encoding annotates every stream chunk with its size, so chunks are sent to S3 as-is, at whatever size the user provides them, instead of Node.js being able to automatically buffer smaller chunks together.
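To illustrate why chunk sizes become fixed on the wire, here is a minimal sketch of the aws-chunked framing described in the linked SigV4 streaming docs (unsigned payload with a trailing checksum). This is not the SDK's actual implementation; the class name and the trailer value are illustrative placeholders.

```ts
import { Transform, TransformCallback } from "stream";

// Simplified sketch: each incoming chunk is framed as-is, so its size
// is baked into the request body and cannot be re-buffered afterwards.
class AwsChunkedEncoder extends Transform {
  _transform(chunk: Buffer, _enc: BufferEncoding, cb: TransformCallback) {
    // Frame: hex length, CRLF, chunk data, CRLF.
    this.push(Buffer.from(chunk.length.toString(16) + "\r\n"));
    this.push(chunk);
    this.push(Buffer.from("\r\n"));
    cb();
  }
  _flush(cb: TransformCallback) {
    // Zero-length final chunk, then the checksum trailer (placeholder value).
    this.push(Buffer.from("0\r\n"));
    this.push(Buffer.from("x-amz-checksum-crc32:PLACEHOLDER\r\n\r\n"));
    cb();
  }
}
```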
S3 has always required a minimum chunk size of 8 KB (64 KB is recommended), but due to automatic buffering this was rarely an issue prior to the introduction of default checksums.
As a mitigation and convenience feature, the JS SDK can now buffer streams on the user's behalf before they undergo chunked encoding. This incurs a small performance penalty, which users can avoid by providing streams with larger chunks in the first place.
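A minimal sketch of the buffering idea, placed upstream of the chunked encoder: accumulate small chunks until a minimum byte count is reached, then emit them as one larger chunk. The class name and `MIN_CHUNK_SIZE` constant are hypothetical, not the SDK's internals.

```ts
import { Transform, TransformCallback } from "stream";

// Hypothetical constant matching S3's documented 8 KB minimum.
const MIN_CHUNK_SIZE = 8 * 1024;

class ChunkBuffer extends Transform {
  private pending: Buffer[] = [];
  private pendingLength = 0;

  _transform(chunk: Buffer, _enc: BufferEncoding, cb: TransformCallback) {
    // Hold small chunks until at least MIN_CHUNK_SIZE bytes are available.
    this.pending.push(chunk);
    this.pendingLength += chunk.length;
    if (this.pendingLength >= MIN_CHUNK_SIZE) {
      this.push(Buffer.concat(this.pending));
      this.pending = [];
      this.pendingLength = 0;
    }
    cb();
  }

  _flush(cb: TransformCallback) {
    // The final chunk may be smaller than the minimum; S3 permits that.
    if (this.pendingLength > 0) this.push(Buffer.concat(this.pending));
    cb();
  }
}
```

In this sketch, a user stream with tiny chunks would be piped through `ChunkBuffer` before the chunked encoding step, so each framed chunk meets the minimum size; users who already supply large chunks pass through with effectively no buffering cost.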