HTTP Client Library issues with Amazon S3 upload

After upgrading to XP7 and using the new http client lib (3.0.0), we can't "PUT" files to Amazon S3 any more. S3 doesn't seem to like "Transfer-Encoding".

Code:

const item = portalLib.getMultipartItem(FIELD_NAME_FILE);
const stream = portalLib.getMultipartStream(FIELD_NAME_FILE);

const req = {
    method: "PUT",
    url: 'xxxxxxx',
    contentType: item.contentType,
    body: stream
};

Headers set by http-client

transfer-encoding: chunked
content-type: multipart/mixed; boundary=6b8c8338-8e6f-4538-8b6e-3efc9586d3ab

The response from S3

  "body": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>NotImplemented</Code><Message>A header you provided implies functionality that is not implemented</Message><Header>Transfer-Encoding</Header><RequestId>XXXXX</RequestId><HostId>XXXX=</HostId></Error>",

After reading about this on different sites, it seems the Content-Length header should be set to avoid chunked transfer encoding. Since the http-client lib doesn't do that itself, I tried to add the "Content-Length" header myself, but it's restricted:

Caused by: java.lang.IllegalArgumentException: restricted header name: "Content-Length"
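For context, that restriction comes from the JDK's built-in java.net.http client, which (as discussed below) lib-http-client 3.x is built on. A minimal standalone Java sketch, not Enonic-specific, reproduces the same exception:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RestrictedHeaderDemo {
    public static void main(String[] args) {
        try {
            HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/"))
                    .header("Content-Length", "46935") // rejected by java.net.http
                    .build();
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

As far as I know, the JDK only lifts this restriction via the jdk.httpclient.allowRestrictedHeaders system property, which isn't something the lib exposes, so setting the header by hand is a dead end here.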

Any way to make this work in XP7?

What is file in your case? It needs to have a known size in order for the library to set Content-Length.

It’s just a random image actually.

portalLib.getMultipartForm() gives the size

{"name":"file","fileName":"logo_square_transparent.png","contentType":"image/png","size":46935}

Can you post more code how you get file for body?

Sorry, the file variable was actually the stream variable. It was just passed to another function, so I see how that was confusing.

The HTML Form

<form method="post" action="/_/service/xxx/form" enctype="multipart/form-data">
  <input type="file" name="file" id="file" class="a-input">
</form>

The service called “form”

const FIELD_NAME_FILE = "file";
const item = portalLib.getMultipartItem(FIELD_NAME_FILE);
const stream = portalLib.getMultipartStream(FIELD_NAME_FILE);

const req = {
    method: "PUT",
    url,
    contentType: item.contentType,
    body: stream
};

I just used RequestBin.com to see what the http client sends, since we don't have any test instance of the Amazon S3 endpoint our supplier is using.

Looks like this is related to the ofInputStream() BodyPublisher used in the http-client.

BodyPublishers.ofInputStream() creates a RequestPublishers.InputStreamPublisher, which always implements the contentLength() method as "return -1;". This means the content length of the request is unknown. When the HttpClient encounters a request body of unknown content length, it defaults to chunked encoding, because that encoding does not require the content length to be transferred. S3, on the other hand, does not allow chunked encoding to be used to upload files.
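To illustrate the difference, here is a minimal plain-Java sketch (outside Enonic) comparing the contentLength() reported by the two publishers; the 46935-byte array just mirrors the PNG size from the form data above:

```java
import java.io.ByteArrayInputStream;
import java.net.http.HttpRequest.BodyPublishers;

public class ContentLengthDemo {
    public static void main(String[] args) {
        byte[] data = new byte[46935]; // same size as the uploaded PNG

        // InputStreamPublisher cannot know the size up front, so it reports -1,
        // and the client falls back to Transfer-Encoding: chunked
        var fromStream = BodyPublishers.ofInputStream(
                () -> new ByteArrayInputStream(data));
        System.out.println(fromStream.contentLength());

        // A byte-array publisher knows its size, so Content-Length can be set
        var fromBytes = BodyPublishers.ofByteArray(data);
        System.out.println(fromBytes.contentLength());
    }
}
```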

If it were possible to use BodyPublishers.ofByteArray() instead, the request would be able to set the content length, but I am not sure how easy that is with how the http-client is designed today.
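As a sketch of that idea (plain Java, with a placeholder URL and a stand-in stream, not the lib's actual internals): buffering the whole stream into a byte array lets the request report a fixed content length, so the client sends Content-Length instead of chunked encoding:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpRequest;

public class BufferedPutDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the multipart stream (e.g. from portalLib.getMultipartStream)
        InputStream stream = new ByteArrayInputStream(new byte[46935]);

        // Buffer the stream so its size is known before the request is built
        byte[] bytes = stream.readAllBytes();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.invalid/upload")) // placeholder URL
                .header("Content-Type", "image/png")
                .PUT(HttpRequest.BodyPublishers.ofByteArray(bytes))
                .build();

        // The publisher now reports a fixed length instead of -1
        System.out.println(request.bodyPublisher().get().contentLength());
    }
}
```

The obvious trade-off is that the whole file is held in memory, which is fine for small images like this one but not for large uploads.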

I think we can make an improvement in the lib to read the stream size (if it is known).
Stay tuned, it may appear in the next lib version.


Check out the new version of lib-http-client: version 3.2.2 has been released.
It should help you fix the issue.