Upload files to Amazon S3 from the browser using pre-signed URLs

Aakash
6 min read · May 25, 2018

Technology Stack: Angular 5 Frontend, Node.js Backend

Use Case: After I finished writing an app with Angular 5 and Node.js that allowed users to upload 100+ MB files for processing, it was time to deploy it to AWS. Until then I was doing a simple file upload using FormData from Angular to my Node.js server and storing the files on the filesystem with the popular middleware multer. Everything worked as expected. While deploying the app it dawned on me that since my servers would be in an AWS Auto Scaling group, AWS could spawn or kill them based on load. Since I was storing the uploaded files on the EBS volume attached to a specific server, once that server was terminated all the files stored on it would be gone forever. These files are extremely important to my users, and they need to persist whether or not any given server is alive. I was desperate to find a scalable, cheap, viable and secure solution.

Another issue that adds context: depending on the EC2 instance type, Amazon allots specific I/O bandwidth, which for smaller instances is not guaranteed, and the file upload speeds were almost unacceptable. I wanted the uploads to be considerably faster. Enter Amazon S3… Instead of defining all that S3 is, I would just say it is AWESOME and a perfect solution to my problem. Read more about it here. One last thing for this preface before we move on to the code: I didn't want my users to upload a file from Angular to my Node server, only to have my Node server upload that same file again to S3. That is a server round trip that doubles the data you are moving. No way I was going to do that.

Solution:

  1. User submits a form with a file from Angular frontend.
  2. Instead of posting the FormData to Node right away, Angular asks Node for an AWS S3 pre-signed URL to store the file in an S3 bucket with secure bucket policies and permissions, and with CORS enabled. The pre-signed URL has a defined expiry, and only a specific authorized user can generate it.
  3. Once generated, the pre-signed URL is sent back from Node to Angular as the response.
  4. Angular then uses this pre-signed URL to upload the file directly to AWS S3.
  5. Last but not least, we will enable AWS S3 Transfer Acceleration for up to 3x faster file transfers. Read more about it here.

Let’s get into it.

Install the AWS SDK for JavaScript for Node.js:

npm install aws-sdk

Now let’s configure the S3 object. In your server.js

const AWS = require('aws-sdk');
const s3 = new AWS.S3({
  accessKeyId: config.aws_access_key_id,
  secretAccessKey: config.aws_secret_access_key,
  useAccelerateEndpoint: true
});

config for me is nothing but a config.js file that holds environment-specific settings. I would never commit this file to any public repo, since the AWS keys could be used from the CLI to spawn servers that I would end up paying for.

You can use that config like this.

const config = require('./config.js').get(process.env.NODE_ENV);

The beauty of having config tied to NODE_ENV is that I can use the same code to pull different config values depending on how the server is started:

sudo NODE_ENV=local node server

vs

sudo NODE_ENV=dev node server

vs

sudo NODE_ENV=prod node server
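
If you are wondering what config.js looks like, here is a minimal sketch. Every value below is a placeholder, and the shape of the object is an assumption based on how it is used in this post:

// config.js — a minimal sketch; all keys and bucket names are placeholders
const config = {
  local: { aws_access_key_id: 'LOCAL_KEY_ID', aws_secret_access_key: 'LOCAL_SECRET', s3bucketname: 'myapp-local-uploads' },
  dev:   { aws_access_key_id: 'DEV_KEY_ID',   aws_secret_access_key: 'DEV_SECRET',   s3bucketname: 'myapp-dev-uploads' },
  prod:  { aws_access_key_id: 'PROD_KEY_ID',  aws_secret_access_key: 'PROD_SECRET',  s3bucketname: 'myapp-prod-uploads' }
};

// get() returns the block for the current NODE_ENV, falling back to local
exports.get = function (env) {
  return config[env] || config.local;
};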

Now let's create an API endpoint that Angular can call to get pre-signed S3 URLs.

app.get('/generatepresignedurl', function (req, res) {
  var fileurls = [];

  /* set the pre-signed URL expiry time in seconds; also check that the user making the
     request is an authorized user of your app (this is specific to your app's auth
     mechanism, so I am skipping it) */
  const signedUrlExpireSeconds = 60 * 60;

  // set the bucket name from config based on NODE_ENV; your dev and prod buckets should be different
  const myBucket = config.s3bucketname;

  // change the name of your file based on your app logic
  const myKey = 'api/uploads/' + 'test.csv';

  // set params for the getSignedUrl request to Amazon S3
  const params = {
    Bucket: myBucket,
    Key: myKey,
    Expires: signedUrlExpireSeconds,
    ACL: 'bucket-owner-full-control',
    ContentType: 'text/csv'
  };

Since you will be using this pre-signed URL for a PUT, it is extremely important to specify the ACL (Access Control List) and the ContentType.

Now you are ready to call the AWS SDK's API to get your signed URL:

  s3.getSignedUrl('putObject', params, function (err, url) {
    if (err) {
      console.log('Error getting presigned url from AWS S3');
      res.json({ success: false, message: 'Pre-Signed URL error', urls: fileurls });
    } else {
      fileurls[0] = url;
      console.log('Presigned URL: ', fileurls[0]);
      res.json({ success: true, message: 'AWS SDK S3 Pre-signed urls generated successfully.', urls: fileurls });
    }
  });
});

That is all you need to generate pre-signed URLs for uploading a file (putObject) using the AWS SDK from Node.js.

Now what? Let's configure our bucket policy, permissions, etc. on AWS S3.

  1. Create a bucket and name it whatever you want; the name has to be DNS-compliant.
  2. In Properties, to keep things simple, we will enable encryption at rest and Transfer Acceleration.
Encryption settings

Click Enabled on Transfer Acceleration and you will see that AWS grants you a bucket acceleration endpoint:

yourbucketname.s3-accelerate.amazonaws.com

Let's go to Permissions next. Permissions can be complicated, or kept simple yet secure. In my case I will add my AWS user as the sole account on the ACL, as bucket owner, able to list, get, put and delete objects. I will not grant Public access or Log delivery access to this bucket for now.

For the bucket policy you will add the following; note that object-level actions (get, put, delete) apply to the objects ARN, arn:aws:s3:::<s3-bucket-name>/*, so the Resource lists both the bucket and its objects. You can use the policy generator for this as well.

{
  "Version": "2012-10-17",
  "Id": "<policy-id>",
  "Statement": [
    {
      "Sid": "<sid>",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<awsaccount>:user/<awsusername>"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<s3-bucket-name>",
        "arn:aws:s3:::<s3-bucket-name>/*"
      ]
    }
  ]
}

Add this to the CORS configuration of the S3 bucket and you are all set:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

Now let’s look at the Angular 5 code and we will be ready to upload our first file directly to S3 from Chrome using transfer acceleration…

I am not going into the form HTML, file input, etc., but let's say we have a form with one file input field and one submit button to keep things simple.
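
If you want something concrete to start from, here is a minimal sketch of such a component. The names yourform and filedetails match the snippets below; everything else (selector, template, handler names) is illustrative:

import { Component } from '@angular/core';
import { FormControl, FormGroup } from '@angular/forms';

@Component({
  selector: 'app-file-upload',
  template: `
    <form [formGroup]="yourform" (ngSubmit)="onSubmit()">
      <input type="file" (change)="onFileChange($event)">
      <button type="submit">Upload</button>
    </form>
  `
})
export class FileUploadComponent {
  // requires ReactiveFormsModule to be imported in your AppModule
  yourform = new FormGroup({
    filedetails: new FormControl(null)
  });

  onFileChange(event: any) {
    if (event.target.files && event.target.files.length) {
      // store the selected File object so onSubmit() can read it back
      this.yourform.get('filedetails').setValue(event.target.files[0]);
    }
  }

  onSubmit() {
    // the onSubmit() body is shown below
  }
}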

We are interested in the code that does the work once the submit button is clicked.

onSubmit() {
  // call the Node/Express API endpoint to generate the pre-signed URL
  /* fileService is nothing but a service injected into your Angular component's constructor,
     like this: constructor(private fileService: FileService) {} */
  this.fileService.getpresignedurls().subscribe(res => {
    console.log(res); // your res object will have your pre-signed URL
    // my res object is structured as success: boolean, message: string, urls: Array<string>
    if (res.success) {
      const fileuploadurl = res.urls[0];
      // once the pre-signed URL is received, the next service call uploads the file
      this.fileService.uploadfileAWSS3(fileuploadurl, 'text/csv', this.yourform.get('filedetails').value)
        .subscribe((event: HttpEvent<any>) => {
          // handle HttpEvent progress or response and update the view
        });
    }
  });
}

Now let's take a look at the service methods getpresignedurls() and uploadfileAWSS3(). By the way, I have used Angular reactive forms in my component.

getpresignedurls(): Observable<PreSignedURL> {
  let getheaders = new HttpHeaders().set('Accept', 'application/json');
  return this.http.get<PreSignedURL>(this.getpresignedurlsserver, { headers: getheaders });
}

PreSignedURL is nothing but an interface that describes the structure of the response object I already mentioned.
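
A minimal version of that interface, matching the response shape returned by the Node endpoint above:

// describes the JSON returned by /generatepresignedurl
export interface PreSignedURL {
  success: boolean;
  message: string;
  urls: Array<string>;
}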

uploadfileAWSS3(fileuploadurl, contenttype, file): Observable<any> {
  // this will be used to upload all csv files to AWS S3
  const headers = new HttpHeaders({ 'Content-Type': contenttype });
  const req = new HttpRequest(
    'PUT',
    fileuploadurl,
    file,
    {
      headers: headers,
      reportProgress: true // required to track the upload progress
    });
  return this.http.request(req);
}
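
If you want to show upload progress in the view, the events emitted by this request can be handled roughly like this. This is just a sketch: handleUploadEvent and its callbacks are hypothetical helpers, not part of the original code.

import { HttpEvent, HttpEventType } from '@angular/common/http';

// Call this from the subscribe() callback shown in onSubmit() above.
function handleUploadEvent(event: HttpEvent<any>,
                           onProgress: (percent: number) => void,
                           onDone: () => void): void {
  if (event.type === HttpEventType.UploadProgress && event.total) {
    // percentage of bytes sent so far
    onProgress(Math.round(100 * event.loaded / event.total));
  } else if (event.type === HttpEventType.Response) {
    // S3 answers the PUT with an empty 200 response on success
    onDone();
  }
}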

One of the key things I would like to mention here is that you must pass the contenttype in the headers on the uploadfileAWSS3 call. Not passing it works fine in every browser on macOS, but not on Windows. I haven't figured out the reason yet, but the upload from the browser will work on Windows if you explicitly pass the content type.

That's it! Once you hook this all up, you will see your file find its beautiful home in AWS S3. Now we don't need to worry about servers being spawned or killed by AWS Auto Scaling or anything else. The upload speeds are amazing too. Your users will love it.

Let me know if you have any questions. Until later.
