AWS S3 Bucket File Upload with Spring Boot
Hello everyone!
In this blog, we are going to discuss uploading files to an Amazon S3 bucket using a Spring Boot application.
Why?
A question may arise: why upload images to an AWS S3 bucket rather than our local database? The answer is pretty simple, and there are two main reasons. First, CloudFront is a true CDN and will serve files much faster than your server. Second, browsers can only open a limited number of connections to a single host, so by offloading files to a different host name, your visitors will be able to download more items in parallel.
Why Use Amazon AWS?
- It’s easy to programmatically upload any file using their API.
- Amazon supports a lot of programming languages.
- There is a web interface where you can see all of your uploaded files.
- You can manually upload/delete any file using the web interface.
OK, now let's look at how we achieve this.
AWS account configuration
You need to create an account on the Amazon website to start using the S3 bucket. Registration is easy and clear enough, but you will have to verify your phone number and enter your credit card info (don't worry, your card will not be charged unless you buy paid services).
Once we are done with the account creation, we need to create an S3 bucket.
After opening the S3 service, click Create bucket.
Under General configuration, enter the bucket name and choose a region.
On Object Ownership, there are two options: ACLs disabled and ACLs enabled. For more details about ACLs (Access Control Lists), see the AWS documentation.
Here I am selecting ACLs enabled so that another AWS account can access my S3 bucket.
For now, uncheck Block all public access so that the bucket can be accessed publicly; later we can tighten access according to our requirements.
The rest of the settings can be left at their defaults for now; click Create bucket, and your bucket will be created.
After the bucket is created, we need to configure its policy to be public for now.
Click the created bucket and select the Permissions tab.
Scroll down to Bucket policy and add the following policy. Don't forget to replace YOUR_BUCKET_NAME.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}
Create IAM User
Now your bucket is created, but we need to give users permission to access it. It is not secure to hand the access keys of your root user to your developer team or anyone else. Instead, we create a new IAM user and give it permission to use only the S3 bucket.
Go to the IAM service.
Go to Users and click Add user.
Enter the user's name and check the access type 'Programmatic access'. Press the Next button. We need to add permissions to this user: press 'Attach existing policies directly', enter 's3' in the search field, and among the found permissions choose AmazonS3FullAccess.
Then press Next and 'Create user'. If you did everything right, you should see the Access key ID and Secret access key for your user. There is also a 'Download .csv' button for downloading these keys, so please click it in order not to lose them.
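As a quick sanity check (my suggestion, not part of the original setup), if you have the AWS CLI installed, you can configure a profile with the new keys and list the bucket. The profile name s3-uploader is arbitrary.
# You will be prompted for the access key ID and secret access key
aws configure --profile s3-uploader
# An empty listing (and no access error) means the keys work
aws s3 ls s3://your-bucket-name --profile s3-uploader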
Our S3 bucket configuration is done, so let's proceed to the Spring Boot application.
Spring Boot Application
Let's create a Spring Boot project and add the AWS SDK dependency.
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.133</version>
</dependency>
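By the way, the aws-java-sdk artifact pulls in the entire SDK. If you want a smaller footprint, the S3-only artifact should work just as well (an optional swap on my part, not required for this tutorial):
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
    <version>1.11.133</version>
</dependency>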
Now let's add the S3 bucket properties to our application.yml file; you can use an application.properties file instead (an equivalent is shown right after this snippet).
amazonProperties:
  endpointUrl: https://s3.us-east-2.amazonaws.com
  accessKey: XXXXXXXXXXXXXXXXX
  secretKey: XXXXXXXXXXXXXXXXXXXXXXXXXX
  bucketName: your-bucket-name
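If you go with application.properties, the equivalent entries use the same keys in dotted form:
amazonProperties.endpointUrl=https://s3.us-east-2.amazonaws.com
amazonProperties.accessKey=XXXXXXXXXXXXXXXXX
amazonProperties.secretKey=XXXXXXXXXXXXXXXXXXXXXXXXXX
amazonProperties.bucketName=your-bucket-name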
Now let's start our API: a RestController with two request mappings, "/uploadFile" and "/deleteFile".
@RestController
@RequestMapping("/storage/")
public class BucketController {

    private AmazonClient amazonClient;

    @Autowired
    BucketController(AmazonClient amazonClient) {
        this.amazonClient = amazonClient;
    }

    @PostMapping("/uploadFile")
    public String uploadFile(@RequestPart(value = "file") MultipartFile file) {
        return this.amazonClient.uploadFile(file);
    }

    @DeleteMapping("/deleteFile")
    public String deleteFile(@RequestPart(value = "url") String fileUrl) {
        return this.amazonClient.deleteFileFromS3Bucket(fileUrl);
    }
}
There is nothing special in the controller, except that the uploadFile() method receives a MultipartFile as a request part.
This code won't compile yet because we don't have the AmazonClient class, so let's create it with the following fields.
@Service
public class AmazonClient {

    private AmazonS3 s3client;

    @Value("${amazonProperties.endpointUrl}")
    private String endpointUrl;
    @Value("${amazonProperties.bucketName}")
    private String bucketName;
    @Value("${amazonProperties.accessKey}")
    private String accessKey;
    @Value("${amazonProperties.secretKey}")
    private String secretKey;

    @PostConstruct
    private void initializeAmazon() {
        AWSCredentials credentials = new BasicAWSCredentials(this.accessKey, this.secretKey);
        this.s3client = new AmazonS3Client(credentials);
    }
}
AmazonS3 is an interface from the AWS SDK (AmazonS3Client is its implementation). All other fields are just representations of the variables from our application.yml file. The @Value annotation binds the application properties directly to the class fields during application initialization.
We added the initializeAmazon() method to set the Amazon credentials on the client. The @PostConstruct annotation is needed so this method runs after the constructor has been called, because fields annotated with @Value are still null inside the constructor.
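A side note: in newer 1.11.x SDK versions, the AmazonS3Client constructor is marked deprecated. If your version complains, a builder-based initialization like this sketch should work inside initializeAmazon() (it assumes the us-east-2 region from our endpoint; adjust to your bucket's region):
// Sketch: builder-based client creation; the region must match your bucket
this.s3client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withRegion(Regions.US_EAST_2)
        .build();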
The S3 upload method requires a File as a parameter, but we have a MultipartFile, so we need to add a method that performs this conversion.
private File convertMultiPartToFile(MultipartFile file) throws IOException {
    File convFile = new File(file.getOriginalFilename());
    // try-with-resources closes the stream even if the write fails
    try (FileOutputStream fos = new FileOutputStream(convFile)) {
        fos.write(file.getBytes());
    }
    return convFile;
}
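One caveat with this conversion (my observation, not from the original post): it writes into the application's working directory, so two concurrent uploads with the same filename could clash. A sketch that stages the upload in the system temp directory instead:
// Hypothetical variant: needs java.io.InputStream, java.nio.file.Files
// and java.nio.file.StandardCopyOption imports.
private File convertMultiPartToFileSafely(MultipartFile file) throws IOException {
    File convFile = File.createTempFile("upload-", ".tmp");
    try (InputStream in = file.getInputStream()) {
        Files.copy(in, convFile.toPath(), StandardCopyOption.REPLACE_EXISTING);
    }
    return convFile;
}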
Also, you can upload the same file many times, so we should generate a unique name for each upload. Let's use a timestamp and also replace all spaces in the filename with underscores to avoid issues in the future. For example, my photo.png would become something like 1636722103456-my_photo.png.
private String generateFileName(MultipartFile multiPart) {
return new Date().getTime() + "-" + multiPart.getOriginalFilename().replace(" ", "_");
}
Now let's add the method that uploads the file to the S3 bucket.
private void uploadFileTos3bucket(String fileName, File file) {
    s3client.putObject(new PutObjectRequest(bucketName, fileName, file)
            .withCannedAcl(CannedAccessControlList.PublicRead));
}
Note: on Java 11 and above, you will get the error "java.lang.ClassNotFoundException: javax.xml.bind.JAXBException". The JAXB APIs are considered Java EE APIs and are therefore no longer on the default classpath in Java SE 9; in Java 11, they were removed from the JDK completely. Java 9 introduced the concept of modules, and by default the java.se aggregate module is available on the classpath (or rather, the module path). As the name implies, the java.se aggregate module does not include the Java EE APIs that were traditionally bundled with Java 6/7/8. Fortunately, the Java EE APIs that shipped with JDK 6/7/8 are still present in JDK 9 and 10, just not on the classpath by default. They are provided in the following modules:
java.activation
java.corba
java.transaction
java.xml.bind << This one contains the JAXB APIs
java.xml.ws
java.xml.ws.annotation
To solve this, add the following dependency to your pom.xml:
<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.0</version>
</dependency>
In this method, we are granting public-read permission on the file, which means anyone who has the file URL can access it. That is a good practice for images, because you will probably display them on your website or mobile application and want every user to be able to see them. Note that setting this canned ACL relies on the ACLs enabled option we chose when creating the bucket.
Finally, we will combine all these methods into one general method that is called from our controller. This method saves a file to the S3 bucket and returns the fileUrl, which you can store in the database. For example, you can attach this URL to the user's model if it's a profile image.
public String uploadFile(MultipartFile multipartFile) {
    String fileUrl = "";
    try {
        File file = convertMultiPartToFile(multipartFile);
        String fileName = generateFileName(multipartFile);
        fileUrl = endpointUrl + "/" + bucketName + "/" + fileName;
        uploadFileTos3bucket(fileName, file);
        file.delete();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return fileUrl;
}
The only thing left to add is the deleteFile() method.
public String deleteFileFromS3Bucket(String fileUrl) {
    String fileName = fileUrl.substring(fileUrl.lastIndexOf("/") + 1);
    // The request takes the bucket name and the object key, not a URL
    s3client.deleteObject(new DeleteObjectRequest(bucketName, fileName));
    return "Successfully deleted";
}
Note: S3 cannot delete a file by its URL. It requires a bucket name and a key (the file name), which is why we extract the file name from the URL.
Note: if you uploaded your file inside specific folders, like bucketName/some/folder/image.png, the key must include the folder path:
String filePath = "some/folder/image.png";
s3client.deleteObject(new DeleteObjectRequest(bucketName, filePath));
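And if you want to delete by URL even when the file sits in a folder, a hypothetical helper like this recovers the full key, assuming the URL was built exactly the way our uploadFile() method builds it:
// Sketch only: assumes fileUrl starts with "<endpointUrl>/<bucketName>/"
private String extractKeyFromUrl(String fileUrl) {
    return fileUrl.substring((endpointUrl + "/" + bucketName + "/").length());
}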
Test
Let's test our application by making requests with Postman. Choose the POST method and select 'form-data' in the Body tab. As the key, enter 'file' and choose the value type 'File', then pick any file from your PC as the value. The endpoint URL is http://localhost:8080/storage/uploadFile.
If you did everything correctly then you should get the file URL in the response body.
And if you open your S3 bucket on Amazon then you should see one uploaded image there.
Now let's test our delete method. Choose the DELETE method with the endpoint URL http://localhost:8080/storage/deleteFile. The body type is the same ('form-data'), the key is 'url', and in the value field enter the fileUrl that was returned when you uploaded.
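If you prefer the command line to Postman, roughly equivalent curl commands look like this (the file path and URL are just example values):
# Upload: multipart form field 'file', matching @RequestPart("file")
curl -X POST -F "file=@/path/to/photo.png" http://localhost:8080/storage/uploadFile
# Delete: form field 'url' containing the fileUrl returned by the upload
curl -X DELETE -F "url=https://s3.us-east-2.amazonaws.com/your-bucket-name/1636722103456-photo.png" http://localhost:8080/storage/deleteFile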
Conclusion
Now you have a better understanding of using an S3 bucket with a Spring Boot application. I hope this was helpful to you. If you have any questions, please feel free to leave a comment. Thank you for reading.
Bye…!