AWS S3 Bucket File Upload with Spring Boot

Hello Everyone!

In this blog, we are going to discuss uploading files to an Amazon S3 bucket using a Spring Boot application.

Why use Amazon S3?

  • The AWS SDK supports many programming languages.
  • There is a web interface where you can see all of your uploaded files.
  • You can manually upload/delete any file using the web interface.

OK, now let's look at how to achieve this.

AWS account configuration

Once we are done with the account creation, we need to create an S3 bucket.

After opening the S3 console, click Create bucket.

Under General configuration, enter the bucket name and choose a region.

Under Object Ownership, there are two options: ACLs disabled and ACLs enabled. For more details about ACLs (Access Control Lists), see the AWS documentation.

Here I am selecting ACLs Enabled to allow another AWS account to access my S3 bucket.

For now, uncheck Block all public access so the bucket can be accessed publicly; later we can tighten access according to our requirements.

The remaining settings can stay at their defaults for now. Click Create bucket and your bucket will be created.

After the bucket is created, we need to configure its policy to allow public reads for now.

Click the created bucket and select the Permissions tab.

Scroll down to Bucket policy and update it with the following JSON. Don't forget to replace YOUR_BUCKET_NAME.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}

Create an IAM User

Go to IAM services

Go to Users and click Add user.

Enter the user's name and check the Access type 'Programmatic access'. Press the Next button. We need to add permissions to this user: choose 'Attach existing policies directly', enter 's3' in the search field, and among the listed policies choose AmazonS3FullAccess.

Then press Next and 'Create user'. If you did everything right, you should see the Access key ID and Secret access key for your user. There is also a 'Download .csv' button for downloading these keys; click it so you don't lose them.

Our S3 Bucket configuration is done so let’s proceed to the Spring Boot application.

Spring Boot Application

First, add the AWS SDK dependency to your pom.xml:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.133</version>
</dependency>

Now let's add the S3 bucket properties to our application.yml file (you can also use an application.properties file).

amazonProperties:
  endpointUrl: https://s3.us-east-2.amazonaws.com
  accessKey: XXXXXXXXXXXXXXXXX
  secretKey: XXXXXXXXXXXXXXXXXXXXXXXXXX
  bucketName: your-bucket-name

Now let's create our API: a @RestController with two request mappings, "/uploadFile" and "/deleteFile".

@RestController
@RequestMapping("/storage/")
public class BucketController {

    private final AmazonClient amazonClient;

    @Autowired
    BucketController(AmazonClient amazonClient) {
        this.amazonClient = amazonClient;
    }

    @PostMapping("/uploadFile")
    public String uploadFile(@RequestPart(value = "file") MultipartFile file) {
        return this.amazonClient.uploadFile(file);
    }

    @DeleteMapping("/deleteFile")
    public String deleteFile(@RequestPart(value = "url") String fileUrl) {
        return this.amazonClient.deleteFileFromS3Bucket(fileUrl);
    }
}

There is nothing special in the controller, except that the uploadFile() method receives a MultipartFile as a @RequestPart.

This code won't compile yet because we don't have the AmazonClient class, so let's create this class with the following fields.

@Service
public class AmazonClient {

    private AmazonS3 s3client;

    @Value("${amazonProperties.endpointUrl}")
    private String endpointUrl;
    @Value("${amazonProperties.bucketName}")
    private String bucketName;
    @Value("${amazonProperties.accessKey}")
    private String accessKey;
    @Value("${amazonProperties.secretKey}")
    private String secretKey;

    @PostConstruct
    private void initializeAmazon() {
        AWSCredentials credentials = new BasicAWSCredentials(this.accessKey, this.secretKey);
        this.s3client = new AmazonS3Client(credentials);
    }
}

AmazonS3 is an interface from the AWS SDK. All the other fields are just a representation of the variables from our application.yml file. The @Value annotation binds the application properties directly to the class fields during application initialization.

We added the initializeAmazon() method to set the Amazon credentials on the client. The @PostConstruct annotation is needed so this method runs after the constructor has been called, because fields marked with @Value are still null inside the constructor.
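As a side note, the AmazonS3Client constructor used above is marked deprecated in recent 1.11.x SDK versions. Here is a minimal sketch of the builder-based alternative; the Regions constant is an assumption chosen to match the endpoint in application.yml:

@PostConstruct
private void initializeAmazon() {
    AWSCredentials credentials = new BasicAWSCredentials(this.accessKey, this.secretKey);
    // Builder-based client; US_EAST_2 is assumed to match the s3.us-east-2 endpoint above.
    this.s3client = AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(credentials))
            .withRegion(Regions.US_EAST_2)
            .build();
}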

The S3 upload method requires a File as a parameter, but we have a MultipartFile, so we need to add a method which performs this conversion.

private File convertMultiPartToFile(MultipartFile file) throws IOException {
    File convFile = new File(file.getOriginalFilename());
    // try-with-resources ensures the stream is closed even if the write fails
    try (FileOutputStream fos = new FileOutputStream(convFile)) {
        fos.write(file.getBytes());
    }
    return convFile;
}

Also, you can upload the same file many times, so we should generate a unique name for each upload. Let's use a timestamp and replace all spaces in the filename with underscores to avoid issues later.

private String generateFileName(MultipartFile multiPart) {
    return new Date().getTime() + "-" + multiPart.getOriginalFilename().replace(" ", "_");
}

Now let's add the method which uploads the file to the S3 bucket.

private void uploadFileTos3bucket(String fileName, File file) {
    s3client.putObject(new PutObjectRequest(bucketName, fileName, file)
            .withCannedAcl(CannedAccessControlList.PublicRead));
}

Note: On Java 11 and later you will get the error "java.lang.ClassNotFoundException: javax.xml.bind.JAXBException". The JAXB APIs are considered Java EE APIs and are therefore no longer on the default classpath in Java SE 9; in Java 11 they were removed from the JDK entirely. Java 9 introduced the module system, and by default the java.se aggregate module is available on the module path. As the name implies, java.se does not include the Java EE APIs that were traditionally bundled with Java 6/7/8. Those APIs were still shipped in JDK 9 and 10, just not resolved by default, in the following modules:

java.activation
java.corba
java.transaction
java.xml.bind << This one contains the JAXB APIs
java.xml.ws
java.xml.ws.annotation

To solve this, add the following dependency to your pom.xml:

<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.0</version>
</dependency>

In this method, we grant public-read permission on the uploaded file, which means anyone who has the file URL can access it. This is a reasonable practice for images only, because you will probably display them on your website or mobile application and want every user to be able to see them.
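If a file should not be public, a common alternative (not covered in the rest of this post, so treat it as a hedged sketch) is to skip the PublicRead ACL and hand out time-limited pre-signed URLs instead:

// Sketch only: returns a java.net.URL granting temporary read access to a private object.
private URL generatePresignedUrl(String fileName) {
    Date expiration = new Date(System.currentTimeMillis() + 60 * 60 * 1000); // valid for 1 hour
    return s3client.generatePresignedUrl(bucketName, fileName, expiration);
}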

Finally, we will combine all these methods into one public method that is called from our controller. This method saves the file to the S3 bucket and returns the fileUrl, which you can store in your database. For example, you can attach this URL to the user's model if it is a profile image.

public String uploadFile(MultipartFile multipartFile) {
    String fileUrl = "";
    try {
        File file = convertMultiPartToFile(multipartFile);
        String fileName = generateFileName(multipartFile);
        fileUrl = endpointUrl + "/" + bucketName + "/" + fileName;
        uploadFileTos3bucket(fileName, file);
        file.delete();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return fileUrl;
}

The only thing left to add is the deleteFileFromS3Bucket() method.

public String deleteFileFromS3Bucket(String fileUrl) {
    String fileName = fileUrl.substring(fileUrl.lastIndexOf("/") + 1);
    s3client.deleteObject(new DeleteObjectRequest(bucketName, fileName));
    return "Successfully deleted";
}

Note: S3 cannot delete a file by its URL. Deletion requires a bucket name and an object key, which is why we extract the file name from the URL.

Note: If you uploaded the file inside a specific folder, such as bucketName/some/folder/image.png, pass the full object key instead of just the file name:

String filePath = "some/folder/image.png";
s3client.deleteObject(new DeleteObjectRequest(bucketName, filePath));

Test

To test the upload, send a POST request to http://localhost:8080/storage/uploadFile with body type 'form-data', key 'file', and the file to upload as the value. If you did everything correctly, you should get the file URL in the response body.

And if you open your S3 bucket on Amazon then you should see one uploaded image there.

Now let's test our delete method. Choose the DELETE method with the endpoint URL http://localhost:8080/storage/deleteFile. The body type is the same, 'form-data', with key 'url', and in the value field enter the fileUrl returned by the upload.
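Postman works well for manual testing. If you prefer an automated check, here is a minimal MockMvc sketch; it assumes a Spring Boot 2.x project with spring-boot-starter-test on the classpath, and the actual S3 upload only succeeds when valid credentials are configured in application.yml:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.mock.web.MockMultipartFile;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.multipart;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@SpringBootTest
@AutoConfigureMockMvc
class BucketControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void uploadFileReturnsOk() throws Exception {
        // "file" matches the @RequestPart name in BucketController
        MockMultipartFile file = new MockMultipartFile(
                "file", "sample image.png", "image/png", "dummy-content".getBytes());

        mockMvc.perform(multipart("/storage/uploadFile").file(file))
                .andExpect(status().isOk());
    }
}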

Conclusion

In this blog, we created an S3 bucket, configured its bucket policy, set up an IAM user with programmatic access, and built a Spring Boot application that uploads files to and deletes files from the bucket.

Bye…!
