Introduction
In this blog post, we’ll develop a Spring Boot application destined for deployment on AWS. The application will connect to both the S3 service and an Amazon managed PostgreSQL (RDS) database. The project adheres to best practices: no credentials are stored in the code. Instead, IAM roles are used to connect to AWS resources in the cloud, while for local development the default credentials provider chain (for example, the profile configured with aws configure) is used.
EC2
Amazon Elastic Compute Cloud (Amazon EC2) provides a comprehensive and versatile compute platform with more than 750 instance types. It offers a selection of the latest processors, storage options, networking configurations, operating systems, and purchase models, enabling customers to tailor their choices to diverse requirements. EC2 supports Intel, AMD, and Arm processors, including on-demand EC2 Mac instances.
S3
Amazon S3 functions as a repository for online data, providing a reliable, fast, and economical infrastructure for storing data. It streamlines web-scale computing by making it easy to store and retrieve any amount of data, whether within Amazon EC2 or anywhere on the web, at any time.
RDS – PostgreSQL
Amazon RDS simplifies the process of establishing, managing, and expanding PostgreSQL deployments in the cloud. This service enables the swift deployment of scalable PostgreSQL configurations within minutes, utilizing cost-effective and adjustable hardware capacity. Amazon RDS takes care of intricate and time-consuming administrative responsibilities, including upgrades, storage management, replication to ensure high availability and enhanced read throughput, as well as backups for disaster recovery.
Create a Spring Boot Application
Step 1: Spring Boot Postgres Application
In a previous blog post, we created a full-fledged Spring Boot Postgres application that uses a Docker container for Postgres connectivity. We will reuse the same base code and add S3 and RDS connectivity. Please refer to the blog post below to learn more.
Step 2: Integrate AWS S3 – Code Changes
Add Dependency
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>s3</artifactId>
  <version>2.21.27</version>
</dependency>
S3 configuration: create an S3Client bean that picks up credentials from ~/.aws for the local environment and from the EC2 instance role in the cloud.
package com.jonesjalapat.blog.tradesman.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

@Configuration
public class S3Configuration {

  @Value("${region}")
  private String region;

  // In the cloud ("prod" profile), credentials come from the EC2 instance role.
  @Bean
  @Profile("prod")
  public S3Client getProdClient() {
    return S3Client.builder()
        .credentialsProvider(InstanceProfileCredentialsProvider.create())
        .region(Region.of(region))
        .build();
  }

  // Locally ("dev" profile), the default credentials provider chain is used,
  // which picks up the credentials configured under ~/.aws.
  @Bean
  @Profile("dev")
  public S3Client getDevClient() {
    return S3Client.builder().region(Region.of(region)).build();
  }
}
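If you prefer the local client to be explicit about where its credentials come from rather than relying on the default chain, a minimal sketch of an alternative dev bean could pin a named profile from ~/.aws/credentials. This is not part of the original configuration, and the profile name "default" below is an assumption:

// Sketch of an alternative "dev" bean (not in the original configuration): it
// explicitly reads the "default" profile from ~/.aws/credentials instead of
// relying on the default provider chain. Requires the additional import
// software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider.
@Bean
@Profile("dev")
public S3Client getDevClientWithExplicitProfile() {
  return S3Client.builder()
      .credentialsProvider(ProfileCredentialsProvider.create("default"))
      .region(Region.of(region))
      .build();
}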
S3 service to check whether an object exists in the bucket:
package com.jonesjalapat.blog.tradesman.cloud;

import lombok.RequiredArgsConstructor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.NoSuchKeyException;

@Service
@RequiredArgsConstructor
public class S3Service {

  private final S3Client s3Client;

  @Value("${bucketname}")
  private String bucketName;

  public void validateAvatar(String avatar) {
    if (!exists(bucketName, avatar)) {
      throw new IllegalArgumentException(
          "Avatar " + avatar + " does not exist in bucket " + bucketName);
    }
  }

  // headObject succeeds if the key exists and throws NoSuchKeyException otherwise.
  private boolean exists(String bucket, String key) {
    try {
      s3Client.headObject(HeadObjectRequest.builder().bucket(bucket).key(key).build());
      return true;
    } catch (NoSuchKeyException e) {
      return false;
    }
  }
}
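As a quick illustration of how this service might be used, here is a hypothetical caller. The TradesmanService class and its registration flow below are assumptions for the sketch, not part of the original code:

package com.jonesjalapat.blog.tradesman.service;

import com.jonesjalapat.blog.tradesman.cloud.S3Service;
import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Service;

// Hypothetical caller: rejects a request early if the referenced avatar
// object is not present in the S3 bucket.
@Service
@RequiredArgsConstructor
public class TradesmanService {

  private final S3Service s3Service;

  public void registerTradesman(String name, String avatarKey) {
    s3Service.validateAvatar(avatarKey); // throws if the object is missing
    // ... persist the tradesman entity here ...
  }
}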
application-dev.yml configuration changes for the local environment:
## application-dev.yml
cloud:
  aws:
    region:
      static: us-east-2
    stack:
      auto: false
    credentials:
      profile-name: default
logging:
  level:
    com:
      amazonaws:
        util:
          EC2MetadataUtils: error
Step 3: Integrate RDS PostgreSQL – Code Changes
application.yml changes: make sure to keep the DB name in the JDBC URL as postgres (the default), unless you created the RDS instance with a different database name.
spring:
  profiles:
    active: dev
  datasource:
    driver-class-name: org.postgresql.Driver
    username: postgres
    url: jdbc:postgresql://tradesman.url.us-east-2.rds.amazonaws.com:5432/postgres
    password: password
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
logging:
  level:
    liquibase: INFO
bucketname: tradesman-bucket
region: us-east-2
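To confirm that the datasource above actually reaches the RDS instance at startup, a small sanity check can help. The following is only a sketch, and the DataSourceHealthCheck class is an assumption rather than part of the original project; it relies on JdbcTemplate being auto-configured, which is the case when spring-boot-starter-data-jpa (or starter-jdbc) is on the classpath:

package com.jonesjalapat.blog.tradesman.config;

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

// Optional startup check (not in the original project): runs a trivial query
// against the configured database when the application boots.
@Configuration
public class DataSourceHealthCheck {

  @Bean
  CommandLineRunner verifyRdsConnection(JdbcTemplate jdbcTemplate) {
    return args -> {
      Integer result = jdbcTemplate.queryForObject("SELECT 1", Integer.class);
      System.out.println("RDS PostgreSQL connectivity OK, SELECT 1 returned " + result);
    };
  }
}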
Step 4: AWS S3 Bucket Changes
Create the bucket and add a bucket policy that allows Get access.
Create an IAM policy for accessing EC2, RDS, and S3. For production, the specific resources should be listed; for simplicity, we have kept the resource as *.
Attach the policy to the EC2 instance (via its instance role) for authentication in the cloud.
Attach the policy to the IAM user for authentication in the local environment.
That’s all for connecting to S3.
Step 5: AWS RDS PostgreSQL Changes
Create an RDS instance. Make sure to specify a database name if you need one other than the default, i.e. postgres.
Database authentication: based on both password and IAM policy (see the token sketch after this list).
Security groups: allow inbound access from your local machine and the EC2 instance; outbound can remain open to all.
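If you later want to lean on the IAM side of that authentication setup instead of the static password in application.yml, the AWS SDK can generate a short-lived token to use as the JDBC password. This is only a sketch under assumptions: it requires the rds module of the AWS SDK v2 as an extra dependency, and the hostname below is the placeholder endpoint from the configuration above.

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rds.RdsUtilities;
import software.amazon.awssdk.services.rds.model.GenerateAuthenticationTokenRequest;

// Sketch (not part of the original project): generates an RDS IAM auth token.
public class RdsIamTokenExample {

  public static void main(String[] args) {
    // Use the same credentials chain as the rest of the application.
    RdsUtilities rdsUtilities = RdsUtilities.builder()
        .region(Region.US_EAST_2)
        .credentialsProvider(DefaultCredentialsProvider.create())
        .build();

    // The token is valid for 15 minutes and is used in place of the DB password.
    String authToken = rdsUtilities.generateAuthenticationToken(
        GenerateAuthenticationTokenRequest.builder()
            .hostname("tradesman.url.us-east-2.rds.amazonaws.com")
            .port(5432)
            .username("postgres")
            .build());

    System.out.println("Generated IAM auth token of length " + authToken.length());
  }
}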
Hurray! That’s it. The environment and code are ready; let’s now test it.
Step 6: Testing
Local
Run aws configure and enter the access key ID and secret access key so they are stored in your credentials file. Then run mvn spring-boot:run.
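Before starting the application, a quick way to confirm that the local credentials chain is wired up is to resolve the credentials directly. This is just a throwaway sketch; the CredentialsCheck class is not part of the project:

import software.amazon.awssdk.auth.credentials.AwsCredentials;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;

// Throwaway check: resolves credentials via the default provider chain
// (system properties, environment variables, ~/.aws, instance profile, ...).
public class CredentialsCheck {

  public static void main(String[] args) {
    AwsCredentials credentials = DefaultCredentialsProvider.create().resolveCredentials();
    System.out.println("Resolved credentials for access key id: " + credentials.accessKeyId());
  }
}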
AWS EC2
On the EC2 instance, install Java, Maven, and Git to run the Spring Boot application, which connects to S3 and PostgreSQL (the embedded Tomcat server handles HTTP, so no separate Tomcat installation is needed).
sudo apt-get update
sudo apt-get upgrade
sudo apt install openjdk-17-jdk openjdk-17-jre
sudo apt install maven
Clone the repository with git and run mvn spring-boot:run.
Testing the API
GitHub Link
I’ve pushed the project code publicly. However, the application will not work out of the box, because I have restricted RDS access with security group rules and IAM policies, and no credentials are stored in the code. Create your own RDS instance and configure the policies as described above.