AWS Setup
Let’s start by setting up an EC2 instance to deploy our application. To do this, you’ll need an AWS account (if you don’t already have one).
If you don’t know how to set up AWS EC2, please visit this link, which will guide you through how to launch EC2.
Once we’ve finished configuring AWS EC2 and the instance is up and running, we can install Docker on it.
Connect to the instance via SSH using your key pair:
$ ssh -i your-key-pair.pem ec2-user@<PUBLIC-IP-ADDRESS>
# Example:
# ssh -i ~/.ssh/react-tutorial.pem ec2-user@54.201.189.94
After accessing the instance, start by updating the server packages and installing the latest versions of Docker and Docker Compose:
[ec2-user]$ sudo yum update -y
[ec2-user]$ sudo yum install -y docker
[ec2-user]$ sudo service docker start
[ec2-user]$ sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
[ec2-user]$ sudo chmod +x /usr/local/bin/docker-compose
[ec2-user]$ docker --version
Docker version 20.10.23, build 7155243
[ec2-user]$ docker-compose --version
Docker Compose version v2.18.1
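Optionally, you can configure Docker to start automatically after a reboot. A minimal sketch, assuming the instance runs Amazon Linux 2, where Docker is managed by systemd:
[ec2-user]$ sudo systemctl enable docker   # start the Docker daemon on every boot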
Add the ec2-user to the docker group so that you can run Docker commands without sudo:
[ec2-user]$ sudo usermod -a -G docker ec2-user
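Note that group membership is only picked up by new login sessions, so the change won’t take effect in your current shell. Either log out and SSH back in, or start a new shell with the updated group and verify access:
[ec2-user]$ newgrp docker   # open a shell that already has the docker group
[ec2-user]$ docker ps       # should now work without sudo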
Next, let’s generate an SSH key:
[ec2-user]$ ssh-keygen -t rsa
Save the key without setting a passphrase.
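If you prefer to do this non-interactively, the same key can be generated in a single command (default path, empty passphrase; the 4096-bit key size is just a choice, not required by the tutorial):
[ec2-user]$ ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa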
Next, copy the public key into the authorized_keys file and set the appropriate permissions:
[ec2-user]$ cat ~/.ssh/id_rsa.pub
# Copy this public key into ~/.ssh/authorized_keys
[ec2-user]$ vi ~/.ssh/authorized_keys
# After adding the public key to authorized_keys, exit the vi text editor
# Then change the permissions of these files
[ec2-user]$ chmod 600 ~/.ssh/authorized_keys
[ec2-user]$ chmod 600 ~/.ssh/id_rsa
Now, copy the contents of the private key:
[ec2-user]$ cat ~/.ssh/id_rsa
# Copy the private key somewhere for later use
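Before moving on, it is worth confirming that this key actually allows a login to the instance, since the GitHub Actions runner will later SSH in with it. A quick loopback sanity check, run on the instance itself and assuming the default ec2-user account:
[ec2-user]$ ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no ec2-user@localhost 'echo key works'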
5. Automating Deployment to AWS EC2 with GitHub Actions
GitHub Actions is an integrated continuous integration and continuous deployment (CI/CD) platform provided by GitHub.
Since our workflow will use GitHub Packages, GitHub requires you to create a personal access token, which will be used to authenticate all of your interactions with the package registry.
Go to the Personal access tokens area in the Developer settings of your GitHub profile and click Generate new token.
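The token needs the write:packages scope (which includes read access) to push and pull images from the GitHub Container Registry. Once you have it, you can sanity-check it from any machine with Docker installed by logging in to ghcr.io; <USERNAME> and <TOKEN> below are placeholders for your GitHub username and the new token:
$ echo <TOKEN> | docker login ghcr.io -u <USERNAME> --password-stdin
# Expect "Login Succeeded" if the token is valid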

To configure GitHub Actions, start by adding a new directory called .github to the root of your project. Within this directory, add another directory called workflows.
Now, to configure the workflow, which is a set of one or more jobs, create a new file in the workflows directory called main.yml.
You can also run the following commands in your root directory:
mkdir .github && cd .github
mkdir workflows && cd workflows
touch main.yml
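Equivalently, the nested directories and the empty workflow file can be created in a single command:
$ mkdir -p .github/workflows && touch .github/workflows/main.yml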
main.yml
name: Continuous Integration and Delivery

on:
  push:
    branches: [main]

env:
  WEBSITE_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/website
  NGINX_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/nginx
  REGISTRY: ghcr.io

jobs:
  build:
    name: Build the Docker Image
    runs-on: ubuntu-latest
    steps:
      - name: Checkout main
        uses: actions/checkout@v3
      - name: Set environment variables to .env
        run: |
          echo "WEBSITE_IMAGE=$(echo ${{ env.WEBSITE_IMAGE }} )" >> $GITHUB_ENV
          echo "NGINX_IMAGE=$(echo ${{ env.NGINX_IMAGE }} )" >> $GITHUB_ENV
      - name: Log in to GitHub Packages
        env:
          PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
        run: echo ${PERSONAL_ACCESS_TOKEN} | docker login ghcr.io -u ${{ secrets.NAMESPACE }} --password-stdin
      - name: Pull images
        run: |
          docker pull ${{ env.WEBSITE_IMAGE }} || true
          docker pull ${{ env.NGINX_IMAGE }} || true
      - name: Build images
        run: |
          docker compose -f docker-compose.ci.yml build
      - name: Push images
        run: |
          docker push ${{ env.WEBSITE_IMAGE }}
          docker push ${{ env.NGINX_IMAGE }}

  checking-secrets:
    name: Checking secrets
    runs-on: ubuntu-latest
    needs: build
    outputs:
      secret_key_exists: ${{ steps.check_secrets.outputs.defined }}
    steps:
      - name: Check for Secrets availabilities
        id: check_secrets
        shell: bash
        run: |
          if [[ -n "${{ secrets.PRIVATE_KEY }}" && -n "${{ secrets.AWS_EC2_IP_ADDRESS }}" && -n "${{ secrets.AWS_HOST_USER }}" ]]; then
            echo "defined=true" >> $GITHUB_OUTPUT;
          else
            echo "defined=false" >> $GITHUB_OUTPUT;
          fi

  deploy:
    name: Deploy to AWS EC2
    runs-on: ubuntu-latest
    needs: checking-secrets
    if: needs.checking-secrets.outputs.secret_key_exists == 'true'
    steps:
      - name: Checkout main
        uses: actions/checkout@v3
      - name: Add environment variables to .env
        run: |
          echo WEBSITE_IMAGE=${{ env.WEBSITE_IMAGE }} >> .env
          echo NGINX_IMAGE=${{ env.NGINX_IMAGE }} >> .env
          echo NAMESPACE=${{ secrets.NAMESPACE }} >> .env
          echo PERSONAL_ACCESS_TOKEN=${{ secrets.PERSONAL_ACCESS_TOKEN }} >> .env
      - name: Add the private SSH key to the ssh-agent
        env:
          SSH_AUTH_SOCK: /tmp/ssh_agent.sock
        run: |
          mkdir -p ~/.ssh
          ssh-agent -a $SSH_AUTH_SOCK > /dev/null
          ssh-keyscan github.com >> ~/.ssh/known_hosts
          ssh-add - <<< "${{ secrets.PRIVATE_KEY }}"
      - name: Deploy images on AWS EC2
        env:
          SSH_AUTH_SOCK: /tmp/ssh_agent.sock
        run: |
          scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml ${{ secrets.AWS_HOST_USER }}@${{ secrets.AWS_EC2_IP_ADDRESS }}:
          ssh -o StrictHostKeyChecking=no ${{ secrets.AWS_HOST_USER }}@${{ secrets.AWS_EC2_IP_ADDRESS }} << EOF
          docker-compose down --rmi all -v
          docker login ghcr.io -u ${{ secrets.NAMESPACE }} -p ${{ secrets.PERSONAL_ACCESS_TOKEN }}
          docker pull $WEBSITE_IMAGE
          docker pull $NGINX_IMAGE
          docker-compose --env-file=.env -f docker-compose.prod.yml up -d
          docker logout
          EOF
Find the complete gist.
So, in main.yml we define:
- The environment variables
- Three jobs to run:
a. The build job consists of:
- Setting various environment variables
- Logging in to GitHub Packages using the PERSONAL_ACCESS_TOKEN we created
- Pulling the existing images for caching, building the new images and pushing them to the GitHub Container Registry
b. The checking-secrets job consists of:
- Checking whether the secrets needed for deployment exist
Notes about the secrets
- secrets.PERSONAL_ACCESS_TOKEN
- secrets.NAMESPACE
- secrets.PRIVATE_KEY
- secrets.AWS_EC2_IP_ADDRESS
- secrets.AWS_HOST_USER
All of these need to be set in your repository’s secrets (Settings > Secrets). Use your GitHub username or your organization name as NAMESPACE and your personal access token as PERSONAL_ACCESS_TOKEN; you can add them in the web UI or from the command line, as shown in the sketch after this list.
c. The deploy job consists of:
- Adding the environment variables to a .env file, which will be copied to the AWS EC2 instance.
- Adding the private key to the ssh-agent for passwordless authentication.
- Deploying the built images on the AWS EC2 instance.
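If you use the GitHub CLI, the repository secrets listed above can also be set from the terminal instead of the web UI. A minimal sketch, assuming gh is installed and authenticated against your repository; the placeholder values are yours to fill in:
$ gh secret set PERSONAL_ACCESS_TOKEN --body "<your-token>"
$ gh secret set NAMESPACE --body "<your-github-username>"
$ gh secret set PRIVATE_KEY    # paste the contents of ~/.ssh/id_rsa from the EC2 instance when prompted
$ gh secret set AWS_EC2_IP_ADDRESS --body "<PUBLIC-IP-ADDRESS>"
$ gh secret set AWS_HOST_USER --body "ec2-user"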
Once you are done, commit and push your code to GitHub to trigger the workflow. You should then see the images in GitHub Packages.
Please make sure you are on the main branch, because the workflow is only triggered on pushes to main; otherwise, adjust the trigger to your preferences.
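If you have the GitHub CLI installed, you can also follow the run from your terminal rather than the Actions tab; this is a convenience, not something the workflow requires:
$ gh run list --workflow=main.yml
$ gh run watch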
Workflow successfully finished:

Once all the jobs have been executed, navigate to the public IP of your instance, and you should see the React application running:
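You can also confirm from the instance itself that the containers came up and that NGINX is answering on port 80 (assuming your docker-compose.prod.yml exposes NGINX on port 80, as earlier in this tutorial; container names will vary):
[ec2-user]$ docker ps                 # the website and nginx containers should be listed
[ec2-user]$ curl -I http://localhost  # expect an HTTP 200 response from NGINX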

And there you have it: this guide has equipped you with the skills you need to successfully build, deploy and automate ReactJS applications using Docker, NGINX and GitHub Actions on AWS EC2.
If you would like to take your skills further, check out Master React App Deployment: AWS EC2, Route 53, SSL & GitHub Actions, a comprehensive guide that walks you through the deployment process, leveraging Docker, AWS EC2, Route 53, SSL encryption and GitHub Actions to build a robust CI/CD pipeline.
Keep exploring, stay curious and keep building on this foundation to create even more sophisticated and powerful projects. Happy coding!
Thanks for reading through, and I hope you liked what you read here. Feel free to connect with me on LinkedIn, Twitter and GitHub.