Streamlining Deployment: A Step-by-Step Guide to Docker Compose for 3-Tier Applications


In today’s digital world, deploying complex applications seamlessly and efficiently is critical for businesses seeking to stay ahead of the competition. In recent years, container technology has become the go-to solution for engineers and businesses deploying applications.

In this article, we will walk through the deployment of a 3-tier application using Docker Compose, a powerful tool for defining and running interconnected services. We will guide you through every step, from configuring services to deployment, along with best practices for each phase.

Prerequisites for implementing Docker Compose for 3-Tier Applications

  • Basic understanding of Docker.
  • Docker installed.
  • Docker Compose installed.
  • Basic knowledge of Docker Compose.
  • Understanding of a 3-tier architecture.
  • Application code for frontend, backend, and database.
  • Basic knowledge of a NoSQL database, e.g., MongoDB.

Steps for implementing Docker Compose

Dockerizing the Frontend Project

This project consists of a frontend application (client) built with ReactJS and a backend application (server) that serves an API to the frontend from a MongoDB database. The application code is available via the link provided.

To containerize and start this project, we will begin with the frontend part of the code by writing a Dockerfile for it.

Within the frontend project of the application code above, create a file called "Dockerfile". This file tells the Docker daemon the steps to follow to containerize the frontend application. Writing a Dockerfile is the first step to fully containerizing your application.

For our frontend build, we will use the multi-stage build pattern, which separates the build environment from the JavaScript runtime environment. The dependencies needed for the build live in one stage, and only the build artifacts are copied into the final stage. This keeps your final image lightweight and more secure.

The Dockerfile contains the following instructions:

FROM node:latest AS builder

WORKDIR /app

COPY package.json .

RUN npm install

COPY . .

RUN npm run build

# Stage 2
FROM nginx:1.25.2-alpine-slim

COPY --from=builder /app/build /usr/share/nginx/html

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

Code Explanation

FROM node:latest AS builder:
This line specifies the base image for the first stage of the build process. Here it uses the latest Node.js image; for reproducible builds, pinning a specific version (for example, node:20) is generally preferable to latest. We name this stage "builder" so we can reference it later in the Dockerfile.

WORKDIR /app:
This line sets the working directory inside the container to /app, so all subsequent commands run from this directory.

COPY package.json .:
This line copies the package.json file, which lists the project's dependencies, from your local machine to the container's working directory (/app).

RUN npm install:

This installs the dependencies listed in package.json inside the container.

COPY . .:

This command copies all the remaining application code from your local machine to the container's working directory (/app).

RUN npm run build:

Runs the build script defined in package.json, and creates a production-ready build of the React application. The build output is stored in the build directory within the container's working directory (/app).

FROM nginx:1.25.2-alpine-slim:

This line starts the second stage of the build, using a lightweight Nginx image to serve the application.

COPY --from=builder /app/build /usr/share/nginx/html:

This command copies the build artifacts (static files) from the first stage (/app/build) to the Nginx web root directory (/usr/share/nginx/html). The --from=builder part tells Docker to copy files from the stage we named "builder" earlier.

EXPOSE 80:

This instruction documents that the container listens on port 80, the default port for HTTP traffic. Note that EXPOSE by itself does not publish the port; the port is published at runtime, for example with docker run -p or a Compose ports mapping.

CMD ["nginx", "-g", "daemon off;"]:

Specifies the command to run when the container starts: Nginx in the foreground ("daemon off;"), which keeps the container running.
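With the Dockerfile in place, you can build and smoke-test the frontend image on its own before wiring it into Compose. The image name, container name, and the client directory path below are illustrative and assume the project layout described in this article:

```shell
# Build the frontend image from the client directory (names are illustrative)
docker build -t myapp-frontend ./client

# Run it detached, mapping host port 8080 to the container's port 80
docker run -d --rm --name frontend-test -p 8080:80 myapp-frontend

# The static site should now answer on http://localhost:8080
# Stop the test container when done
docker stop frontend-test
```

Testing each image in isolation like this makes it easier to tell whether a later failure comes from the image itself or from the Compose configuration.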

Dockerizing the Backend Project

In the next phase of this project, we'll create a Dockerfile for the backend. Since the backend runs its code directly without generating build artifacts, we will not use a multi-stage build. Keeping everything in a single stage ensures all necessary dependencies are together and simplifies the configuration.

Navigate to the project's Backend directory. Once there, create a new file named "Dockerfile" to define the Docker configuration for the backend service. This Dockerfile will specify how to build and run the backend application within a Docker container.

FROM node:20-alpine3.17

WORKDIR /app

COPY package.json .

RUN npm install

COPY . .

EXPOSE 5000

CMD ["npm", "start"]

Code Explanation

FROM node:20-alpine3.17:
The Dockerfile uses Node.js 20 on an Alpine Linux base. Alpine Linux is a lightweight distribution, which helps keep the image size small.

WORKDIR /app:
This command sets the working directory to /app. All subsequent commands will be executed from this directory.

COPY package.json .:

This copies package.json to the container. The package.json file contains the metadata and dependencies for the Node.js application.

RUN npm install:

This installs Node.js dependencies. These dependencies are necessary for the application to run.

COPY . .:

This copies the rest of the application code to the container.

EXPOSE 5000:
Documents that the container listens on port 5000.

CMD ["npm", "start"]:
Runs npm start to start the backend server.
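As with the frontend, the backend image can be built and run on its own. Running it standalone requires the database connection settings the application expects; the variable names below mirror the Compose file later in this article, and all values are placeholders:

```shell
# Build the backend image from the server directory (name is illustrative)
docker build -t myapp-backend ./server

# Run it detached with placeholder connection settings
docker run -d --rm --name backend-test -p 5000:5000 \
  -e DB_HOST=localhost -e DB_USER=appuser -e DB_PASSWORD=changeme \
  -e DB_NAME=appdb -e DB_PORT=27017 \
  myapp-backend

# Stop the test container when done
docker stop backend-test
```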

Implementing Docker Compose to build the containers

Now that our Dockerfiles for the frontend and backend are ready, we will use the official MongoDB image from Docker Hub as our database.

Next, let's create a Docker Compose file.

  • Create a file called docker-compose.yml in the root directory of your project.
  • The docker-compose.yml file will look like this:
version: '3.9'

services:
  frontend:
    build:
      context: ./client
      dockerfile: Dockerfile
    container_name: frontend
    ports:
      - "80:80"
    depends_on:
      - backend

  backend:
    build:
      context: ./server
      dockerfile: Dockerfile
    container_name: backend
    ports:
      - "5000:5000"
    env_file: ./.env
    environment:
      - DB_HOST=mongodb_server
      - DB_USER=$MONGODB_USER
      - DB_PASSWORD=$MONGODB_PASSWORD
      - DB_NAME=$MONGODB_DATABASE
      - DB_PORT=$MONGODB_DOCKER_PORT
    depends_on:
      - mongodb

  mongodb:
    image: mongo:latest
    container_name: mongodb_server
    env_file: ./.env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGODB_USER
      - MONGO_INITDB_ROOT_PASSWORD=$MONGODB_PASSWORD
    ports:
      - "27017:27017"
    volumes:
      - mydata:/data/db

volumes:
  mydata:

In the Docker Compose file, we define the services under the services key. This project builds three containers: frontend, backend, and the database (MongoDB).

First, we define the frontend service. We specify the Dockerfile location with the build context and set the container name to "frontend". We map port 80 of the host machine to port 80 of the container. We also include depends_on to ensure that the backend service starts before the frontend.

Next, we define the backend service. Just like the frontend, we specify the Dockerfile location with the build context and set the container name to "backend". We map port 5000 of the host machine to port 5000 of the container.

For this service, we configure environment variables from a .env file via the env_file keyword. These variables hold the database connection details: host, user, password, database name, and port.

Improper handling of environment variables can lead to data breaches if attackers gain access to the configuration; this can result in unauthorized access to the database and sensitive data. Keep the .env file out of version control and out of your images.
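A minimal .env file matching the variable names referenced in the Compose file might look like the following. Every value here is a placeholder; never commit real credentials:

```shell
# .env -- placeholder values only, never commit real credentials
MONGODB_USER=appuser
MONGODB_PASSWORD=changeme
MONGODB_DATABASE=appdb
MONGODB_DOCKER_PORT=27017
```

Docker Compose substitutes these values wherever the file references $MONGODB_USER, $MONGODB_PASSWORD, and so on.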

For the MongoDB service, we used the official MongoDB image from Docker Hub. We set the container name to mongodb_server and exposed port 27017. Environment variables for the MongoDB root username and password are set using the env_file keyword, which reads from the .env file.

We also use a volume to persist MongoDB data, which supports backup and recovery. The named volume mydata is mounted into the container's data directory (/data/db).

Finally, we defined the named volume "mydata" under the volumes section. Data persistence ensures that data remains intact even if the container is stopped or restarted. Volumes provide a way to store data outside of the container's filesystem, reducing the risk of data loss or corruption.
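You can confirm the volume exists and see where Docker stores it on the host. Compose prefixes volume names with the project name, which defaults to the directory name; "myproject" below is a hypothetical placeholder:

```shell
# List volumes known to Docker, including those created by Compose
docker volume ls

# Inspect the named volume to see its mountpoint on the host
# (the "myproject_" prefix is hypothetical)
docker volume inspect myproject_mydata
```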

Since we are not establishing any complex network in our containerized environment, we will use the default network that Docker Compose creates. On this network, services can reach each other by name.
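On the default network, each service is reachable by its service name, and by its container_name as well, which is why the backend can use DB_HOST=mongodb_server. A quick connectivity check from inside the backend container, assuming the stack is already running:

```shell
# Resolve and ping the database by its service name from the backend container
docker-compose exec backend ping -c 1 mongodb

# The container_name resolves on the same network too
docker-compose exec backend ping -c 1 mongodb_server
```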

To start the services, run docker-compose up -d. The -d flag tells Docker Compose to run the containers in detached mode: they start and run in the background without occupying the terminal, leaving it free for other tasks while the containers run.
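A few companion commands are useful alongside docker-compose up -d for checking on and tearing down the stack:

```shell
# Start (or rebuild and start) all services in the background
docker-compose up -d --build

# Show the status of the running services
docker-compose ps

# Show recent logs for a single service
docker-compose logs --tail=20 backend

# Stop and remove the containers and default network (add -v to also drop volumes)
docker-compose down
```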

To access your application deployed with Docker Compose, open your web browser and go to "http://localhost:80" (or simply "http://localhost"). This connects to the host where Docker Compose is running your application.

The application will come up in your browser.

Best practices for Docker and Docker Compose

Here are some best practices for Docker and Docker Compose:

  • Use Official Images for your Builds: Use official Docker images from trusted sources like Docker Hub. These images are regularly updated and maintained.

  • Minimize Image Size: Keep your Docker images as small as possible, for example by removing unnecessary dependencies and choosing slim base images.

  • Environment Variables: Use environment variables to configure applications dynamically. Avoid hardcoding configuration values in Dockerfiles or Docker Compose files.

  • Volume Mounts for Data Persistence: Use volume mounts to persist data outside of containers. This ensures data integrity and facilitates backup and recovery.

  • Dockerfile Best Practices: Follow best practices for writing Dockerfiles, such as using multi-stage builds, caching layers efficiently, and minimizing the number of layers.

  • Regular Updates: Keep Docker, Docker Compose, and base images up to date with the latest security patches and bug fixes to mitigate vulnerabilities.
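One easy win for the "minimize image size" and "environment variables" points above is a .dockerignore file next to each Dockerfile, so that COPY . . does not pull local artifacts or secrets into the image. A minimal sketch for a Node.js project (the entries are typical examples, not an exhaustive list):

```shell
# Create a minimal .dockerignore for a Node.js project
cat > .dockerignore <<'EOF'
node_modules
build
.git
.env
EOF

# Show the result
cat .dockerignore
```

Excluding node_modules also speeds up builds, since the build context sent to the Docker daemon becomes much smaller.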

Conclusion

Congratulations, you have successfully containerized your full-stack application with docker-compose. Docker Compose offers developers and DevOps engineers a way to streamline application deployment and improve application stability. By following this guide, you have learned to deploy interconnected services and ensure seamless deployment across different environments.
