Make Your Deployment Easy with Docker
Deployment is something software engineers do to make their applications available and accessible anywhere, anytime. It is also a required process for preparing an application to run and operate in a specific environment, and it usually involves installation, configuration, and other steps to make the application run well. We can deploy automatically using a CI/CD job (like GitLab CI or Travis CI) or manually by accessing the deployment environment.
Before we deploy, we must know which environment our application will be deployed to. Why? The development environment and the deployment environment might be completely different systems; for example, you may develop your app on Windows while the deployment server runs Linux. We can't treat an application running on Windows the same as an application on Linux, because the underlying systems are different.
Sometimes we develop an application without ever thinking about how we will deploy it, so the application works in the development environment (our PC) but doesn't work on the server. We need to configure the environment for the deployment, and we have to repeat this task again and again every time we change the server.
Here comes Docker. Docker is a containerization tool designed to make creating, deploying, and running an application easier on any platform that has the Docker Engine installed. Containerization allows us to make a package that contains our application and all of its dependencies. A container runs in an isolated space in your OS, using OS-level virtualization.
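As a quick first taste, the classic smoke test below (assuming the Docker Engine is already installed on your machine) pulls a tiny image and runs it in an isolated container:

```shell
# Run the official hello-world image; Docker pulls it automatically
# if it is not present locally, then starts a container from it.
# --rm removes the container again after it exits.
docker run --rm hello-world
```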
Docker vs Virtual Machine
Docker System
Docker Main Concepts
There are three main concepts that you should know before you use Docker: Image, Container, and Repository.
- Docker Image
A Docker image is an immutable (unchangeable) file that contains the source code, libraries, dependencies, tools, and other files needed for an application to run. Each image has its own identifiers: a tag and an ID.
- Docker Container
A Docker container is a virtualized run-time environment where users can isolate applications from the underlying system. These containers are compact, portable units in which you can start up an application quickly and easily.
- Docker Repository
A Docker repository is used to store or host the same image with multiple tags or versions.
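To make the identifiers concrete: a full image reference has the form [registry/]name[:tag], and when the tag is omitted Docker assumes latest. A small shell sketch of splitting a reference apart (no Docker needed; note this naive split would mis-parse a registry that includes a port, like localhost:5000):

```shell
# Split an image reference into its repository name and its tag
# using plain shell parameter expansion.
ref="registry.heroku.com/datalyst-fe/web:latest"
name="${ref%:*}"   # everything before the last ':'
tag="${ref##*:}"   # everything after the last ':'
echo "$name"       # registry.heroku.com/datalyst-fe/web
echo "$tag"        # latest
```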
To install Docker, you can open this
To test your Docker installation, run:
docker --version
Docker Basic Commands
- docker pull <image-name>
This command downloads the image from a repository.
- docker run <image-name>
This command creates and runs a new container from the image you choose.
- docker start <container-id or container-name>
This command starts the targeted container.
- docker stop <container-id or container-name>
This command stops the targeted container.
- docker build -f <path-to-Dockerfile> -t <tag-name> <build-context>
This command is used to build an image.
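Putting the commands above together, a typical session might look like this (nginx:alpine and the container name web are just placeholders for this sketch; the run step assumes nothing else is listening on port 8080):

```shell
docker pull nginx:alpine                            # download the image from Docker Hub
docker run -d --name web -p 8080:80 nginx:alpine    # create and run a container, mapping host port 8080 to 80
docker stop web                                     # stop the container
docker start web                                    # start it again
docker build -f Dockerfile -t myapp:latest .        # build your own image from the current directory
```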
Example: Deployment to Heroku Using Docker
In my 6th semester, my team and I developed a web app called Crowd+, a marketplace for data annotation services.
Before we do the deployment, we first create the Dockerfile in the root directory of the project. The Dockerfile contains all the commands for the Docker Engine to build an image. In this case, I use React with Client-Side Rendering (CSR). Because React CSR generates static files after the build, I need a web server to serve them; I use Nginx. The Dockerfile looks like this:
# build stage
FROM node:lts-alpine as build-env
ARG BE_SERVER
ENV REACT_APP_BE_SERVER=$BE_SERVER
ENV NODE_ENV=production
RUN \
apk update && \
apk add git gcc g++
ADD . /opt/app
WORKDIR /opt/app
RUN \
yarn install && \
yarn build
# serve stage
FROM nginx:alpine
COPY --from=build-env /opt/app/build /var/www
COPY --from=build-env /opt/app/envsubt.sh /etc/nginx/templates/envsubt.sh
COPY --from=build-env /opt/app/default.conf /etc/nginx/templates/default.conf.template
WORKDIR /var/www
RUN apk add --no-cache bash
ENV PORT=80
ENV API=https://localhost:8080/api/v1
CMD ["/bin/sh", "-c", "/etc/nginx/templates/envsubt.sh && nginx -g \"daemon off;\""]
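Before wiring this into CI/CD, you can sanity-check the image locally. A sketch, assuming Docker is installed and the files referenced by the Dockerfile exist (the tag crowd-plus-fe and the backend URL here are placeholders):

```shell
# Build the image from the project root, passing the backend URL as a build arg
docker build -f Dockerfile -t crowd-plus-fe \
  --build-arg BE_SERVER=https://localhost:8080/api/v1 .

# Run it, exposing Nginx (listening on $PORT inside the container) on localhost:3000
docker run --rm -p 3000:80 -e PORT=80 crowd-plus-fe
```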
I use GitLab in this project to handle CI/CD, so I created a .gitlab-ci.yml file to automatically deploy the project whenever we push to GitLab. The .gitlab-ci.yml file looks like this:
stages:
  - build
  - release

Build-Staging:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: ['']
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"registry.heroku.com\":{\"username\":\"_\",\"password\":\"$HEROKU_TOKEN\"}}}" > /kaniko/.docker/config.json
    - |-
      /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --build-arg BE_SERVER=$BE_SERVER --destination registry.heroku.com/datalyst-fe/web:latest
  only:
    - staging

Release-Staging:
  stage: release
  image: ruby:2.4
  before_script:
    - wget -qO- https://cli-assets.heroku.com/install-ubuntu.sh | sh
  script:
    - export HEROKU_API_KEY=$HEROKU_APIKEY
    - |-
      heroku container:release -a datalyst-fe web
  only:
    - staging
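For reference, the same flow can also be reproduced by hand from a developer machine. A sketch assuming the Docker and Heroku CLIs are installed and you are logged in (the app name datalyst-fe is taken from the pipeline above):

```shell
heroku container:login                               # authenticate Docker against registry.heroku.com
docker build -t registry.heroku.com/datalyst-fe/web:latest \
  --build-arg BE_SERVER="$BE_SERVER" .
docker push registry.heroku.com/datalyst-fe/web:latest
heroku container:release -a datalyst-fe web          # tell Heroku to release the pushed image
```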
In this CI/CD pipeline, the build stage uses kaniko to build the Docker image and push it to the Heroku registry; after that, the release stage simply tells Heroku to use the latest image.