Dockerfile
A Dockerfile is a script containing instructions for building a Docker image.
The Dockerfile defines the base image, the application's source code, dependencies, and configurations needed for the service to run.
Create a new file named `Dockerfile` in the root directory of your microservice.

Dockerfile Instructions
Docker supports over 15 different Dockerfile instructions for adding content to your image and setting configuration parameters. Here are some of the most common ones you’ll use.
FROM
FROM ubuntu:20.04
FROM is usually the first line in your Dockerfile. It refers to an existing image which becomes the base for your build. All subsequent instructions apply on top of the referenced image’s filesystem.
COPY
COPY main.js /app/main.js
COPY adds files and folders to your image’s filesystem, copying them from your Docker host into the work-in-progress image. Containers that use the image will include all the files you’ve copied in.
ADD
ADD http://example.com/archive.tar /archive-content
ADD works similarly to COPY, but additionally supports remote file URLs and automatic archive extraction. Local archives are extracted into the destination path in your container; decompression of gzip, bzip2, and xz formats is supported. Files fetched from remote URLs are downloaded as-is and not extracted.
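As a small sketch of the difference (app.tar.gz is a hypothetical archive in the build context):
# A local archive is unpacked into /app automatically
ADD app.tar.gz /app/
# A remote file is downloaded but left as an archive
ADD https://example.com/config.tar.gz /downloads/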
RUN
RUN apt-get update && apt-get install -y nodejs
RUN runs a command inside the image you’re building. It creates a new image layer on top of the previous one; this layer contains the filesystem changes that the command applies. RUN instructions are most commonly used to install and configure packages that your image requires.
ENV
ENV PATH=$PATH:/app/bin
The ENV instruction sets environment variables that will be available within your containers. Its argument looks similar to a variable assignment in your shell: specify the name of the variable and the value to assign, separated by an equals sign.
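As a small sketch (APP_PORT is just an illustrative variable name), a value set with ENV at build time is visible to every process in the running container:
# Set a default value in the Dockerfile
ENV APP_PORT=8080
You could confirm it at runtime with something like docker run --rm my-image env, assuming the image provides the standard env utility (my-image is a hypothetical tag).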
LABEL
LABEL maintainer="yourname@example.com"
LABEL version="1.0"
Add metadata to your image using the LABEL instruction. It’s useful for providing information about the image, such as the maintainer or version.
WORKDIR
WORKDIR /usr/src/app
You can specify your working directory inside the container using the WORKDIR instruction. Subsequent instructions in the Dockerfile, such as RUN, COPY, CMD, and ENTRYPOINT, are then executed relative to that directory.
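A minimal sketch of this behaviour, reusing the /usr/src/app path from the example above:
# All following instructions run relative to /usr/src/app (created if it does not exist)
WORKDIR /usr/src/app
# package.json lands at /usr/src/app/package.json
COPY package.json .
# npm install runs with /usr/src/app as the current directory
RUN npm install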
EXPOSE
EXPOSE 8080
The EXPOSE instruction in the Dockerfile documents that the container listens on the specified network port at runtime. The default protocol is TCP.
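Keep in mind that EXPOSE only documents the port; publishing still happens when you start the container. A rough sketch (my-service is a hypothetical image name):
# Map host port 8080 to the container's exposed port 8080
docker run -d -p 8080:8080 my-service
# Or publish every exposed port to a random host port
docker run -d -P my-service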
CMD
CMD echo "Welcome to TutorialsPoint"
Use the CMD instruction to define a default command that is executed when containers are started from the image.
If you specify a command with docker run, it overrides the default one.
If a Dockerfile contains more than one CMD instruction, only the last one takes effect.
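For example, with the CMD above baked into an image (my-image is a hypothetical tag), the override looks like this:
# Runs the default command and prints "Welcome to TutorialsPoint"
docker run --rm my-image
# The trailing command overrides the default CMD
docker run --rm my-image echo "Overridden message"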
ENTRYPOINT
ENTRYPOINT ["/bin/echo", "Welcome to Guvi"]
The difference between ENTRYPOINT and CMD is that arguments passed to the docker run command do not replace the ENTRYPOINT; they are appended to it as arguments.
The exec form of the ENTRYPOINT instruction is:
ENTRYPOINT ["<executable-command>", "<parameter 1>", "<parameter 2>", ...]
If you use the exec form of ENTRYPOINT, you can also supply additional default parameters with the CMD instruction.
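A small sketch of how ENTRYPOINT and CMD combine, reusing the echo entrypoint from above:
# ENTRYPOINT fixes the command; CMD supplies default arguments
ENTRYPOINT ["/bin/echo"]
CMD ["Hello from the default arguments"]
Running docker run my-image prints the default text, while docker run my-image Hi there keeps the echo entrypoint and replaces only the CMD arguments (my-image is a hypothetical tag).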
VOLUME
VOLUME ["/data"]
The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from the native host or other containers. The value can be a JSON array, such as VOLUME ["/var/log/"], or a plain string with one or more arguments, such as VOLUME /var/log or VOLUME /var/log /var/db.
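As a sketch of what happens at runtime (my-volume-image is a hypothetical image that declares VOLUME ["/data"]):
# Docker creates an anonymous volume for /data automatically
docker run -d --name db my-volume-image
# Or mount a named volume over the declared mount point
docker run -d --name db2 -v mydata:/data my-volume-image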
USER
USER UID[:GID]
The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use as the default user and group for the remainder of the current stage. The specified user is used for RUN instructions and, at runtime, runs the ENTRYPOINT and CMD commands.
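A common pattern, sketched here with illustrative appuser and appgroup names, is to create an unprivileged account in an earlier RUN step and then switch to it:
# Create a system group and user (Alpine/BusyBox syntax)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Later RUN instructions, plus ENTRYPOINT and CMD at runtime, execute as this user
USER appuser:appgroup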
ARG
ARG <name>[=<default value>]
The ARG instruction defines a variable that users can pass at build time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
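A minimal sketch (NODE_VERSION is an illustrative argument name):
# Declare a build argument with a default value; it can be used in the FROM line
ARG NODE_VERSION=16
FROM node:${NODE_VERSION}-alpine
At build time you would override the default with something like docker build --build-arg NODE_VERSION=18 -t my-app . (my-app is a hypothetical tag).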
Simple Nginx Dockerfile
# Use the official NGINX base image
FROM nginx:latest
# Copy custom configuration file from the current directory to the nginx configuration directory
COPY nginx.conf /etc/nginx/nginx.conf
# Copy static website files to the default nginx html directory
COPY . /usr/share/nginx/html
# Expose port 80 to the outside world
EXPOSE 80
# Start nginx server
CMD ["nginx", "-g", "daemon off;"]

Multi-stage Dockerfile
Docker multi-stage build is a feature that allows you to have more than one FROM statement, each representing a stage in your Dockerfile.
Each stage starts with a fresh image that can be used to perform specific tasks.
With multi-stage Dockerfiles, you can also share data between build stages.
This way, you can build the application in one stage and copy only the necessary components that the application needs to run to the final image, resulting in smaller and more optimized Docker images.

Each stage in the Dockerfile produces its own intermediate image, but at the end of the build, Docker keeps only one of them as the result in your local image store.
By default, this is the image produced by the last stage in the Dockerfile. If you want the image from a different stage, you can select it with the --target <stage name> flag of the docker build command.
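For example, to build and tag only the stage named build from the multi-stage sample shown below (my-app:build is an example tag):
# Stop after the "build" stage instead of the final stage
docker build --target build -t my-app:build .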
When to use Docker multi-stage build?
Multi-stage builds are great when you need to create an artifact or binary.
Building it often requires many dependencies, but once the binary is built, you don't need those dependencies to run it.
You should consider using Docker multi-stage builds when your application has a complex build process and several dependencies or when you want to separate the build and runtime environments.
Why use Docker multi-stage build?
Generated images are smaller.
More secure containers.
Faster deployment.
Sample Script
# Use node:16-alpine image as a parent image
FROM node:16-alpine
# Create app directory
WORKDIR /usr/src/app
# Copy package.json files to the working directory
COPY package*.json ./
# Install app dependencies
RUN npm install
# Copy the source files
COPY . .
# Build the React app for production
RUN npm run build
# Expose port 3000 for serving the app
EXPOSE 3000
# Command to run the app
CMD ["npm", "start"]
Sample Script with Multi-stage method
# First stage - Building the application
# Use node:16-alpine image as a parent image
FROM node:16-alpine AS build
# Create app directory
WORKDIR /usr/src/app
# Copy package.json files to the working directory
COPY package*.json ./
# Install app dependencies
RUN npm install
# Copy the source files
COPY . .
# Build the React app for production
RUN npm run build
# Second stage - Serve the application
FROM nginx:alpine
# Copy build files to Nginx
COPY --from=build /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]