This blog post introduces Dockerfiles in Docker.

In the last article, we introduced Docker and the notion of images and containers. In this article, we will discuss how we can create and manage images and containers, using the example of a web server built with Node.js that listens on port 4000.
Dockerfiles
To create an image, we create a Dockerfile that specifies the configuration of the image, which is read by the Docker engine to build each layer. The following is an example Dockerfile to create an image for the web server:
# Node version 23 on Alpine Linux 3.20
FROM node:23-alpine3.20
# Working directory
WORKDIR /app
# Copy the source code to the working directory
COPY . .
# Installing dependencies (listed in package.json)
RUN npm install
# Command to run in the container
CMD ["node", "app.js"]
In the above, we start by specifying the parent image with the FROM keyword. As the web server runs on Node.js, we pull the official node image tagged 23-alpine3.20, which has Node version 23 pre-installed on Alpine Linux 3.20. Then, we set up a working directory (WORKDIR), copy over the source code with COPY, and run npm install to install the dependencies. The CMD keyword sets up the command to run in the container, whereas RUN specifies a command to run while building the image. The CMD instruction here uses the exec form, a JSON array of strings rather than a single string, so the command and its arguments are passed as separate items and run directly without a shell.
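For comparison, the two ways of writing the instruction look like this (the shell form below is shown only for illustration and is not used in our Dockerfile):
# Exec form: a JSON array; the command runs directly without a shell
CMD ["node", "app.js"]
# Shell form: a single string; the command runs through /bin/sh -c
CMD node app.js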
Managing Images & Containers
To build an image from the Dockerfile, we can use docker build -t <image_name> <relative_path>, where -t is the flag for setting a tag (or name), and <relative_path> is the relative path to the build context, the directory containing the Dockerfile. Here, we can use names like myserver:v1 to version the images. We can check the list of images and their details using docker images. After building an image, we can run it and spin up a container using docker run --name <container_name> <image_name> with various optional flags. As the container runs a web server listening on port 4000 of the container, we need to map port 4000 of the container to a port on the localhost, which can be done by adding -p <local_port>:<container_port>. We can also detach the process and prevent it from blocking the terminal with the -d flag.
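For instance, assuming the image name myserver:v1, the container name myserver_c, and a Dockerfile in the current directory (all example choices), building and running the server might look like this:
# Build the image from the Dockerfile in the current directory and tag it
docker build -t myserver:v1 .
# List the images to confirm the build
docker images
# Run a container, mapping local port 4000 to container port 4000, detached
docker run --name myserver_c -p 4000:4000 -d myserver:v1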

Once we spin up the container and request data from the localhost port we mapped the container port to, we should see the data coming back from the server in the container. To stop and restart a container, we can use docker stop <container_name> and docker start <container_name>, respectively. Also, we can use docker ps to see all the running containers, and we can add the -a flag to see all the containers, including the ones not running. Finally, we can remove an image and a container with docker image rm <image_name> and docker container rm <container_name>.
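Continuing with the example names above (myserver_c and myserver:v1 are the names assumed in the earlier sketch), these commands look like this:
# Stop and restart the container
docker stop myserver_c
docker start myserver_c
# List running containers; add -a to include stopped ones
docker ps
docker ps -a
# Remove the container, then the image
docker container rm myserver_c
docker image rm myserver:v1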
(Docker has many other commands for different tasks, such as docker exec -it, which lets us interact with the container's shell, and there are also more useful flags for many commands in Docker. For more information on them, refer to the official documentation.)
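As a quick illustration (again assuming the container name myserver_c), an interactive shell can be opened as follows; note that Alpine-based images ship with sh rather than bash:
# Open an interactive shell inside the running container
docker exec -it myserver_c sh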
Layer Caching
When building an image, Docker stores the image layers in a cache so that similar images can be built more efficiently. For example, when we only make changes to the source code and create a new image with the same Dockerfile above, Docker automatically deduces that it does not need to rerun the first two layers and loads the cached image layers instead. We can take advantage of this caching to make image creation faster as follows.
FROM node:23-alpine3.20
WORKDIR /app
# Dependency installed before copying over the entire source code
COPY package.json .
RUN npm install
COPY . .
CMD ["node", "app.js"]
By copying package.json and installing the dependencies before copying the entire source code, we can avoid re-running the dependency installation when building a new image after only making changes to the source code (Docker will rerun the dependency installation if it detects changes in package.json).
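For example, rebuilding under a new tag after a source-only change (the tag myserver:v2 is just an illustrative name) reuses the cached layers up to and including the npm install step:
# Only the COPY . . and later steps are re-run; earlier layers come from the cache
docker build -t myserver:v2 .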
Volumes
When developing a web server and making code changes frequently, it becomes troublesome to create a new image and spin up a new container at every code change, even with layer caching to speed up image creation. To address this, we can allow containers to access files on the local machine using the volume flag in docker run, -v <absolute_local_path>:<container_path>. When using volumes, containers use the files in <absolute_local_path> when referring to files in <container_path>.
Typically, we can simply add a volume mapping the entire working directory of the container to the entire project folder on the local device. However, there are cases where we want to choose which individual local files should and should not be accessible to the container (for example, we might want the container to use its own node_modules set up with npm install, rather than an old node_modules on the local device). In such cases, we can either add volumes for individual files or add a volume for the entire project folder and use anonymous volumes, written as -v <container_path>, to force the container to use its own files for a certain path, as in the sketch below. It is important to note that even with volumes, we still need to create a new image to reflect all the changes made when deploying and sharing the image.
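As a concrete sketch (the local path /path/to/project and the names myserver_c and myserver:v1 are example values), the run command might look like this:
# Mount the local project folder into /app, but shadow node_modules with an
# anonymous volume so the container keeps its own installed dependencies
docker run --name myserver_c -p 4000:4000 -d \
  -v /path/to/project:/app \
  -v /app/node_modules \
  myserver:v1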
Conclusion
In this article, we introduced Dockerfiles, some commands to manage images and containers, layer caching, and volumes, which cover most of the fundamental concepts needed to start using Docker. The Docker image we created can be shared on Docker Hub by creating an account, logging in with docker login, and pushing it with docker push. Now that we have covered the basics, in the next article we will introduce a useful feature that makes it easier to use Docker.
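For reference, pushing the example image roughly looks like the following (your_username is a placeholder for an actual Docker Hub account, and the image needs to be tagged with it first):
# Log in, tag the image with the Docker Hub account name, and push it
docker login
docker tag myserver:v1 your_username/myserver:v1
docker push your_username/myserver:v1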
Resources
- Net Ninja. 2022. Docker Crash Course #5 - The Dockerfile. YouTube.
- Net Ninja. 2022. Docker Crash Course #7 - Starting & Stopping Containers. YouTube.
- Net Ninja. 2022. Docker Crash Course #8 - Layer Caching. YouTube.
- Net Ninja. 2022. Docker Crash Course #9 - Managing Images & Containers. YouTube.
- Net Ninja. 2022. Docker Crash Course #10 - Volumes. YouTube.
- Net Ninja. 2022. Docker Crash Course #13 - Sharing Images on Docker Hub. YouTube.