
Where to start with Docker – Part 2


Welcome back to Part 2 of our Docker series! In Part 1, we covered the basics of Docker, how it simplifies software deployment, and the benefits of containerization. Now, we’re ready to dive into the practical side.

In this part, you’ll learn how to create your own Docker image and run it as a container. We’ll introduce Docker’s core functionality, including how to package applications, define dependencies, and build an environment that remains consistent from development to production. By the end, you’ll be equipped to build and deploy containers confidently, bringing you one step closer to mastering Docker.

Finally, in Part 3, we’ll focus on security, covering best practices for hardening your Docker environment. These practices will help keep your containers secure and compliant with modern standards.

Install Docker

To build our first container image, we need to install Docker (or an alternative image-building engine). While there are other options, we’ll focus on Docker here. For the latest installation steps, it’s best to follow the official Docker documentation.

After installation, Docker may need to be started, depending on your operating system:

On Linux: Use systemctl to start Docker. For most Linux distributions, you can start Docker by running:

sudo systemctl start docker

To ensure Docker starts automatically at boot, enable it with:

sudo systemctl enable docker

If you are not using Docker Desktop on Linux, you may need to add your user to the Docker group. This allows you to run Docker commands without sudo. After making this change, log out and log back in for it to take effect:

# Confirm current user
echo ${USER}

# Add user to Docker group
sudo usermod -aG docker ${USER}
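
To verify that the group change took effect after logging back in, a quick check could look like this (the hello-world test image is pulled from Docker Hub on first run):

# Confirm that your user is now in the docker group
groups ${USER}

# Run a test container without sudo
docker run --rm hello-world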

On macOS and Windows: Docker Desktop is the standard way to install and run Docker. Once installed, simply open Docker Desktop, and Docker will start running automatically.

With Docker installed and running, we’re ready to start creating and running our own containers in the next sections.

Key Components of Docker

Docker is composed of several core components, each with a specific role in managing and running containers. Here’s a quick overview of these components, illustrated in the Docker architecture diagram:

https://docs.docker.com/get-started/docker-overview/#docker-architecture

Docker Client: The client is the main way users interact with Docker. When you run commands like docker run, the Docker client sends these commands to the Docker daemon.

Docker Daemon (dockerd): The daemon is a background service. It listens for Docker API requests and manages Docker objects like images, containers, networks, and volumes. It’s responsible for building, running, and managing containers on your system.

Docker Registry: A registry is a storage location for Docker images. By default, Docker is configured to use Docker Hub, a public registry accessible to anyone. You can also set up a private registry if needed. When you use docker pull or docker run, Docker retrieves the required images from your configured registry. Similarly, when you use docker push, Docker uploads the image to the specified registry.

With these components working together, Docker provides a streamlined process for creating, storing, and running containers.
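
If you want to see the client/daemon split in action, two built-in commands make it visible (assuming Docker is installed and running):

# Show client and server (daemon) versions; both sides must respond
docker version

# Show system-wide daemon information, including storage driver and registry settings
docker info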

Difference between Docker Image and Docker Container

A Docker image and a Docker container serve different roles in the Docker ecosystem, although they’re closely related. Here’s a breakdown of their differences:

Docker image:

  • An immutable (unchangeable) file that contains the source code, libraries, dependencies, tools, and other files needed to run an application.
  • A Dockerfile is required to create a Docker image.
  • Docker images can be shared between users (e.g., via Docker Hub or other registries).

Docker container:

  • A virtualized runtime environment that isolates the application from the underlying system. Containers are lightweight, portable units where applications can start up quickly.
  • A Docker image is required to create a Docker container.
  • Docker containers are not directly shareable but can be created from a shared Docker image on multiple systems.

A Docker image is the blueprint, while a Docker container is the live, running instance created from that blueprint. Think of it like a recipe (the image) and the prepared dish (the container). Once you’ve built the image, you can run multiple containers from it, each operating independently but based on the same underlying setup.
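
For example, once the nginx-alpine image from the sections below is built, you can start several independent containers from it (a quick sketch; the container names and host ports are illustrative):

# Start two independent containers from the same image
docker run -d --name web1 -p 8081:80 nginx-alpine
docker run -d --name web2 -p 8082:80 nginx-alpine

# Both containers are based on the same image but run in isolation
docker ps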

Build a Dockerfile

To create a Docker image, we need a Dockerfile. Let’s write our first Dockerfile and break down its key components.

# Parent Image
FROM alpine:3.20.3

# Set individual labels
LABEL maintainer=info@akquinet.de
LABEL version="0.0.1"

# ARGs are available only during the build process
ARG NGINX_PACKAGE="nginx"

# ENVs are stateful and persist during runtime
ENV NGINX_VERSION="1.26.2-r0"

# Install tools required for the project
# Run `docker build --no-cache .` to update dependencies
RUN adduser -D --uid 2000 nginx && \
    apk update && \
    apk upgrade && \
    apk --no-cache add --update $NGINX_PACKAGE=$NGINX_VERSION && \
    mkdir -p /run/nginx && \
    touch /run/nginx/nginx.pid && \
    mkdir /www && \
    echo "server {listen 0.0.0.0:80; listen [::]:80; location / { root /www;index index.html;}}" > /etc/nginx/http.d/default.conf && \
    echo "Hello World!" > /www/index.html && \
    chown -R nginx:nginx /var/lib/nginx && \
    chown -R nginx:nginx /www && \
    chown -R nginx:nginx /run/nginx/

# Expose network ports
EXPOSE 80
EXPOSE 443

# Set the user for running the container
USER 2000

# Specify default executable.
ENTRYPOINT [ "nginx" ]
# Default command to run when the container starts
CMD ["-g", "daemon off;"]

Dockerfile Breakdown

FROM alpine:3.20.3

The FROM instruction specifies the base image from which you’re building your image. Here, we’re using alpine:3.20.3 as a minimal, lightweight base image. Base images provide the foundational OS layers for your container and are, in most cases, created and maintained by trusted sources such as OS vendors.

If you need to completely control the contents of your image, you can create your own base image from a Linux distribution of your choosing, or use the special FROM scratch base:

FROM scratch

The scratch image is typically used to create minimal images containing just what an application needs. See Create a minimal base image using scratch in the Docker documentation.

LABEL maintainer=info@akquinet.de
LABEL version="0.0.1"

Labels add metadata to Docker objects like images, containers, and volumes. They are key-value pairs stored as strings and can include information such as version, licensing, and maintainer contact details. Labels are useful for organizing images and adding descriptive tags.
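
Once the image is built, you can read these labels back with docker inspect (a minimal example; the Go template shown is just one way to filter the output):

# Print all labels of the nginx-alpine image as JSON
docker inspect --format '{{ json .Config.Labels }}' nginx-alpine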

ARG NGINX_PACKAGE="nginx"

The ARG instruction defines a build-time variable. These arguments are available only during the build process and aren’t preserved after the image is built. Here, NGINX_PACKAGE is set to “nginx” but can be modified during the build to specify other packages if needed.
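
To override the default at build time, pass --build-arg to docker build (a small sketch; the package name shown is illustrative):

# Override the ARG defined in the Dockerfile for this build only
docker build --build-arg NGINX_PACKAGE="nginx" -t nginx-alpine .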

ENV NGINX_VERSION="1.26.2-r0"

The ENV instruction sets an environment variable that remains in the container at runtime. Environment variables are useful for setting configuration options or default values. Here, NGINX_VERSION specifies the version of NGINX to install, ensuring consistency across builds.
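
Because ENV values persist into the running container, you can inspect or override them at runtime once the image is built (a minimal sketch; --entrypoint env temporarily replaces nginx just to print the environment):

# Print the container's environment variables instead of starting nginx
docker run --rm --entrypoint env nginx-alpine

# Override the variable for a single container at runtime
docker run --rm --entrypoint env -e NGINX_VERSION="overridden" nginx-alpine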

RUN adduser -D --uid 2000 nginx && \
    apk update && \
    apk upgrade && \
    apk --no-cache add --update $NGINX_PACKAGE=$NGINX_VERSION && \
    mkdir -p /run/nginx && \
    touch /run/nginx/nginx.pid && \
    mkdir /www && \
    echo "server {listen 0.0.0.0:80; listen [::]:80; location / { root /www; index index.html;}}" > /etc/nginx/http.d/default.conf && \
    echo "Hello World!" > /www/index.html && \
    chown -R nginx:nginx /var/lib/nginx && \
    chown -R nginx:nginx /www && \
    chown -R nginx:nginx /run/nginx/

The RUN command executes commands and creates a layer during the image build process. In this example, we’re using RUN to:

  • Create a new user named nginx with UID 2000.
  • Update the Alpine package index and upgrade the installed packages.
  • Install the specified version of NGINX.
  • Set up the necessary directories and permissions for NGINX.

Each RUN statement creates a new layer in the image, allowing Docker to cache and reuse layers efficiently. Multiple RUN statements can help organize steps and leverage Docker’s caching for faster builds. Here, apk’s --no-cache option tells the package manager not to store its package index inside the image, keeping the layer small; it is unrelated to Docker’s build cache.

EXPOSE 80
EXPOSE 443

The EXPOSE instruction specifies the network ports that the container will listen on at runtime. Here, ports 80 (HTTP) and 443 (HTTPS) are exposed, allowing access to the NGINX web server from outside the container. This instruction doesn’t publish the ports but informs Docker that these ports are intended for use.
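
To actually make the ports reachable, you publish them with -p (as we do later), or let Docker assign free host ports to every exposed port with -P (a quick illustration, once the image is built; the container name is arbitrary):

# Publish all EXPOSEd ports to randomly chosen free host ports
docker run -d -P --name nginx-expose-test nginx-alpine

# Show which host ports were assigned to the container
docker port nginx-expose-test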

# Set the user for running the container
USER 2000

The USER instruction sets the user for subsequent instructions and the container’s runtime. Running containers as a non-root user is a recommended security practice to limit the container’s privileges. Here, we set the user to nginx, created earlier with UID 2000.

# Specify default executable.
ENTRYPOINT [ "nginx" ]
# Default command to run when the container starts
CMD ["-g", "daemon off;"]

The ENTRYPOINT defines the main executable (in this case, nginx) that the container will run. Using ENTRYPOINT ensures that nginx remains the main process even when additional arguments are passed at docker run; it can only be replaced explicitly with the --entrypoint flag.

CMD sets default arguments for nginx. Here, -g "daemon off;" keeps NGINX in the foreground, which is necessary for Docker to monitor and maintain the running container.
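
Because CMD only supplies default arguments, anything you append after the image name at docker run replaces those arguments while nginx stays the entrypoint (a small example; depending on file permissions, nginx may additionally warn about log files when run as the non-root user):

# Runs `nginx -t` (test the configuration) instead of `nginx -g "daemon off;"`
docker run --rm nginx-alpine -t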

Build the image

Now that we understand the core concepts of a Dockerfile, we’re ready to build our Docker image. This process will package the application, dependencies, and configuration defined in the Dockerfile into a single, portable image.

To build the Docker image, use the following command:

docker build --progress=plain --no-cache -t nginx-alpine .

Here’s a breakdown of the command:

  • docker build: This command initiates the build process using the Dockerfile in the current directory.
  • --progress=plain: Sets the type of progress output (auto, plain, tty, rawjson). Use plain to view the full build output, as Docker truncates the logs by default.
  • --no-cache: This flag forces Docker to execute each command in the Dockerfile without using cached layers. This is useful for ensuring a fresh build, especially if dependencies may have changed.
  • -t nginx-alpine: The -t flag assigns a tag (or name) to the image, in this case, nginx-alpine. Tagging makes it easier to reference the image later.
  • .: The . specifies the build context, which is the current directory. Docker will use this directory to look for the Dockerfile and any other required files.

Once the build is complete, you’ll see the nginx-alpine image listed when you run docker images.

Run the Container

With the image built, we can now create and run a container from it. Use the following command to start the container:

docker run -p 8080:80 nginx-alpine

Here’s what each part of this command does:

  • docker run: This command creates and starts a new container from the specified image.
  • -p 8080:80: This flag maps port 80 inside the container (where NGINX is listening) to port 8080 on your local machine, so you can access the application on port 8080 of your host system.
  • nginx-alpine: This is the name of the image we just built.

After running this command, Docker will start the container, and NGINX will serve the web application.

Test the Application

To confirm that the container is running successfully, open a web browser and navigate to:

http://127.0.0.1:8080

You should see the message “Hello World!” displayed on the page. This shows that the NGINX server is operational within the container and serving content from the /www directory we created in the Dockerfile.
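
Alternatively, you can test it from the command line (assuming curl is available on your host):

# Fetch the page served by the container
curl http://127.0.0.1:8080
# Expected output: Hello World!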

Additional Information

Show Container Information: To list running containers and see details like container IDs, names, status, and port mappings, use:

docker ps

Stopping the Container: To stop the container, press CTRL+C if it’s running in the foreground. If it’s running in detached mode, use docker stop <container_id>.

Detached Mode: To run the container in the background (detached mode), add the -d flag:

docker run -d -p 8080:80 nginx-alpine

Viewing Logs: You can view the logs for the container with:

docker logs <container_id>

With these steps, you’ve successfully built and run your first Docker container using a custom Dockerfile. This is a foundational skill for developing and deploying containerized applications.

Multi-Stage Builds

Multi-stage builds allow us to create optimized Docker images by separating the build environment from the runtime environment. During the build process, you often need various dependencies, tools, and libraries. However, these are usually unnecessary for running the final application and only add bulk to the image.

With multi-stage builds, you can compile your application in one stage and then copy only the final binary or artifacts into a minimal image, resulting in a lean, production-ready container.

Example: Building a “Hello World” Go Project with Multi-Stage Builds

To illustrate this, let’s build a simple “Hello World” application in Go. We will create a multi-stage Dockerfile to handle the build and runtime stages separately.

Step 1: Create the Go Application

First, create a file named main.go with the following content:

package main

import (
  "fmt"
)

func main() {
  fmt.Println("hello world")
}

Then, initialize the Go module by running the following command:

go mod init gotest && go mod tidy

If you don’t have Go installed locally, you can use Docker to initialize the module:

docker run -ti --entrypoint='' --workdir=/app -v $(pwd):/app golang:1.23.3 go mod init gotest
docker run -ti --entrypoint='' --workdir=/app -v $(pwd):/app golang:1.23.3 go mod tidy

Step 2: Write the Multi-Stage Dockerfile

Now that we have main.go and go.mod, let’s create a Dockerfile for a multi-stage build. In this example, we’ll use the official Go image for the build stage (because it includes all build dependencies), compile the Go application, and then copy the resulting binary into a minimal image for deployment. The build dependencies aren’t needed in the running production container.

Here’s the Dockerfile:

# syntax=docker/dockerfile:1

# Build Stage
FROM golang:1.23.3 AS build

# Set the working directory
WORKDIR /workspace

# Copy project files and build the binary
COPY . /workspace
RUN go mod download && \
    go build -o /bin/project

# Deployment Stage
FROM scratch

# Copy the binary from the build stage
COPY --from=build /bin/project /bin/project

# Set the default command
ENTRYPOINT ["/bin/project"]

# Set the user to a non-root user for security
USER 2000

Multi-Stage Dockerfile Breakdown

Let’s walk through each part of the Dockerfile to understand how multi-stage builds work:

  • First Stage (Build Stage):
    • FROM golang:1.23.3 AS build: This line specifies the base image for the build stage. It gives it a name (build) that we can reference later.
    • WORKDIR /workspace: Sets /workspace as the working directory inside the container.
    • COPY . /workspace: Copies the current directory’s contents into the container, allowing us to access project files.
    • RUN go mod download && go build -o /bin/project: Downloads the module dependencies and compiles the Go code into a binary called project, placed in the /bin directory.
  • Second Stage (Deployment Stage):
    • FROM scratch: The scratch image is an empty base image, ideal for minimal containers that contain only the application binary and any libraries it needs (see the note after this list).
    • COPY --from=build /bin/project /bin/project: Copies the compiled binary from the build stage into the scratch stage.
    • ENTRYPOINT ["/bin/project"]: Sets the default command for the container to execute the binary.
    • USER 2000: Runs the application as a non-root user, which is a recommended security practice.
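
One caveat worth knowing: scratch contains no C libraries, so the binary must be statically linked. A plain fmt-based “hello world” compiles statically anyway, but if your project pulls in packages that use cgo (for example parts of net or os/user), you may want to disable cgo explicitly in the build stage. A hedged variant of the RUN line from the build stage above:

# Build-stage variant with cgo disabled for a fully static binary
RUN go mod download && \
    CGO_ENABLED=0 go build -o /bin/project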

Building and Running the Multi-Stage Docker Image

Now, you can build and run your optimized Docker image:

Build the Image:

docker build -t hello-go-multistage .

Run the Container:

docker run hello-go-multistage

When you run this command, the container should output hello world. This demonstrates that the application is working within the optimized, minimal image.
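
You can see the effect of the multi-stage approach by comparing the sizes of the build image and the final image (exact numbers vary, but the golang base image weighs in at several hundred MB, while the scratch-based result is roughly the size of the compiled binary):

# Compare the build image with the final multi-stage image
docker images golang:1.23.3
docker images hello-go-multistage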

Benefits of Multi-Stage Builds

  • Reduced Image Size: Copying only the necessary binary into a minimal image removes the build dependencies, making the final image far smaller.
  • Security: Using a minimal base image like scratch reduces the attack surface, as the container has only the essential files.
  • Performance: Smaller images are quicker to download, deploy, and start up, making them ideal for production environments.

Using multi-stage builds can significantly streamline and improve your containerized applications, making them more secure, efficient, and easier to deploy.

Common Troubleshooting Tips

Working with Docker is generally smooth, but beginners may encounter common issues. Here are some troubleshooting tips to help you overcome frequent obstacles and keep your workflow moving.

Permission Errors

When running Docker commands, you might encounter permission errors, especially on Linux if you’re not using Docker Desktop.

Solution: Add your user to the Docker group to avoid using sudo with each command:

sudo usermod -aG docker ${USER}

After adding yourself to the Docker group, log out and log back in for changes to take effect.

Additional Resources: Docker Permissions Guide & Docker Permission requirements for Windows

Port Conflicts

If the port you’re mapping in docker run -p is already in use by another service or container, Docker will throw an error.

Solution: Check which ports are currently in use with

sudo lsof -i -P -n | grep LISTEN

Alternatively, use a different port mapping with the -p option, for example docker run -p 8081:80.

Disk Space Issues

Docker images, containers, and volumes can accumulate over time, consuming significant disk space.

Solution: Clean up unused Docker resources with

docker system prune -a

This command removes stopped containers, unused networks, all images without at least one container associated with them, and the build cache. Be cautious, as it will delete images not used by any container.
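
If the all-in-one prune is too aggressive for your setup, you can clean up resource types individually (each command asks for confirmation unless you add -f):

# Remove only stopped containers
docker container prune

# Remove only dangling (untagged) images
docker image prune

# Remove unused volumes (careful: volume data is deleted permanently)
docker volume prune

# Show how much disk space Docker objects are currently using
docker system df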

Additional Resources: Docker Cleanup Guide or docker-prune.sh

Container Not Starting (Common Application Errors)

Sometimes a container may fail to start due to application errors, missing environment variables, or misconfigured files.

Solution: View container logs to diagnose the issue

docker logs <container_id>

This will help you identify configuration issues or missing dependencies.

Docker Daemon Not Running

You might see an error like “Cannot connect to the Docker daemon” if the Docker daemon isn’t running.

Solution: Start the Docker service:

  • On Linux: Run sudo systemctl start docker
  • On macOS/Windows: Open Docker Desktop, which automatically starts the daemon.

Additional Resource: Docker Daemon Troubleshooting

Network Connectivity Issues in Containers

Sometimes, containers may fail to access network resources due to DNS or network configuration issues.

Solution: Use Docker’s internal DNS resolver. Try setting the DNS manually by adding --dns=<DNS_SERVER> to docker run, or check Docker’s network settings with

docker network ls
docker inspect <network_id>
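
To narrow down whether DNS is the culprit, you can run a throwaway container and test name resolution directly (a hedged example using the small busybox image):

# Test DNS resolution from inside a container
docker run --rm busybox nslookup docker.com

# Retry with an explicit DNS server if the lookup fails
docker run --rm --dns=8.8.8.8 busybox nslookup docker.com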

These quick troubleshooting steps can save you time and keep your Docker workflow smooth. For more complex issues, consult Docker’s documentation or community forums.

Image Registries

Once you’ve built an optimized Docker image, you’ll often want to share it with others or deploy it across different environments. This is where image registries come in: a Docker image registry is a storage system for container images that allows others to pull and run them easily.

How Image Registries Work

Image registries function as centralized repositories where Docker images are stored and managed. Each registry holds a collection of repositories, and each repository can contain multiple tagged versions of an image. By storing images in a registry, you can share them publicly or privately. This makes it simple to deploy containers across teams and environments.

  • Docker Hub: Docker Hub is the default public registry where anyone can push and pull images. It’s a popular choice for open-source projects and shared images.
  • Private Registries: For additional security or internal use, you can set up a private registry. This could be Docker Hub’s private repositories or others such as Amazon ECR, Google Container Registry, or Azure Container Registry. These registries store images that are accessible only to specific users or teams.

Pushing an Image to a Registry

To share an image, you’ll first need to tag it with the repository name and optionally a version tag. Then, you can push it to a registry. Here’s a step-by-step guide to pushing your Docker image to Docker Hub:

Log in to Docker Hub (or another registry if needed):

docker login

Tag the Image: Use the docker tag command to give your image a repository name and tag.

docker tag nginx-alpine your-dockerhub-username/nginx-alpine:latest

Push the Image: Once tagged, push the image to Docker Hub (or another registry):

docker push your-dockerhub-username/nginx-alpine:latest

After pushing, your image will be stored in the registry. You or anyone with access can pull and run it on other systems.

Pulling Images from a Registry

To run the image on another system, simply use the docker pull command:

docker pull your-dockerhub-username/nginx-alpine:latest

This command retrieves the latest version of your image from the registry and makes it available locally, allowing you to spin up containers as needed.

Benefits of Using Image Registries

  • Accessibility: Registries make it easy to share images across different environments or with other team members.
  • Versioning: You can maintain multiple tagged versions of an image, enabling rollbacks or specific deployments (see the example after this list).
  • Scalability: Registries let you pull images on demand, which makes deployments straightforward both in cloud environments and in orchestration platforms like Kubernetes.
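
For example, pushing an explicit version tag alongside latest keeps older releases available for rollbacks (the tag and repository names are illustrative):

# Tag and push a pinned version in addition to latest
docker tag nginx-alpine your-dockerhub-username/nginx-alpine:0.0.1
docker push your-dockerhub-username/nginx-alpine:0.0.1

# Roll back later by pulling the pinned version instead of latest
docker pull your-dockerhub-username/nginx-alpine:0.0.1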

By leveraging image registries, you can streamline your workflow, simplify collaboration, and improve the accessibility and scalability of your containerized applications.

Conclusion

In Part 2 of our Docker series, we took a hands-on approach to building Docker images, explored the power of multi-stage builds, and broke the Dockerfile down into separate build and runtime stages to create a minimal, efficient container for a simple Go application. This approach not only reduces image size but also enhances performance, two critical factors for production-ready containers.

With these skills, you’re now equipped to build, optimize, and run Docker containers that are lean and reliable. However, as you move toward deploying containers in real-world environments, security becomes essential. In Part 3, we’ll cover Docker security best practices, focusing on how to harden your containers and keep them compliant with modern standards. Stay tuned for this crucial next step!

Helpful Links

For more information on Docker, containers, and getting started with image builds, check out these resources:

Some Container and Image Build Engines

Docker is a popular choice for building and managing containers. However, there are several other tools in the container ecosystem worth exploring. Each of these tools offers unique features and capabilities:

  • Buildah
    A command-line tool for building OCI (Open Container Initiative) images and managing containers. Buildah is known for its flexibility and compatibility with other OCI-compliant tools.
  • Containerd
    A lightweight, reliable container runtime used by Docker under the hood. It focuses on simplicity and efficiency in managing container lifecycles.
  • Podman
    An open-source, daemonless container engine for Linux. Podman is often seen as a Docker alternative, particularly for its ability to run containers without root privileges.
  • BuildKit
    An advanced image-building engine for Docker that supports caching, parallel builds, and more efficient builds.
  • RunC
    An OCI-compliant container runtime tool that provides a lightweight, standardized interface for running containers. RunC is widely used as a low-level container runtime.
  • Kaniko
     An open-source tool for building container images inside a Kubernetes cluster. Kaniko is ideal for environments without Docker installed, such as Kubernetes clusters.

These alternative tools offer flexibility and specialized functionality. They allow you to tailor your container workflows to suit various environments and requirements.

