What is Docker?
Docker is a platform for building and running applications inside containers. Containers are lightweight and portable packages that include everything needed to run the application. Docker provides tools for building, managing, and deploying containers, and it offers benefits such as consistency, portability, efficiency, and security.
Installation
Linux
Debian (Engine)
Ubuntu (Desktop)
Fedora (Desktop)
Arch-based distros (Desktop)
Other
Post-install
As a non-root user, you may not have the rights to manage and deploy Docker. Please note that the steps below apply to Debian/Ubuntu. Here are the steps to change that:
If you haven't created a non-root user yet, run the following:
sudo useradd -s /bin/bash -m <username>
To give the user you have just created sudo rights, edit the file /etc/sudoers with nano or vim. It is recommended to use visudo, because it performs syntax checking to prevent errors. In this example I will use nano, as follows:
sudo nano /etc/sudoers
Now add the following to the bottom of the file:
<username> ALL=(ALL:ALL) ALL
Afterwards, make sure that you log out from root and log in as the new user.
Try to perform something with sudo. In this example I update Debian like this:
sudo apt update
The steps above were successful if you haven't encountered any errors. Now we need to add the user to the docker group, so that we can perform Docker operations without granting full access to the system.
Note
Do not run docker with root rights
Check if the docker group has already been created:
getent group docker
The output should be something like this:
docker:x:975:
If the terminal didn't give any output, create a new group called docker:
sudo groupadd docker
Now add your user to the group and activate the changes immediately:
sudo usermod -aG docker $USER
newgrp docker
To test if everything was configured correctly, run the following:
docker run --name test hello-world
Tools for developing
Visual Studio
Visual Studio Remote WSL
Microsoft Docker Extension for VS Code
VS with Docker Development Tools
Eclipse
GitLab & Docker
Applications
| Platform | Name | Link | Comment |
| --- | --- | --- | --- |
| Linux | Whaler | Link | Flatpak |
| Terminal | Lazydocker | Link | \ |
| Docker | Portainer | Link | Web-UI |
| Docker | Rancher | Link | Web-UI + better for Kubernetes |
Useful
To see the full docker cli documentation follow this Link
There are guides for every Docker command you can execute, including topics like Compose files, Dockerfiles, APIs, and so on.
The `docker ps` command does not display stopped containers. With the following alias, which you can put in your .bashrc, you can display all containers and see only the important columns:
alias dockerps='docker container ls -a --format "table {{.Image}}\t{{.Names}}\t{{.Ports}}\t{{.State}}"'
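With the alias in place, the output might look like this (the image names and states are illustrative):

```
IMAGE          NAMES         PORTS                STATE
nginx:latest   websrv-test   0.0.0.0:80->80/tcp   running
hello-world    test                               exited
```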
Curl
curl is a command-line tool used to transfer data from or to a server, using a variety of protocols such as HTTP, HTTPS, FTP, FTPS, and more. It is available on most Unix-based operating systems and can be used to perform a wide range of tasks, such as downloading files, uploading data, and testing APIs. Below are some use cases and options for this command:
# download the file and save it in the current directory:
curl -O http://example.com/myfile.txt
# download the file and save it in the current directory but under a different name:
curl -o mynewfile.txt http://example.com/myfile.txt
# for more options run the following:
curl --help
Images
A Docker image is an executable package that contains everything needed to run a piece of software. It includes: the code, runtime, system tools, libraries, and settings. Docker images are created from Dockerfiles. By running "docker images" you will see the following columns:
- Repository: The name of the repository from which the image was pulled.
- Tag: The specific version of the image.
- Image ID: A unique identifier for the image.
- Created: The date and time at which the image was created.
- Size: The size of the image in bytes.
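A truncated example of what `docker images` might print (the IDs, dates, and sizes are illustrative):

```
REPOSITORY    TAG       IMAGE ID       CREATED        SIZE
nginx         latest    a8758716bb6a   2 weeks ago    187MB
hello-world   latest    d2c94e258dcb   8 months ago   13.3kB
```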
Docker hub
To see all available images, go to hub.docker.com
or search for images via the terminal with:
docker search <term>
Download (pull)
To download the image run following:
docker pull <repository>
If you don't specify which version to pull, it will pull ":latest".
To download a specific version of the image:
docker pull <repository>:<version>
For example, to download the nginx web server image that runs Alpine Linux underneath:
docker pull linuxserver/nginx
View
To view all images run following:
docker images
Delete
To delete an image, run the following:
docker image rm <image1> # To remove a single image
docker image rm <image1>:1.04 # To remove a specific version of an image
docker image rm <image1> <image2> # To remove multiple images
Note
Do not delete an image that is in use by a running container
Prune
Prune is like delete, but it removes all dangling images (add -a to also remove every image not used by any container)
Use at your own risk
docker image prune
Volumes
You can create a volume to persist data and to share data between containers and the host. A volume is a directory that is stored outside of the container's filesystem and is managed by Docker. After creating the volume, Docker creates a new directory that you can mount into a container.
Note
You don't need to create a volume if the container is for one-time use only
Create
To create a volume, run the following:
docker volume create <volume-name>
Inspect & List
To list all volumes or view information about a specific volume, run the following:
docker volume ls # list all volumes
docker volume inspect <volume-name> # list all information about a volume
Delete volume(s)
To delete a specific volume, run the following:
docker volume rm <volume1> # To remove a single volume
docker volume rm <volume1> <volume2> # To remove multiple volumes
To delete all volumes that are not in use by any container:
docker volume prune
Use at your own risk
docker run
This command is an all-in-one tool. Executing it pulls the image (if not already pulled), creates the container, and starts it. It combines:
docker image pull <image>
docker container create --name <container-name> <image>
docker container start <container-name>
You should run Docker containers detached. Running a container detached means you won't see its logs or interact with it in real time. To do so, just add -d like so:
docker run -d --name <container-name> <image>
To specify the port run following:
docker run -p <port-host>:<port-container> --name <container-name> <image>
To display more options, run the following:
docker run --help
Note
- The order of the options does not matter
- If you would like to add more options, it's better to use docker-compose, since editing a file is easier than editing a long command
This is what a docker run command could look like:
docker run -d \
  --name=calibre-web \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e DOCKER_MODS=linuxserver/mods:universal-calibre `#optional` \
  -e OAUTHLIB_RELAX_TOKEN_SCOPE=1 `#optional` \
  -p 8083:8083 \
  -v /PATH/TO/DATA:/config \
  -v /PATH/TO/CALIBRE/LIBRARY:/books \
  --restart unless-stopped \
  lscr.io/linuxserver/calibre-web:latest
docker container
This command provides a set of sub-commands for creating, starting, stopping, and deleting containers, as well as for inspecting and managing their configuration, logs, and performance. Here are a few:
List
With this command you can list the containers. You can also filter the output if you don't like the standard view (best done with an alias on Linux):
docker container ls # list all running containers
docker container ls -a # list all containers
Create
This command creates the container without starting it. For example:
docker container create --name my_container -p 8080:80 nginx:latest
Start & Stop, Kill
With these commands you can start and stop containers. Using docker container kill is like pulling the power plug on the container. Please use the kill command with care, since the container performs no cleanup beforehand. The commands are as follows:
docker container start <container1> <container2> # start one or multiple containers
docker container stop <container1> <container2> # stop one or multiple containers
docker container kill <container1> <container2> # use with caution, may result in errors
Pause & Unpause
These commands are useful for temporarily suspending a container's processes without stopping or deleting the container, allowing you to perform maintenance or troubleshooting tasks while preserving the container's state:
docker container pause <container> # suspend all processes in the container
docker container unpause <container> # resume the processes
Rename
To rename a single container, run the following:
docker container rename <container_name_old> <container_name_new>
Exec
With this command you can execute commands inside the container and get their output.
Here are some examples:
- to see what's in the home directory:
docker container exec my_container ls /home
- To open an interactive shell in the container:
docker container exec -it <container> bash
Logs
This command allows you to see the logs of a container. It only works if the container was started with the json-file or journald logging driver. If no logging driver is configured in daemon.json, json-file is the default. The daemon.json file is located in /etc/docker. To see the logs of a container, run the first command. If you would like to filter the output, use one or more of the commands below:
docker container logs <container> # show the full log
docker container logs --details <container> # show extra details
docker container logs -f <container> # follow the log in real time
docker container logs --tail <count> <container> # reduce the output to the last <count> lines
Also available options are --since and --until. To know more run following:
docker container logs --help
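The logging driver itself is configured in /etc/docker/daemon.json. A minimal sketch that keeps the default json-file driver but adds log rotation (the size and file-count values are illustrative):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

After editing the file, restart the Docker daemon for the change to take effect.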
Alternative
The most-used commands are also built into docker as shortcuts. For example:
Instead of docker container start you can use docker start
Docker-File
With a Dockerfile you can create a container image. In the following examples we will take an already finished container image and modify it to our needs.
Let's start by looking at the parts a Dockerfile is built from:
| Command | Alternative | Explanation |
| --- | --- | --- |
| FROM | - | Specifies the base image |
| WORKDIR | - | Sets the working directory |
| COPY | ADD | Copies files from the host system into the container |
| RUN | - | Executes commands in the container to install packages, update the system, or configure the environment |
| EXPOSE | - | Specifies which ports the container should listen on |
| CMD | ENTRYPOINT | Defines the default command to run when the container starts |
Important
CMD and RUN are not the same thing. RUN is used to execute commands inside the container during the build process. This is typically used for one-time steps such as updating the system and installing packages.
If you need to install multiple things, do not use multiple RUN commands. Instead, chain them like so:
RUN apt-get update && apt-get upgrade -y && apt-get install -y python3
Make sure to expose the correct port for web-based applications.
1. Example: Node.js Application
# Use an official Node.js runtime as the base image
FROM node:14-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy the package.json and package-lock.json files into the container
COPY package*.json ./
# Install the dependencies
RUN npm install
# Copy the application code into the container
COPY . .
# Expose port 3000 to the outside world
EXPOSE 3000
# Define the command to run the application when the container starts
CMD [ "npm", "start" ]
2. Example: Wordpress
# Use an official PHP runtime as the base image
FROM php:7.4-apache
# Set the working directory inside the container
WORKDIR /var/www/html
# Copy the WordPress files into the container
COPY . .
# Set the ownership of the WordPress files to the web server user
RUN chown -R www-data:www-data .
# Install the necessary PHP extensions for WordPress
RUN docker-php-ext-install mysqli pdo_mysql
# Expose port 80 to the outside world
EXPOSE 80
# Define the command to start the Apache web server
CMD [ "apache2-foreground" ]
3. Example: Apache Webserver
# Use an official Apache runtime as the base image
FROM httpd:2.4-alpine
# Copy the custom configuration file into the container
COPY httpd.conf /usr/local/apache2/conf/httpd.conf
# Expose port 80 to the outside world
EXPOSE 80
# Define the command to start the Apache web server
CMD [ "httpd-foreground" ]
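Alongside a Dockerfile, a .dockerignore file placed next to it keeps unneeded files out of the build context, which speeds up builds and keeps images small. A minimal sketch for the Node.js example above (the entries are illustrative):

```
node_modules
npm-debug.log
.git
```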
Building and Publishing the container
After creating the Dockerfile, you need to build the image and push it in order to run the container. There are two places to push it: locally or to the Docker registry (Docker Hub).
To build and publish the image locally:
docker build -t <myimage> .
docker push <myimage>
To build and to publish the image into the docker hub:
# if not logged in login via Terminal:
docker login
# you can also specify the login credentials before, like so:
docker login -u <myusername> -p <mypassword>
docker build -t <myusername>/<myimage> .
docker push <myusername>/<myimage>
There is also the option to push the image under a specific version. To do so, tag the image first and then push it:
# locally
docker tag <myimage> <myimage>:v1.0
docker push <myimage>:v1.0
# docker hub
docker tag <myimage> <myusername>/<myimage>:v1.0
docker push <myusername>/<myimage>:v1.0
Remember: if you don't specify a tag, the image will always be pushed as :latest
docker compose
This tool is used to define and run single- and multi-container Docker applications. It allows you to define a set of services, each with its own configuration, and start and stop them all with a single command. Docker Compose uses a YAML file to define the services, networks, and volumes needed for the application.
The best way to install applications with Docker Compose is to edit the configuration file provided by the publisher(s).
install
To install docker compose follow this Link
To verify that the installation was successful, run the following:
docker-compose --version
To optimise the workflow, I create a directory for every application (or group of applications) and place the YAML file and the volume directories inside it. For example:
mkdir calibre-web
cd calibre-web
mkdir books config
nano docker-compose.yaml
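The resulting layout would look like this:

```
calibre-web/
├── books/
├── config/
└── docker-compose.yaml
```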
deployment
The following commands are important for running the compose.yaml:
docker-compose up # is like docker run/ docker container start
docker-compose down # stops the container-stack
single container
First, let's look at how to deploy a single container. In this example we will use calibre-web. On their website, the publishers have posted the following:
version: "2.1"
services:
  calibre-web:
    image: lscr.io/linuxserver/calibre-web:latest
    container_name: calibre-web
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - DOCKER_MODS=linuxserver/mods:universal-calibre #optional
      - OAUTHLIB_RELAX_TOKEN_SCOPE=1 #optional
    volumes:
      - /path/to/data:/config
      - /path/to/calibre/library:/books
    ports:
      - 8083:8083
    restart: unless-stopped
We can see that the publisher has already specified the configuration for the deployment. The only things we have to customize are the volume paths and, if we want, the port. After customization, the docker-compose.yaml could look like this:
version: "2.1"
services:
  calibre-web:
    image: lscr.io/linuxserver/calibre-web:latest
    container_name: calibre-web
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /home/admoin/calibre-web/config:/config
      - /home/admoin/calibre-web/books:/books
    ports:
      - 80:8083
    restart: unless-stopped
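Instead of hard-coding values, Docker Compose can also substitute variables from an .env file placed next to the compose file. A minimal sketch (the variable names mirror the example above and are illustrative):

```
# .env
PUID=1000
PGID=1000
TZ=Etc/UTC
```

In the compose file you would then reference them as, for example, `- PUID=${PUID}`.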
multiple containers
To deploy multiple containers, I have created a WordPress example. As above, you can customise the paths, volumes, and so on.
First create the directory and the .yaml-file:
mkdir ~/wordpress && cd ~/wordpress
mkdir html database
nano docker-compose.yaml
Now paste the following into the file, but remember to change the file paths to your username, the DB user and password, and the IP:
version: "2.1"
services:
  wordpress:
    image: wordpress
    links:
      - mariadb:mysql
    environment:
      - WORDPRESS_DB_PASSWORD=password
      - WORDPRESS_DB_USER=root
    ports:
      - "public_ip:80:80"
    volumes:
      - /home/admoin/wordpress/html:/var/www/html
  mariadb:
    image: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=wordpress
    volumes:
      - /home/admoin/wordpress/database:/var/lib/mysql
For more information about docker compose follow this Link
Network
This command allows you to create and manage networks.
By default, Docker creates three built-in networks:
- default bridge
- host
- none
Docker doesn't ship with the following ones, but you can create them:
- user-defined bridge
- macvlan (L2, L3)
- ipvlan
You can also create the following, but they are outdated or rarely used:
- docker swarm
- overlay
These are the commands to create or manage networks:
docker network create -d <driver> <network-name> # Creates a network with the given driver
docker network rm <network-name> # Removes the network
docker network ls # list all networks
docker network inspect <network-name> # See detailed information
docker network connect <network-name> <container-name> # Connects the container to a network
docker network disconnect <network-name> <container-name> # Disconnects the container from a network
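A truncated example of what `docker network ls` prints on a fresh installation (the IDs are illustrative):

```
NETWORK ID     NAME      DRIVER    SCOPE
9f0a2b3c4d5e   bridge    bridge    local
1a2b3c4d5e6f   host      host      local
7a8b9c0d1e2f   none      null      local
```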
From the Docker host, these commands are very useful when working with Docker networks:
ip address show
ifconfig # similar to the command above
bridge link
Note
- Run --help for more options or use the Docker docs to see full information about every Docker command
- Helpful Youtube video Link
Please note that NetworkChuck didn't install Docker correctly and runs every container as root, because of the nature of his content. That also means that every container has full root rights.
default bridge
This network creates a virtual network on the host and connects the containers to it. It allows multiple containers to communicate with each other, but it requires that you configure port mappings to allow external access to the containers. Docker's default bridge has its own IP address range; the Docker host gets the address 172.17.0.1/16.
All Docker containers are connected to this network if no network has been specified.
host
This network allows a container to use the network stack of the host machine. When a container is started in host network mode, it shares the same network namespace as the host, which means that it can access the same interfaces and services as the host. The container communicates through the host's MAC address and shares the host's IP address.
Please note that this type of network communication is not isolated, which means that your Docker container can communicate freely with other devices or machines in the network.
Run the following command to run a container in this network:
docker run --name <container-name> --network host nginx
none
As the name already suggests, this network disables networking for a container. When you start a container in none network mode, the container does not have access to any network interfaces and cannot communicate with other containers or external networks. This mode is useful for containers that need full network isolation, for example batch jobs that only process local data.
docker run --name websrv-test --network none nginx
user-defined bridge
This network works similarly to the default bridge. The difference is that a new virtual network is created in which you can specify details like the IP range, gateway, and so on. This network boosts performance because traffic between containers on the same network doesn't need to be routed through a NAT firewall. User-defined bridge networks provide a flexible and customizable way to manage container networking and are a useful tool for building scalable and reliable distributed applications.
To create this network run following:
docker network create <network-name>
This command not only creates a new network but also sets up a new IP range, subnet, gateway, and so on. The new subnet would be 172.18.0.0/16 if not specified.
macvlan
This network driver allows you to create a network interface in a container with a unique MAC address, making it appear as if the container is connected directly to the physical network as a separate device. Each container is assigned its own MAC address.
MACvlan has a few modes:
- bridge: each container is bridged to the physical network, and the Docker host routes traffic between the physical network and the containers
- passthru: this mode passes the network card through to the container
- private: every container can communicate with other devices, but not with the other containers in the same network
By default, when you create a macvlan network without specifying the mode, it will use bridge mode.
Let's create a macvlan:
# bridge mode
sudo docker network create -d macvlan \
  --subnet 192.168.0.0/24 \
  --gateway 192.168.0.1 \
  -o parent=enp0s3.0 \
  mynetwork-bridge
# passthru mode
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o macvlan_mode=passthru \
  -o parent=enp0s3.1 \
  mynetwork-passthru
# private mode
docker network create -d macvlan \
  --subnet=192.168.2.0/24 \
  --gateway=192.168.2.1 \
  -o macvlan_mode=private \
  -o parent=enp0s3.2 \
  mynetwork-private
Note
In this network you can also specify a sub-interface (e.g. parent=enp0s3.0)
ipvlan
This network allows you to create a network interface that shares the same MAC address as the parent interface, but has a separate and unique IP address. It provides a way to create multiple containers with their own IP addresses, while using the same physical or virtual network interface.
L2
In this mode, every container can communicate with other devices on the network as if it were physically connected. Every container gets its own unique IP address but shares the MAC address of the parent interface.
docker network create -d ipvlan \
  --subnet 10.7.1.0/24 \
  --gateway 10.7.1.3 \
  -o parent=enp0s3 \
  ip-vlan_l2
L3
In this mode each container has its own IP address but shares the same MAC address as the parent interface. Containers communicate with other devices on the network using IP routing.
docker network create -d ipvlan \
  --subnet 192.168.94.0/24 \
  --subnet 192.168.95.0/24 \
  -o parent=enp0s3 -o ipvlan_mode=l3 \
  ip-vlan_l3
To make the containers able to talk to the wider network, you need to create a static route.
L2bridge
This is a hybrid mode that combines aspects of the L2 and L3 modes. Each container has its own IP address and shares the same MAC address as the parent interface. Containers within this network can communicate with other devices on the network through IP routing while using Layer 2 bridging.
docker network create -d ipvlan \
  --subnet 192.168.101.0/24 \
  -o parent=enp0s3 -o ipvlan_mode=l2bridge \
  ip-vlan_l2bridge
other
There are two more network options: the overlay network and Docker Swarm. But these are not commonly used and/or are outdated. If you want to learn how Docker Swarm works, I would recommend looking into Kubernetes instead, because this type of orchestration offers more options.