Install and configure Docker for use on a single machine
Start with the
official installation guide.
# sudo apt update
# sudo apt install curl ca-certificates gnupg
sudo mkdir --parents /etc/apt/keyrings
curl --show-error --fail --silent \
--location https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor --output /etc/apt/keyrings/docker.gpg
echo \
"deb \
[arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(source /etc/os-release && echo $VERSION_CODENAME) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
Stop Docker before configuring it.
sudo systemctl stop docker.socket docker.service containerd.service
sudo touch /etc/docker/daemon.json
Store Docker data on a separate storage device.
Set the
replace_data_directory
placeholder according to your setup.
data_directory="replace_data_directory" # data_directory=/data/services
sudo mkdir --parents "$data_directory"
sudo chmod --recursive a+rwX "$data_directory/" # decide on your permissions
# edit /etc/docker/daemon.json
{
"data-root": "replace_data_directory/docker"
}
sudo mv /var/lib/docker/ "$data_directory/"
# make a fix for plugins and tools that
# do not respect the data-root option
sudo ln --symbolic "$data_directory/docker/" /var/lib/docker
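As a sanity check, the symlink should now resolve into the new data directory, and the moved files should be visible there:

```shell
# prints the symlink target, i.e. the new data directory
readlink /var/lib/docker
# lists the moved Docker data
ls "$data_directory/docker/"
```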
Limit logs.
# edit /etc/docker/daemon.json
{
... ,
"log-opts":
{
"max-size": "100m",
"max-file": "5"
}
}
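Putting both settings together, the complete /etc/docker/daemon.json might look like this, assuming the example path /data/services from above:

```json
{
"data-root": "/data/services/docker",
"log-opts":
{
"max-size": "100m",
"max-file": "5"
}
}
```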
Set up permissions.
# requires relogging or su --login $USER or newgrp docker
sudo usermod --append --groups docker "$USER"
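Once logged in again, membership can be confirmed like this:

```shell
# should print your group list with docker among the names
id --name --groups | grep --word-regexp docker
```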
On some distributions, Docker is already configured to start automatically with the system; enable it explicitly to be sure.
sudo systemctl enable containerd.service docker.service
Start Docker. Check that it works.
sudo systemctl start containerd.service docker.service docker.socket
# or simply restart to apply permissions and to check automatic start in one go
sudo reboot
docker run --rm hello-world
docker run --interactive --tty --rm alpine nslookup www.google.com
docker run --interactive --tty --rm alpine ping -c2 www.google.com
docker info
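If the data directory was relocated earlier, docker info can also confirm that the daemon picked up the new data root; the path in the comment assumes the /data/services example:

```shell
# prints the active data root, e.g. /data/services/docker
docker info --format '{{ .DockerRootDir }}'
```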
Install the
local-persist volume plugin.
It enables user-managed volumes that
combine the best of bind mounts and Docker-managed volumes.
Volumes, which evolved from bind mounts, have several advantages over them.
They are safely shared between multiple running containers and
avoid many of the permission issues that occur with bind mounts.
On container initialization, volumes are conveniently prepopulated from the image, whereas
a bind mount leaves the host directory as it was: empty.
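The difference is easy to demonstrate with the default local driver; demo and /tmp/demo below are throwaway names:

```shell
# a volume mounted over /etc/ gets prepopulated with the image's files
docker volume create demo
docker run --rm --mount type=volume,source=demo,destination=/etc/ alpine ls /etc/
# the same bind mount leaves /etc/ as the host directory was: empty
mkdir /tmp/demo
docker run --rm --mount type=bind,source=/tmp/demo,destination=/etc/ alpine ls /etc/
# clean up
docker volume rm demo
rmdir /tmp/demo
```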
But the default local volume driver has one fatal flaw.
As volumes are managed by Docker,
the data resides in the Docker data directory and becomes coupled with the volume lifetime.
And there is no practical way to change its location in the host filesystem.
The plugin comes to the rescue.
Docker volumes created with the local-persist driver behave similarly to
persistent volumes in Kubernetes.
curl --show-error --fail --silent \
--location https://raw.githubusercontent.com/\
MatchbookLab/local-persist/master/scripts/install.sh | \
sudo bash
# alternatively, run the plugin from within a container.
# this option makes docker slow to start, because
# volumes are restored before containers and there is no timeout setting.
sudo mkdir --parents "$data_directory/local_persist"
docker run --detach --restart unless-stopped \
--name local_persist \
--network none \
--mount type=bind,source=/run/docker/plugins/,destination=/run/docker/plugins/ \
--mount \
type=bind,\
source="$data_directory/local_persist/",\
destination=/var/lib/docker/plugin-data/ \
--mount type=bind,source="$data_directory/",destination="$data_directory/" \
cwspear/docker-local-persist-volume-plugin
# create a volume using the plugin
sudo mkdir --parents "$data_directory/volume_1"
docker volume create \
--name volume_1 \
--driver local-persist \
--opt mountpoint="$data_directory/volume_1/"
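To see the plugin in action, write a file through the volume and read it back directly from the chosen host directory; file.txt is an arbitrary name:

```shell
docker run --rm \
--mount type=volume,source=volume_1,destination=/srv/ \
alpine sh -c 'echo hello > /srv/file.txt'
# the data lives in the user-chosen directory, outside the Docker data root
cat "$data_directory/volume_1/file.txt"
```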
Install and use Docker Compose to manage services.
Alternatively, consider migrating to Kubernetes even for a single machine.
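As a sketch, a minimal compose.yaml could reuse the volume created above; the nginx image, the port, and the container path are placeholders, and the Compose plugin is assumed to be installed (sudo apt install docker-compose-plugin):

```yaml
# edit compose.yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - volume_1:/usr/share/nginx/html/

volumes:
  volume_1:
    external: true # reuse the existing local-persist volume
```

Bring the service up with docker compose up --detach from the same directory and stop it with docker compose down.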