Docker for DevOps : Docker Volumes & Networks

Docker volumes: Imagine a secure storage locker, accessible from any container, anytime. That's what Docker volumes offer. They act as persistent storage units, independent of the container's lifespan. This means your data stays safe and sound even if you rebuild, restart, or delete a container. Need to share data between containers? Mount the volume, and voila! Seamless collaboration ensues.

Docker networks: Think of a highway connecting your containers, facilitating swift communication. Docker networks make this a reality. They provide a virtual network environment for your containers to interact seamlessly, allowing them to exchange data and communicate with each other regardless of their physical location. Imagine the possibilities - distributed applications, microservices, and multi-container deployments, all running smoothly thanks to Docker networks.
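The idea can be sketched with a user-defined bridge network (the network and container names below are illustrative, not from the tasks that follow):

```shell
# Create a user-defined bridge network
docker network create app-net

# Containers attached to the same network can reach each other by
# container name, thanks to Docker's built-in DNS
docker run -d --name web --network app-net nginx:alpine
docker run --rm --network app-net alpine ping -c 1 web

# Clean up
docker rm -f web
docker network rm app-net
```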

The benefits are:

  • Data persistence: Say goodbye to data loss and hello to persistent storage with Docker volumes. Your data remains safe and accessible, no matter what happens to your containers.

  • Data sharing: Collaboration is key, and Docker volumes facilitate it effortlessly. Share data between containers, allowing them to work together seamlessly.

  • Scalability: As your needs grow, so can your Docker volumes and networks. Scale them up or down with ease, ensuring they adapt to your ever-changing requirements.

  • Simplified deployments: Docker networks simplify complex deployments. With a single network, your containers can communicate effortlessly, reducing configuration headaches and streamlining application development.

For more details, visit the official documentation here:
A. Docker Volumes B. Docker Networks
The tasks I've solved below follow the 90DaysOfDevOps challenge shared by TrainwithShubham!

Task 1:

  • Create a multi-container docker-compose file which will bring UP and bring DOWN containers in a single shot ( Example - Create application and database container )

Let's create a multi-container Docker Compose file that will bring up and bring down containers in a single shot:

YAML

version: "3.8"

services:
  app:
    image: your-app-image:latest
    ports:
      - "3306:80"
    depends_on:
      - db
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: "your_password"
      MYSQL_DATABASE: "your_database"
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:

The YAML file above is explained below:

  • version: This specifies the Compose file format version (newer versions of Docker Compose treat this field as optional).

  • services: This section defines the services that will be run by Docker Compose.

  • app: This defines the app service.

    • image: This specifies the image to be used for the app service. You need to replace "your-app-image:latest" with the actual image name of your application.

    • ports: This maps host port 3306 to container port 80 (the format is "HOST:CONTAINER"), so the app is reachable on port 3306 of the host. Note that 3306 is an unusual choice for a web app, since it is also MySQL's default port; something like "8080:80" would be more conventional.

    • depends_on: This specifies that the app service depends on the db service, which means the db service must be running before the app service can start.

  • db: This defines the database service.

    • image: This specifies the image to be used for the database service.

    • environment: This sets the environment variables for the database service.

      • MYSQL_ROOT_PASSWORD: This sets the root password for the MySQL database.

      • MYSQL_DATABASE: This sets the name of the database to be created.

    • volumes: This mounts a volume named "db_data" to the container's /var/lib/mysql directory. This directory stores the database data, so it will persist even if the container is stopped or restarted.

  • volumes: This section defines the volume named "db_data".
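Because Compose places both services on a shared default network, the app can reach the database using the service name db as a hostname. A quick check and an illustrative connection string (built from the placeholder values above, and assuming the app image has a shell and getent):

```shell
# From inside the app container, "db" resolves to the database service
docker-compose exec app sh -c 'getent hosts db'

# The app would then connect with something like:
#   mysql://root:your_password@db:3306/your_database
```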

♦️To run this Docker Compose file:

  1. Save it as a file named docker-compose.yml.

  2. Open a terminal window and navigate to the directory that contains the file.

  3. Run the following command to bring up the services in detached mode:

docker-compose up -d

♦️To bring down the services, run the following command:

docker-compose down

This will stop and remove the containers and the default network. Note that named volumes are preserved by default; add the -v flag (docker-compose down -v) if you also want to remove the db_data volume.
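Putting it together, a sketch of the full lifecycle from the same directory looks like this:

```shell
docker-compose up -d     # create the network, the db_data volume, and both containers
docker-compose ps        # check that the app and db services are running
docker-compose down      # stop and remove the containers and the network
docker-compose down -v   # same, but also remove the named db_data volume
```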

Task 2:

  • 2.1 Learn how to use Docker Volumes and Named Volumes to share files and directories between multiple containers.

  • 2.2 Create two or more containers that read and write data to the same volume using the docker run --mount command.

  • 2.3 Verify that the data is the same in all containers by using the docker exec command to run commands inside each container.

  • 2.4 Use the docker volume ls command to list all volumes and docker volume rm command to remove the volume when you're done.

    2.1. Using Docker Volumes and Named Volumes:

    Docker volumes provide a way to share data between containers. They are persistent storage locations that are independent of the container's lifecycle. This means that data stored in a volume will persist even if the container is stopped, restarted, or deleted.

    There are two types of volumes:

    • Anonymous volumes: These are automatically created by Docker when a container is run with a volume mount. They are given a random identifier instead of a meaningful name and are typically used only by the container they were created for.

    • Named volumes: These are explicitly created by users with the docker volume create command. They have a name and can be mounted in any container.
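The difference is easy to see from the CLI (the container names and the demo-vol volume below are illustrative and separate from the shared-data volume used in the next step):

```shell
# Anonymous volume: Docker generates a random name for it
docker run -d --name anon-demo -v /data ubuntu sleep infinity
docker inspect -f '{{ (index .Mounts 0).Name }}' anon-demo   # prints a long random ID

# Named volume: created explicitly and reusable by name
docker volume create demo-vol
docker run -d --name named-demo -v demo-vol:/data ubuntu sleep infinity

# Clean up
docker rm -f anon-demo named-demo
docker volume rm demo-vol
```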

2.2. Creating Containers with a Shared Volume:

Here's how to create two or more containers that read and write data to the same volume using the docker run --mount command:

  1. Create a named volume:
    docker volume create shared-data
  2. Run two long-lived containers with the volume mounted, then write a file from the first one:

docker run -d --name container1 --mount source=shared-data,target=/data ubuntu sleep infinity
docker run -d --name container2 --mount source=shared-data,target=/data ubuntu sleep infinity
docker exec container1 sh -c 'echo "Hello from container 1" > /data/message.txt'

Explanation:

  • --name container1 and --name container2: These options give names to the containers for easier identification.

  • --mount source=shared-data,target=/data: This mounts the volume named "shared-data" at the path "/data" inside each container (the --mount equivalent of -v shared-data:/data).

  • -d: This runs the containers in detached mode, in the background.

  • ubuntu: This specifies the image to be used for the containers. You can use any image that you like.

  • sleep infinity: This keeps the containers running so that docker exec can be used on them in the next step.

  • sh -c 'echo "Hello from container 1" > /data/message.txt': This runs the command inside the container's shell, so the redirection writes "message.txt" into the "/data" directory on the volume. (A bare echo ... > file on the docker run line would be redirected by the host shell instead, writing to the host rather than the container.)

2.3. Verifying Data Consistency:

You can use the docker exec command to run commands inside each container and verify that the data is the same:

    docker exec container1 cat /data/message.txt
    docker exec container2 cat /data/message.txt

Both commands should print the message "Hello from container 1".

2.4. Listing and Removing Volumes:

  • Use the docker volume ls command to list all volumes:
    docker volume ls
  • A volume cannot be removed while a container is using it, so stop and remove any such containers first, then use the docker volume rm command to remove the volume when you're done:
    docker rm -f container1 container2
    docker volume rm shared-data

Benefits of Using Volumes:

  • Data persistence: Data stored in a volume persists even if the container is stopped, restarted, or deleted.

  • Data sharing: Volumes can be mounted in multiple containers, allowing them to share data.

  • Scalability: Volumes can be created, backed up, and removed as storage needs change, and their data lives on the host independently of any single container.

Takeaway

Docker volumes are a powerful tool for sharing data between containers. They allow you to persist data, share it between multiple containers, and scale your storage as needed.