How to Install and Use Docker Compose on latest CentOS

Docker Compose revolutionises the management of multi-container Docker applications, particularly on CentOS production servers. Rather than juggling numerous docker run commands and memorising environment variables and network settings, Compose lets you describe your complete application stack in a straightforward YAML file. This guide walks you through installing Docker Compose on the latest CentOS releases, setting up your first multi-container application, and addressing common obstacles you may encounter.

Understanding Docker Compose Architecture

Operating as a layer atop the Docker Engine, Docker Compose efficiently coordinates numerous containers via a declarative configuration method. In contrast to Kubernetes, which is designed for cluster management, Compose excels in orchestrating multi-container applications on a single host. The Compose file specifies services, networks, and volumes, with the Compose runtime converting these specifications into Docker API calls.

The architecture comprises three primary elements:

  • Compose CLI – handles docker-compose.yml files and interacts with the Docker daemon
  • Docker Engine – responsible for managing the container lifecycle
  • Container runtime – executes the actual application processes

When you execute docker-compose up, Compose generates isolated environments using project names and automatically facilitates service discovery through DNS resolution within custom bridge networks.
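
For example, assuming a stack with services named web and redis (the names used later in this guide), you can observe both behaviours from inside a running container; the project name defaults to the directory name unless overridden with -p:

# Sketch: project naming and built-in DNS resolution (service names assumed)
docker-compose -p demo up -d
docker-compose -p demo exec web getent hosts redis  # resolves the redis service by name
docker-compose -p demo down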

Requirements and System Prerequisites

Before you start the installation, make sure your CentOS system meets the necessary conditions. Docker Compose requires Docker Engine 1.13.1 or higher and runs best on a system with at least 2GB of RAM for moderate workloads.

# Check CentOS version
cat /etc/centos-release

# Verify available memory
free -h

# Check Docker installation
docker --version
systemctl status docker

If Docker has not yet been installed, you’ll need to do so first:

# Install Docker on CentOS
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker

# Include your user in the docker group (logout/login needed)
sudo usermod -aG docker $USER
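
After logging back in, you can confirm that your user can reach the Docker daemon without sudo; the hello-world image is Docker's standard smoke test:

# Confirm non-root access to the Docker daemon
docker run --rm hello-world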

How to Install Docker Compose on CentOS

There are various methods to install Docker Compose on CentOS, each with its own benefits. Here’s a summary:

Installation Method | Advantages                       | Drawbacks                           | Ideal For
Binary Download     | Latest version, straightforward  | Manual updates needed               | Production servers
pip Install         | Easy updates available           | Conflicts with Python dependencies  | Development environments
Package Manager     | Good integration with the system | Versions may be outdated            | Corporate settings

Method 1: Binary Installation (Preferred)

The most reliable way to install Docker Compose is by downloading the latest stable release directly from GitHub:

# Download the latest Docker Compose binary
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Grant execution permissions
sudo chmod +x /usr/local/bin/docker-compose

# Create symlink for easier access
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

# Confirm successful installation
docker-compose --version
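
If you need reproducible provisioning rather than whatever “latest” happens to be, the same URL pattern accepts an explicit release tag; the version below is only a placeholder, so substitute the release you have validated:

# Pin a specific release instead of "latest" (version shown is a placeholder)
COMPOSE_VERSION="v2.24.6"
sudo curl -L "https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose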

Method 2: Python pip Installation

If you prefer managing Docker Compose using Python’s pip tool:

# Install Python pip if not already available
sudo yum install python3-pip

# Install Docker Compose using pip
sudo pip3 install docker-compose

# Confirm successful installation
docker-compose --version
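
The package-manager route from the comparison table is also an option once Docker’s official repository is configured (as in the installation step above); it installs Compose V2 as a Docker CLI plugin, which is invoked as docker compose rather than docker-compose:

# Install the Compose V2 plugin from Docker's repository
sudo yum install docker-compose-plugin

# Confirm successful installation (note the space: plugin syntax)
docker compose version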

Building Your First Docker Compose Application

Let’s create a practical multi-tier web application to showcase Docker Compose functionality. This example will feature a Python Flask web app, a Redis cache, and a PostgreSQL database – a commonly used architectural pattern in real-world applications.

Start by establishing the project structure:

mkdir webapp-stack && cd webapp-stack
mkdir app

# Create the Flask application
cat > app/app.py << 'EOF'
from flask import Flask, jsonify
import redis
import psycopg2
import os

app = Flask(__name__)
redis_client = redis.Redis(host="redis", port=6379, decode_responses=True)

@app.route("https://Digitalberg.net/")
def hello():
    count = redis_client.incr('hits')
    return jsonify({
        'message': f'Hello! This page has been visited {count} times',
        'status': 'success'
    })

@app.route('/health')
def health():
    try:
        # Verify Redis connection
        redis_client.ping()
        # Verify PostgreSQL connection
        conn = psycopg2.connect(
            host="postgres",
            database=os.environ['POSTGRES_DB'],
            user=os.environ['POSTGRES_USER'],
            password=os.environ['POSTGRES_PASSWORD']
        )
        conn.close()
        return jsonify({'status': 'healthy', 'services': ['redis', 'postgres']})
    except Exception as e:
        return jsonify({'status': 'unhealthy', 'error': str(e)}), 500

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000, debug=True)
EOF

# Create requirements file
cat > app/requirements.txt << 'EOF'
Flask==2.3.3
redis==4.6.0
psycopg2-binary==2.9.7
EOF

# Create Dockerfile for the web app
cat > app/Dockerfile << 'EOF'
FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000

CMD ["python", "app.py"]
EOF
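
Before wiring the image into Compose, you can optionally confirm that it builds on its own; the tag name below is arbitrary and only used for this check:

# Optional: build the web image directly to catch Dockerfile errors early
docker build -t webapp-stack-web ./app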

Now, let’s create the docker-compose.yml file that brings everything together:

cat > docker-compose.yml << 'EOF'
version: '3.8'

services:
  web:
    build: ./app
    ports:
      - "5000:5000"
    environment:
      - POSTGRES_DB=webapp
      - POSTGRES_USER=webuser
      - POSTGRES_PASSWORD=webpass123
    depends_on:
      - redis
      - postgres
    restart: unless-stopped
    volumes:
      - ./app:/app
    networks:
      - webapp-network

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - webapp-network
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=webapp
      - POSTGRES_USER=webuser
      - POSTGRES_PASSWORD=webpass123
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - webapp-network
    restart: unless-stopped

volumes:
  redis-data:
  postgres-data:

networks:
  webapp-network:
    driver: bridge
EOF
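
Before starting anything, it is worth letting Compose validate the file and print the fully resolved configuration, which catches YAML and interpolation mistakes early:

# Validate the compose file and show the resolved configuration
docker-compose config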

Launching and Managing Your Application Stack

Having configured everything, let’s start the application stack and review key management commands:

# Start all services in detached mode
docker-compose up -d

# View active services
docker-compose ps

# Inspect service logs
docker-compose logs web
docker-compose logs -f redis  # Follow logs live

# Scale a specific service
docker-compose up -d --scale web=3

# Execute commands in active containers
docker-compose exec web python -c "import redis; print(redis.__version__)"
docker-compose exec postgres psql -U webuser -d webapp

# Stop all services
docker-compose stop

# Stop and remove containers and networks
docker-compose down

# Remove everything, including volumes
docker-compose down -v

Test your application by accessing the endpoints:

# Check the main endpoint
curl http://localhost:5000/

# Verify health status
curl http://localhost:5000/health

# Monitor Redis activity
docker-compose exec redis redis-cli monitor

Practical Use Cases and Applications

Docker Compose excels in several situations where single-host orchestration is practical:

  • Development environments – Recreate production infrastructure locally, ensuring consistent configurations across team members
  • CI/CD workflows – Quickly spin up testing environments for integration testing (see the snippet after this list)
  • Small to medium production setups – Single-server applications featuring multiple microservices
  • Edge computing – Lightweight orchestration suitable for IoT gateways and edge devices
  • Staging setups – Economical pre-production testing with equivalent service configurations
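
For the CI/CD case, a pipeline step can bring up the stack, run the test suite inside the web container, and tear everything down again; the test command is only illustrative and depends on your project:

# Illustrative CI snippet (test command assumed)
docker-compose up -d --build
docker-compose exec -T web python -m pytest
docker-compose down -v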

For a more complex real-world scenario, consider an e-commerce backend with monitoring:

cat > ecommerce-compose.yml << 'EOF'
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - api
      - frontend

  api:
    build: ./backend
    environment:
      - DATABASE_URL=postgresql://postgres:secret@postgres:5432/ecommerce
      - REDIS_URL=redis://redis:6379
      - JWT_SECRET=your-jwt-secret-here
    depends_on:
      - postgres
      - redis
    deploy:
      replicas: 2

  frontend:
    build: ./frontend
    environment:
      - API_BASE_URL=http://api:3000

  postgres:
    image: postgres:15
    environment:
      - POSTGRES_DB=ecommerce
      - POSTGRES_PASSWORD=secret
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data

  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana-data:/var/lib/grafana

volumes:
  postgres-data:
  redis-data:
  grafana-data:
EOF

Comparing Docker Compose with Alternatives

It’s essential to know when to use Docker Compose rather than other orchestration tools so you can make informed architectural choices:

Tool           | Ideal Use Case                | Learning Difficulty | Scalability | Ready for Production
Docker Compose | Applications on a single host | Low                 | Limited     | Small/medium applications
Kubernetes     | Multi-host arrangements       | High                | Excellent   | Enterprise-ready
Docker Swarm   | Simple clustering solutions   | Medium              | Good        | Moderate complexity
Podman Compose | Rootless containers           | Low                 | Limited     | Security-focused

Performance-wise, Docker Compose adds minimal overhead compared with running containers directly: Compose mainly issues Docker API calls, so commonly cited figures are around 2-3% CPU overhead and negligible memory impact from the orchestration layer itself.

Best Practices and Security Tips

Adhering to established practices will save hours of debugging and mitigate security risks:

Managing Configuration

# Use environment files for sensitive information
cat > .env << 'EOF'
POSTGRES_PASSWORD=your-secure-password-here
JWT_SECRET=your-jwt-secret
API_KEY=your-api-key
EOF

# Reference it in docker-compose.yml
services:
  app:
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    env_file:
      - .env

Security Recommendations

  • Never expose database ports to the host unless absolutely necessary
  • Specify particular image tags instead of using ‘latest’ to ensure reproducible builds
  • Run containers as non-root users whenever feasible
  • Implement health checks for all services
  • Utilise secrets management for production environments

# Example with security improvements
version: '3.8'

services:
  web:
    image: myapp:1.2.3  # Specific version
    user: "1000:1000"   # Non-root user
    read_only: true     # Read-only filesystem
    tmpfs:
      - /tmp
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
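
The file-based secret referenced above must exist on the host before you bring the stack up; Compose mounts it into the container under /run/secrets. A minimal sketch (the password value is a placeholder):

# Create the secret file referenced in the compose example
mkdir -p secrets
printf 'change-me' > secrets/db_password.txt
chmod 600 secrets/db_password.txt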

Enhancing Performance

# Optimise for production
services:
  web:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

Troubleshooting Common Problems

Even seasoned developers face typical issues with Docker Compose. Here are proven solutions:

Port Conflicts

# Error: Port already in use
# Solution: Identify what is using the port
sudo netstat -tulpn | grep :5432
sudo ss -tulpn | grep :5432

# Terminate the conflicting process or adjust ports in the compose file
services:
  postgres:
    ports:
      - "5433:5432"  # Use a different host port

Network Connectivity Issues

# Troubleshoot network connectivity between services
docker-compose exec web ping postgres
docker-compose exec web nslookup redis

# Inspect Docker networks
docker network ls
docker network inspect webapp-stack_default

# Force network recreation
docker-compose down
docker network prune
docker-compose up -d

Volume Permission Issues

# Resolve common volume permission concerns
# Method 1: Employ init containers
services:
  init:
    image: alpine
    command: chown -R 1000:1000 /data
    volumes:
      - app-data:/data
    
  app:
    depends_on:
      - init
    volumes:
      - app-data:/app/data

# Method 2: Specify user in Dockerfile
# In your Dockerfile:
RUN adduser -D -s /bin/sh appuser
USER appuser

Memory and Resource Issues

# Monitor resource consumption
docker-compose top
docker stats

# Set resource limits to prevent one service from consuming all resources
services:
  database:
    image: postgres:15
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M

For comprehensive troubleshooting and advanced configuration options, refer to the official Docker Compose documentation. The Docker Compose GitHub repository also has valuable examples and community-sourced solutions for more complex situations.

Docker Compose simplifies intricate multi-container applications into manageable, reproducible deployments. While it doesn’t replace comprehensive orchestration platforms such as Kubernetes, it excels in development setups and single-host production contexts. The path to success involves understanding its constraints, adhering to security recommendations, and maximising its strengths for suitable use cases. Start with straightforward configurations and layer on complexity as your application needs grow.


