
Docker Compose Self-Hosted Services Guide

11 min read
Written by Daniel J Glover

Practical perspective from an IT leader working across operations, security, automation, and change.

Published 7 April 2026


There is a certain satisfaction in running your own stack. Not because self-hosting is always the right choice, but because the discipline of deploying, securing, and maintaining your own services teaches you things that clicking through cloud consoles never does. You understand what a reverse proxy is doing when you have configured one. You understand secrets management when you have broken something by leaving credentials in a compose file.

I run a self-hosted stack at home and have built similar setups for small IT teams. This guide covers eight services worth running yourself, with working Docker Compose configurations, security notes, and an honest view of where self-hosting earns its keep versus where managed services are the right call.

Before you start, two things. First: Docker Compose is the right tool here. Not Kubernetes, not Nomad, not whatever the current trend is. For a small team or a serious home lab, Compose gives you readable declarative configuration, simple rollback, and low operational overhead. Second: put these services on a segmented network. A flat network where your documentation tool, your password manager, and your monitoring stack all share a broadcast domain with everything else is a security problem waiting to happen. Read my home lab network segmentation guide if you need the VLAN setup first.

The Foundations Before the Services

Every service in this list shares a common infrastructure requirement: a reverse proxy. Running each service on a different host port (:8080, :8443, :3000) is fine for development, but it is a maintenance problem at any scale. You end up with port-mapping spreadsheets, inconsistent TLS handling, and no central place to manage access.

Traefik solves this. It is a container-aware reverse proxy that integrates directly with Docker and manages TLS certificates automatically via Let's Encrypt. Every service below assumes Traefik is running as the entry point.

Here is a minimal Traefik setup you can build everything else on top of:

# traefik/docker-compose.yml
services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: unless-stopped
    command:
      - "--api.insecure=false"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.email=you@yourdomain.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
    networks:
      - proxy
 
networks:
  proxy:
    external: true

Create the proxy network once with docker network create proxy, then every service joins it with networks: [proxy] and gets a Traefik label for routing.

Security note: Mounting the Docker socket gives Traefik significant access to the host. On a shared or multi-tenant host, use the Traefik socket proxy pattern instead. For a single-operator home lab or small team server, the direct mount is acceptable.
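For reference, here is a sketch of that socket proxy pattern, assuming the tecnativa/docker-socket-proxy image (a common choice, not the only one). Traefik then reads the Docker API over TCP instead of mounting the socket itself, and the proxy only exposes the API endpoints you enable:

```yaml
# socket-proxy sketch - Traefik talks to the proxy, not the socket
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy:latest
    container_name: socket-proxy
    restart: unless-stopped
    environment:
      - CONTAINERS=1   # expose only the containers API, read-only by default
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - socket

  traefik:
    # ...same as above, but drop the docker.sock volume and point the
    # Docker provider at the proxy instead:
    command:
      - "--providers.docker.endpoint=tcp://socket-proxy:2375"
    networks:
      - socket
      - proxy

networks:
  socket:
    internal: true
  proxy:
    external: true
```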


1. Portainer - Container Management

If you are running more than two or three services, you want a management UI. Portainer gives you a browser-based view of running containers, volumes, networks, and compose stacks. It is not a replacement for knowing what your containers are doing, but it removes a lot of docker ps and log-tailing friction.

# portainer/docker-compose.yml
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - portainer_data:/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.portainer.rule=Host(`portainer.yourdomain.com`)"
      - "traefik.http.routers.portainer.entrypoints=websecure"
      - "traefik.http.routers.portainer.tls.certresolver=letsencrypt"
      - "traefik.http.services.portainer.loadbalancer.server.port=9000"
    networks:
      - proxy
 
volumes:
  portainer_data:
 
networks:
  proxy:
    external: true

Security note: Restrict Portainer to your management VLAN or VPN. Do not expose it directly to the internet. The admin account has root-equivalent access to everything the Docker socket can reach.
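Since Traefik fronts Portainer anyway, you can also enforce that restriction at the proxy layer. A sketch using Traefik v3's ipallowlist middleware, with an illustrative management subnet (adjust the range to your own VLAN):

```yaml
# Extra labels on the portainer service - only the management subnet gets through
labels:
  - "traefik.http.middlewares.mgmt-only.ipallowlist.sourcerange=192.168.10.0/24"
  - "traefik.http.routers.portainer.middlewares=mgmt-only"
```

This is defence in depth, not a substitute for firewall rules: if Traefik is misconfigured or bypassed, the network segmentation still has to hold.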


2. BookStack - Documentation and Knowledge Base

BookStack is the best self-hosted wiki for IT teams. It uses a Book/Chapter/Page hierarchy that maps well to how IT documentation actually works: a Book for each system or area, Chapters for subsections, Pages for specific procedures. It supports Markdown, has decent search, and the permissions model is granular enough to be useful.

# bookstack/docker-compose.yml
services:
  bookstack:
    image: lscr.io/linuxserver/bookstack:latest
    container_name: bookstack
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - APP_URL=https://docs.yourdomain.com
      - DB_HOST=bookstack-db
      - DB_PORT=3306
      - DB_USER=bookstack
      - DB_PASS=${DB_PASS}
      - DB_DATABASE=bookstack
    volumes:
      - ./config:/config
    depends_on:
      - bookstack-db
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.bookstack.rule=Host(`docs.yourdomain.com`)"
      - "traefik.http.routers.bookstack.entrypoints=websecure"
      - "traefik.http.routers.bookstack.tls.certresolver=letsencrypt"
    networks:
      - proxy
      - internal
 
  bookstack-db:
    image: mariadb:10.11
    container_name: bookstack-db
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASS}
      - MYSQL_DATABASE=bookstack
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=${DB_PASS}
    volumes:
      - ./mysql:/var/lib/mysql
    networks:
      - internal
 
networks:
  proxy:
    external: true
  internal:
    internal: true

Note the internal: true network for the database. The database container has no external access. Only the BookStack application can reach it. This is a pattern worth applying to every service that has a database component.

Security note: Store DB_PASS and DB_ROOT_PASS in a .env file that is excluded from version control. Never put credentials directly in compose files. For higher-security environments, use Docker secrets or a dedicated secrets manager.
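One step up from a .env file is Compose's file-based secrets. A sketch, assuming a ./secrets/ directory that is also kept out of version control; the official mariadb image reads _FILE variants of its environment variables, so the password never appears in the environment at all:

```yaml
# Fragment of the bookstack compose file using a file-based secret
services:
  bookstack-db:
    image: mariadb:10.11
    environment:
      # the image reads the password from this file at startup
      - MYSQL_PASSWORD_FILE=/run/secrets/db_pass
    secrets:
      - db_pass

secrets:
  db_pass:
    file: ./secrets/db_pass.txt
```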


3. Vaultwarden - Password Management

Vaultwarden is a community-maintained reimplementation of the Bitwarden server in Rust. It is significantly lighter than the official Bitwarden deployment and runs comfortably on a single core with 256MB RAM. For a small team sharing infrastructure credentials, it is the right answer.

# vaultwarden/docker-compose.yml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    environment:
      - DOMAIN=https://vault.yourdomain.com
      - SIGNUPS_ALLOWED=false
      - ADMIN_TOKEN=${ADMIN_TOKEN}
    volumes:
      - ./vw-data:/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vaultwarden.rule=Host(`vault.yourdomain.com`)"
      - "traefik.http.routers.vaultwarden.entrypoints=websecure"
      - "traefik.http.routers.vaultwarden.tls.certresolver=letsencrypt"
    networks:
      - proxy
 
networks:
  proxy:
    external: true

SIGNUPS_ALLOWED=false is essential. After creating the accounts you need, disable open registration entirely. There is no good reason to leave a password manager open to self-registration. Generate a strong ADMIN_TOKEN and keep it offline after initial setup.

Security note: Back up the vw-data volume regularly. If it goes, so do all stored credentials. Pair this with your backup strategy - my Proxmox backup and disaster recovery guide covers the underlying approach, which applies equally to Docker volumes.


4. Uptime Kuma - Monitoring

Uptime Kuma monitors URLs, TCP ports, Docker containers, and DNS entries, and sends alerts via Telegram, Slack, email, and a dozen other channels. It is not a replacement for proper observability tooling at scale, but for a self-hosted stack or a small team environment, it gives you exactly the signal you need: is the thing up, and if not, when did it go down?

# uptime-kuma/docker-compose.yml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - uptime-kuma:/app/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.uptime-kuma.rule=Host(`status.yourdomain.com`)"
      - "traefik.http.routers.uptime-kuma.entrypoints=websecure"
      - "traefik.http.routers.uptime-kuma.tls.certresolver=letsencrypt"
    networks:
      - proxy
 
volumes:
  uptime-kuma:
 
networks:
  proxy:
    external: true

The public status page feature is genuinely useful if you run services that other people depend on. It gives you a clean, shareable page that shows historical uptime without exposing your monitoring configuration.


5. Gitea - Self-Hosted Git

If you keep infrastructure-as-code, Ansible playbooks, compose files, or scripts in version control, you want a self-hosted Git server. Gitea is lightweight, fast, and has a GitHub-like interface without requiring a dedicated server with significant resources. The whole thing runs on less than 256MB RAM.

# gitea/docker-compose.yml
services:
  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    restart: unless-stopped
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=gitea-db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=${DB_PASS}
    volumes:
      - ./gitea:/data
    depends_on:
      - gitea-db
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.gitea.rule=Host(`git.yourdomain.com`)"
      - "traefik.http.routers.gitea.entrypoints=websecure"
      - "traefik.http.routers.gitea.tls.certresolver=letsencrypt"
    networks:
      - proxy
      - internal
 
  gitea-db:
    image: postgres:15
    container_name: gitea-db
    restart: unless-stopped
    environment:
      - POSTGRES_DB=gitea
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=${DB_PASS}
    volumes:
      - ./postgres:/var/lib/postgresql/data
    networks:
      - internal
 
networks:
  proxy:
    external: true
  internal:
    internal: true

Security note: Disable public registration after creating your account by setting DISABLE_REGISTRATION = true in the [service] section of app.ini. Add your SSH public keys to your account, and treat your compose files as the source of truth for your infrastructure. Commit every service configuration to a private repo and you have a proper baseline for disaster recovery.
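If you prefer to pin that setting declaratively rather than editing app.ini by hand, Gitea maps environment variables of the form GITEA__section__KEY onto its ini configuration. A sketch, extending the compose file above:

```yaml
# Fragment of the gitea service - same effect as editing app.ini
services:
  gitea:
    environment:
      - GITEA__service__DISABLE_REGISTRATION=true
```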


6. Grafana with Prometheus - Metrics and Dashboards

Uptime Kuma tells you when things are down. Grafana and Prometheus tell you what happened in the minutes before they went down. Together they give you time-series metrics, customisable dashboards, and alerting based on thresholds rather than binary up/down status.

# monitoring/docker-compose.yml
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.retention.time=30d"
    networks:
      - proxy
      - internal
 
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASS}
      - GF_USERS_ALLOW_SIGN_UP=false
    volumes:
      - grafana_data:/var/lib/grafana
    depends_on:
      - prometheus
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana.rule=Host(`metrics.yourdomain.com`)"
      - "traefik.http.routers.grafana.entrypoints=websecure"
      - "traefik.http.routers.grafana.tls.certresolver=letsencrypt"
    networks:
      - proxy
      - internal
 
volumes:
  prometheus_data:
  grafana_data:
 
networks:
  proxy:
    external: true
  internal:
    internal: true

Note that Prometheus is on the internal network only. It scrapes metrics from your services but has no business being publicly accessible. Only Grafana is exposed via Traefik. Pair this with Node Exporter on each host you want to monitor.
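The compose file above mounts a prometheus.yml that you still have to write. A minimal sketch, assuming Node Exporter on two illustrative hosts (the addresses are placeholders for your own machines) plus Prometheus scraping itself:

```yaml
# prometheus.yml - minimal scrape configuration
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: node
    static_configs:
      - targets:
          - "192.168.20.11:9100"   # placeholder host running Node Exporter
          - "192.168.20.12:9100"
```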


7. Nextcloud - File Sync and Collaboration

If your team shares files, Nextcloud is the self-hosted alternative to Google Drive or SharePoint. For a small IT team, the value is control rather than cost: your files stay on your hardware, you set the retention policy, and you know exactly who has access to what.

# nextcloud/docker-compose.yml
services:
  nextcloud:
    image: nextcloud:28
    container_name: nextcloud
    restart: unless-stopped
    environment:
      - POSTGRES_HOST=nextcloud-db
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=${DB_PASS}
      - NEXTCLOUD_ADMIN_USER=admin
      - NEXTCLOUD_ADMIN_PASSWORD=${ADMIN_PASS}
      - NEXTCLOUD_TRUSTED_DOMAINS=cloud.yourdomain.com
    volumes:
      - nextcloud_data:/var/www/html
    depends_on:
      - nextcloud-db
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`cloud.yourdomain.com`)"
      - "traefik.http.routers.nextcloud.entrypoints=websecure"
      - "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"
    networks:
      - proxy
      - internal
 
  nextcloud-db:
    image: postgres:15
    container_name: nextcloud-db
    restart: unless-stopped
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=${DB_PASS}
    volumes:
      - nextcloud_db:/var/lib/postgresql/data
    networks:
      - internal
 
volumes:
  nextcloud_data:
  nextcloud_db:
 
networks:
  proxy:
    external: true
  internal:
    internal: true

Security note: Run the Nextcloud security scan at https://scan.nextcloud.com after deployment. Common issues include missing security headers and out-of-date PHP modules. Enable two-factor authentication for all accounts before putting anything sensitive in there.
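Missing HSTS headers are one of the findings the scan flags most often, and you can fix that at the proxy rather than inside Nextcloud. An illustrative sketch using Traefik's headers middleware, added alongside the router labels above:

```yaml
# Extra labels on the nextcloud service - HSTS via a Traefik headers middleware
labels:
  - "traefik.http.middlewares.nextcloud-headers.headers.stsSeconds=15552000"
  - "traefik.http.middlewares.nextcloud-headers.headers.stsIncludeSubdomains=true"
  - "traefik.http.routers.nextcloud.middlewares=nextcloud-headers"
```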


8. Homepage - Service Dashboard

When you are running eight services, you want a single place to see them all. Homepage is a customisable application dashboard that integrates with Docker to pull running container status automatically, plus widgets for services that expose an API.

# homepage/docker-compose.yml
services:
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    container_name: homepage
    restart: unless-stopped
    volumes:
      - ./config:/app/config
      - /var/run/docker.sock:/var/run/docker.sock:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homepage.rule=Host(`home.yourdomain.com`)"
      - "traefik.http.routers.homepage.entrypoints=websecure"
      - "traefik.http.routers.homepage.tls.certresolver=letsencrypt"
    networks:
      - proxy
 
networks:
  proxy:
    external: true

The config is YAML-based and stored in ./config. Add your services, bookmarks, and widgets there. The Docker integration reads container labels to show running state automatically.
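As a starting point, here is a sketch of Homepage's services.yaml format using the hostnames from this guide - groups are top-level list items, services nest beneath them, and icons and API widgets are optional extras:

```yaml
# config/services.yaml - illustrative starting point
- Infrastructure:
    - Portainer:
        href: https://portainer.yourdomain.com
        description: Container management

- Monitoring:
    - Uptime Kuma:
        href: https://status.yourdomain.com
        description: Service uptime
    - Grafana:
        href: https://metrics.yourdomain.com
        description: Metrics dashboards
```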


The Security Baseline That Actually Matters

Compose configurations look clean in a blog post. The real discipline is in the operational details. Here is the baseline I apply to every self-hosted stack:

Secrets stay out of compose files. Use a .env file for environment-specific values. Add .env to your .gitignore immediately. If you are committing credentials to version control, you have already made the mistake.

Separate networks by trust level. Public-facing services on the proxy network. Database containers on internal-only networks. No container should be able to reach a database it has no reason to talk to.

Run scheduled security scanning. Trivy and Docker Bench for Security are both worth running regularly. I have written about automated security scanning for small IT teams if you want the broader framework.

Back up volumes, not just images. The image is replaceable. Your BookStack content, your Vaultwarden data, and your Gitea repositories are not. Back up named volumes to a separate location. Test restores. If you have not done a test restore, you have not got a working backup.

Pin image versions in production. image: nextcloud:28 is safer than image: nextcloud:latest in a production context. Latest can introduce breaking changes on pull. Review changelogs before updating and test in a non-production environment first.


When Self-Hosting Is Not the Answer

This guide covers services where the self-hosting tradeoff is favourable: documentation, monitoring, version control, file sync. The operational overhead is low, the data is yours, and you build real understanding of the systems involved.

Self-hosting is not the right answer for everything. Email, external DNS, and anything that needs genuine high availability all require infrastructure complexity that is hard to justify unless you are specifically building that capability. The value of self-hosting is not avoiding all managed services - it is being intentional about which ones you run yourself and understanding what you are taking on when you do.

For IT leaders thinking about self-hosted tooling at team scale, the questions that matter are: who maintains it when you are not there, what is the recovery procedure when it breaks, and is the operational overhead justified by the control you get in return? The services in this guide score well on all three. They are maintainable by a single engineer, the compose configurations are readable by anyone with basic Docker knowledge, and the data control they provide is real.

If you are running these on Proxmox, pair this guide with the Proxmox backup and disaster recovery setup to make sure your volumes have a recovery path. The best self-hosted stack is one that survives a bad week.

About the author

Daniel J Glover

IT Leader with experience spanning IT management, compliance, development, automation, AI, and project management. I write about technology, leadership, and building better systems.
