Pihole, from 0 to home «production» with docker-compose

I will be installing Ubuntu Server 20.04 as my base distro for this. For ease of use, during the install, select the option to install the OpenSSH server.

Once installed, log in and…

1. Update all the things.
# apt update
# apt upgrade

2. Set up automatic updates: https://linoxide.com/enable-automatic-updates-on-ubuntu-20-04/

3. Set up log2ram (Optional, recommended for Raspberry Pi’s using SDCards): https://github.com/azlux/log2ram

4. Install docker and docker-compose: https://pepdoesthings.wordpress.com/2021/08/12/starting-out-with-docker-and-docker-compose-installation-in-ubuntu-debian-based/

5. Set up the pihole+unbound docker container. As your user, create a docker directory; inside it you will need a config directory and two files: docker-compose.yml and an .env file.
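The layout can be created in one go; a minimal sketch, assuming the docker directory lives in your home (adjust the paths to your own user):

```shell
# Compose project root plus the two config mounts used below
mkdir -p ~/docker/config/pihole/etc-pihole
mkdir -p ~/docker/config/pihole/etc-dnsmasq.d
# Empty files to fill in with the contents shown next
touch ~/docker/docker-compose.yml ~/docker/.env
```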

docker-compose.yml

version: '3.6'
services:
  pihole:
    container_name: pihole
    image: cbcrowe/pihole-unbound:2021.12.1
    hostname: piholevm
    domainname: piholevm.local
    environment:
      ServerIP: ${ServerIP}
      TZ: ${TZ}
      WEBPASSWORD: ${WEBPASSWORD}
      REV_SERVER: ${REV_SERVER}
      REV_SERVER_TARGET: ${REV_SERVER_TARGET}
      REV_SERVER_DOMAIN: ${REV_SERVER_DOMAIN}
      REV_SERVER_CIDR: ${REV_SERVER_CIDR}
      DNS1: 127.0.0.1#5335 # Hardcoded to our Unbound server
      DNS2: 127.0.0.1#5335 # Hardcoded to our Unbound server
      DNSSEC: "true" # Enable DNSSEC
    network_mode: host
    volumes:
      - ${DOCKERDIR}/config/pihole/etc-pihole:/etc/pihole/
      - ${DOCKERDIR}/config/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/
    restart: unless-stopped

.env

DOCKERDIR=/home/pep/docker
USERHOME=/home/pep

#PIHOLE
# Your own pihole server IP. Keep the comment on its own line: docker-compose does not reliably strip inline comments from .env values.
ServerIP=192.168.0.15
TZ=Europe/Madrid
WEBPASSWORD=---the web password for your pihole---
REV_SERVER=true
REV_SERVER_DOMAIN=local
REV_SERVER_TARGET=192.168.0.1
REV_SERVER_CIDR=192.168.0.0/24

Now if you try to start your pihole it will start, but it won’t fully work, because Ubuntu already ships with a DNS service (systemd-resolved) listening on port 53, so we need to disable it first (or change its port):

$ sudo systemctl stop systemd-resolved
$ sudo systemctl disable systemd-resolved
$ sudo systemctl mask systemd-resolved

Ref: https://askubuntu.com/questions/191226/dnsmasq-failed-to-create-listening-socket-for-port-53-address-already-in-use
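You can verify that port 53 is actually free before starting the container. The usual tool is sudo ss -tulpn, which also shows the owning process; the sketch below only relies on /proc so it works without extra packages:

```shell
# Port 53 in hex is 0035; any listener on it shows up in the kernel socket tables.
if grep -q ':0035 ' /proc/net/tcp /proc/net/tcp6 /proc/net/udp /proc/net/udp6 2>/dev/null
then
    PORT53=busy
else
    PORT53=free
fi
echo "port 53 is $PORT53"
```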

Once that is done, you can start your docker-compose containers (run this from your docker directory):

$ docker-compose up -d

6. Optional, if this is your secondary pihole, you can sync it with your first pihole: https://pepdoesthings.wordpress.com/2022/02/09/syncing-two-pihole-instances/

7. Set up automatic upgrades for your containers: https://pepdoesthings.wordpress.com/2022/02/08/update-your-docker-compose-containers-on-a-schedule/

8. Arrange crontab to update everything periodically:

0 4 * * 6 /home/pep/bin/update.containers.sh  >/dev/null 2>&1
0 5 * * 6 /home/pep/bin/piholesync.sh  >/dev/null 2>&1

Enable Passwordless Sudo For A Specific User in Linux

Just add a file (most filenames work, but note that sudo ignores files whose names contain a dot or end in ~; I personally just named it with the username) with the line

yourusername ALL=(ALL) NOPASSWD: ALL

to /etc/sudoers.d/

This works because in /etc/sudoers there is a line

#includedir /etc/sudoers.d

That will include whatever you throw in the directory (the leading # here is part of the includedir directive, not a comment; recent sudo versions spell it @includedir).
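A sketch of the whole thing, staged through /tmp so the syntax can be checked before the file ever touches /etc/sudoers.d («pep» is a stand-in username):

```shell
# Build the drop-in in a temp file first.
printf '%s\n' 'pep ALL=(ALL) NOPASSWD: ALL' > /tmp/pep-sudoers
cat /tmp/pep-sudoers
# On the real box, validate the syntax and install it with the right mode (needs root):
#   visudo -cf /tmp/pep-sudoers && install -m 0440 /tmp/pep-sudoers /etc/sudoers.d/pep
```

A syntax error in a sudoers file can lock you out of sudo entirely, which is why checking with visudo -cf before installing is worth the extra step.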

Tested in Ubuntu 20.04.

Reference: https://timonweb.com/devops/how-to-enable-passwordless-sudo-for-a-specific-user-in-linux/

Syncing two pihole instances.

Why would you need two piholes?

I run a combo of pihole+unbound, which means that my pihole, besides blocking some bad stuff, also resolves DNS locally. What happens if your DNS server stops working? Your whole internet goes down.

The solution, then, is to add more piholes. My current setup is one pihole (in a Raspberry Pi) and a second pihole (in a VM; in Unraid, if that matters). The problem is that if you make your own modifications to your primary pihole, those modifications won’t be in your second pihole. Hence, syncing the two pihole instances.

Here is the script; it probably won’t be as complete as Gravity Sync, but I wanted to give this a go myself anyway!

#!/bin/bash
BKP_DATE=$(date +%Y%m%d_%H%M%S)
PIHOLE_ORIGIN_IP=192.168.0.10
PIHOLE_ORIGIN_USER=pi
LOCAL_USER=pep
PIHOLE_LOCAL_CONFIG=/home/pep/docker/config/pihole
GRAVITYDB_ORIGIN_PATH=/home/pi/docker/config/pihole/etc-pihole/gravity.db
GRAVITYDB_LOCAL_PATH=/home/pep/docker/config/pihole/etc-pihole/gravity.db
GRAVITYDB_LOCAL_BACKUP_PATH=/home/pep/docker/config/gravity.db.bkp.${BKP_DATE}
CNAMECONF_ORIGIN_PATH=/home/pi/docker/config/pihole/etc-dnsmasq.d/05-pihole-custom-cname.conf
CNAMECONF_LOCAL_PATH=/home/pep/docker/config/pihole/etc-dnsmasq.d/05-pihole-custom-cname.conf
echo "stoping pihole"
docker stop pihole
sudo chown -R $LOCAL_USER $PIHOLE_LOCAL_CONFIG
sudo chgrp -R $LOCAL_USER $PIHOLE_LOCAL_CONFIG
cp -v $GRAVITYDB_LOCAL_PATH $GRAVITYDB_LOCAL_BACKUP_PATH
echo "syncing pihole's…"
scp $PIHOLE_ORIGIN_USER@$PIHOLE_ORIGIN_IP:$GRAVITYDB_ORIGIN_PATH $GRAVITYDB_LOCAL_PATH
scp $PIHOLE_ORIGIN_USER@$PIHOLE_ORIGIN_IP:$CNAMECONF_ORIGIN_PATH $CNAMECONF_LOCAL_PATH
echo "restarting pihole"
docker start pihole

What this does is basically copy your primary pihole’s gravity.db database file onto your secondary pihole (the one you need to run the script from), and the same with your custom CNAME conf file.

So far it is working fine for my needs.

You will need to run this from a passwordless sudo user, with ssh keys set up so scp can reach the origin pihole without a password.

Update your docker-compose containers on a schedule

Create a script update.containers.sh in your bin/ folder (it can be your user’s own bin folder if it is in your PATH).

#!/bin/bash
# your docker folder path
DOCKER_DIR=/home/pep/docker
cd $DOCKER_DIR \
&& docker-compose pull \
&& docker-compose up --force-recreate --build -d \
&& docker image prune -f

And then add it to your crontab:

0 5 * * 6 /home/pep/bin/update.containers.sh  >/dev/null 2>&1

This would run the script at 5am on Saturdays. If you need help with crontab expressions you can check something like https://crontab.guru/
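If you would rather keep the output than discard it, the redirection in the cron line can point at a log file instead (the path is just an example):

```
0 5 * * 6 /home/pep/bin/update.containers.sh >> /home/pep/update.containers.log 2>&1
```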

Of course I would not update production critical services/containers without supervision.

Starting out with docker and docker-compose. Installation in Ubuntu/Debian-based distros.

Note: all meant to be done as root or with root privileges.

Update everything.

# apt-get update
# apt-get upgrade

Install prerequisites:

# apt install apt-transport-https ca-certificates curl software-properties-common

GPG key and official docker repo:

# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

Note: apt-key is deprecated on recent releases; there you would place the key under /etc/apt/keyrings and reference it with a signed-by option in the repo entry instead.

For Ubuntu Focal Fossa (20.04.x)
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"

For Debian Bullseye (Debian 11)
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian bullseye stable"

For Debian Bookworm (Debian 12)
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian bookworm stable"

# apt update

Install docker and docker-compose

# apt-get install docker-ce docker-compose

Post install tasks to avoid using sudo or root to manage docker containers:

# usermod -aG docker pep

Or do it for your user using sudo:

$ sudo usermod -aG docker $USER

Log out, log back in so all the permissions and stuff are applied.

Then we check if docker is running properly. Do this from your user, not root.

$ docker run hello-world

And done!


Update all your (debian) machines at once with Ansible

Ansible can be used for a bunch of stuff. As I am getting my feet wet with it, the first task is something I don’t do as often as I should, out of laziness.

You will need a hosts file:

[pro]
10.0.0.5       ansible_connection=ssh
10.0.0.6       ansible_connection=ssh

[prepro]
10.0.0.7       ansible_connection=ssh
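Connection settings can also live in the inventory itself; for example, fixing the remote user per host instead of passing it on the command line (hypothetical values):

```
[prepro]
10.0.0.7       ansible_connection=ssh ansible_user=root
```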

Then you will need the playbook file, which is the recipe that defines what will be done. In this case I named this file apt-update-prepro.yml. The hosts line of the playbook could be «all» if we wanted to apply this playbook to all of our servers.

---
- hosts: prepro
  tasks:
    - name: Update cache
      apt:
        update_cache: yes

    - name: Upgrade all packages on servers
      apt:
        name: "*"
        state: latest

We need to have added ssh keys so we can connect to our servers from our working machine without passwords.

And finally the command that puts everything together:

ansible-playbook apt-update-prepro.yml -i hosts --user=root

Here we can define the user we are using with the --user parameter.