I have nine Docker stacks running on my home lab server (affectionately named "LaRussa"). Immich for photos, Pi-hole for DNS, Teslamate for tracking my Tesla, Obsidian sync, code servers, web UIs: the whole ecosystem. They're all important. They all need updates. And they're all a pain to manage manually.
So I automated it. And I'm doing it at 3:17 AM on Tuesday and Saturday mornings. Yes, 3:17. Not 3:00. There's a reason for that oddness, and it reveals something interesting about how to think about home automation.
The Problem: Nine Stacks, Nine Headaches
Docker Compose is great. You run `docker compose pull` to grab the latest images, then `docker compose up -d` to restart with those updates. Takes about 2 minutes per stack if you do it manually.
Nine stacks, manually updated, is 18 minutes. Not terrible. Except I have to remember to do it. I have to remember which stacks I've updated. I have to check if something broke. I have to document what happened.
That's not managing infrastructure. That's creating friction.
The solution was obvious: automate it. But here's the thing about home automation: you don't just automate the task. You automate it *intelligently*. And that's where the 3:17 AM comes in.
Why 3 AM? And Why 3:17?
First, why 3 AM? Because that's when nobody's using the server. I'm asleep. My family's asleep. If an update causes a temporary glitch or takes a few extra minutes, it doesn't matter. And if something goes catastrophically wrong, it breaks while nobody's relying on the server, and I can fix it in the morning before anyone notices.
Now, why 3:17 AM and not 3:00 AM? Because 3:00 is when every other scheduled task in the world happens. Backups, syncs, database maintenance: the internet has a thousand cron jobs scheduled for midnight, 1 AM, 2 AM, 3 AM. By scheduling at 3:17, I'm not competing for system resources with everything else.
It's a tiny optimization that probably doesn't matter. But it shows the thinking: automation isn't just about delegating tasks. It's about delegating them smartly.
The Setup: Cron + Bash + Logging
Here's what I built:
Cron Job (runs Tuesdays and Saturdays at 3:17 AM)
```bash
17 3 * * 2,6 /path/to/update-docker-stacks.sh
```
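If crontab's field order isn't second nature, the schedule decodes like this (standard crontab syntax, nothing specific to my setup):

```bash
# minute  hour  day-of-month  month  day-of-week (0 = Sunday)
#   17     3         *          *        2,6     => 3:17 AM on Tue (2) and Sat (6)
```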
Bash Script (does the actual work)
```bash
#!/bin/bash

STACKS=(
  "bigkel"
  "homepage"
  "immich"
  "obsidian"
  "pihole"
  "teslamate"
  "unifi"
  "vscode"
  "web-ui"
)

LOG_FILE="/path/to/logs/docker-update-$(date +%Y-%m-%d).log"
FAILED=0

echo "Starting Docker stack updates at $(date)" >> "$LOG_FILE"

for stack in "${STACKS[@]}"; do
  # Skip (and log) rather than running compose from the wrong directory
  if ! cd "/Volumes/LaRussa/$stack"; then
    echo "ERROR: directory for $stack not found, skipping" >> "$LOG_FILE"
    FAILED=1
    continue
  fi

  echo "Updating $stack..." >> "$LOG_FILE"
  if docker compose pull >> "$LOG_FILE" 2>&1 \
      && docker compose up -d >> "$LOG_FILE" 2>&1; then
    echo "$stack complete at $(date)" >> "$LOG_FILE"
  else
    echo "ERROR: $stack failed to update" >> "$LOG_FILE"
    FAILED=1
  fi
done

# Only claim success if every stack actually updated
if [ "$FAILED" -eq 0 ]; then
  echo "All stacks updated successfully at $(date)" >> "$LOG_FILE"
else
  echo "Finished with errors at $(date)" >> "$LOG_FILE"
fi
```
Obsidian Integration
After the updates finish, a separate script creates a timestamped entry in my Obsidian vault with the results. Did any stack fail to update? Did all images pull cleanly? What time did it finish? All logged, all searchable.
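That hook is simple enough to sketch. Here's a minimal version; the vault path, folder name, and log location are assumptions you'd swap for your own:

```bash
#!/bin/bash
# Hypothetical post-update hook: distill today's log into a markdown note
# that Obsidian will index. VAULT and LOG_FILE paths are placeholders.
VAULT="$HOME/Obsidian/vault"
LOG_FILE="/tmp/docker-update-$(date +%Y-%m-%d).log"
NOTE="$VAULT/Docker Updates/$(date +%Y-%m-%d).md"

mkdir -p "$(dirname "$NOTE")"
{
  echo "# Docker stack update, $(date '+%Y-%m-%d %H:%M')"
  echo
  # Obsidian notes are plain markdown, so the interesting log lines
  # can be dropped in directly
  grep -E 'Updating|complete|ERROR' "$LOG_FILE" 2>/dev/null \
    || echo "(no log found for today)"
} > "$NOTE"
```

Because the note is just a dated markdown file, Obsidian's search and daily-notes features pick it up with no extra plumbing.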
What Could Go Wrong? (And How I Protect Against It)
Automation is beautiful until it isn't. Here are the failure modes I thought about:
**An update breaks a service**
Solution: All nine services are non-critical to my actual living situation. If Immich breaks, I lose photo sync but the house still works. If Teslamate breaks, Tesla tracking stops but the Tesla still charges. The worst-case scenario is "I have to manually restart a container in the morning." That's acceptable.
**Partial failure (some services update, some don't)**
Solution: Logging. The bash script logs everything. If the vscode service times out but obsidian updates fine, I'll see exactly what happened. I can manually fix the vscode stack and note it in my logs.
**Network is down or Docker daemon is hung**
Solution: Error handling and timeout detection. If `docker compose pull` takes more than 5 minutes per stack (it shouldn't), something's wrong. The script detects this and logs it. I wake up, investigate, and fix it.
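One simple way to get that behavior is coreutils' `timeout`, which kills a command and exits with code 124 when the limit is hit. A sketch, assuming GNU coreutils is available (on macOS it's `gtimeout` from Homebrew's coreutils package):

```bash
#!/bin/bash
# Wrap a command with a hard time limit; `timeout` returns 124 when it
# has to kill the command. The 5-minute default mirrors the threshold above.
PULL_TIMEOUT="${PULL_TIMEOUT:-300}"

run_with_timeout() {
  timeout "$PULL_TIMEOUT" "$@"
}

# Inside the update loop it would look roughly like this
# (docker call shown for context only):
#   if ! run_with_timeout docker compose pull >> "$LOG_FILE" 2>&1; then
#     echo "ERROR: pull for $stack timed out or failed" >> "$LOG_FILE"
#   fi
```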
**Disk space fills up during updates**
Solution: Check disk space before starting. If free space is below 20%, abort the update and alert me. Don't create a catastrophe trying to be automatic.
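The pre-flight check is a few lines of `df` parsing. A sketch; the 20% threshold matches the rule above, and `TARGET` is a placeholder for whatever volume your stacks live on:

```bash
#!/bin/bash
# Abort early if the target volume is low on space. Threshold and path
# are assumptions; point TARGET at your own volume.
MIN_FREE_PERCENT=20
TARGET="${TARGET:-/}"

free_percent() {
  # df -P prints POSIX-format output; field 5 of line 2 is "Use%"
  local used
  used=$(df -P "$1" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
  echo $(( 100 - used ))
}

if [ "$(free_percent "$TARGET")" -lt "$MIN_FREE_PERCENT" ]; then
  echo "Aborting update: under ${MIN_FREE_PERCENT}% free on $TARGET" >&2
  exit 1
fi
```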
Notice a pattern? Automation + monitoring = safety. Automation alone = disaster waiting to happen.
The Real Benefit: Consistency and Data
Yes, it saves me 18 minutes twice a week. That's nice but not life-changing. The real benefit is consistency and visibility.
By running updates on a predictable schedule, I know:
- When services were last updated
- Which updates caused problems (if I track them)
- Whether my automation is actually working
- What the failure modes look like when they happen
I also have a month-long record in my logs. "Did immich ever fail to update?" I can search my logs and find out. "How often do updates actually happen?" I can see the pattern.
That visibility is more valuable than the time saved.
Scaling to More Stacks
If I add a 10th stack, I just add it to the array. The automation scales effortlessly. If I wanted to update every night instead of twice a week, I change one line in cron. If I wanted to update different stacks on different schedules (maybe teslamate needs more frequent updates), I can create multiple cron jobs.
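Splitting schedules is just more crontab lines. A hypothetical layout (the second script name is illustrative; it would loop over its own, smaller stack list):

```bash
# Most stacks: 3:17 AM Tuesday and Saturday
17 3 * * 2,6 /path/to/update-docker-stacks.sh
# Hypothetical nightly run for teslamate alone, offset a few minutes
23 3 * * * /path/to/update-teslamate.sh
```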
The beautiful part of automating with scripts is that small changes don't require architectural overhauls.
The Bigger Lesson: Smart Automation
I could have set this to run at 3 AM every day. I could have created a complex monitoring system with alerts and rollbacks. I could have spent weeks building the "perfect" automation.
Instead, I built something simple that runs twice a week at an off-peak time, logs everything, and handles common failure modes. 80/20 rule: I got 80% of the benefit with 20% of the complexity.
That's the lesson for any home automation: don't automate everything. Don't automate perfectly. Automate the things that save time and cause friction, and do it in the simplest way that still has reasonable safety guardrails.
Also, weird cron times (3:17 instead of 3:00) are underrated. Small optimizations like that add up.
How to Do This Yourself
- List all your Docker stacks
- Create a bash script that updates each one
- Test it manually first (seriously, don't skip this)
- Set up a cron job to run it at an off-peak time
- Add logging so you can see what happened
- Monitor the logs for the first month
- Once you trust it, mostly forget about it
That's it. You've automated one of the most tedious parts of managing a home lab.
Final Thoughts
Home automation is best when it removes friction without requiring constant attention. This Docker automation does exactly that. It runs without my involvement, logs results I can review, and fails gracefully if something goes wrong.
Is it necessary? No. Is it nice? Yes. Does it make my life objectively better? Debatable, but I'd argue yes.
And that's the whole point of home automation, really. Making life incrementally better through small, thoughtful systems.