Backups
Backup strategies for oCore, including database backups, volume backups, and disaster recovery procedures.
A comprehensive backup strategy protects against data loss from hardware failure, human error, or corruption. oCore stores critical data in PostgreSQL and Docker volumes that both need regular backups.
What to Back Up
| Data | Location | Method |
|---|---|---|
| PostgreSQL database | ocore_postgres_data volume | pg_dump or continuous archiving |
| Backend data | ocore_backend_data volume | Volume snapshot or file copy |
| Environment file | .env | Copy to secure storage |
| TLS certificates | /etc/letsencrypt/ or Caddy data | Copy to secure storage |
| Custom configuration | postgresql.conf, Nginx config | Version control or copy |
The PostgreSQL database is the most critical asset. It contains all users, organizations, servers, instances, projects, deployments, and job history.
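The volume names in the table are the Compose defaults; depending on how the stack was brought up they may carry a different project prefix, so confirm what actually exists on the host before scripting against them:

```bash
# Confirm the backup-relevant volumes exist on this host
docker volume ls --filter name=ocore
```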
Database Backups with pg_dump
Manual Backup
```bash
# Dump the entire database to a compressed file
docker compose -f docker-compose.prod.yml exec -T postgres \
  pg_dump -U ocore -Fc ocore > ocore_backup_$(date +%Y%m%d_%H%M%S).dump

# Verify the backup file (pg_restore must be installed on the host for this check)
pg_restore --list ocore_backup_*.dump | head -20
```

The -Fc flag produces a custom-format archive that supports selective restore and compression.
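Because the archive is custom format, you can inspect it or restore individual objects instead of the whole database. A minimal sketch, using the backup file name from the restore example below; the users table is only an illustration:

```bash
# List the contents of the archive
pg_restore --list ocore_backup_20260226_020000.dump

# Restore a single table into the running database (may fail if other tables
# reference it; shown only to illustrate selective restore)
docker compose -f docker-compose.prod.yml exec -T postgres \
  pg_restore -U ocore -d ocore --clean -t users < ocore_backup_20260226_020000.dump
```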
Automated Daily Backups
Create a backup script:
```bash
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/opt/ocore/backups"
RETENTION_DAYS=30
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/ocore_${TIMESTAMP}.dump"

mkdir -p "${BACKUP_DIR}"

# Create the database dump
docker compose -f /opt/ocore/docker-compose.prod.yml exec -T postgres \
  pg_dump -U ocore -Fc ocore > "${BACKUP_FILE}"

# Verify the backup is not empty
if [ ! -s "${BACKUP_FILE}" ]; then
  echo "ERROR: Backup file is empty" >&2
  exit 1
fi

echo "Backup created: ${BACKUP_FILE} ($(du -h "${BACKUP_FILE}" | cut -f1))"

# Remove backups older than retention period
find "${BACKUP_DIR}" -name "ocore_*.dump" -mtime +${RETENTION_DAYS} -delete

echo "Cleanup complete. Remaining backups:"
ls -lh "${BACKUP_DIR}"/ocore_*.dump
```

Schedule with cron:
```bash
# Run daily at 2:00 AM
0 2 * * * /opt/ocore/backup.sh >> /var/log/ocore-backup.log 2>&1
```
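The script must be executable for cron to run it; a one-time setup step, assuming the script is saved at the path used in the cron entry:

```bash
# Make the backup script executable, then add the entry above via crontab -e
chmod +x /opt/ocore/backup.sh
```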
Restoring from a Backup

```bash
# Stop the backend to prevent writes during restore
docker compose -f docker-compose.prod.yml stop backend frontend

# Drop and recreate the database (connect to the postgres maintenance database,
# since a database cannot be dropped while you are connected to it)
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -d postgres -c "DROP DATABASE ocore;"
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -d postgres -c "CREATE DATABASE ocore OWNER ocore;"

# Restore from the backup file
docker compose -f docker-compose.prod.yml exec -T postgres \
  pg_restore -U ocore -d ocore < ocore_backup_20260226_020000.dump

# Restart all services
docker compose -f docker-compose.prod.yml up -d
```
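Before pointing users back at the application, spot-check that the restored data is actually there, for example with a quick row count (the users table is just one candidate):

```bash
# Sanity check after the restore (table name is illustrative)
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -d ocore -c "SELECT count(*) FROM users;"
```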
Volume Backups

Backend Data Volume
The ocore_backend_data volume contains the SSH host key. Back it up so SSH clients do not see host key change warnings after a restore.
```bash
# Back up the volume to a tar archive
docker run --rm \
  -v ocore_backend_data:/data:ro \
  -v /opt/ocore/backups:/backup \
  alpine tar czf /backup/backend_data_$(date +%Y%m%d).tar.gz -C /data .
```
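To confirm the archive actually captured the volume contents (including the SSH host key files), list it before relying on it:

```bash
# List the files stored in the archive
tar tzf /opt/ocore/backups/backend_data_$(date +%Y%m%d).tar.gz
```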
Restore a Volume

```bash
# Stop services
docker compose -f docker-compose.prod.yml down

# Restore the volume
docker run --rm \
  -v ocore_backend_data:/data \
  -v /opt/ocore/backups:/backup \
  alpine sh -c "rm -rf /data/* && tar xzf /backup/backend_data_20260226.tar.gz -C /data"

# Start services
docker compose -f docker-compose.prod.yml up -d
```

Offsite Backups
Local backups protect against data corruption and human error. Offsite backups protect against hardware failure and disasters.
Amazon S3
```bash
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/opt/ocore/backups"
S3_BUCKET="s3://your-bucket/ocore-backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/ocore_${TIMESTAMP}.dump"

# Create the backup
docker compose -f /opt/ocore/docker-compose.prod.yml exec -T postgres \
  pg_dump -U ocore -Fc ocore > "${BACKUP_FILE}"

# Upload to S3
aws s3 cp "${BACKUP_FILE}" "${S3_BUCKET}/ocore_${TIMESTAMP}.dump" \
  --storage-class STANDARD_IA

# Clean up local file after upload
rm "${BACKUP_FILE}"
```
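Because the local copy is deleted after upload, it is worth confirming the object actually reached the bucket; for example:

```bash
# List recent uploads in the bucket
aws s3 ls s3://your-bucket/ocore-backups/ --human-readable | tail -5
```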
Google Cloud Storage

```bash
# Upload to GCS
gsutil cp "${BACKUP_FILE}" "gs://your-bucket/ocore-backups/ocore_${TIMESTAMP}.dump"
```

Rsync to Remote Server
```bash
# Sync backups to a remote server
rsync -avz --delete /opt/ocore/backups/ \
  backup-user@backup-server:/backups/ocore/
```
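The sync can be driven from cron like the local dump; one possible schedule, assuming key-based SSH access to the backup host is already configured and the 2:00 AM local backup finishes within half an hour:

```bash
# Run the offsite sync shortly after the local backup job
30 2 * * * rsync -az --delete /opt/ocore/backups/ backup-user@backup-server:/backups/ocore/ >> /var/log/ocore-offsite.log 2>&1
```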
Disaster Recovery

Recovery Checklist
When recovering from a complete server failure (a condensed command sketch of the restore steps follows this checklist):
- Provision a new server meeting the requirements
- Install Docker and Docker Compose
- Restore configuration files (.env, docker-compose.prod.yml, reverse proxy config)
- Start PostgreSQL and restore the database from the latest backup
- Restore the backend data volume (SSH host key)
- Start all services and run any pending migrations
- Update DNS if the server IP has changed
- Verify the deployment by checking the health endpoint and logging in
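The database and volume restore steps condense to a short sequence once the compose file, .env, and backup files are back on the new server. A sketch, assuming everything has been copied to /opt/ocore and that the compose file initializes the ocore role and database as usual; the backup file names are placeholders, and migrations and DNS changes are not shown:

```bash
# Sketch of the restore portion of the checklist
cd /opt/ocore

# Start only PostgreSQL and restore the database from the latest dump
docker compose -f docker-compose.prod.yml up -d postgres
sleep 10   # give PostgreSQL a moment to finish initializing
docker compose -f docker-compose.prod.yml exec -T postgres \
  pg_restore -U ocore -d ocore < backups/ocore_latest.dump   # placeholder file name

# Restore the backend data volume (SSH host key), then start everything
docker run --rm -v ocore_backend_data:/data -v /opt/ocore/backups:/backup \
  alpine tar xzf /backup/backend_data_latest.tar.gz -C /data   # placeholder file name
docker compose -f docker-compose.prod.yml up -d
```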
Recovery Time Estimates
| Scenario | Estimated Recovery Time |
|---|---|
| Container crash (auto-restart) | Seconds |
| Server reboot | 1-2 minutes |
| Full restore from local backup | 15-30 minutes |
| Full restore from offsite backup | 30-60 minutes |
| New server + full restore | 1-2 hours |
Testing Backups
Regularly verify that your backups can actually be restored. Schedule a quarterly restore test:
```bash
# Restore to a test database (not the production one)
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -c "CREATE DATABASE ocore_restore_test;"
docker compose -f docker-compose.prod.yml exec -T postgres \
  pg_restore -U ocore -d ocore_restore_test < latest_backup.dump

# Verify table counts
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -d ocore_restore_test -c "SELECT count(*) FROM users;"

# Clean up
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -c "DROP DATABASE ocore_restore_test;"
```

A backup that has never been tested is not a backup.