oCore Docs

Backups

Backup strategies for oCore, including database backups, volume backups, and disaster recovery procedures.

A comprehensive backup strategy protects against data loss from hardware failure, human error, or corruption. oCore stores critical data in PostgreSQL and Docker volumes that both need regular backups.

What to Back Up

Data                   Location                           Method
PostgreSQL database    ocore_postgres_data volume         pg_dump or continuous archiving
Backend data           ocore_backend_data volume          Volume snapshot or file copy
Environment file       .env                               Copy to secure storage
TLS certificates       /etc/letsencrypt/ or Caddy data    Copy to secure storage
Custom configuration   postgresql.conf, Nginx config      Version control or copy

The PostgreSQL database is the most critical asset. It contains all users, organizations, servers, instances, projects, deployments, and job history.

Database Backups with pg_dump

Manual Backup

# Dump the entire database to a compressed file
docker compose -f docker-compose.prod.yml exec -T postgres \
  pg_dump -U ocore -Fc ocore > ocore_backup_$(date +%Y%m%d_%H%M%S).dump

# Verify the backup file
pg_restore --list ocore_backup_*.dump | head -20

The -Fc flag produces a custom-format archive that supports selective restore and compression.
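The methods table above also lists continuous WAL archiving as an alternative for the database. A minimal postgresql.conf sketch, assuming a local archive directory (the path is illustrative, and enabling archive_mode requires a server restart):

```ini
# postgresql.conf -- minimal continuous-archiving sketch
wal_level = replica          # the default since PostgreSQL 10
archive_mode = on            # requires a server restart to take effect
# Copy each completed WAL segment, refusing to overwrite an existing file
archive_command = 'test ! -f /var/lib/postgresql/archive/%f && cp %p /var/lib/postgresql/archive/%f'
```

Unlike pg_dump snapshots, continuous archiving supports point-in-time recovery, at the cost of managing the WAL archive.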

Automated Daily Backups

Create a backup script:

/opt/ocore/backup.sh
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/opt/ocore/backups"
RETENTION_DAYS=30
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/ocore_${TIMESTAMP}.dump"

mkdir -p "${BACKUP_DIR}"

# Create the database dump
docker compose -f /opt/ocore/docker-compose.prod.yml exec -T postgres \
  pg_dump -U ocore -Fc ocore > "${BACKUP_FILE}"

# Verify the backup is not empty
if [ ! -s "${BACKUP_FILE}" ]; then
  echo "ERROR: Backup file is empty" >&2
  exit 1
fi

echo "Backup created: ${BACKUP_FILE} ($(du -h "${BACKUP_FILE}" | cut -f1))"

# Remove backups older than retention period
find "${BACKUP_DIR}" -name "ocore_*.dump" -mtime +${RETENTION_DAYS} -delete

echo "Cleanup complete. Remaining backups:"
ls -lh "${BACKUP_DIR}"/ocore_*.dump
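The retention rule at the end of the script is easy to exercise in isolation before trusting it with real backups. This sketch simulates one stale and one fresh dump in a throwaway directory:

```shell
#!/bin/bash
set -euo pipefail

# Simulate a backup directory with one stale and one fresh dump
TEST_DIR=$(mktemp -d)
touch -d "40 days ago" "${TEST_DIR}/ocore_old.dump"
touch "${TEST_DIR}/ocore_new.dump"

# Same retention rule as backup.sh: delete dumps older than 30 days
find "${TEST_DIR}" -name "ocore_*.dump" -mtime +30 -delete

ls "${TEST_DIR}"   # only ocore_new.dump should remain
```

Note that -mtime +30 matches files whose modification time is more than 30 full days in the past, so a dump from exactly 30 days ago survives one more day.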

Schedule with cron:

# Run daily at 2:00 AM
0 2 * * * /opt/ocore/backup.sh >> /var/log/ocore-backup.log 2>&1
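On systemd hosts, a timer is an alternative to cron with logging via journald and catch-up for missed runs. A sketch with illustrative unit names:

```ini
# /etc/systemd/system/ocore-backup.service
[Unit]
Description=oCore database backup

[Service]
Type=oneshot
ExecStart=/opt/ocore/backup.sh

# /etc/systemd/system/ocore-backup.timer
[Unit]
Description=Run the oCore backup daily at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl daemon-reload && systemctl enable --now ocore-backup.timer`. Persistent=true runs a missed backup at the next boot.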

Restoring from a Backup

# Stop the backend to prevent writes during restore
docker compose -f docker-compose.prod.yml stop backend frontend

# Drop and recreate the database (connect to the postgres maintenance
# database, since you cannot drop the database you are connected to)
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -d postgres -c "DROP DATABASE ocore;"
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -d postgres -c "CREATE DATABASE ocore OWNER ocore;"

# Restore from the backup file
docker compose -f docker-compose.prod.yml exec -T postgres \
  pg_restore -U ocore -d ocore < ocore_backup_20260226_020000.dump

# Restart all services
docker compose -f docker-compose.prod.yml up -d

Volume Backups

Backend Data Volume

The ocore_backend_data volume contains the SSH host key. Back it up so SSH clients do not see host key change warnings after a restore.

# Backup the volume to a tar archive
docker run --rm \
  -v ocore_backend_data:/data:ro \
  -v /opt/ocore/backups:/backup \
  alpine tar czf /backup/backend_data_$(date +%Y%m%d).tar.gz -C /data .
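Before trusting a volume archive, confirm it is readable. This self-contained sketch builds a throwaway archive the same way the volume backup does, with a dummy file standing in for the real host key:

```shell
#!/bin/bash
set -euo pipefail

# Build a test archive the same way the volume backup does
SRC_DIR=$(mktemp -d)
echo "dummy-host-key" > "${SRC_DIR}/ssh_host_ed25519_key"
ARCHIVE=$(mktemp --suffix=.tar.gz)
tar czf "${ARCHIVE}" -C "${SRC_DIR}" .

# `tar tzf` lists the contents without extracting; a truncated or
# corrupt archive makes tar exit non-zero, aborting under set -e
tar tzf "${ARCHIVE}"
```

Run the same `tar tzf` check against real archives as part of the backup job.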

Restore a Volume

# Stop services
docker compose -f docker-compose.prod.yml down

# Restore the volume
docker run --rm \
  -v ocore_backend_data:/data \
  -v /opt/ocore/backups:/backup \
  alpine sh -c "find /data -mindepth 1 -delete && tar xzf /backup/backend_data_20260226.tar.gz -C /data"

# Start services
docker compose -f docker-compose.prod.yml up -d

Offsite Backups

Local backups protect against data corruption and human error. Offsite backups protect against hardware failure and disasters.
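Whichever offsite target you use, recording a checksum next to each dump lets you verify the copy end to end after transfer. A minimal sketch with an illustrative payload and file names:

```shell
#!/bin/bash
set -euo pipefail

# Write a checksum file next to the backup so any transferred
# copy can be verified (the payload here is a stand-in)
BACKUP_FILE=$(mktemp --suffix=.dump)
echo "example backup payload" > "${BACKUP_FILE}"
sha256sum "${BACKUP_FILE}" > "${BACKUP_FILE}.sha256"

# On the receiving side, -c re-reads the file and compares digests
sha256sum -c "${BACKUP_FILE}.sha256"
```

Ship the .sha256 file alongside the dump and run the `-c` check after every offsite copy.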

Amazon S3

/opt/ocore/backup-s3.sh
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/opt/ocore/backups"
S3_BUCKET="s3://your-bucket/ocore-backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/ocore_${TIMESTAMP}.dump"

# Create the backup
docker compose -f /opt/ocore/docker-compose.prod.yml exec -T postgres \
  pg_dump -U ocore -Fc ocore > "${BACKUP_FILE}"

# Upload to S3
aws s3 cp "${BACKUP_FILE}" "${S3_BUCKET}/ocore_${TIMESTAMP}.dump" \
  --storage-class STANDARD_IA

# Clean up local file after upload
rm "${BACKUP_FILE}"
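Rather than pruning S3 objects from the script, a bucket lifecycle rule can expire old dumps server-side. A sketch, assuming the `ocore-backups/` prefix from the script above and an illustrative 90-day retention:

```json
{
  "Rules": [
    {
      "ID": "expire-old-ocore-backups",
      "Filter": { "Prefix": "ocore-backups/" },
      "Status": "Enabled",
      "Expiration": { "Days": 90 }
    }
  ]
}
```

Apply it with `aws s3api put-bucket-lifecycle-configuration --bucket your-bucket --lifecycle-configuration file://lifecycle.json`.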

Google Cloud Storage

# Upload to GCS
gsutil cp "${BACKUP_FILE}" "gs://your-bucket/ocore-backups/ocore_${TIMESTAMP}.dump"

Rsync to Remote Server

# Sync backups to a remote server
rsync -avz --delete /opt/ocore/backups/ \
  backup-user@backup-server:/backups/ocore/

Disaster Recovery

Recovery Checklist

When recovering from a complete server failure:

  1. Provision a new server meeting the requirements
  2. Install Docker and Docker Compose
  3. Restore configuration files (.env, docker-compose.prod.yml, reverse proxy config)
  4. Start PostgreSQL and restore the database from the latest backup
  5. Restore the backend data volume (SSH host key)
  6. Start all services and run any pending migrations
  7. Update DNS if the server IP has changed
  8. Verify the deployment by checking the health endpoint and logging in

Recovery Time Estimates

Scenario                            Estimated Recovery Time
Container crash (auto-restart)      Seconds
Server reboot                       1-2 minutes
Full restore from local backup      15-30 minutes
Full restore from offsite backup    30-60 minutes
New server + full restore           1-2 hours

Testing Backups

Regularly verify that your backups can actually be restored. Schedule a quarterly restore test:

# Restore to a test database (not the production one)
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -c "CREATE DATABASE ocore_restore_test;"

docker compose -f docker-compose.prod.yml exec -T postgres \
  pg_restore -U ocore -d ocore_restore_test < latest_backup.dump

# Verify table counts
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -d ocore_restore_test -c "SELECT count(*) FROM users;"

# Clean up
docker compose -f docker-compose.prod.yml exec -T postgres \
  psql -U ocore -c "DROP DATABASE ocore_restore_test;"
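To make the quarterly test routine rather than a calendar reminder, the commands above can be wrapped in a script and scheduled; `/opt/ocore/restore-test.sh` is a hypothetical wrapper name:

```shell
# Run the restore test at 3:00 AM on the first day of every third month
0 3 1 */3 * /opt/ocore/restore-test.sh >> /var/log/ocore-restore-test.log 2>&1
```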

A backup that has never been tested is not a backup.
