oCore Docs

Databases

Create, back up, restore, neutralize, and tune databases for your Odoo instances.

Every Odoo instance in oCore runs on a PostgreSQL database. oCore provides comprehensive database management including creation, deletion, backup, restore, neutralization (for staging environments), import/export, and performance tuning. This page covers all database operations you will need for day-to-day management.

Database Management

Manage databases for your Odoo instance.


Creating and Dropping Databases

Creating a Database

When an instance is provisioned, oCore creates a default PostgreSQL database. You can create additional databases for multi-database setups.

curl -X POST https://ocore.example.com/api/instances/{instanceId}/database \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "odoo_staging",
    "template": "template0"
  }'

# Or via the CLI:
ocore db create --instance INSTANCE_UUID --name odoo_staging

Dropping a Database

Irreversible

Dropping a database permanently deletes all data. Create a backup first if you might need the data later.

curl -X DELETE https://ocore.example.com/api/instances/{instanceId}/database \
  -H "Authorization: Bearer $TOKEN"

# Or via the CLI:
ocore db drop --instance INSTANCE_UUID --name odoo_staging

Database Backups and Restores

oCore provides both manual and scheduled database backups. For comprehensive backup management including destinations, schedules, and retention policies, see the Backups page.

Quick Manual Backup

Create an on-demand backup of a specific environment's database:

curl -X POST https://ocore.example.com/api/environments/{envId}/backups \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"backupType": "db"}'

Backup types:

  • db -- Database-only backup (faster, smaller)
  • full -- Database + filestore (complete instance state)

Restoring a Backup

Restore a backup to the same environment, a new environment, or a different existing environment:

curl -X POST https://ocore.example.com/api/backups/{backupId}/restore \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "targetType": "same"
  }'

# Or via the CLI:
ocore backup restore --backup BACKUP_UUID --target same

Restore Target Types

Target    Description
same      Restore to the same environment (overwrites current data)
new       Restore to a new environment (requires newEnvironmentName)
other     Restore to a different existing environment (requires targetEnvironmentId)
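For example, restoring into a fresh environment looks like this (a sketch: the newEnvironmentName field comes from the target-type table above, and the environment name shown is hypothetical):

```shell
# Restore a backup into a brand-new environment instead of overwriting one.
curl -X POST https://ocore.example.com/api/backups/{backupId}/restore \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "targetType": "new",
    "newEnvironmentName": "staging-restore-test"
  }'
```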

Point-in-Time Recovery (PITR)

If WAL (Write-Ahead Log) archiving is enabled on your backup schedule, you can restore to any point in time:

curl -X POST https://ocore.example.com/api/backups/{backupId}/restore \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "targetType": "same",
    "pitrTimestamp": "2026-02-26T12:00:00Z"
  }'

WAL Archiving

PITR requires WAL archiving to be enabled in the backup schedule. See Backups for configuration details.

Database Neutralization

Neutralization sanitizes a production database for use in staging or development environments. It disables production-specific features to prevent accidental side effects like sending real emails or processing real payments.

What Neutralization Does

  • Disables outgoing mail servers
  • Deactivates scheduled actions (cron jobs)
  • Disables payment acquirers
  • Resets webhook URLs
  • Optionally anonymizes partner data
curl -X POST https://ocore.example.com/api/environments/{envId}/neutralization/neutralize \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"anonymize": false}'

# Or via the CLI:
ocore db neutralize --environment ENV_UUID

After Cloning Production

Always neutralize a database after restoring a production backup to staging. This prevents the staging environment from sending emails to real customers or processing real transactions.

Database Import/Export

Exporting a Database

Export a database as a SQL dump or a compressed archive:

curl -X POST https://ocore.example.com/api/instances/{instanceId}/database/export \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"format": "sql"}'

Supported formats:

  • sql -- Plain SQL dump
  • zip -- Compressed archive with database dump and filestore

Importing a Database

Import a database from an uploaded file:

curl -X POST https://ocore.example.com/api/instances/{instanceId}/database/import/upload \
  -H "Authorization: Bearer $TOKEN" \
  -F "file=@database.sql" \
  -F "format=sql"

Import Overwrites Data

Importing a database replaces the existing data in the target environment. Create a backup before importing.

Database Performance Tuning

oCore provides comprehensive PostgreSQL tuning through the dashboard, including an auto-tuning pipeline that calculates optimal parameters based on your server hardware.

PG Parameter Tuning

The PG Settings panel on the database tab displays all current PostgreSQL parameters with key settings highlighted at the top:

Parameter                         Category      Requires
shared_buffers                    Memory        Restart
effective_cache_size              Memory        Reload
work_mem                          Memory        Reload
maintenance_work_mem              Memory        Reload
max_connections                   Connections   Restart
autovacuum_max_workers            Autovacuum    Restart (PG 16+)
autovacuum_vacuum_scale_factor    Autovacuum    Reload
autovacuum_analyze_scale_factor   Autovacuum    Reload

Restart vs Reload

Parameters marked Restart require a PostgreSQL restart to take effect. Parameters marked Reload can be applied with SELECT pg_reload_conf() without downtime. oCore handles this automatically when you apply changes.
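If you manage a parameter by hand instead, a reload-safe change can be applied without downtime like this (a sketch: the container name and psql user are assumptions; ALTER SYSTEM and pg_reload_conf() are standard PostgreSQL):

```shell
# Change a Reload-class parameter and apply it without restarting.
# "postgres_container" and the postgres user are illustrative assumptions.
docker exec postgres_container psql -U postgres -c \
  "ALTER SYSTEM SET work_mem = '16MB';"
docker exec postgres_container psql -U postgres -c \
  "SELECT pg_reload_conf();"
```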

The settings panel supports:

  • Searching and filtering all PG parameters by name
  • Collapsible view with key settings always visible
  • Current values alongside recommended values from auto-tuning
  • Maintenance history log showing past VACUUM, ANALYZE, and REINDEX operations

Auto-Tuning Pipeline

oCore calculates optimal PostgreSQL parameters based on your server hardware, Odoo worker count, number of co-hosted instances, and pgBouncer configuration:

Parameter                    Formula                                            Constraints
shared_buffers               25% of per-instance RAM                            Min 128 MB, max 8 GB
effective_cache_size         75% of per-instance RAM                            Min 256 MB
work_mem                     25% of per-instance RAM / (max_connections * 2)    4-64 MB
maintenance_work_mem         5% of per-instance RAM                             Min 64 MB, max 2 GB
max_connections              Depends on pgBouncer (see below)                   --
autovacuum_max_workers       3 (4 for 16+ CPU cores)                            --
autovacuum_*_scale_factor    0.05 (aggressive)                                  --

Per-instance RAM is calculated by dividing total server RAM by the number of instances on the server.
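As a worked example, here is the arithmetic these formulas describe for one assumed hardware profile: a 16 GB server hosting 2 instances with max_connections = 100. All input numbers are hypothetical, and the min/max clamping from the constraints column is omitted because none of these results hit a bound.

```shell
# Reproduce the auto-tuning formulas for an assumed hardware profile.
TOTAL_RAM_MB=16384       # 16 GB server (assumption)
INSTANCES=2              # co-hosted instances (assumption)
MAX_CONNECTIONS=100      # assumed for the work_mem formula

PER_INSTANCE_MB=$((TOTAL_RAM_MB / INSTANCES))                        # 8192 MB
SHARED_BUFFERS_MB=$((PER_INSTANCE_MB * 25 / 100))                    # 25% -> 2048 MB
EFFECTIVE_CACHE_MB=$((PER_INSTANCE_MB * 75 / 100))                   # 75% -> 6144 MB
WORK_MEM_MB=$((PER_INSTANCE_MB * 25 / 100 / (MAX_CONNECTIONS * 2)))  # -> 10 MB
MAINT_WORK_MEM_MB=$((PER_INSTANCE_MB * 5 / 100))                     # 5% -> 409 MB

echo "shared_buffers=${SHARED_BUFFERS_MB}MB"
echo "effective_cache_size=${EFFECTIVE_CACHE_MB}MB"
echo "work_mem=${WORK_MEM_MB}MB"
echo "maintenance_work_mem=${MAINT_WORK_MEM_MB}MB"
```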

Connection Calculation

Connection limits are calculated differently depending on whether pgBouncer is enabled:

With pgBouncer:

  • db_maxconn (Odoo): workers * 2, minimum 16
  • max_connections (PG): pool_size * 2 + 3 (superuser reserved)

Without pgBouncer:

  • db_maxconn (Odoo): workers + cron_threads + 5 reserved, minimum 16
  • max_connections (PG): db_maxconn + 10 overhead
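The two branches above can be checked with a little shell arithmetic. The worker, cron-thread, and pool-size values here are hypothetical:

```shell
# Connection limits per the rules above, for an assumed 4-worker Odoo
# instance with 2 cron threads and a pgBouncer pool size of 20.
WORKERS=4
CRON_THREADS=2
POOL_SIZE=20

# Without pgBouncer: workers + cron_threads + 5 reserved, floor of 16
DB_MAXCONN=$((WORKERS + CRON_THREADS + 5))            # 11
if [ "$DB_MAXCONN" -lt 16 ]; then DB_MAXCONN=16; fi   # floor -> 16
MAX_CONNECTIONS=$((DB_MAXCONN + 10))                  # +10 overhead -> 26
echo "without pgbouncer: db_maxconn=$DB_MAXCONN max_connections=$MAX_CONNECTIONS"

# With pgBouncer: workers * 2 with a floor of 16; PG side from the pool size
PB_DB_MAXCONN=$((WORKERS * 2))                        # 8
if [ "$PB_DB_MAXCONN" -lt 16 ]; then PB_DB_MAXCONN=16; fi
PB_MAX_CONNECTIONS=$((POOL_SIZE * 2 + 3))             # 43
echo "with pgbouncer: db_maxconn=$PB_DB_MAXCONN max_connections=$PB_MAX_CONNECTIONS"
```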

Database Metrics

oCore collects database performance metrics including:

  • Connection count and pool utilization
  • Query execution times
  • Table sizes and index usage
  • Cache hit ratios

View these metrics on the instance detail page under the Performance tab.

Database Maintenance

oCore can run automated maintenance tasks:

  • VACUUM -- Reclaim storage from deleted rows
  • ANALYZE -- Update query planner statistics
  • REINDEX -- Rebuild indexes for optimal performance
curl -X POST https://ocore.example.com/api/instances/{instanceId}/database/maintenance/vacuum \
  -H "Authorization: Bearer $TOKEN"

Slow Query Analysis

oCore automatically collects slow queries from pg_stat_statements and provides analysis tools to identify and fix performance bottlenecks.

How It Works

  1. A background worker periodically connects to each running instance via SSH
  2. The worker queries pg_stat_statements inside the PostgreSQL container for queries exceeding the threshold
  3. Results are stored as snapshots with per-query statistics
  4. Regression detection compares current performance against historical averages
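Step 2 boils down to a query along these lines (a sketch: the container and database names are assumptions, the column names are the standard pg_stat_statements columns on PostgreSQL 13+, and the 500 ms filter matches the default threshold below):

```shell
# Roughly what the collector runs inside the PostgreSQL container.
docker exec postgres_container psql -U postgres -d odoo -c "
  SELECT query, calls, rows,
         mean_exec_time, max_exec_time, total_exec_time, stddev_exec_time
  FROM pg_stat_statements
  WHERE mean_exec_time > 500
  ORDER BY mean_exec_time DESC;"
```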

Configuration

Slow query collection is configured per instance via the database config:

Setting              Default   Description
Threshold            500 ms    Minimum average query time to be considered "slow"
Regression Factor    3.0x      A query running this many times slower than its historical average is flagged as a regression

Slow Query Table

The slow query dashboard shows the latest collection snapshot with:

Column        Description
Query         Normalized SQL text from pg_stat_statements
Avg Time      Average execution time per call
Max Time      Worst-case execution time
Total Time    Cumulative time spent on this query
Calls         Number of times the query was executed
Rows          Total rows returned
Std Dev       Standard deviation of execution time
Regression    Flag when the current average exceeds 3x the historical average

The table supports:

  • Sorting by any column (avg time, max time, total time, calls)
  • Pagination for large result sets
  • Filtering to show only regressions
  • Click-through to view query history over time

Regression Detection

When a query's current average execution time exceeds the historical average by the regression factor (default 3x), it is flagged as a regression. Regressions are:

  • Highlighted in the slow query table with a warning badge
  • Available as a filtered view showing only regressed queries
  • Logged with the current and historical averages for investigation
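The detection rule itself is a simple threshold comparison; a minimal sketch with hypothetical timings:

```shell
# Flag a regression when current avg > historical avg * regression factor.
CURRENT_AVG_MS=1800     # hypothetical current average
HISTORICAL_AVG_MS=500   # hypothetical historical average
FACTOR=3                # default regression factor

if [ "$CURRENT_AVG_MS" -gt $((HISTORICAL_AVG_MS * FACTOR)) ]; then
  echo "regression: ${CURRENT_AVG_MS}ms vs historical ${HISTORICAL_AVG_MS}ms"
else
  echo "ok"
fi
```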

Query History

Click any query to view its performance history over the last 30 days, showing how average time, max time, and call count have changed over time. This helps correlate performance changes with deployments or data growth.

Data Retention

Slow query snapshots are retained for 30 days by default. A cleanup job periodically removes older records to prevent unbounded database growth.

Connection Pooling (pgBouncer)

oCore supports optional pgBouncer connection pooling for each instance. pgBouncer multiplexes many application connections through fewer PostgreSQL connections, reducing memory usage and improving connection handling under load.

Configuration

The pgBouncer configuration panel lets you:

Setting                   Description
Enabled                   Toggle pgBouncer on or off for the instance
Pool Mode                 session (default), transaction, or statement
Pool Size                 Maximum number of server connections per database
Max Client Connections    Maximum number of client connections
Default Pool Size         Connections per user/database pair

Pool Mode for Odoo

oCore defaults to session pool mode because Odoo relies on session-level state (advisory locks, temporary tables). Use transaction mode only if you have verified your workload is compatible.

How It Works

When pgBouncer is enabled:

  1. Odoo connects to pgBouncer instead of PostgreSQL directly
  2. pgBouncer maintains a pool of PostgreSQL connections
  3. Client connections are multiplexed through the pool
  4. Connection limits are recalculated automatically (see Connection Calculation)

Enabling pgBouncer

Toggle pgBouncer from the database settings panel in the dashboard, or via API:

curl -X PUT https://ocore.example.com/api/instances/{instanceId}/pgbouncer \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "enabled": true,
    "poolMode": "session",
    "poolSize": 20
  }'

When toggling pgBouncer on or off, oCore automatically recalculates max_connections and db_maxconn to match the new configuration.

Read Replicas

oCore supports PostgreSQL streaming replication for read-heavy workloads, failover, and direct SQL reporting.

Creating a Replica

Create a read replica on a target server:

curl -X POST https://ocore.example.com/api/instances/{instanceId}/replicas \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"serverId": "TARGET_SERVER_UUID"}'

The replica setup process:

  1. Creates a ReadReplica record with status configuring
  2. Generates a unique replication slot name (e.g., ocore_myapp_a1b2c3d4)
  3. Runs pg_basebackup from the primary to the replica server
  4. Configures standby.signal for hot standby mode
  5. Starts the replica container
  6. Transitions to streaming status when replication begins
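Once the replica reaches streaming status, you can confirm the slot from step 2 is active on the primary (a sketch: the container name is an assumption; pg_replication_slots is the standard PostgreSQL view):

```shell
# Verify the replication slot created during setup is active on the primary.
docker exec postgres_primary psql -U postgres -c \
  "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"
```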

Replica Status

Status         Description
configuring    Initial setup in progress
streaming      Actively receiving WAL from the primary
error          Replication broken or lag exceeds the threshold

Health Monitoring

oCore periodically checks replica health by querying pg_stat_replication on the primary:

Metric                   Description
Replay Lag (bytes)       How far behind the replica is, in WAL bytes
Replay Lag (seconds)     Estimated time the replica is behind
Write/Flush/Replay LSN   Log sequence numbers for each replication stage
State                    Current replication state reported by PostgreSQL

If replay lag exceeds 1 hour, the replica is automatically moved to error status.
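These metrics come from a query like the following on the primary (a sketch: the container name is an assumption; the columns are the standard pg_stat_replication columns on PostgreSQL 10+):

```shell
# Inspect replication state and lag for all attached replicas.
docker exec postgres_primary psql -U postgres -c "
  SELECT application_name, state,
         write_lsn, flush_lsn, replay_lsn,
         replay_lag
  FROM pg_stat_replication;"
```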

Managing Replicas

The replica manager in the dashboard shows all replicas for an instance with their status, lag metrics, server assignment, and last health check time.

# List replicas
curl https://ocore.example.com/api/instances/{instanceId}/replicas \
  -H "Authorization: Bearer $TOKEN"

# Delete a replica
curl -X DELETE https://ocore.example.com/api/instances/{instanceId}/replicas/{replicaId} \
  -H "Authorization: Bearer $TOKEN"

Deleting a replica:

  1. Drops the replication slot on the primary (best-effort)
  2. Stops and removes the replica container
  3. Deletes the ReadReplica record

Odoo Read-Splitting

Odoo does not natively route read queries to replicas. Read replicas serve as hot standbys for failover and for direct SQL reporting queries. Full read-splitting would require PgPool-II or application-level routing changes.

Required Permissions

Database operations require manage:environments permission. Viewing database metrics requires view:environments.

Troubleshooting

Database connection refused

  • Check that PostgreSQL is running inside the instance container
  • Verify the database credentials in the instance environment variables
  • Check the PostgreSQL logs for max connection errors
  • Increase DB_MAXCONN if the connection pool is exhausted
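To see how close you are to the connection limit, compare pg_stat_activity against the configured maximum (a sketch assuming direct psql access to the instance database; both views are standard PostgreSQL):

```shell
# Current connection count vs. the configured max_connections.
psql -U postgres -c "
  SELECT count(*) AS connections,
         current_setting('max_connections') AS max_connections
  FROM pg_stat_activity;"
```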

Restore fails with "out of disk space"

  • Check available disk space on the server: df -h
  • Delete old backups or snapshots to free space
  • Restore to a server with more disk capacity

Slow queries

  • Check the Slow Query Analysis dashboard for queries exceeding the threshold
  • Look for regressions -- queries that have become significantly slower than their historical average
  • Run VACUUM ANALYZE to update statistics
  • Check for missing indexes on frequently queried columns
  • Use the Auto-Tuning Pipeline to calculate optimal shared_buffers and work_mem values
  • Enable pgBouncer if connection overhead is contributing to slow queries
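Tables that are sequentially scanned far more often than index-scanned are candidates for a missing index. One way to spot them, using the standard pg_stat_user_tables view (the database name is an assumption):

```shell
# Tables with the most sequential scans -- potential missing-index targets.
psql -U postgres -d odoo -c "
  SELECT relname, seq_scan, idx_scan
  FROM pg_stat_user_tables
  ORDER BY seq_scan DESC
  LIMIT 10;"
```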

Neutralization incomplete

  • Check the neutralization logs for errors
  • Some custom modules may have features not covered by default neutralization
  • Write a custom neutralization script for module-specific cleanup

Import fails with encoding errors

  • Ensure the SQL dump uses UTF-8 encoding
  • If importing from an older Odoo version, check for schema incompatibilities
  • Try importing to a fresh database instead of overwriting