# Database Tuning
Optimize PostgreSQL performance for oCore with connection pooling, memory settings, and vacuum configuration.
oCore uses PostgreSQL as its primary data store. The default PostgreSQL configuration is conservative and designed for compatibility, not performance. Tuning a few key settings can significantly improve responsiveness under production workloads.
## Key `postgresql.conf` Settings
These settings should be adjusted based on your server's available RAM and workload. The examples below assume a server with 8 GB of RAM dedicated to PostgreSQL.
### Memory
```ini
# Shared memory buffer pool (25% of total RAM)
shared_buffers = 2GB

# Memory per query operation (sorting, hashing)
work_mem = 64MB

# Memory for maintenance operations (VACUUM, CREATE INDEX)
maintenance_work_mem = 512MB

# Planner's estimate of OS disk cache (50-75% of total RAM)
effective_cache_size = 6GB
```

| Setting | Rule of Thumb | Default | Purpose |
|---|---|---|---|
| `shared_buffers` | 25% of RAM | 128 MB | Shared memory for caching data pages |
| `work_mem` | 32-128 MB | 4 MB | Per-operation memory for sorts and joins |
| `maintenance_work_mem` | 256-512 MB | 64 MB | Memory for VACUUM and index creation |
| `effective_cache_size` | 50-75% of RAM | 4 GB | Planner hint for OS cache size |
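If you prefer not to edit `postgresql.conf` directly, these values can also be applied from a superuser session with `ALTER SYSTEM`, which writes them to `postgresql.auto.conf`. A minimal sketch (note that `shared_buffers` still requires a full restart to take effect):

```sql
-- Write settings to postgresql.auto.conf (requires superuser)
ALTER SYSTEM SET shared_buffers = '2GB';
ALTER SYSTEM SET work_mem = '64MB';

-- Reload the configuration; restart-only settings stay pending
SELECT pg_reload_conf();

-- Verify the value currently in effect
SHOW work_mem;
```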
### Write-Ahead Log (WAL)
```ini
# Increase WAL buffers for write-heavy workloads
wal_buffers = 64MB

# Checkpoint completion target (spread I/O over more time)
checkpoint_completion_target = 0.9

# Maximum WAL size before triggering a checkpoint
max_wal_size = 2GB
min_wal_size = 512MB
```

### Query Planning
```ini
# Cost estimate for random I/O (lower for SSD)
random_page_cost = 1.1

# Parallel query workers
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
```

Set `random_page_cost` to 1.1 on SSD storage; the default of 4.0 assumes spinning disks. This helps the query planner choose index scans more often.
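To confirm which planner values are actually in effect (and where they came from), you can query the `pg_settings` view:

```sql
-- Inspect the planner settings currently in effect
SELECT name, setting, source
FROM pg_settings
WHERE name IN ('random_page_cost',
               'max_parallel_workers_per_gather',
               'max_parallel_workers');
```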
## Connection Pooling
oCore's backend maintains a connection pool to PostgreSQL. The default maximum open connections is managed by the Ent ORM client. In production, ensure PostgreSQL can handle the expected connection count.
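One way to gauge headroom before raising any limits is to compare the number of open connections against the configured ceiling:

```sql
-- Compare open connections to the configured ceiling
SELECT (SELECT count(*) FROM pg_stat_activity) AS in_use,
       current_setting('max_connections')::int AS max_allowed;
```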
```ini
# Maximum concurrent connections
max_connections = 200
```

For deployments with many concurrent users, consider placing PgBouncer between oCore and PostgreSQL:
```ini
[databases]
ocore = host=127.0.0.1 port=5432 dbname=ocore

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

# Transaction-level pooling (recommended for oCore)
pool_mode = transaction
default_pool_size = 50
max_client_conn = 500
```

Then point oCore's `DATABASE_URL` at PgBouncer:
```shell
DATABASE_URL=postgres://ocore:password@localhost:6432/ocore?sslmode=disable
```

## Vacuum Configuration
PostgreSQL's autovacuum process reclaims storage from deleted and updated rows. oCore performs frequent updates to instance status, job records, and session data, so autovacuum should be responsive.
```ini
# Enable autovacuum (on by default)
autovacuum = on

# Lower thresholds for more frequent vacuuming
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50

# Scale factors (fraction of table size triggering vacuum)
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.02

# Allow more parallel autovacuum workers
autovacuum_max_workers = 4

# Reduce vacuum cost delay for faster cleanup
autovacuum_vacuum_cost_delay = 2ms
```

These settings ensure autovacuum runs more frequently on smaller tables (like job queue records) and does not fall behind on high-churn tables.
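Autovacuum can also be tightened for individual high-churn tables via storage parameters, which override the global settings. A sketch, where `jobs` is a placeholder name (substitute whichever oCore table shows the most dead rows):

```sql
-- Tighten autovacuum for a single high-churn table
-- ("jobs" is a placeholder; substitute a real table name)
ALTER TABLE jobs SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold = 100
);
```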
## Applying Configuration Changes
### Docker Compose
If using the bundled PostgreSQL container, mount a custom configuration file:
```yaml
services:
  postgres:
    image: postgres:16-alpine
    volumes:
      - ocore_postgres_data:/var/lib/postgresql/data
      - ./postgresql.conf:/etc/postgresql/postgresql.conf:ro
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
```

### External PostgreSQL
Edit the `postgresql.conf` file (typically at `/etc/postgresql/16/main/postgresql.conf`) and restart:
```shell
sudo systemctl restart postgresql
```

Some settings (like `shared_buffers`) require a full restart, while others (like `work_mem`) can be applied with a reload:

```shell
sudo systemctl reload postgresql
```

## Monitoring PostgreSQL Performance
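Before digging into query-level metrics, it is worth confirming that no changed settings are still waiting on a restart. The `pending_restart` column of `pg_settings` flags exactly those:

```sql
-- List changed settings that will not apply until the next restart
SELECT name, setting, pending_restart
FROM pg_settings
WHERE pending_restart;
```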
### Check Active Connections
```sql
SELECT count(*) FROM pg_stat_activity WHERE state = 'active';
```

### Check Table Bloat
```sql
SELECT
    schemaname || '.' || relname AS "table",
    pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
    n_dead_tup AS dead_rows,
    last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

### Check Slow Queries
Enable the `pg_stat_statements` extension for query performance tracking. Note that the module must also be listed in `shared_preload_libraries` in `postgresql.conf` (which requires a restart) before the view will return data:
```sql
-- Enable the extension
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- View slowest queries
SELECT
    query,
    calls,
    mean_exec_time::numeric(10,2) AS avg_ms,
    total_exec_time::numeric(10,2) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```

## Sizing Guide
| Deployment Size | shared_buffers | work_mem | effective_cache_size | max_connections |
|---|---|---|---|---|
| Small (4 GB RAM) | 1 GB | 32 MB | 3 GB | 100 |
| Medium (8 GB RAM) | 2 GB | 64 MB | 6 GB | 200 |
| Large (16 GB RAM) | 4 GB | 128 MB | 12 GB | 300 |
These are starting points. Monitor query performance and adjust based on your actual workload. The `pg_stat_statements` extension is invaluable for identifying bottlenecks.
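When comparing before and after a tuning change, it helps to discard the statistics accumulated under the old configuration so that `pg_stat_statements` reflects only the new one. Once the extension is installed:

```sql
-- Discard accumulated statistics so subsequent
-- measurements reflect only the new settings
SELECT pg_stat_statements_reset();
```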