oCore Docs

Database Tuning

Optimize PostgreSQL performance for oCore with connection pooling, memory settings, and vacuum configuration.

oCore uses PostgreSQL as its primary data store. The default PostgreSQL configuration is conservative and designed for compatibility, not performance. Tuning a few key settings can significantly improve responsiveness under production workloads.

Key postgresql.conf Settings

These settings should be adjusted based on your server's available RAM and workload. The examples below assume a server with 8 GB of RAM dedicated to PostgreSQL.

Memory

postgresql.conf
# Shared memory buffer pool (25% of total RAM)
shared_buffers = 2GB

# Memory per query operation (sorting, hashing)
work_mem = 64MB

# Memory for maintenance operations (VACUUM, CREATE INDEX)
maintenance_work_mem = 512MB

# Planner's estimate of OS disk cache (50-75% of total RAM)
effective_cache_size = 6GB
| Setting | Rule of Thumb | Default | Purpose |
| --- | --- | --- | --- |
| shared_buffers | 25% of RAM | 128 MB | Shared memory for caching data pages |
| work_mem | 32-128 MB | 4 MB | Per-operation memory for sorts and joins |
| maintenance_work_mem | 256-512 MB | 64 MB | Memory for VACUUM and index creation |
| effective_cache_size | 50-75% of RAM | 4 GB | Planner hint for OS cache size |

Note that work_mem is allocated per sort or hash operation, not per connection, so a single complex query can consume several multiples of this value. Size it with your max_connections setting in mind.
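The two formulaic rules of thumb above can be expressed as simple arithmetic. A minimal shell sketch (the helper name is hypothetical, and it handles whole gigabytes only):

```shell
# Hypothetical helper: derive rule-of-thumb memory settings from total RAM in GB.
# shared_buffers = 25% of RAM; effective_cache_size = 75% of RAM (upper bound).
pg_mem_settings() {
  ram_gb=$1
  echo "shared_buffers = $(( ram_gb / 4 ))GB"
  echo "effective_cache_size = $(( ram_gb * 3 / 4 ))GB"
}

pg_mem_settings 8
# shared_buffers = 2GB
# effective_cache_size = 6GB
```

For the 8 GB example this reproduces the values shown above. work_mem and maintenance_work_mem depend more on workload shape than on a fixed fraction of RAM, so they are left to the ranges in the table.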

Write-Ahead Log (WAL)

postgresql.conf
# Increase WAL buffers for write-heavy workloads
wal_buffers = 64MB

# Checkpoint completion target (spread I/O over more time)
checkpoint_completion_target = 0.9

# Maximum WAL size before triggering a checkpoint
max_wal_size = 2GB
min_wal_size = 512MB
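As a worked example of what checkpoint_completion_target does: checkpoint I/O is spread over roughly checkpoint_timeout × checkpoint_completion_target. A small sketch, assuming the default checkpoint_timeout of 300 seconds:

```shell
# Seconds over which checkpoint writes are spread.
# The target is passed in tenths to stay in shell integer arithmetic (9 = 0.9).
checkpoint_spread() {
  echo $(( $1 * $2 / 10 ))
}

checkpoint_spread 300 9   # 270 of the 300-second interval
```

Spreading writes over 270 of the 300 seconds avoids the I/O spikes that a low completion target would cause.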

Query Planning

postgresql.conf
# Cost estimates for random I/O (lower for SSD)
random_page_cost = 1.1

# Parallel query workers
max_parallel_workers_per_gather = 2
max_parallel_workers = 4

Set random_page_cost to 1.1 on SSD storage (default 4.0 assumes spinning disks). This helps the query planner choose index scans more often.
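If you prefer not to edit postgresql.conf by hand, these planner settings can also be applied at runtime with ALTER SYSTEM, which writes them to postgresql.auto.conf and takes effect after a reload:

```sql
-- Persist planner settings without editing postgresql.conf directly
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET max_parallel_workers_per_gather = 2;

-- Reload so the new values apply to new sessions
SELECT pg_reload_conf();
```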

Connection Pooling

oCore's backend maintains a connection pool to PostgreSQL. The default maximum open connections is managed by the Ent ORM client. In production, ensure PostgreSQL can handle the expected connection count.

postgresql.conf
# Maximum concurrent connections
max_connections = 200

For deployments with many concurrent users, consider placing PgBouncer between oCore and PostgreSQL:

pgbouncer.ini
[databases]
ocore = host=127.0.0.1 port=5432 dbname=ocore

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

# Transaction-level pooling (recommended for oCore)
pool_mode = transaction
default_pool_size = 50
max_client_conn = 500
Note that transaction-level pooling does not preserve session state between transactions: prepared statements, advisory locks, and LISTEN/NOTIFY are unreliable in this mode, so it suits workloads of short, self-contained transactions.

Then point oCore's DATABASE_URL at PgBouncer:

DATABASE_URL=postgres://ocore:password@localhost:6432/ocore?sslmode=disable
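To verify pooling behavior, PgBouncer exposes an admin console as a virtual pgbouncer database. Connect with psql against the PgBouncer port (e.g. psql -h 127.0.0.1 -p 6432 pgbouncer, as a user listed in PgBouncer's admin_users or stats_users) and run:

```sql
-- PgBouncer admin console commands (not regular SQL)
SHOW POOLS;   -- per-database client and server connection counts
SHOW STATS;   -- request and traffic statistics
```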

Vacuum Configuration

PostgreSQL's autovacuum process reclaims storage from deleted and updated rows. oCore performs frequent updates to instance status, job records, and session data, so autovacuum should be responsive.

postgresql.conf
# Enable autovacuum (on by default)
autovacuum = on

# Lower thresholds for more frequent vacuuming
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50

# Scale factors (fraction of table size triggering vacuum)
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.02

# Allow more parallel autovacuum workers
autovacuum_max_workers = 4

# Reduce vacuum cost delay for faster cleanup
autovacuum_vacuum_cost_delay = 2ms

These settings ensure autovacuum runs more frequently on smaller tables (like job queue records) and does not fall behind on high-churn tables.
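Autovacuum can also be tuned per table rather than globally. As an illustrative sketch (the table name job_records is hypothetical; substitute oCore's actual high-churn tables):

```sql
-- Hypothetical: vacuum this high-churn table when ~1% of its rows are dead,
-- instead of waiting for the global 5% scale factor
ALTER TABLE job_records SET (autovacuum_vacuum_scale_factor = 0.01);
ALTER TABLE job_records SET (autovacuum_vacuum_threshold = 100);
```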

Applying Configuration Changes

Docker Compose

If using the bundled PostgreSQL container, mount a custom configuration file:

docker-compose.prod.yml
services:
  postgres:
    image: postgres:16-alpine
    volumes:
      - ocore_postgres_data:/var/lib/postgresql/data
      - ./postgresql.conf:/etc/postgresql/postgresql.conf:ro
    command: postgres -c config_file=/etc/postgresql/postgresql.conf

External PostgreSQL

Edit the postgresql.conf file (typically at /etc/postgresql/16/main/postgresql.conf) and restart:

sudo systemctl restart postgresql

Some settings (like shared_buffers) require a full restart while others (like work_mem) can be applied with a reload:

sudo systemctl reload postgresql
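To see which changed settings are still waiting on a full restart, query pg_settings after a reload:

```sql
-- Settings changed in the config file that will not take effect until restart
SELECT name, setting, pending_restart
FROM pg_settings
WHERE pending_restart;
```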

Monitoring PostgreSQL Performance

Check Active Connections

SELECT count(*) FROM pg_stat_activity WHERE state = 'active';
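To also spot idle-in-transaction sessions, which hold locks and prevent vacuum from reclaiming rows, break the count down by state:

```sql
SELECT state, count(*)
FROM pg_stat_activity
WHERE datname = current_database()
GROUP BY state
ORDER BY count(*) DESC;
```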

Check Table Bloat

SELECT
  schemaname || '.' || relname AS table,
  pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
  n_dead_tup AS dead_rows,
  last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

Check Slow Queries

Enable the pg_stat_statements extension for query performance tracking. The module must first be listed in shared_preload_libraries (shared_preload_libraries = 'pg_stat_statements' in postgresql.conf), which requires a restart:

-- Enable the extension
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- View slowest queries
SELECT
  query,
  calls,
  mean_exec_time::numeric(10,2) AS avg_ms,
  total_exec_time::numeric(10,2) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

Sizing Guide

| Deployment Size | shared_buffers | work_mem | effective_cache_size | max_connections |
| --- | --- | --- | --- | --- |
| Small (4 GB RAM) | 1 GB | 32 MB | 3 GB | 100 |
| Medium (8 GB RAM) | 2 GB | 64 MB | 6 GB | 200 |
| Large (16 GB RAM) | 4 GB | 128 MB | 12 GB | 300 |

These are starting points. Monitor query performance and adjust based on your actual workload. The pg_stat_statements extension is invaluable for identifying bottlenecks.
