oCore Docs

Scaling

Scale your oCore deployment vertically, distribute instances across multiple servers, and manage resource limits.

oCore is designed to run on a single server for most deployments. As your workload grows, you can scale vertically by adding resources, or horizontally by connecting additional servers and distributing Odoo instances across them.

Vertical Scaling

The simplest scaling strategy is increasing the resources on your existing server. oCore benefits most from additional RAM and CPU.

When to Scale Up

| Symptom | Likely Bottleneck | Action |
| --- | --- | --- |
| Slow API responses | CPU or database | Increase vCPU count |
| Out-of-memory errors | RAM | Increase RAM, tune PostgreSQL |
| Slow file operations | Disk I/O | Upgrade to NVMe SSD |
| SSH gateway timeouts | Network or CPU | Increase bandwidth or vCPU |

Resource Allocation Guide

| Managed Instances | vCPU | RAM | Storage |
| --- | --- | --- | --- |
| 1-10 | 2 | 4 GB | 40 GB SSD |
| 10-25 | 4 | 8 GB | 100 GB SSD |
| 25-50 | 4 | 16 GB | 200 GB SSD |
| 50-100 | 8 | 32 GB | 500 GB NVMe |
| 100+ | 16+ | 64+ GB | 1+ TB NVMe |

The primary scaling factor is the number of active Odoo instances. Each instance consumes resources on its target server, but oCore itself needs resources for API handling, job processing, monitoring, and SSH gateway sessions.
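As a convenience, the tiers above can be expressed as a small lookup. This is only a rough starting point derived from the table, and `size_for_instances` is a hypothetical helper, not part of oCore:

```shell
# Rough server-sizing lookup derived from the tier table above.
size_for_instances() {
  n=$1
  if   [ "$n" -le 10 ];  then echo "2 vCPU / 4 GB RAM / 40 GB SSD"
  elif [ "$n" -le 25 ];  then echo "4 vCPU / 8 GB RAM / 100 GB SSD"
  elif [ "$n" -le 50 ];  then echo "4 vCPU / 16 GB RAM / 200 GB SSD"
  elif [ "$n" -le 100 ]; then echo "8 vCPU / 32 GB RAM / 500 GB NVMe"
  else                        echo "16+ vCPU / 64+ GB RAM / 1+ TB NVMe"
  fi
}

size_for_instances 30   # → 4 vCPU / 16 GB RAM / 200 GB SSD
```

Treat the output as a floor, not a guarantee: heavily used instances can need far more than their share of a tier.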

Container Resource Limits

Set resource limits on oCore containers to prevent a single service from consuming all available resources:

docker-compose.prod.yml
services:
  backend:
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 4G
        reservations:
          cpus: "1"
          memory: 1G

  frontend:
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 2G
        reservations:
          cpus: "0.5"
          memory: 512M

  postgres:
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 8G
        reservations:
          cpus: "1"
          memory: 2G
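Once limits are in place, actual usage can be spot-checked against them with docker stats:

```
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```

If a service consistently runs near its memory limit, raise the limit or scale the server; otherwise the kernel will OOM-kill the container when it crosses the limit.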

Managing Multiple Servers

oCore supports connecting multiple target servers for distributing Odoo instances. The oCore control plane (backend + frontend + database) runs on one server, while Odoo instances are deployed to connected servers.

Architecture

          ┌───────────────────────────────┐
          │      oCore control plane      │
          │ backend + frontend + database │
          └───────────────┬───────────────┘
                          │ SSH
         ┌────────────────┼────────────────┐
         ▼                ▼                ▼
    ┌──────────┐     ┌──────────┐     ┌──────────┐
    │ Server A │     │ Server B │     │ Server C │
    │   Odoo   │     │   Odoo   │     │   Odoo   │
    │ instances│     │ instances│     │ instances│
    └──────────┘     └──────────┘     └──────────┘

Adding Servers

Add servers through the oCore dashboard or API. Each server needs:

  • SSH access from the oCore control plane
  • Docker installed and running
  • Sufficient resources for the Odoo instances it will host
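A quick preflight from the control plane can confirm these prerequisites before you add the server (the hostname and SSH user below are placeholders):

```
# SSH reachability and Docker daemon version
ssh deploy@server-b.example.com 'docker info --format "{{.ServerVersion}}"'

# Available CPU, memory, and disk for Docker data
ssh deploy@server-b.example.com 'nproc && free -h && df -h /var/lib/docker'
```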

Instance Distribution

When deploying a new Odoo instance, select which server should host it. Consider:

  • Available resources -- Check CPU, RAM, and disk usage on each server
  • Geographic proximity -- Place instances closer to their users
  • Workload isolation -- Separate high-traffic instances from development/staging

Separating PostgreSQL

For deployments managing 50+ instances, consider running PostgreSQL on a dedicated server or using a managed database service.

External PostgreSQL

Point oCore at an external database by updating the DATABASE_URL:

.env
DATABASE_URL=postgres://ocore:password@db.internal.example.com:5432/ocore?sslmode=require

Remove the postgres service from docker-compose.prod.yml and the depends_on reference from the backend service.
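Before cutting over, it is worth verifying that the backend host can actually reach the external database, for example with psql and the same connection string:

```
psql "postgres://ocore:password@db.internal.example.com:5432/ocore?sslmode=require" -c "SELECT 1"
```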

Managed Database Services

All major cloud providers offer managed PostgreSQL:

| Provider | Service | Notes |
| --- | --- | --- |
| AWS | RDS for PostgreSQL | Automated backups, Multi-AZ |
| Google Cloud | Cloud SQL | Automated backups, HA |
| DigitalOcean | Managed Databases | Simple setup, daily backups |
| Hetzner | Managed PostgreSQL | Cost-effective EU hosting |

Use PostgreSQL 15 or 16 with the uuid-ossp extension enabled. Enable SSL for connections from the oCore backend.
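If you manage the database yourself, the extension is a single statement, and pg_stat_ssl lets you confirm that your own session is actually using SSL:

```sql
-- Run once on the oCore database, as a role allowed to create extensions:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Returns true when the current connection is over SSL:
SELECT ssl FROM pg_stat_ssl WHERE pid = pg_backend_pid();
```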

Performance Optimization

Database Connection Limits

As you add more servers and instances, the backend processes more concurrent operations. Increase PostgreSQL's max_connections or use connection pooling:

postgresql.conf
max_connections = 300
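As an alternative to raising max_connections, a connection pooler such as PgBouncer keeps the server-side connection count low. A minimal sketch in transaction pooling mode, reusing the external-database hostname from the example above (all values illustrative, not oCore defaults):

```
; pgbouncer.ini
[databases]
ocore = host=db.internal.example.com port=5432 dbname=ocore

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
pool_mode = transaction
default_pool_size = 50
max_client_conn = 500
```

DATABASE_URL would then point at the pooler on port 6432 instead of PostgreSQL directly.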

Background Job Throughput

oCore uses River for background job processing (deployments, backups, health checks). The default worker count handles typical workloads. For large deployments with many concurrent operations, the job system scales with available CPU cores.

Monitoring at Scale

With multiple servers, centralize your monitoring:

  • Aggregate logs from all servers into a single destination (Loki, ELK, CloudWatch)
  • Monitor the oCore health endpoint from an external service
  • Track per-server resource usage through the oCore dashboard
  • Set up alerts for any server becoming unreachable
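An external check can be as simple as a cron job probing the health endpoint; the /health path and the alert script here are placeholders, not defined by oCore:

```
*/5 * * * * curl -fsS --max-time 10 https://ocore.example.com/health || /usr/local/bin/alert.sh "oCore health check failed"
```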

See the Monitoring guide for detailed configuration.

Scaling Checklist

Before scaling your deployment:

  • Identify the bottleneck (CPU, RAM, disk, network, database)
  • Review database tuning settings
  • Ensure backups are running successfully
  • Set up monitoring and alerting
  • Test your disaster recovery procedure
  • Document your server inventory and instance distribution