Scaling
Scale your oCore deployment vertically, distribute instances across multiple servers, and manage resource limits.
oCore is designed to run on a single server for most deployments. As your workload grows, you can scale vertically by adding resources to that server, or horizontally by connecting additional servers and distributing Odoo instances across them.
Vertical Scaling
The simplest scaling strategy is increasing the resources on your existing server. oCore benefits most from additional RAM and CPU.
When to Scale Up
| Symptom | Likely Bottleneck | Action |
|---|---|---|
| Slow API responses | CPU or database | Increase vCPU count; review database tuning |
| Out-of-memory errors | RAM | Increase RAM, tune PostgreSQL |
| Slow file operations | Disk I/O | Upgrade to NVMe SSD |
| SSH gateway timeouts | Network or CPU | Increase bandwidth or vCPU |
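Before resizing, it helps to confirm which resource is actually saturated. Standard Linux tools give a quick read (a sketch; exact package names vary by distribution):

```shell
# CPU and load average: sustained load above the vCPU count suggests CPU pressure
uptime

# Memory: low "available" memory plus swap activity points at RAM
free -h

# Disk usage and I/O on the root volume
df -h /
iostat -x 5 3   # from the sysstat package; watch %util and await
```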
Resource Allocation Guide
| Managed Instances | vCPU | RAM | Storage |
|---|---|---|---|
| 1-10 | 2 | 4 GB | 40 GB SSD |
| 11-25 | 4 | 8 GB | 100 GB SSD |
| 26-50 | 4 | 16 GB | 200 GB SSD |
| 51-100 | 8 | 32 GB | 500 GB NVMe |
| 100+ | 16+ | 64+ GB | 1+ TB NVMe |
The primary scaling factor is the number of active Odoo instances. Each instance consumes resources on its target server, but oCore itself needs resources for API handling, job processing, monitoring, and SSH gateway sessions.
Container Resource Limits
Set resource limits on oCore containers to prevent a single service from consuming all available resources:
```yaml
services:
  backend:
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 4G
        reservations:
          cpus: "1"
          memory: 1G
  frontend:
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 2G
        reservations:
          cpus: "0.5"
          memory: 512M
  postgres:
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 8G
        reservations:
          cpus: "1"
          memory: 2G
```
Managing Multiple Servers
oCore supports connecting multiple target servers for distributing Odoo instances. The oCore control plane (backend + frontend + database) runs on one server, while Odoo instances are deployed to connected servers.
Architecture
Adding Servers
Add servers through the oCore dashboard or API. Each server needs:
- SSH access from the oCore control plane
- Docker installed and running
- Sufficient resources for the Odoo instances it will host
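You can verify these prerequisites from the control-plane host before adding the server (a sketch; `deploy@10.0.0.5` is a placeholder for your own SSH user and server address):

```shell
SERVER="deploy@10.0.0.5"   # hypothetical user and host

# 1. SSH access from the control plane (key-based, non-interactive)
ssh -o BatchMode=yes "$SERVER" true && echo "SSH OK"

# 2. Docker installed and the daemon running
ssh "$SERVER" "docker info --format '{{.ServerVersion}}'"

# 3. Headroom for the instances it will host
ssh "$SERVER" "nproc; free -h; df -h /"
```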
Instance Distribution
When deploying a new Odoo instance, select which server should host it. Consider:
- Available resources -- Check CPU, RAM, and disk usage on each server
- Geographic proximity -- Place instances closer to their users
- Workload isolation -- Separate high-traffic instances from development/staging
Separating PostgreSQL
For deployments managing 50+ instances, consider running PostgreSQL on a dedicated server or using a managed database service.
External PostgreSQL
Point oCore at an external database by updating the DATABASE_URL:
```bash
DATABASE_URL=postgres://ocore:password@db.internal.example.com:5432/ocore?sslmode=require
```
Remove the `postgres` service from `docker-compose.prod.yml` and the `depends_on` reference from the `backend` service.
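Before restarting the stack against the external database, it's worth confirming the backend host can reach it over SSL. A sketch using `psql` (hostname and credentials are placeholders matching the example connection string):

```shell
# Should print the server version; a TLS or auth failure will surface here
psql "postgres://ocore:password@db.internal.example.com:5432/ocore?sslmode=require" \
  -c "SELECT version();"
```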
Managed Database Services
All major cloud providers offer managed PostgreSQL:
| Provider | Service | Notes |
|---|---|---|
| AWS | RDS for PostgreSQL | Automated backups, Multi-AZ |
| Google Cloud | Cloud SQL | Automated backups, HA |
| DigitalOcean | Managed Databases | Simple setup, daily backups |
| Hetzner | Managed PostgreSQL | Cost-effective EU hosting |
Use PostgreSQL 15 or 16 with the uuid-ossp extension enabled. Enable SSL for connections from the oCore backend.
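On a managed service you typically enable the extension yourself. A sketch with `psql` (the connection string is a placeholder):

```shell
psql "postgres://ocore:password@db.internal.example.com:5432/ocore?sslmode=require" \
  -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";' \
  -c 'SELECT extname, extversion FROM pg_extension;'
```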
Performance Optimization
Database Connection Limits
As you add more servers and instances, the backend processes more concurrent operations. Increase PostgreSQL's max_connections or use connection pooling:
```ini
max_connections = 300
```
Background Job Throughput
oCore uses River for background job processing (deployments, backups, health checks). The default worker count handles typical workloads. For large deployments with many concurrent operations, the job system scales with available CPU cores.
Monitoring at Scale
With multiple servers, centralize your monitoring:
- Aggregate logs from all servers into a single destination (Loki, ELK, CloudWatch)
- Monitor the oCore health endpoint from an external service
- Track per-server resource usage through the oCore dashboard
- Set up alerts for any server becoming unreachable
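A minimal external probe can be a cron-driven `curl` against the backend (a sketch; the `/health` path and the `ocore.example.com` hostname are assumptions, substitute your deployment's actual health endpoint):

```shell
#!/bin/sh
# Exit non-zero unless the health endpoint answers 200 within 10 seconds,
# so the calling scheduler or alerting hook can react to failures.
URL="https://ocore.example.com/health"   # hypothetical endpoint
STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$URL")
if [ "$STATUS" != "200" ]; then
  echo "oCore health check failed: HTTP $STATUS" >&2
  exit 1
fi
```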
See the Monitoring guide for detailed configuration.
Scaling Checklist
Before scaling your deployment:
- Identify the bottleneck (CPU, RAM, disk, network, database)
- Review database tuning settings
- Ensure backups are running successfully
- Set up monitoring and alerting
- Test your disaster recovery procedure
- Document your server inventory and instance distribution