Production Usage

When running RoadRunner in a production environment, keep the following tips and suggestions in mind to ensure optimal performance and stability.

State and memory

One crucial aspect to keep in mind is that state and memory are not shared between different worker instances, but they do persist across requests within a single worker. As a result, it is essential to take precautions such as closing all descriptors and avoiding state pollution to prevent memory leaks and keep the application stable.

Here are some tips to keep in mind:

  • Make sure you close all descriptors (especially on fatal exceptions).

  • Watch out for memory leaks; you need to be more selective about the components you use. Workers will be restarted in case of a memory leak, but it should not be difficult to avoid this problem altogether by designing your application properly.

  • Avoid state pollution (i.e., caching globals or user data in memory).

  • Database connections and any pipes/sockets are potential points of failure. An easy way to deal with this is to close all connections after every iteration, as sketched below. Note that this is not the most performant solution.

Consider calling gc_collect_cycles after every execution if you want to keep memory usage low. However, this will slow down execution a lot. Use with caution.
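For illustration, here is a minimal sketch of a long-running PSR-7 worker loop that closes per-request resources and optionally calls gc_collect_cycles after each iteration. It assumes the spiral/roadrunner-http and nyholm/psr7 packages are installed; handle() and closeConnections() are hypothetical placeholders for your own application code.

```php
<?php
// Minimal sketch of a long-running PSR-7 worker loop.
// Assumes spiral/roadrunner-http and nyholm/psr7 are installed;
// handle() and closeConnections() are hypothetical application helpers.

declare(strict_types=1);

use Nyholm\Psr7\Factory\Psr17Factory;
use Spiral\RoadRunner\Http\PSR7Worker;
use Spiral\RoadRunner\Worker;

require __DIR__ . '/vendor/autoload.php';

$factory = new Psr17Factory();
$worker  = new PSR7Worker(Worker::create(), $factory, $factory, $factory);

while (true) {
    try {
        $request = $worker->waitRequest();
    } catch (\Throwable $e) {
        // Malformed request: report it and keep the worker alive.
        $worker->respond(new \Nyholm\Psr7\Response(400));
        continue;
    }

    if ($request === null) { // graceful stop signal from RoadRunner
        break;
    }

    try {
        $worker->respond(handle($request));   // hypothetical request handler
    } catch (\Throwable $e) {
        // Application error: report it to RoadRunner.
        $worker->getWorker()->error((string) $e);
    } finally {
        closeConnections();   // hypothetical cleanup: DB connections, sockets, file handles
        gc_collect_cycles();  // optional: lower memory usage at the cost of speed
    }
}
```

The important part is the finally block: cleanup runs whether the request succeeded or failed, so no state or open descriptors leak into the next iteration.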

Useful Tips

  • Make sure you are NOT listening on 0.0.0.0 in the RPC service (unless in Docker).

  • Connect to a worker using pipes for better performance (Unix sockets are just a bit slower).

  • Adjust your pool timings (e.g., allocate_timeout, destroy_timeout) to values that suit your workload; see the config sketch after this list.

  • Set the number of workers equal to the number of CPU threads in your system. If your application is IO-bound, choose the number heuristically, based on the memory available on the server.

  • Consider using max_jobs for your workers if you experience application stability or memory issues over time.

  • RoadRunner gains roughly 40% more performance when keep-alive connections are used.

  • Set the memory limit at least 10-20% below max_memory_usage.

  • Since RoadRunner runs workers from the CLI, you need to enable OPcache for CLI with opcache.enable_cli=1.

  • Make sure to use the health check endpoint when running rr in a cloud environment.

  • Use the user option in the server plugin configuration to start worker processes as the specified user on Linux-based systems. Note that in this case RoadRunner itself should be started as root so that it can fork-exec processes as a different user.

  • If your application is mostly IO-bound (disk, network, etc.), you can allocate as many workers as you have memory for. Workers are cheap: a hello-world worker uses no more than ~26 MB of RSS memory.

  • For CPU-bound operations, look at the average CPU load and choose a number of workers that consumes 90-95% of the CPU, leaving a few percent for Go's garbage collector (not strictly necessary).

  • If your workers have roughly constant latency, you can calculate the number of workers needed to handle the target load.
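To tie several of the tips above together, here is a minimal .rr.yaml sketch. It is not a complete production configuration; the addresses, user name, worker counts, and limits are illustrative assumptions and should be tuned for your own workload.

```yaml
# Minimal sketch of a production-oriented .rr.yaml; values are illustrative.
rpc:
  listen: tcp://127.0.0.1:6001   # keep RPC off 0.0.0.0 unless running in Docker

server:
  command: "php worker.php"
  relay: pipes                   # pipes are the fastest relay; unix sockets are slightly slower
  user: www-data                 # start workers as a dedicated user (RR itself must run as root)

http:
  address: 0.0.0.0:8080
  pool:
    num_workers: 8               # roughly one worker per CPU thread for CPU-bound apps
    max_jobs: 1000               # recycle a worker after N requests to contain slow leaks
    allocate_timeout: 60s
    destroy_timeout: 60s
    supervisor:
      max_worker_memory: 128     # restart workers above this limit (MB)

status:
  address: 127.0.0.1:2114        # health check endpoint for cloud environments
```

With the status plugin enabled, load balancers and orchestrators can probe the health check endpoint instead of hitting the application itself.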
