Latency
Overview
This document provides a comprehensive guide to understanding, measuring, and optimizing latency in OpenAlgo. After extensive performance engineering, we've reduced platform overhead by 95% (from 117ms to 5-10ms), making OpenAlgo one of the fastest retail algo trading platforms available.

Latency Concepts
Three Types of Latency
1. Platform Latency (Internal Processing)
Definition: Time spent processing within OpenAlgo, excluding broker API calls.
Formula:
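Platform Latency = Total Execution Time − Broker API Time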
Components:
API key verification (cached): ~1ms
Request validation: ~1-2ms
Symbol lookup (cached): ~0.5ms
Response formatting: ~1ms
Async logging: ~1-2ms
Target: < 10ms
Current Performance: 5-10ms ✅
2. Broker API Latency (External Processing)
Definition: Time spent communicating with and waiting for the broker's servers.
Formula:
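Broker API Latency = Time Broker Response Received − Time Broker Request Sent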
Components:
Network latency (one-way): ~20-40ms
Broker order validation: ~5-10ms
Exchange submission: ~10-20ms
Network latency (return): ~20-40ms
Typical Range: 50-80ms
Cannot be optimized by OpenAlgo (external dependency)

3. Total Client Roundtrip (End-to-End)
Definition: Complete time from client request to client receiving response.
Formula:
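Total Client Roundtrip = Client↔Server Network Time + Platform Latency + Broker API Latency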
Breakdown Example:
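As an illustration, assuming a client on a low-latency link to the OpenAlgo server:
Client → OpenAlgo network: ~2-5ms
Platform processing: 5-10ms
Broker API: 50-80ms
OpenAlgo → Client network: ~2-5ms
Total: roughly 60-100ms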
Measurement Methodology
How OpenAlgo Measures Latency
OpenAlgo uses high-precision timestamps to track latency at multiple points:
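In outline, the measurement wraps the broker call with perf-counter timestamps so broker time and platform time can be separated. A minimal sketch of this style of timing (function names are illustrative, not OpenAlgo's actual internals):

```python
import time

def timed_order_flow(order, send_to_broker):
    """Illustrative timing sketch: separate the broker call from platform-side work."""
    t_start = time.perf_counter()                 # request received by the platform

    # ... platform work before the broker call (validation, symbol lookup, ...)
    t_before_broker = time.perf_counter()
    response = send_to_broker(order)              # external broker API round trip
    t_after_broker = time.perf_counter()
    # ... platform work after the broker call (response formatting, async logging, ...)

    t_end = time.perf_counter()
    broker_ms = (t_after_broker - t_before_broker) * 1000
    total_ms = (t_end - t_start) * 1000
    platform_ms = total_ms - broker_ms            # platform overhead = total minus broker time
    return response, {"platform_ms": platform_ms, "broker_ms": broker_ms, "total_ms": total_ms}

def fake_broker(order):
    time.sleep(0.06)                              # simulate a ~60ms broker round trip
    return {"status": "success"}

print(timed_order_flow({"symbol": "SBIN", "quantity": 1}, fake_broker)[1])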
What Gets Stored in the Database
The order_latency table stores comprehensive metrics:
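A rough sketch of the kind of columns involved, written with SQLAlchemy; the column names below are inferred from the dashboard fields and are illustrative, not the exact schema:

```python
from sqlalchemy import Column, DateTime, Float, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class OrderLatency(Base):
    """Illustrative shape of the order_latency table (column names are assumptions)."""
    __tablename__ = "order_latency"

    id = Column(Integer, primary_key=True)
    order_id = Column(String)                 # broker order id
    broker = Column(String)                   # broker name
    order_type = Column(String)               # PLACE / MODIFY / CANCEL / SMART / BASKET
    rtt_ms = Column(Float)                    # broker API round-trip time
    validation_latency_ms = Column(Float)     # platform-side validation time
    response_latency_ms = Column(Float)       # platform-side response formatting time
    overhead_ms = Column(Float)               # total platform overhead
    total_latency_ms = Column(Float)          # end-to-end time recorded by the platform
    status = Column(String)                   # success / failure (feeds the success-rate metric)
    timestamp = Column(DateTime)
```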
Calculation Formulas Used in Code
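In terms of those illustrative fields, the relationships are roughly as follows (a sketch, not OpenAlgo's exact code):

```python
def compute_latency_metrics(validation_ms: float, response_ms: float, rtt_ms: float) -> dict:
    """Derive the stored metrics from the raw timings (illustrative)."""
    overhead_ms = validation_ms + response_ms      # platform-only work
    total_ms = overhead_ms + rtt_ms                # platform overhead + broker round trip
    return {"overhead_ms": overhead_ms, "rtt_ms": rtt_ms, "total_ms": total_ms}

print(compute_latency_metrics(2.0, 1.5, 62.0))     # {'overhead_ms': 3.5, 'rtt_ms': 62.0, 'total_ms': 65.5}
```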
Performance Metrics
Current Performance (Post-Optimization)
Live Mode
Platform overhead reduced from ~117ms to 5-10ms per order.
Improvement: 95% reduction in platform overhead
Sandbox/Analyze Mode
Improvement: 90% reduction in platform overhead
Performance by Order Type
Order Type | Platform Overhead | Broker API | Total
PLACE      | 5-8ms             | 50-70ms    | 60-75ms
MODIFY     | 5-7ms             | 40-60ms    | 50-65ms
CANCEL     | 4-6ms             | 30-50ms    | 40-55ms
SMART      | 6-9ms             | 50-70ms    | 60-80ms
BASKET     | 7-10ms per order  | 50-70ms    | 60-80ms
Optimization Details
1. API Key Verification Caching
Problem: Argon2 verification took 20-50ms per key, multiplied by the number of keys checked.
Solution:
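Cache the verification result so Argon2 runs only on a cache miss. A minimal sketch, assuming a TTL cache keyed by a SHA256 digest of the presented key (cachetools and the helper names are illustrative, not OpenAlgo's actual code):

```python
import hashlib
from cachetools import TTLCache

# Cache verified (and invalid) keys for a few minutes so Argon2 runs only on cache misses.
_verified_keys = TTLCache(maxsize=1024, ttl=300)

def verify_api_key(api_key: str, argon2_verify) -> bool:
    """argon2_verify(api_key) -> bool is the slow (20-50ms) Argon2 check."""
    digest = hashlib.sha256(api_key.encode()).hexdigest()   # never cache the plaintext key
    if digest in _verified_keys:
        return _verified_keys[digest]
    result = argon2_verify(api_key)        # expensive path, taken once per TTL window
    _verified_keys[digest] = result        # invalid keys are cached too, avoiding repeated work
    return result

def invalidate_api_key_cache():
    """Call when keys are rotated or revoked."""
    _verified_keys.clear()
```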
Security Maintained:
SHA256 hashing prevents plaintext storage
TTL ensures credentials expire
Cache invalidated on key changes
Invalid keys cached separately
Performance Gain: 90-100ms → 1ms (99% improvement)
2. Symbol Lookup Caching
Problem: Database query on every order for symbol validation.
Solution:
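Keep the symbol record in memory so the database is hit only on the first lookup. A sketch (the lookup helper below is a placeholder, not OpenAlgo's actual function):

```python
from functools import lru_cache

def query_symbol_from_db(symbol: str, exchange: str) -> dict:
    """Placeholder for the real database lookup (5-10ms)."""
    return {"symbol": symbol, "exchange": exchange, "token": "12345"}

@lru_cache(maxsize=4096)
def get_symbol_info(symbol: str, exchange: str) -> dict:
    """First call hits the database; repeats are served from memory (~0.5ms)."""
    return query_symbol_from_db(symbol, exchange)

def clear_symbol_cache():
    """Clear periodically, since symbols rarely change during trading hours."""
    get_symbol_info.cache_clear()
```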
Rationale: Symbols rarely change during trading hours.
Performance Gain: 5-10ms → 0.5ms (90% improvement)
3. Request-Level Position Caching
Problem: Same position queried 4-5 times in a single order flow.
Solution:
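Cache positions for the lifetime of a single request so repeated lookups in the same order flow reuse one query. One way to do this in Flask is the per-request g object; a sketch with illustrative helper names:

```python
from flask import Flask, g

app = Flask(__name__)

def fetch_position_from_broker(symbol: str) -> dict:
    """Placeholder for the real position query."""
    return {"symbol": symbol, "quantity": 0}

def get_position(symbol: str) -> dict:
    """Query each position at most once per request; later calls reuse the cached value."""
    cache = getattr(g, "_position_cache", None)
    if cache is None:
        cache = g._position_cache = {}
    if symbol not in cache:
        cache[symbol] = fetch_position_from_broker(symbol)
    return cache[symbol]

# flask.g is torn down automatically at the end of each request,
# so the cache never leaks between orders.
```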
Scope: Cache cleared after each request.
Performance Gain: 20-30ms saved per order (eliminated 3-4 redundant queries)
4. Asynchronous SocketIO Emissions
Problem: Main thread blocked while broadcasting to WebSocket clients.
Solution:
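Move the WebSocket broadcast onto a background task so the HTTP handler can return immediately. A sketch assuming Flask-SocketIO (event and function names are illustrative):

```python
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def emit_order_event(payload: dict) -> None:
    """Broadcast to WebSocket clients without blocking the order response."""
    socketio.start_background_task(socketio.emit, "order_event", payload)

# The HTTP handler returns immediately; the emit happens on a background worker.
```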
Performance Gain: 10-20ms (main thread no longer waits)
5. Async Logging and Alerts
Problem: Database logging and Telegram alerts blocking order response.
Solution:
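Hand the database write and the Telegram alert to a background worker so the order response is not delayed. A sketch using a small thread pool (function names are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor

_background = ThreadPoolExecutor(max_workers=2)

def log_order_to_db(order: dict) -> None:
    """Placeholder for the real database insert."""

def send_telegram_alert(order: dict) -> None:
    """Placeholder for the real Telegram notification."""

def finalize_order(order: dict) -> None:
    """Queue logging and alerting in the background; the API response is not held up."""
    _background.submit(log_order_to_db, order)
    _background.submit(send_telegram_alert, order)
```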
Performance Gain: 5-10ms (operations run in background)
Monitoring and Troubleshooting
Using the Latency Dashboard
Navigate to /latency in your OpenAlgo instance:
Features:
Real-time order latency tracking
Last 100 orders with detailed breakdown
Color-coded performance indicators
Performance metrics
Average RTT (broker API time)
Success rate
SLA compliance (% orders under 150ms)
Detailed breakdown modal
Click any order to see full latency breakdown
Platform overhead vs broker API time
Validation, response, and overhead metrics
Performance Indicators
Troubleshooting High Latency
If Platform Overhead > 15ms:
Check cache hit rates
Look for database query issues
Profile specific endpoints
If Broker API > 100ms:
Check server location
Mumbai servers should see 50-70ms
Other locations may see 80-120ms
Test broker connectivity
Check broker API status
Look for broker-side slowdowns
Verify API rate limits not exceeded
If Client RTT >> Total Latency:
Network issues between client and OpenAlgo
Flask server overloaded
Check CPU/memory usage
Consider scaling up
TLS/SSL handshake overhead
Use keep-alive connections (see the client-side sketch after this list)
Enable HTTP/2
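A client-side sketch for measuring the roundtrip with a keep-alive session (URL and payload are illustrative):

```python
import time
import requests

# A keep-alive session reuses the TCP/TLS connection, removing handshake
# overhead from every order after the first.
session = requests.Session()

def measure_client_rtt(url: str, payload: dict) -> float:
    """Wall-clock round trip as seen by the client, in milliseconds."""
    start = time.perf_counter()
    session.post(url, json=payload, timeout=5)
    return (time.perf_counter() - start) * 1000

# Compare this number with the platform + broker latency on the /latency dashboard;
# a large gap points to the client <-> OpenAlgo network, not the platform.
```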
Best Practices
For Optimal Performance
Host close to broker infrastructure
Mumbai for Indian brokers
Singapore for some international brokers
Use adequate server resources
Minimum: 2 cores, 4GB RAM
Recommended: 4 cores, 8GB RAM for production
Enable caching appropriately
Monitor cache sizes
Use connection pooling for databases
Already configured for PostgreSQL
SQLite uses NullPool (appropriate for file-based DB)
For Development
Don't use ngrok for latency testing
Adds 500-700ms of overhead
Fine for development, not performance measurement
Test with realistic data
Use actual symbols and exchanges
Test during market hours for realistic broker latency
Profile before optimizing
Use the latency dashboard
Check after each optimization
Compare before/after metrics
For Trading Strategies
Know your strategy's latency requirements
HFT (high-frequency trading): < 10ms (needs co-location)
Scalping: < 100ms (OpenAlgo is suitable ✅)
MFT (medium-frequency trading): < 200ms (OpenAlgo is excellent ✅)
LFT (low-frequency trading): < 1000ms (OpenAlgo is more than sufficient ✅)
Focus on strategy logic, not micro-optimization
50ms vs 60ms rarely matters for retail strategies
Strategy robustness matters more
Test under realistic conditions
Market hours have different latency than off-hours
High volatility affects broker processing time
Formula Reference
Quick Reference
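Platform Latency: 5-10ms (target < 10ms)
Broker API Latency: 50-80ms (external, cannot be optimized by OpenAlgo)
Total Client Roundtrip: platform + broker + client↔server network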
Estimation Formulas
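Platform Overhead = Total Execution Time − Broker API Time
Expected Total per Order ≈ Platform Overhead (5-10ms) + Broker API (50-80ms) ≈ 55-90ms
Total Client Roundtrip ≈ Expected Total per Order + Client↔Server Network Time
Basket Orders ≈ 7-10ms platform overhead per order, plus broker time for each leg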
Conclusion
With a 95% reduction in platform overhead, OpenAlgo's latency is now limited by external factors:
Broker API response time (50-80ms) - Primary bottleneck
Network latency (20-40ms each way) - Geography-dependent
Platform processing (5-10ms) - Optimized ✅
For retail and institutional traders running MFT/LFT strategies, this performance is more than adequate. Focus on strategy development, risk management, and execution consistency rather than chasing microseconds.
Remember: The fastest trade isn't always the most profitable one. Strategy quality matters far more than latency for 99% of traders.