Rate Limiting Guide
Overview
The SCORM API implements rate limiting to ensure fair usage and protect system resources. Limits are applied per tenant and can be enforced either locally (in-memory) or in a distributed fashion (via Redis).
Rate Limit Configuration
Default Limits
| User Type | Requests per Hour | Requests per Minute |
|---|---|---|
| API Key (read scope) | 1,000 | 20 |
| API Key (write scope) | 500 | 10 |
| API Key (admin scope) | 5,000 | 100 |
| Clerk (web users) | 2,000 | 40 |
Custom Limits
Rate limits can be customized per tenant via environment variables or tenant configuration:
```shell
# Per-tenant overrides (optional)
SCORM_RATE_LIMIT_MAX_REQUESTS=2000
SCORM_RATE_LIMIT_WINDOW_SECONDS=3600
```
Rate Limit Headers
All API responses include rate limit information in headers:
```http
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1640995200
```
Header Descriptions
- X-RateLimit-Limit: Maximum number of requests allowed in the current window
- X-RateLimit-Remaining: Number of requests remaining in the current window
- X-RateLimit-Reset: Unix timestamp when the rate limit window resets
Rate Limit Enforcement
Local Rate Limiting
When Redis is not configured, rate limiting uses in-memory storage:
- Pros: Simple setup, no external dependencies
- Cons: Limits reset on server restart, not shared across instances
- Use Case: Single-instance deployments, development environments
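Local mode behaves like a fixed-window counter held in process memory. The class below is a minimal sketch of that idea (class and method names are illustrative, not the actual middleware); it also makes clear why limits reset on restart: the `Map` holding the counters is lost with the process.

```typescript
// Minimal in-memory fixed-window limiter (illustrative names only).
type WindowState = { count: number; windowStart: number };

class LocalRateLimiter {
  private state = new Map<string, WindowState>();

  constructor(
    private maxRequests: number,
    private windowSeconds: number,
  ) {}

  // Returns true if the request is allowed. nowSeconds is injectable
  // so the window logic can be exercised deterministically in tests.
  check(tenantId: string, nowSeconds: number = Date.now() / 1000): boolean {
    const windowStart =
      Math.floor(nowSeconds / this.windowSeconds) * this.windowSeconds;
    const entry = this.state.get(tenantId);
    if (!entry || entry.windowStart !== windowStart) {
      // New window for this tenant: start counting from 1.
      this.state.set(tenantId, { count: 1, windowStart });
      return true;
    }
    if (entry.count >= this.maxRequests) return false;
    entry.count += 1;
    return true;
  }
}
```

Because the counters live only in this process, two instances behind a load balancer would each grant the full limit, which is the motivation for the Redis-backed mode below.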
Distributed Rate Limiting (Redis)
When Upstash Redis is configured, rate limiting is distributed:
- Pros: Shared limits across all instances, persistent across restarts
- Cons: Requires Redis configuration
- Use Case: Multi-instance deployments, production environments
Configuration:
```shell
UPSTASH_REDIS_REST_URL=https://your-redis.upstash.io
UPSTASH_REDIS_REST_TOKEN=your-token
```
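What makes the Redis mode distributed is that every instance derives the same counter key for a given tenant and window, so increments from all instances land on one shared counter (typically an `INCR` with an expiry of the window length). The key scheme below is a hypothetical illustration, not the service's actual key format:

```typescript
// Derive a shared fixed-window counter key for a tenant.
// All instances computing the same key increment the same Redis counter.
function rateLimitKey(
  tenantId: string,
  nowSeconds: number,
  windowSeconds: number,
): string {
  // Align to the start of the current window so keys rotate per window.
  const windowStart =
    Math.floor(nowSeconds / windowSeconds) * windowSeconds;
  return `ratelimit:${tenantId}:${windowStart}`;
}
```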
Rate Limit Responses
Success Response
When within limits, requests proceed normally with rate limit headers:
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1640995200

{
  "data": { ... }
}
```
Rate Limit Exceeded
When the rate limit is exceeded, the API returns 429 Too Many Requests:
```http
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1736676000
Retry-After: 3600

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Limit: 1000 requests per hour",
    "details": {
      "limit": 1000,
      "remaining": 0,
      "reset_at": "2025-01-12T10:00:00Z",
      "retry_after_seconds": 3600
    }
  }
}
```
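A client can decode this body into a typed structure before deciding how long to wait. The helper below is a sketch that assumes exactly the field names shown in the example response:

```typescript
// Typed view of the 429 error body's `details` object.
interface RateLimitError {
  limit: number;
  remaining: number;
  resetAt: Date;
  retryAfterSeconds: number;
}

// Parse the JSON error body shown above (field names per the example).
function parseRateLimitError(body: string): RateLimitError {
  const details = JSON.parse(body).error.details;
  return {
    limit: details.limit,
    remaining: details.remaining,
    resetAt: new Date(details.reset_at),
    retryAfterSeconds: details.retry_after_seconds,
  };
}
```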
Handling Rate Limits
Retry Logic
When you receive a 429 response, implement exponential backoff:
```typescript
async function makeRequestWithRetry(url: string, options: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status === 429) {
      const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
      const waitTime = retryAfter * Math.pow(2, attempt); // Exponential backoff
      console.log(`Rate limited. Waiting ${waitTime} seconds before retry ${attempt + 1}/${maxRetries}`);
      await new Promise(resolve => setTimeout(resolve, waitTime * 1000));
      continue;
    }
    return response;
  }
  throw new Error('Max retries exceeded for rate limit');
}
```
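The backoff arithmetic in that loop can be isolated into a small pure helper, which also makes it easy to cap the wait so a long `Retry-After` doubled across attempts does not stall the client indefinitely. The 300-second cap below is an illustrative choice, not something the API mandates:

```typescript
// Capped exponential backoff seeded by the server's Retry-After value.
// capSeconds is an arbitrary illustrative ceiling.
function backoffSeconds(
  retryAfter: number,
  attempt: number,
  capSeconds = 300,
): number {
  return Math.min(retryAfter * Math.pow(2, attempt), capSeconds);
}
```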
Best Practices
- Monitor Rate Limit Headers: Always check X-RateLimit-Remaining to avoid hitting limits
- Implement Exponential Backoff: Use the Retry-After header for retry timing
- Batch Requests: Combine multiple operations into single requests when possible
- Cache Responses: Cache frequently accessed data to reduce API calls
- Use Webhooks: Subscribe to webhooks instead of polling for updates
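The "Cache Responses" practice can be as simple as a TTL map in front of your API client, so repeated reads of the same resource inside the TTL cost zero requests. The class below is a generic sketch (the names are illustrative, not part of the API):

```typescript
// Minimal TTL cache: entries expire ttlSeconds after they are stored.
class TtlCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlSeconds: number) {}

  // nowSeconds is injectable for deterministic testing.
  get(key: string, nowSeconds: number = Date.now() / 1000): T | undefined {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= nowSeconds) return undefined;
    return entry.value;
  }

  set(key: string, value: T, nowSeconds: number = Date.now() / 1000): void {
    this.entries.set(key, { value, expiresAt: nowSeconds + this.ttlSeconds });
  }
}
```

A caller would check the cache before issuing a request and store the response body on a miss; each cache hit leaves X-RateLimit-Remaining untouched.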
Rate Limit by Endpoint
Some endpoints have stricter rate limits due to resource intensity:
| Endpoint | Rate Limit |
|---|---|
| POST /api/v1/packages | 10/hour (write scope) |
| POST /api/v1/packages/multipart/* | 5/hour (write scope) |
| PUT /api/v1/sessions/{id} | 100/minute (write scope) |
| GET /api/v1/packages | 100/minute (read scope) |
| GET /api/v1/sessions/{id} | 200/minute (read scope) |
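For clients that want to budget calls against these stricter per-endpoint limits, the table can be encoded as a lookup. The patterns and matching rule below are assumptions made for the sketch, not the server's actual routing logic:

```typescript
// Per-endpoint limits from the table above, keyed by "METHOD path" patterns.
const endpointLimits: Array<{ pattern: RegExp; limit: string }> = [
  { pattern: /^POST \/api\/v1\/packages\/multipart\//, limit: "5/hour" },
  { pattern: /^POST \/api\/v1\/packages$/, limit: "10/hour" },
  { pattern: /^PUT \/api\/v1\/sessions\/[^/]+$/, limit: "100/minute" },
  { pattern: /^GET \/api\/v1\/packages$/, limit: "100/minute" },
  { pattern: /^GET \/api\/v1\/sessions\/[^/]+$/, limit: "200/minute" },
];

// Return the endpoint-specific limit, or undefined if only the
// default tenant-wide limit applies.
function limitFor(method: string, path: string): string | undefined {
  const key = `${method} ${path}`;
  return endpointLimits.find((e) => e.pattern.test(key))?.limit;
}
```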
Monitoring Rate Limits
Check Current Usage
```shell
# Make a request and check headers
curl -I https://api.scorm.com/api/v1/packages \
  -H "X-API-Key: your-key" \
  | grep -i "x-ratelimit"
```
Programmatic Monitoring
```typescript
async function checkRateLimitStatus(apiKey: string) {
  const response = await fetch('https://api.scorm.com/api/v1/packages', {
    headers: { 'X-API-Key': apiKey }
  });
  return {
    limit: parseInt(response.headers.get('X-RateLimit-Limit') || '0', 10),
    remaining: parseInt(response.headers.get('X-RateLimit-Remaining') || '0', 10),
    reset: new Date(parseInt(response.headers.get('X-RateLimit-Reset') || '0', 10) * 1000)
  };
}
```
Increasing Rate Limits
For Development
Rate limits can be increased for development/testing:
- Contact support with your tenant ID
- Provide justification for higher limits
- Limits can be temporarily increased for testing
For Production
Production rate limit increases require:
- Business justification
- Usage patterns analysis
- Potential upgrade to higher tier (if applicable)
Rate Limit Exemptions
Certain system endpoints are exempt from rate limiting:
- /api/health: Health check endpoint
- /api/docs: API documentation endpoint
Troubleshooting
Issue: Hitting Rate Limits Frequently
Solutions:
- Review your API usage patterns
- Implement request batching
- Add response caching
- Use webhooks instead of polling
- Request rate limit increase if justified
Issue: Rate Limits Not Working
Check:
- Verify Redis configuration (if using distributed rate limiting)
- Check environment variables
- Review server logs for rate limiting errors
- Ensure middleware is properly configured
Issue: Rate Limits Reset Unexpectedly
Causes:
- Server restart (local rate limiting only)
- Redis connection issues (distributed rate limiting)
- Clock synchronization issues
Solutions:
- Use distributed rate limiting (Redis) for persistence
- Implement proper error handling and retries
- Monitor Redis connection health
Last Updated: 2025-01-12
Version: 1.0
For API authentication details, see API Key Security Guide.