Rate Limiting Guide

Overview

The SCORM API implements rate limiting to ensure fair usage and protect system resources. Rate limits are applied per tenant and can be enforced either locally (in-memory) or in a distributed fashion (via Redis).

Rate Limit Configuration

Default Limits

User Type               Requests per Hour   Requests per Minute
---------------------   -----------------   -------------------
API Key (read scope)    1,000               20
API Key (write scope)   500                 10
API Key (admin scope)   5,000               100
Clerk (web users)       2,000               40

Custom Limits

Rate limits can be customized per tenant via environment variables or tenant configuration:

# Per-tenant overrides (optional)
SCORM_RATE_LIMIT_MAX_REQUESTS=2000
SCORM_RATE_LIMIT_WINDOW_SECONDS=3600
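
As an illustration, the limiter can resolve these overrides at startup, falling back to the default limits. The function name and fallback values below are assumptions for the sketch, not the actual implementation:

```typescript
// Sketch: reading the per-tenant override variables documented above.
// readRateLimitConfig and its defaults (1000 req / 3600 s) are illustrative.
function readRateLimitConfig(env: Record<string, string | undefined> = process.env) {
  return {
    maxRequests: parseInt(env.SCORM_RATE_LIMIT_MAX_REQUESTS ?? '1000', 10),
    windowSeconds: parseInt(env.SCORM_RATE_LIMIT_WINDOW_SECONDS ?? '3600', 10),
  };
}
```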

Rate Limit Headers

All API responses include rate limit information in headers:

X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1640995200

Header Descriptions

  • X-RateLimit-Limit: Maximum number of requests allowed in the current window
  • X-RateLimit-Remaining: Number of requests remaining in the current window
  • X-RateLimit-Reset: Unix timestamp when the rate limit window resets

Rate Limit Enforcement

Local Rate Limiting

When Redis is not configured, rate limiting uses in-memory storage:

  • Pros: Simple setup, no external dependencies
  • Cons: Limits reset on server restart, not shared across instances
  • Use Case: Single-instance deployments, development environments
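
The in-memory approach can be sketched as a fixed-window counter keyed by tenant. The class and method names below are illustrative, not the actual implementation, but the sketch shows why limits reset on restart: the Map lives in process memory.

```typescript
// Illustrative fixed-window in-memory limiter; not the actual implementation.
class InMemoryRateLimiter {
  private windows = new Map<string, { count: number; resetAt: number }>();

  constructor(private maxRequests: number, private windowSeconds: number) {}

  // Returns the values used to populate the X-RateLimit-* headers.
  check(tenantId: string, now = Date.now()) {
    const state = this.windows.get(tenantId);
    if (!state || now >= state.resetAt) {
      // First request in a fresh window for this tenant
      const resetAt = now + this.windowSeconds * 1000;
      this.windows.set(tenantId, { count: 1, resetAt });
      return { allowed: true, remaining: this.maxRequests - 1, resetAt };
    }
    if (state.count >= this.maxRequests) {
      return { allowed: false, remaining: 0, resetAt: state.resetAt };
    }
    state.count++;
    return { allowed: true, remaining: this.maxRequests - state.count, resetAt: state.resetAt };
  }
}
```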

Distributed Rate Limiting (Redis)

When Upstash Redis is configured, rate limiting is distributed:

  • Pros: Shared limits across all instances, persistent across restarts
  • Cons: Requires Redis configuration
  • Use Case: Multi-instance deployments, production environments

Configuration:

UPSTASH_REDIS_REST_URL=https://your-redis.upstash.io
UPSTASH_REDIS_REST_TOKEN=your-token
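
Conceptually, the distributed limiter shares one counter per tenant per window across all instances, typically via an atomic Redis increment. The sketch below uses a Map standing in for Redis so the windowing logic is visible offline; it is a conceptual model, not the actual implementation:

```typescript
// Conceptual sketch of a shared fixed-window counter; a Map stands in for Redis.
// In production this would be an atomic INCR against Upstash Redis, and
// a Redis EXPIRE would evict old window keys automatically.
function distributedCheck(
  store: Map<string, number>, // shared by all instances (Redis in reality)
  tenantId: string,
  maxRequests: number,
  windowSeconds: number,
  now = Date.now(),
) {
  // Bucketing the key by window id means every instance increments the same counter.
  const windowId = Math.floor(now / (windowSeconds * 1000));
  const key = `ratelimit:${tenantId}:${windowId}`;
  const count = (store.get(key) ?? 0) + 1; // INCR
  store.set(key, count);
  return { allowed: count <= maxRequests, remaining: Math.max(0, maxRequests - count) };
}
```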

Rate Limit Responses

Success Response

When within limits, requests proceed normally with rate limit headers:

HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1640995200

{
  "data": { ... }
}

Rate Limit Exceeded

When the rate limit is exceeded, the API returns 429 Too Many Requests:

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640995200
Retry-After: 3600

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Limit: 1000 requests per hour",
    "details": {
      "limit": 1000,
      "remaining": 0,
      "reset_at": "2025-01-12T10:00:00Z",
      "retry_after_seconds": 3600
    }
  }
}

Handling Rate Limits

Retry Logic

When you receive a 429 response, implement exponential backoff:

async function makeRequestWithRetry(url: string, options: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Out of retries: stop without sleeping pointlessly
      if (attempt === maxRetries - 1) break;

      const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
      const waitTime = retryAfter * Math.pow(2, attempt); // Exponential backoff on top of Retry-After

      console.log(`Rate limited. Waiting ${waitTime} seconds before retry ${attempt + 1}/${maxRetries}`);
      await new Promise(resolve => setTimeout(resolve, waitTime * 1000));
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded for rate limit');
}

Best Practices

  1. Monitor Rate Limit Headers: Always check X-RateLimit-Remaining to avoid hitting limits
  2. Implement Exponential Backoff: Use the Retry-After header for retry timing
  3. Batch Requests: Combine multiple operations into single requests when possible
  4. Cache Responses: Cache frequently accessed data to reduce API calls
  5. Use Webhooks: Subscribe to webhooks instead of polling for updates
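
Best practice 4 (cache responses) can be as simple as a small TTL cache in front of GET calls. The class below is a generic sketch with illustrative names, not part of any official client:

```typescript
// Minimal TTL cache sketch for best practice #4 (cache responses).
// Illustrative only; not part of an official SCORM API client.
class TtlCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string, now = Date.now()): T | undefined {
    const entry = this.entries.get(key);
    if (!entry || now >= entry.expiresAt) {
      this.entries.delete(key); // drop stale entries lazily
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T, now = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```

A cache hit costs nothing against the rate limit; only misses reach the API.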

Rate Limit by Endpoint

Some endpoints have stricter rate limits due to resource intensity:

Endpoint                              Rate Limit
-----------------------------------   ------------------------
POST /api/v1/packages                 10/hour (write scope)
POST /api/v1/packages/multipart/*     5/hour (write scope)
PUT /api/v1/sessions/{id}             100/minute (write scope)
GET /api/v1/packages                  100/minute (read scope)
GET /api/v1/sessions/{id}             200/minute (read scope)
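
Clients can mirror this table in code so request schedulers know each endpoint's budget. The structure below is an assumption for illustration; only the values come from the table above:

```typescript
// Endpoint budgets mirroring the table above; the shape itself is illustrative.
const endpointLimits: Record<string, { max: number; windowSeconds: number }> = {
  'POST /api/v1/packages':             { max: 10,  windowSeconds: 3600 },
  'POST /api/v1/packages/multipart/*': { max: 5,   windowSeconds: 3600 },
  'PUT /api/v1/sessions/{id}':         { max: 100, windowSeconds: 60 },
  'GET /api/v1/packages':              { max: 100, windowSeconds: 60 },
  'GET /api/v1/sessions/{id}':         { max: 200, windowSeconds: 60 },
};
```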

Monitoring Rate Limits

Check Current Usage

# Make a request and check headers
curl -I https://api.scorm.com/api/v1/packages \
  -H "X-API-Key: your-key" \
  | grep -i "x-ratelimit"

Programmatic Monitoring

async function checkRateLimitStatus(apiKey: string) {
  // Note: this request itself counts against the rate limit
  const response = await fetch('https://api.scorm.com/api/v1/packages', {
    headers: { 'X-API-Key': apiKey }
  });

  return {
    limit: parseInt(response.headers.get('X-RateLimit-Limit') || '0', 10),
    remaining: parseInt(response.headers.get('X-RateLimit-Remaining') || '0', 10),
    reset: new Date(parseInt(response.headers.get('X-RateLimit-Reset') || '0', 10) * 1000)
  };
}

Increasing Rate Limits

For Development

Rate limits can be increased for development/testing:

  1. Contact support with your tenant ID
  2. Provide justification for higher limits
  3. Limits can be temporarily increased for testing

For Production

Production rate limit increases require:

  1. Business justification
  2. Usage patterns analysis
  3. Potential upgrade to higher tier (if applicable)

Rate Limit Exemptions

Certain system endpoints are exempt from rate limiting:

  • /api/health: Health check endpoint
  • /api/docs: API documentation endpoint

Troubleshooting

Issue: Hitting Rate Limits Frequently

Solutions:

  1. Review your API usage patterns
  2. Implement request batching
  3. Add response caching
  4. Use webhooks instead of polling
  5. Request rate limit increase if justified

Issue: Rate Limits Not Working

Check:

  1. Verify Redis configuration (if using distributed rate limiting)
  2. Check environment variables
  3. Review server logs for rate limiting errors
  4. Ensure middleware is properly configured

Issue: Rate Limits Reset Unexpectedly

Causes:

  • Server restart (local rate limiting only)
  • Redis connection issues (distributed rate limiting)
  • Clock synchronization issues

Solutions:

  • Use distributed rate limiting (Redis) for persistence
  • Implement proper error handling and retries
  • Monitor Redis connection health

Last Updated: 2025-01-12
Version: 1.0

For API authentication details, see API Key Security Guide.