
Overview

TXCloud implements rate limiting to ensure fair usage and maintain API stability. Rate limits vary by plan and endpoint type.

Rate Limit Tiers

Plan       | Requests/Minute | Requests/Day | Burst Limit
Free       | 60              | 1,000        | 10
Starter    | 300             | 10,000       | 50
Growth     | 1,000           | 100,000      | 100
Enterprise | Custom          | Custom       | Custom
Burst limit is the maximum number of concurrent requests allowed.
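If you send requests in parallel, it can help to cap concurrency on your side so you stay under your plan's burst limit. Below is a minimal sketch of a promise-based limiter; the txcloud.watchlist.screen call and the entities array are illustrative placeholders, not part of the documented SDK surface.
// Minimal promise-based concurrency limiter: runs at most `max` tasks at once.
function createLimiter(max) {
  let active = 0;
  const queue = [];

  const next = () => {
    if (active >= max || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task()
      .then(resolve, reject)
      .finally(() => {
        active--;
        next();
      });
  };

  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}

// Example: keep at most 10 requests in flight (the Free plan's burst limit).
// `entities` and txcloud.watchlist.screen are illustrative placeholders.
const limit = createLimiter(10);
const results = await Promise.all(
  entities.map((entity) => limit(() => txcloud.watchlist.screen(entity)))
);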

Endpoint-Specific Limits

Some endpoints have additional limits:
Endpoint                 | Limit     | Window
POST /identity/verify    | 100/min   | Per API key
POST /transactions/score | 1,000/min | Per API key
POST /watchlist/screen   | 500/min   | Per API key
GET /*/analytics/*       | 30/min    | Per API key

Rate Limit Headers

Every API response includes rate limit information in the headers:
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 950
X-RateLimit-Reset: 1705312800
X-RateLimit-Window: 60
Header                | Description
X-RateLimit-Limit     | Maximum requests allowed in the window
X-RateLimit-Remaining | Requests remaining in the current window
X-RateLimit-Reset     | Unix timestamp when the window resets
X-RateLimit-Window    | Window duration in seconds
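If you call the API over plain HTTP (for example with fetch), you can read these headers on each response to pace your requests proactively. A minimal sketch; the host name, request body, and environment variable are placeholders:
const response = await fetch('https://api.txcloud.example/v1/transactions/score', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.TXCLOUD_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(transaction), // `transaction` is a placeholder payload
});

// Inspect the rate limit headers on every response
const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
const resetAt = Number(response.headers.get('X-RateLimit-Reset')); // Unix timestamp, seconds

if (remaining === 0) {
  const waitMs = Math.max(0, resetAt * 1000 - Date.now());
  console.log(`Quota exhausted; window resets in ${Math.ceil(waitMs / 1000)}s`);
}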

Handling Rate Limits

When you exceed the rate limit, you’ll receive a 429 Too Many Requests response:
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Too many requests. Please retry after 30 seconds.",
    "type": "rate_limit_error",
    "retry_after": 30
  }
}

Implementing Retry Logic

// Simple sleep helper used by the retry loop
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function makeRequestWithRetry(fn, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      // Retry only on 429 responses, and only while attempts remain
      if (error.status === 429 && attempt < maxRetries - 1) {
        const retryAfter = Number(error.headers?.['retry-after']) || 30;
        console.log(`Rate limited. Retrying in ${retryAfter}s...`);
        await sleep(retryAfter * 1000);
        continue;
      }
      throw error;
    }
  }
}

// Usage
const verification = await makeRequestWithRetry(() => 
  txcloud.identity.verify({ ... })
);

Best Practices

Use exponential backoff for retries:
const delay = Math.min(1000 * Math.pow(2, attempt), 30000);
await sleep(delay);
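Combined with the retry helper above, a backoff-based variant might look like the following sketch (the jitter is a common addition to avoid synchronized retries, not something the API requires):
async function makeRequestWithBackoff(fn, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status !== 429 || attempt === maxRetries - 1) throw error;

      // Exponential backoff capped at 30s, plus a little random jitter so
      // parallel workers don't all retry at the same moment.
      const delay = Math.min(1000 * Math.pow(2, attempt), 30000);
      await sleep(delay + Math.random() * 250);
    }
  }
}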
Cache responses when possible to reduce API calls:
  • Cache verification results by ID
  • Cache user risk profiles
  • Cache configuration data
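As a sketch, a small in-memory TTL cache could look like this; the txcloud.identity.retrieve call and the five-minute TTL are illustrative assumptions:
// Tiny in-memory TTL cache; swap for Redis or similar in production.
const cache = new Map();

async function cached(key, ttlMs, fetcher) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value;

  const value = await fetcher();
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

// Example: cache a verification result for 5 minutes
// (txcloud.identity.retrieve is a placeholder for however you re-fetch a result by ID)
const verification = await cached(`verification:${id}`, 5 * 60 * 1000, () =>
  txcloud.identity.retrieve(id)
);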
Use batch endpoints when available:
// Instead of multiple single calls
await txcloud.watchlist.screenBatch({
  entities: [entity1, entity2, entity3]
});
Track your API usage in the dashboard to predict rate limit issues.

Increasing Your Limits

Need higher rate limits? Options include:
  1. Upgrade your plan — Higher plans have higher limits
  2. Request a limit increase — Contact sales for custom limits
  3. Optimize your integration — Reduce unnecessary calls

Monitoring Usage

Track your API usage in the dashboard:
// Get your current usage
const usage = await txcloud.developers.usage.summary({
  period: '30d'
});

console.log('Total requests:', usage.total_requests);
console.log('Rate limit hits:', usage.rate_limit_hits);
Set up alerts in your dashboard to get notified when you’re approaching rate limits.
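You can also alert from your own code using the usage summary above; in this sketch the 1% threshold and the notifyOps function are arbitrary placeholders:
// Warn when throttled requests exceed 1% of total requests over the period.
// notifyOps() is a placeholder for your own alerting hook.
const hitRate = usage.rate_limit_hits / Math.max(usage.total_requests, 1);

if (hitRate > 0.01) {
  await notifyOps(
    `TXCloud rate limiting: ${usage.rate_limit_hits} throttled requests ` +
      `(${(hitRate * 100).toFixed(2)}% of ${usage.total_requests}) over the period`
  );
}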