
Rate Limiting

Planned Feature

[DEVELOPER NOTE] Rate limiting is not currently implemented in the VesuvioPay API. This documentation describes the planned rate limiting system that will be introduced in a future release. Current API usage is not rate-limited.

VesuvioPay will implement rate limiting to ensure fair usage and maintain API performance for all users. This guide explains how the planned rate limits will work, how to handle them, and best practices for building resilient applications.

Rate Limit Tiers

VesuvioPay offers different rate limit tiers based on your subscription plan:

Tier       | Requests per Minute | Requests per Hour | Burst Allowance
-----------|---------------------|-------------------|----------------
Standard   | 100                 | 5,000             | 150
Premium    | 1,000               | 50,000            | 1,500
Enterprise | Custom              | Custom            | Custom

Burst Allowance

A burst allowance lets you temporarily exceed your rate limit for short periods, which is useful for absorbing traffic spikes without errors.
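
Conceptually, a burst allowance matches the classic token-bucket model: the bucket holds up to the burst size in tokens and refills at the steady per-minute rate. A minimal sketch of the idea (the class and parameters are illustrative, not part of the VesuvioPay API):

```javascript
// Token-bucket sketch: capacity models the burst allowance,
// refillPerSecond models the steady rate (e.g. 100/60 for 100 req/min).
class TokenBucket {
  constructor(capacity, refillPerSecond, now = Date.now()) {
    this.capacity = capacity;
    this.refillPerSecond = refillPerSecond;
    this.tokens = capacity;   // start with a full burst available
    this.lastRefill = now;
  }

  tryConsume(now = Date.now()) {
    // Refill proportionally to the time elapsed, capped at capacity.
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;  // request allowed
    }
    return false;   // burst exhausted; request would be rejected
  }
}
```

With Standard-tier numbers, `new TokenBucket(150, 100 / 60)` would allow short spikes of up to 150 requests while sustaining 100 per minute.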

Rate Limit Scopes

Rate limits are applied at the API key level, meaning:

  • Each API key has its own rate limit quota
  • Test and production keys have separate quotas
  • Multiple stores (with different API keys) have independent rate limits
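
Server-side, per-key scoping amounts to keeping an independent counter per API key. A simplified fixed-window sketch of the idea (illustrative only; the key names are placeholders, and the planned implementation uses a sliding window rather than fixed minutes):

```javascript
// Each API key gets its own quota; keys never share a counter.
class PerKeyLimiter {
  constructor(limitPerMinute) {
    this.limitPerMinute = limitPerMinute;
    this.counters = new Map(); // apiKey -> { count, windowStart }
  }

  allow(apiKey, now = Date.now()) {
    // Fixed one-minute windows, for brevity.
    const windowStart = Math.floor(now / 60000) * 60000;
    const entry = this.counters.get(apiKey);
    if (!entry || entry.windowStart !== windowStart) {
      this.counters.set(apiKey, { count: 1, windowStart });
      return true;
    }
    if (entry.count < this.limitPerMinute) {
      entry.count++;
      return true;
    }
    return false; // this key's quota is exhausted; other keys are unaffected
  }
}
```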

Rate Limit Headers

[DEVELOPER NOTE] VesuvioPay includes rate limit information in response headers for all API requests:

Header                  | Description                                | Example
------------------------|--------------------------------------------|-----------
X-RateLimit-Limit       | Maximum requests allowed per minute        | 100
X-RateLimit-Remaining   | Remaining requests in the current window   | 87
X-RateLimit-Reset       | Unix timestamp when the limit resets       | 1678901234
X-RateLimit-Retry-After | Seconds until you can retry (only on 429)  | 45

Example Response Headers

HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1678901234

{
  "success": true,
  "data": { ... }
}

HTTP 429 Rate Limit Exceeded

When you exceed your rate limit, the API returns a 429 Too Many Requests status code:

Error Response Example

{
  "success": false,
  "message": "Rate limit exceeded. Please retry after 45 seconds.",
  "errorCode": "RATE_LIMIT_EXCEEDED",
  "retryAfter": 45,
  "limit": 100,
  "resetAt": "2024-03-15T14:30:00Z"
}

Response Headers on 429

HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1678901234
X-RateLimit-Retry-After: 45
Retry-After: 45

{
  "success": false,
  "message": "Rate limit exceeded. Please retry after 45 seconds.",
  "errorCode": "RATE_LIMIT_EXCEEDED"
}

Handling Rate Limits

cURL Example

#!/bin/bash

make_request() {
  response=$(curl -s -w "\n%{http_code}" \
    -H "X-API-Key: sk_test_your_secret_key" \
    https://api.vesuviopay.com/api/v1/customers/123e4567-e89b-12d3-a456-426614174000)

  http_code=$(echo "$response" | tail -n1)
  body=$(echo "$response" | sed '$d')

  if [ "$http_code" -eq 429 ]; then
    retry_after=$(echo "$body" | jq -r '.retryAfter')
    echo "Rate limit exceeded. Waiting $retry_after seconds..."
    sleep "$retry_after"
    make_request # Retry
  else
    echo "$body"
  fi
}

make_request

JavaScript Example

class VesuvioPayClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseUrl = 'https://api.vesuviopay.com/api/v1';
  }

  async makeRequest(endpoint, options = {}) {
    const url = `${this.baseUrl}${endpoint}`;
    const headers = {
      'X-API-Key': this.apiKey,
      'Content-Type': 'application/json',
      ...options.headers
    };

    try {
      const response = await fetch(url, {
        ...options,
        headers
      });

      // Check rate limit headers
      const remaining = response.headers.get('X-RateLimit-Remaining');
      const limit = response.headers.get('X-RateLimit-Limit');

      console.log(`Rate limit: ${remaining}/${limit} remaining`);

      // Handle 429 rate limit exceeded
      if (response.status === 429) {
        // Fall back to 60 seconds if the header is missing or unparsable
        const retryAfter = parseInt(response.headers.get('X-RateLimit-Retry-After') || '60', 10);
        console.warn(`Rate limit exceeded. Retrying after ${retryAfter}s`);

        await this.sleep(retryAfter * 1000);
        return this.makeRequest(endpoint, options); // Retry
      }

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      return await response.json();
    } catch (error) {
      console.error('Request failed:', error);
      throw error;
    }
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  async getCustomer(customerId) {
    return this.makeRequest(`/customers/${customerId}`);
  }
}

// Usage
const client = new VesuvioPayClient('sk_test_your_secret_key');

async function example() {
  try {
    const customer = await client.getCustomer('123e4567-e89b-12d3-a456-426614174000');
    console.log('Customer:', customer);
  } catch (error) {
    console.error('Error:', error);
  }
}

example();

C# Example

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

public class VesuvioPayClient
{
    private readonly HttpClient _httpClient;
    private readonly string _apiKey;
    private const string BaseUrl = "https://api.vesuviopay.com/api/v1";

    public VesuvioPayClient(string apiKey)
    {
        _apiKey = apiKey;
        _httpClient = new HttpClient();
        _httpClient.DefaultRequestHeaders.Add("X-API-Key", _apiKey);
    }

    public async Task<T> MakeRequestAsync<T>(string endpoint, HttpMethod method = null)
    {
        method ??= HttpMethod.Get;
        var url = $"{BaseUrl}{endpoint}";
        var request = new HttpRequestMessage(method, url);

        try
        {
            var response = await _httpClient.SendAsync(request);

            // Read rate limit headers
            if (response.Headers.TryGetValues("X-RateLimit-Remaining", out var remainingValues))
            {
                var remaining = remainingValues.FirstOrDefault();
                var limit = response.Headers.GetValues("X-RateLimit-Limit").FirstOrDefault();
                Console.WriteLine($"Rate limit: {remaining}/{limit} remaining");
            }

            // Handle 429 rate limit exceeded
            if (response.StatusCode == HttpStatusCode.TooManyRequests)
            {
                var retryAfter = 60; // Default fallback

                if (response.Headers.TryGetValues("X-RateLimit-Retry-After", out var retryValues))
                {
                    retryAfter = int.Parse(retryValues.FirstOrDefault() ?? "60");
                }

                Console.WriteLine($"Rate limit exceeded. Retrying after {retryAfter}s");
                await Task.Delay(retryAfter * 1000);

                return await MakeRequestAsync<T>(endpoint, method); // Retry
            }

            response.EnsureSuccessStatusCode();

            var content = await response.Content.ReadAsStringAsync();
            // Case-insensitive matching so lowercase JSON keys map to C# properties
            return JsonSerializer.Deserialize<T>(content,
                new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Request failed: {ex.Message}");
            throw;
        }
    }

    public async Task<Customer> GetCustomerAsync(string customerId)
    {
        return await MakeRequestAsync<Customer>($"/customers/{customerId}");
    }
}

// Usage
public class Program
{
    public static async Task Main()
    {
        var client = new VesuvioPayClient("sk_test_your_secret_key");

        try
        {
            var customer = await client.GetCustomerAsync("123e4567-e89b-12d3-a456-426614174000");
            Console.WriteLine($"Customer: {customer.Name}");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error: {ex.Message}");
        }
    }
}

public class Customer
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

Best Practices

1. Implement Exponential Backoff

Instead of simple retry logic, use exponential backoff to reduce load on the API:

async function makeRequestWithBackoff(endpoint, maxRetries = 5) {
  let retries = 0;
  let delay = 1000; // Start with 1 second

  while (retries < maxRetries) {
    try {
      const response = await fetch(endpoint, {
        headers: { 'X-API-Key': apiKey } // apiKey defined elsewhere
      });

      if (response.status === 429) {
        retries++;
        console.log(`Rate limited. Retry ${retries}/${maxRetries} after ${delay}ms`);
        await sleep(delay);
        delay *= 2; // Exponential backoff: 1s, 2s, 4s, 8s, 16s
        continue;
      }

      return await response.json();
    } catch (error) {
      console.error('Request failed:', error);
      throw error;
    }
  }

  throw new Error('Max retries exceeded');
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

2. Monitor Rate Limit Headers

Track your rate limit usage proactively:

function checkRateLimitWarning(response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);

  const percentUsed = ((limit - remaining) / limit) * 100;

  if (percentUsed > 90) {
    console.warn('⚠️ Rate limit usage > 90%! Slow down requests.');
  } else if (percentUsed > 75) {
    console.log('⚠️ Rate limit usage > 75%. Consider throttling.');
  }
}

3. Implement Request Queuing

Use a queue to control request rate:

class RateLimitedQueue {
  constructor(maxRequestsPerMinute) {
    this.queue = [];
    this.processing = false;
    this.requestsThisMinute = 0;
    this.maxRequestsPerMinute = maxRequestsPerMinute;
    this.resetInterval = 60000; // 1 minute

    setInterval(() => {
      this.requestsThisMinute = 0;
    }, this.resetInterval);
  }

  async add(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;

    if (this.requestsThisMinute >= this.maxRequestsPerMinute) {
      console.log('Rate limit reached, waiting...');
      setTimeout(() => this.process(), 1000);
      return;
    }

    this.processing = true;
    const { requestFn, resolve, reject } = this.queue.shift();

    try {
      this.requestsThisMinute++;
      const result = await requestFn();
      resolve(result);
    } catch (error) {
      reject(error);
    } finally {
      this.processing = false;
      this.process(); // Process next item
    }
  }
}

// Usage
const queue = new RateLimitedQueue(100); // 100 requests per minute

async function fetchCustomer(customerId) {
  return queue.add(() =>
    fetch(`https://api.vesuviopay.com/api/v1/customers/${customerId}`, {
      headers: { 'X-API-Key': apiKey }
    }).then(r => r.json())
  );
}

4. Batch Requests When Possible

Instead of making multiple individual requests, batch them when the API supports it:

// ❌ Bad: Multiple individual requests
for (const customerId of customerIds) {
  await fetchCustomer(customerId); // 100 requests for 100 customers
}

// ✅ Good: Single batch request (when available)
const customers = await fetchCustomersBatch(customerIds); // 1 request

[DEVELOPER NOTE] Consider implementing batch endpoints for common operations like:

  • Fetching multiple customers
  • Bulk product updates
  • Multiple order status checks

5. Cache API Responses

Reduce API calls by caching responses:

class CachedVesuvioPayClient {
  constructor(apiKey, cacheTTL = 300000) { // 5 minutes default
    this.apiKey = apiKey;
    this.cache = new Map();
    this.cacheTTL = cacheTTL;
  }

  getCacheKey(endpoint) {
    return endpoint;
  }

  async get(endpoint) {
    const cacheKey = this.getCacheKey(endpoint);
    const cached = this.cache.get(cacheKey);

    if (cached && Date.now() - cached.timestamp < this.cacheTTL) {
      console.log('Cache hit:', endpoint);
      return cached.data;
    }

    console.log('Cache miss, fetching:', endpoint);
    const response = await fetch(`https://api.vesuviopay.com/api/v1${endpoint}`, {
      headers: { 'X-API-Key': this.apiKey }
    });

    const data = await response.json();

    this.cache.set(cacheKey, {
      data,
      timestamp: Date.now()
    });

    return data;
  }
}

6. Use Webhooks Instead of Polling

Instead of polling for changes, use webhooks to receive updates:

// ❌ Bad: Polling every 10 seconds
setInterval(async () => {
  const order = await fetchOrder(orderId);
  if (order.status === 'completed') {
    handleOrderCompleted(order);
  }
}, 10000); // 360 requests per hour!

// ✅ Good: Use webhooks
// Configure webhook in VesuvioPay dashboard
// Receive POST request when order is completed
app.post('/webhooks/vesuviopay', (req, res) => {
  const { event, data } = req.body;

  if (event === 'order.completed') {
    handleOrderCompleted(data);
  }

  res.status(200).send('OK');
});

Webhook Benefits

Webhooks eliminate the need for polling, reducing API calls by 99% in many cases. See the Webhooks Guide for setup instructions.

7. Distribute Load Across Time

Avoid bursts by spreading requests over time:

async function processLargeDataset(items) {
  const delayBetweenRequests = 1000; // 1 second between requests

  for (const item of items) {
    await processItem(item);
    await sleep(delayBetweenRequests);
  }
}

// Or use a proper job queue for background processing

8. Handle Rate Limits Gracefully

Provide clear feedback to users when rate limits are hit:

try {
  const data = await apiClient.makeRequest('/customers/123');
} catch (error) {
  if (error.statusCode === 429) {
    showUserMessage(
      'We\'re experiencing high traffic. Please try again in a moment.',
      'warning'
    );
  } else {
    showUserMessage('An error occurred. Please try again.', 'error');
  }
}

Monitoring Rate Limit Usage

Track Your Usage

Implement monitoring to track your rate limit consumption:

class RateLimitMonitor {
  constructor() {
    this.stats = {
      requests: 0,
      rateLimited: 0,
      avgRemaining: 0
    };
  }

  recordRequest(response) {
    this.stats.requests++;

    if (response.status === 429) {
      this.stats.rateLimited++;
    }

    const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
    // Crude running estimate: average the latest value with the previous estimate
    this.stats.avgRemaining = (this.stats.avgRemaining + remaining) / 2;
  }

  getStats() {
    return {
      ...this.stats,
      rateLimitPercentage: (this.stats.rateLimited / this.stats.requests) * 100
    };
  }

  shouldAlert() {
    const stats = this.getStats();
    return stats.rateLimitPercentage > 5 || stats.avgRemaining < 10;
  }
}

Set Up Alerts

Configure alerts for rate limit issues:

  • Alert when remaining < 10% of limit
  • Alert when 429 errors exceed 5% of total requests
  • Track rate limit trends over time
  • Monitor different API endpoints separately
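
The thresholds above can be folded into a small helper that turns aggregate stats into alert reasons (the function and field names are illustrative, not part of any SDK):

```javascript
// Returns the list of triggered alert conditions for a set of
// aggregate rate-limit stats; an empty array means all is well.
function rateLimitAlerts({ limit, remaining, totalRequests, rateLimited429s }) {
  const alerts = [];
  if (remaining < limit * 0.1) {
    alerts.push('remaining below 10% of limit');
  }
  if (totalRequests > 0 && (rateLimited429s / totalRequests) * 100 > 5) {
    alerts.push('429 errors exceed 5% of requests');
  }
  return alerts;
}
```

Feed it the latest header values and your request counters on a schedule, and page whenever the returned array is non-empty.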

Rate Limit Optimization Strategies

1. Use Appropriate Tier

Upgrade to a higher tier if you consistently hit rate limits:

Standard (100/min)    → Premium (1,000/min)    → Enterprise (custom)

2. Optimize API Usage

  • Fetch only what you need: Use field filtering if available
  • Reduce unnecessary calls: Check if data is already available
  • Combine related operations: Use batch endpoints
  • Cache static data: Product catalogs, store settings, etc.

3. Use Multiple API Keys

For large applications with multiple services:

// Separate API keys for different services
const customerServiceClient = new VesuvioPayClient(CUSTOMER_SERVICE_KEY);
const inventoryServiceClient = new VesuvioPayClient(INVENTORY_SERVICE_KEY);
const orderServiceClient = new VesuvioPayClient(ORDER_SERVICE_KEY);

Important

Each API key is tied to a specific store. You can only use multiple keys if you have multiple stores or if you create separate API keys for the same store (when available).

Testing Rate Limits

When testing your rate limit handling:

// Simulate rate limit for testing
async function simulateRateLimit() {
  const mockResponse = {
    status: 429,
    headers: new Headers({
      'X-RateLimit-Limit': '100',
      'X-RateLimit-Remaining': '0',
      // Unix timestamp in seconds, one minute from now
      'X-RateLimit-Reset': String(Math.floor(Date.now() / 1000) + 60),
      'X-RateLimit-Retry-After': '60'
    }),
    json: async () => ({
      success: false,
      message: 'Rate limit exceeded',
      errorCode: 'RATE_LIMIT_EXCEEDED',
      retryAfter: 60
    })
  };

  return mockResponse;
}

// Test your retry logic
async function testRetryLogic() {
  const response = await simulateRateLimit();
  await handleResponse(response); // Should trigger retry logic
}

Frequently Asked Questions

What happens when I exceed my rate limit?

You'll receive a 429 Too Many Requests response with details on when you can retry. Your requests won't be lost, but you'll need to implement retry logic.

Are rate limits enforced per API key or per IP address?

Rate limits are enforced per API key. Each key has its own quota, regardless of the IP address making requests.

Do failed requests count against my rate limit?

Yes, all requests (successful or failed) count against your rate limit, including 400 and 500 errors.

Does the rate limit reset immediately after one minute?

[DEVELOPER NOTE] The implementation will use a sliding window approach, meaning the limit is calculated based on the last 60 seconds, not a fixed minute boundary.
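
The difference matters for clients: with a sliding window, capacity frees up gradually as old requests age out, rather than all at once at the top of the minute. The approach can be sketched as follows (illustrative only, not the server implementation):

```javascript
// Sliding-window limiter: allows at most `limit` requests in any
// rolling window of `windowMs` milliseconds.
class SlidingWindowLimiter {
  constructor(limit, windowMs = 60000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = []; // accepted request times, oldest first
  }

  allow(now = Date.now()) {
    const cutoff = now - this.windowMs;
    // Drop requests that have aged out of the window.
    while (this.timestamps.length > 0 && this.timestamps[0] <= cutoff) {
      this.timestamps.shift();
    }
    if (this.timestamps.length < this.limit) {
      this.timestamps.push(now);
      return true;
    }
    return false; // would produce a 429
  }
}
```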

Can I request a rate limit increase?

Yes! Enterprise customers can request custom rate limits. Contact sales@vesuviopay.com for details.

How can I check my current rate limit tier?

Check your VesuvioPay Dashboard under Settings > API Keys or inspect the X-RateLimit-Limit header in API responses.

Are webhooks subject to rate limits?

No, incoming webhook deliveries do not count against your API rate limits. However, VesuvioPay's outgoing webhook delivery has its own rate limiting to prevent overwhelming your servers.

What's the best way to handle rate limits in production?

Implement a combination of:

  1. Exponential backoff for retries
  2. Request queuing to control request rate
  3. Caching to reduce API calls
  4. Webhooks instead of polling
  5. Monitoring to detect issues early

Summary

Key Takeaways
  • Monitor rate limit headers to track your usage
  • Implement exponential backoff for retry logic
  • Use webhooks instead of polling when possible
  • Cache responses to reduce API calls
  • Batch requests when the API supports it
  • Upgrade your tier if you consistently hit limits

Next Steps