Rate limits

How the e-bon API throttles requests, what the RateLimit headers mean, what a 429 response looks like, and how to back off correctly from your POS integration.

The e-bon API throttles incoming traffic to keep the service responsive for everyone. This page tells you which limits apply to your requests, how to read them from response headers, and how to retry safely when you hit one.

TL;DR — three independent counters (Global, Auth, Commands), all exposed through the IETF draft-7 RateLimit-* headers, all returning the same flat {code, message, status} body on 429 together with an integer-seconds Retry-After header. Read the headers proactively, honour Retry-After, and add jitter.

Understand the three buckets

Every request is counted against one or more independent buckets; the bucket with the least remaining capacity is the one you exhaust first.

| Bucket   | Window | Limit        | Counted by             | Applies to                                                |
| -------- | ------ | ------------ | ---------------------- | --------------------------------------------------------- |
| Global   | 10 min | 150 requests | API key (or client IP) | Every request to https://api.e-bon.ro                     |
| Auth     | 10 min | 30 requests  | Client IP              | POST /api/v1/auth/* (login, register, refresh)            |
| Commands | 10 min | 50 requests  | API key (or client IP) | POST /api/v1/commands and device-scoped command endpoints |

A few practical points:

  • One API key shares one Global bucket and one Commands bucket. Two hundred terminals behind the same key share the same 50 commands per 10 minutes.
  • The Auth bucket is keyed by IP only. Multiple users behind the same NAT or office Wi-Fi share that bucket — see Auth bucket and shared IPs below.
  • The buckets do not chain. A POST /api/v1/commands call counts against both Global and Commands at once.
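
Because a backend that proxies many terminals shares one Commands bucket, it can pay to pre-throttle locally before the API ever says no. A minimal sketch of a client-side sliding-window counter mirroring the Commands bucket (the class and the queueing approach are ours — the server, via the RateLimit-* headers, remains the source of truth):

```javascript
// Client-side sliding-window counter mirroring the Commands bucket
// (50 requests per 10 minutes). Illustrative only — always trust the
// server's RateLimit-* headers over this local estimate.
class LocalBucket {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // True if a request may be sent now without exceeding the local limit.
  tryAcquire(now = Date.now()) {
    // Forget requests that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}

const commandsBucket = new LocalBucket(50, 10 * 60 * 1000);
```

When tryAcquire returns false, queue the command locally instead of sending it — and remember the same call also consumes Global capacity.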

Read the rate-limit headers

Every response — not just 429s — carries the IETF draft-7 RateLimit headers:

| Header              | Meaning                                                                                |
| ------------------- | -------------------------------------------------------------------------------------- |
| RateLimit-Limit     | Maximum requests allowed in the current window for the bucket producing this response. |
| RateLimit-Remaining | Requests still permitted in the current window. When this approaches 0, slow down.     |
| RateLimit-Reset     | Seconds until the current window resets.                                               |

A successful command submission with the Commands bucket nearly exhausted looks like this:

HTTP/1.1 201 Created
RateLimit-Limit: 50
RateLimit-Remaining: 3
RateLimit-Reset: 412
Content-Type: application/json

{ "id": "cmd_abc123", "status": "pending" }

The headers always reflect the bucket that produced the response. Read them, do not guess which bucket you are closest to.
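
For example, a small helper (the function name is ours) that lifts the three headers off a fetch() Response into plain numbers:

```javascript
// Extract the draft-7 RateLimit headers from a fetch() Response.
// Header lookup is case-insensitive, so lowercase names are fine.
function readRateLimit(res) {
  const num = (name) => {
    const value = res.headers.get(name);
    return value === null ? null : parseInt(value, 10);
  };
  return {
    limit: num('ratelimit-limit'),
    remaining: num('ratelimit-remaining'),
    resetSeconds: num('ratelimit-reset'),
  };
}
```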

Handle a 429 response

When a bucket is exhausted, the request never reaches the handler. You get back:

HTTP/1.1 429 Too Many Requests
Retry-After: 47
RateLimit-Limit: 150
RateLimit-Remaining: 0
RateLimit-Reset: 47
Content-Type: application/json

{
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Too many requests, please try again later.",
  "status": 429
}

The body is identical across all three buckets. Match on the top-level code === "RATE_LIMIT_EXCEEDED" to detect rate-limit responses.

Retry-After is always an integer number of seconds, with a minimum of 1. Wait at least that long before the next attempt — never retry a 429 immediately.
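
Putting both rules together, a detection helper might look like this (the helper itself is ours; the code field and Retry-After semantics are as documented above):

```javascript
// Returns the number of seconds to wait if res is a rate-limit response,
// or null for anything else.
async function rateLimitWait(res) {
  if (res.status !== 429) return null;
  const body = await res.json().catch(() => ({}));
  if (body.code !== 'RATE_LIMIT_EXCEEDED') return null;
  // Retry-After is always integer seconds with a minimum of 1.
  return Math.max(1, parseInt(res.headers.get('retry-after') ?? '1', 10));
}
```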

Back off correctly

A well-behaved client never relies on hitting 429 to discover it is going too fast. Bake the headers into your transport layer.

Read the headers on every response

Pull RateLimit-Limit, RateLimit-Remaining, and RateLimit-Reset from every response, not just errors. When Remaining falls below ~10% of Limit, throttle voluntarily — sleep, batch, or queue.
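
One way to wire that in (the 10% threshold is this page's suggestion, not an API rule, and the helper names are ours):

```javascript
// True when fewer than `fraction` of the window's capacity remains.
function nearLimit(limit, remaining, fraction = 0.1) {
  return limit > 0 && remaining / limit < fraction;
}

// After each response, pause until the window resets if capacity is low.
async function throttleIfNeeded(res) {
  const limit = parseInt(res.headers.get('ratelimit-limit') ?? '0', 10);
  const remaining = parseInt(res.headers.get('ratelimit-remaining') ?? '0', 10);
  const resetSeconds = parseInt(res.headers.get('ratelimit-reset') ?? '0', 10);
  if (nearLimit(limit, remaining)) {
    await new Promise((resolve) => setTimeout(resolve, resetSeconds * 1000));
  }
}
```

Sleeping is the bluntest option; batching or queueing outgoing work achieves the same effect without blocking.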

Honour Retry-After on 429

Parse Retry-After as integer seconds and wait at least that long before retrying.

Add jitter to every retry

Multiply your wait time by 1 + (Math.random() - 0.5) * 0.4 (±20% jitter). Without it, multiple POS terminals coming back online at the same minute will stampede the API and re-trip the same bucket.
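
As a reusable helper (ours, not part of any SDK):

```javascript
// Spread a wait of `seconds` across ±20% so simultaneous clients
// do not retry in lockstep.
function jittered(seconds) {
  return seconds * (1 + (Math.random() - 0.5) * 0.4);
}
```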

Use exponential backoff on repeat 429s

If you hit 429 twice in a row for the same operation, double your wait each time (cap around 5 minutes). Repeat 429s usually mean a bug in your client, not a transient issue — log the burst.
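
A backoff schedule under those rules might look like this (the 1-second default floor and the function shape are our assumptions):

```javascript
// Wait time before the next attempt after `consecutive429s` rate-limit
// responses in a row, honouring Retry-After and capping at 5 minutes.
function backoffSeconds(consecutive429s, retryAfter = 1, capSeconds = 300) {
  const exponential = 2 ** consecutive429s;
  return Math.min(Math.max(retryAfter, exponential), capSeconds);
}
```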

Use one API key per integration, not per terminal

The Global and Commands buckets are keyed by API key. If 200 terminals share one key, they share one Commands bucket of 50 per 10 minutes. For most POS partners, one key for the backend service that proxies all terminals is the right model. Do not rotate keys to dodge limits.

Code samples

These short reference clients show the header / Retry-After / jitter shape. They are not production-ready (no logging, no metrics, no circuit breaker) — adapt them to your stack.

async function callWithBackoff(url, options, attempt = 0) {
  const res = await fetch(url, options);
  if (res.status !== 429) return res;
  if (attempt >= 5) throw new Error('rate limit: gave up after 5 retries');
  // Honour Retry-After, but never back off less than the exponential floor.
  const retryAfter = parseInt(res.headers.get('retry-after') ?? '1', 10);
  const base = Math.max(retryAfter, 2 ** attempt);
  // ±20% jitter so a fleet of terminals does not retry in lockstep.
  const jitter = base * (0.8 + Math.random() * 0.4);
  await new Promise((r) => setTimeout(r, jitter * 1000));
  return callWithBackoff(url, options, attempt + 1);
}

Handle the Auth bucket on shared IPs

The Auth bucket is keyed by client IP, not by API key. The intent is to block credential stuffing: a single IP cannot make more than 30 login, register, or refresh attempts per 10 minutes, regardless of how many usernames it cycles through.

The trade-off is that a corporate NAT, a shared office Wi-Fi, or a misconfigured reverse proxy can saturate the Auth bucket for everyone behind it.

If you operate a multi-tenant POS product where tenants share an outbound IP:

  • Stagger refresh-token rotations so multiple tenants do not refresh in the same minute.
  • Cache access tokens until ~60 seconds before expiry. Do not refresh on every request.
  • If you genuinely need higher Auth throughput, run from distinct egress IPs.

Pair retries with Idempotency-Key so that retrying after a 429 never causes a duplicate fiscal receipt to print.
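
The token-caching advice can be sketched like so (the response field names accessToken and expiresIn are assumptions — adapt them to the actual auth payload):

```javascript
// Cache an access token and refresh only within 60 seconds of expiry,
// so tenants behind one IP do not burn the Auth bucket needlessly.
class TokenCache {
  constructor(refreshFn, skewSeconds = 60) {
    this.refreshFn = refreshFn; // async () => ({ accessToken, expiresIn })
    this.skewMs = skewSeconds * 1000;
    this.token = null;
    this.expiresAt = 0;
  }

  async get(now = Date.now()) {
    if (this.token && now < this.expiresAt - this.skewMs) return this.token;
    const { accessToken, expiresIn } = await this.refreshFn();
    this.token = accessToken;
    this.expiresAt = now + expiresIn * 1000;
    return this.token;
  }
}
```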

Where to next

  • API overview — the full request/response shape and authentication model.
  • API errors — every error code you can receive, including RATE_LIMIT_EXCEEDED.
  • Authentication — login, register, and refresh flows protected by the Auth bucket.
  • Commands — the endpoint family protected by the Commands bucket.
  • Idempotency — make safe retries with Idempotency-Key.