All Open APIs are subject to the rate limiting policy. API requests from a Channel are throttled if too many requests are received in a short time window, resulting in an HTTP 429 Too Many Requests status. Different APIs may have different rate limiting constraints.
The HTTP response headers of an Open API request describe the (expected) result of your request.
```shell
curl -i https://api.channel.io/open/v5/...
> HTTP/2 200
> x-ratelimit-limit: 1000
> x-ratelimit-remaining: 999
> x-ratelimit-reset: 1696118400
```

```shell
curl -i https://api.channel.io/open/v5/...
> HTTP/2 200
> x-ratelimit-limit: 1000
> x-ratelimit-remaining: 0
> x-ratelimit-reset: 1696118400
> x-ratelimit-will-be-throttled: true
```
| Header name | Description | Expected Type | Example Value |
| --- | --- | --- | --- |
| `x-ratelimit-limit` | The maximum number of requests allowed in a single time window. (Refer to the rule details below for more.) | Number | 1000 |
| `x-ratelimit-remaining` | The number of requests remaining in the current time window. | Number | 999 |
| `x-ratelimit-reset` | The time when tokens will be refreshed and more requests will be allowed, in epoch seconds. | Number (epoch timestamp, in seconds) | 1696118400 |
| `x-ratelimit-will-be-throttled` | Whether this request would be throttled if the rate limiting policy were fully active. This header is available during the grace period. Use it to interpret the expected result and adapt to the rate limiting policy. | Boolean | true |
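These headers can be parsed client-side before deciding whether to send the next request. The sketch below is a minimal, hypothetical helper (not part of any official SDK) that reads the four headers listed in the table above:

```python
def parse_rate_limit(headers):
    """Extract Channel.io-style rate limit info from response headers.

    Header names come from the table above; this helper itself is an
    illustrative sketch, not an official client.
    """
    return {
        "limit": int(headers.get("x-ratelimit-limit", 0)),
        "remaining": int(headers.get("x-ratelimit-remaining", 0)),
        "reset": int(headers.get("x-ratelimit-reset", 0)),
        "will_be_throttled": headers.get("x-ratelimit-will-be-throttled") == "true",
    }

# Example headers as they might appear during the grace period:
info = parse_rate_limit({
    "x-ratelimit-limit": "1000",
    "x-ratelimit-remaining": "0",
    "x-ratelimit-reset": "1696118400",
    "x-ratelimit-will-be-throttled": "true",
})
print(info["will_be_throttled"])  # True
```

A client can check `remaining` and `will_be_throttled` after each response and back off before the server starts denying requests.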
Current Status of the Open API Rate Limiting Policy
Our team is in the process of introducing rate limiting to Open APIs. We intend to apply the policy in the following phases:
- Monitoring: The rate limiting rules are applied, and the expected results are available via response HTTP headers. Requests that are expected to be throttled are not denied yet. The rate limiting rules and policies are not final and are subject to change.
- Grace Period: The rate limiting rules are finalized and announced to customers, allowing them to update their Open API usage to meet the constraints. During the grace period, the expected results are available via response HTTP headers, and requests that are expected to be throttled are still not denied. Customers will be notified before the grace period ends.
- Active: The rate limiting rules are fully active, and throttled requests will be denied with an HTTP 429 Too Many Requests status.
Currently, we are in the Grace Period. The grace period will end on 2024.05.14, and the rate limiting rules will be fully active starting from 2024.05.15. Be sure to adjust your Open API requests before the deadline.
The rate limiting rules follow the token bucket algorithm. In short, a request requires and consumes a token to be handled successfully. Tokens are managed in a token bucket with a maximum capacity, which is replenished at a fixed rate. The number of tokens in a bucket represents the number of requests you can make at any given moment.
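To build intuition for the algorithm described above, here is a minimal, illustrative token bucket in Python. The numbers mirror the description (a fixed capacity plus a fixed refill rate); the actual server-side implementation may differ in detail:

```python
import time

class TokenBucket:
    """Minimal token bucket: at most `capacity` tokens, refilled at `rate` per second."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_consume(self, n=1):
        now = time.monotonic()
        # Replenish at a fixed rate, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# A burst of 200 calls against a bucket of capacity 100:
bucket = TokenBucket(capacity=100, rate=10)
accepted = sum(bucket.try_consume() for _ in range(200))
print(accepted)  # ~100: the part of the burst beyond the capacity is rejected
```

Note that a full bucket allows a burst up to its capacity, after which requests are admitted only as fast as the refill rate.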
Token buckets are managed separately per channel, so distributing your requests across multiple API keys will not reduce the number of throttled requests. Also, please note that even if you make API requests to several different endpoints, they will consume tokens from the same token bucket.
The following table shows the capacity of the token bucket and the refill rate used for different endpoints.
| Resource | Token Bucket Capacity | Token Bucket Refill Rate |
| --- | --- | --- |
| List of UserChats (combined) | 100 | 10 tokens / second (10 requests / second) |
| All the other resources (combined) | 1000 | 10 tokens / second (10 requests / second) |
For example, assume a workflow that sends 200 `GET /open/v5/user-chats` requests at once. The first 100 requests will be accepted, since the full capacity of the token bucket for the resource is 100. The remaining 100 requests will receive HTTP 429 Too Many Requests (or the `x-ratelimit-will-be-throttled: true` header during the grace period) as a response.
To make the remaining requests succeed, distribute them across multiple time windows. The token bucket replenishes at a rate of 10 tokens per second, so you may make 10 more requests at the start of the next second. Consider using the `x-ratelimit-reset` response header to determine how long the client should wait before making the next request.
Even if you make the remaining 100 requests at once at the start of the next second, only 10 of them will be accepted. Therefore, the best practice is to estimate your Open API usage beforehand and only make the requests that will succeed under the rate limit rules, avoiding frequent retries due to throttling.
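As a sketch of the waiting strategy above, the following hypothetical helper (the function name and shape are our own) computes how long a client should sleep from the `x-ratelimit-reset` header, which carries an epoch timestamp in seconds:

```python
import time

def seconds_until_reset(headers, now=None):
    """Seconds to wait before retrying, derived from x-ratelimit-reset."""
    now = time.time() if now is None else now
    reset = int(headers.get("x-ratelimit-reset", 0))
    return max(0.0, reset - now)

# With a reset timestamp 3 seconds in the future, the client sleeps ~3s:
wait = seconds_until_reset({"x-ratelimit-reset": "1696118403"}, now=1696118400.0)
print(wait)  # 3.0
```

In a real client, `time.sleep(seconds_until_reset(response_headers))` before the retry keeps the client from hammering an already-depleted bucket.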
In this example, when the bucket for `GET /open/v5/user-chats` is depleted, a request to `GET /open/v4/user-chats`, for example, will also be throttled, since they share the same token bucket. However, requests to other endpoints use separate token buckets and will not be throttled.
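One way to follow the best practice above is to pace requests at or below the refill rate (10 requests per second for these buckets) instead of bursting and retrying. This is a hypothetical client-side sketch; `fn` stands for any callable that performs one API request:

```python
import time

def paced_calls(n, rate_per_sec, fn):
    """Invoke fn(i) n times, spaced evenly to stay within rate_per_sec."""
    interval = 1.0 / rate_per_sec
    results = []
    for i in range(n):
        results.append(fn(i))
        if i < n - 1:
            time.sleep(interval)  # pause so we never outpace the token refill
    return results

# Pacing 100 listing requests at 10/sec takes ~10 seconds but avoids 429s, e.g.:
# paced_calls(100, 10, lambda i: request_page(i))  # request_page is hypothetical
```

Pacing trades a longer total runtime for predictable throughput with no wasted, throttled requests.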
Please create an inquiry to Channel.io with any questions you have about the rate limiting policy.