
Rate limit transparency #942

Open
@silvertech-daniel

Description


Describe the bug

The client needs to handle rate limiting transparently, separately from the server-error retry logic.

To Reproduce

Fire large enough requests fast enough. I hit this during Next.js's site-wide build-time SSR and have to work around it by wrapping every server-side client fetch() and loadQuery() call with my own rate-limit logic, which has to be more conservative than necessary because it has no access to the response headers.
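The workaround described above amounts to a blind throttle. Here is a minimal sketch of that approach, assuming a fixed minimum gap between requests; the interval value and the `throttled` wrapper name are hypothetical, not part of the Sanity client API. Because the wrapper never sees response headers, the gap must be guessed conservatively.

```typescript
// Blind client-side throttle: every request waits for a fixed
// minimum gap, chosen conservatively because the real quota is
// unknown without access to the response headers.
const MIN_GAP_MS = 250; // guessed value (hypothetical)
let lastScheduled = 0;

async function throttled<T>(task: () => Promise<T>): Promise<T> {
  const now = Date.now();
  // Reserve the next slot before awaiting, so concurrent callers
  // each get a distinct slot.
  const wait = Math.max(0, lastScheduled + MIN_GAP_MS - now);
  lastScheduled = now + wait;
  if (wait > 0) await new Promise((r) => setTimeout(r, wait));
  return task();
}
```

Every `client.fetch()` / `loadQuery()` call site then has to be routed through `throttled(...)`, which is exactly the duplication the client could avoid by handling this internally.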

Expected behavior

While #199 addresses automatic retries, which helps with server errors, API rate limits should be handled using the headers in the response. For example, a 429 response includes a retry-after header giving the actual number of seconds the client should wait before retrying. However, the client should rarely see a 429 at all, because it should also be using the ratelimit-limit, ratelimit-remaining, and ratelimit-reset headers from previous successful responses. Rate limiting should be handled by a globally shared fetch queue.
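As a rough illustration of the behavior requested above, here is a sketch assuming the IETF-draft-style `ratelimit-*` headers and a `retry-after` value in seconds; the function names and the shared-queue shape are illustrative, not the actual @sanity/client API.

```typescript
// Headers relevant to pacing, as the issue describes them:
// retry-after on a 429, ratelimit-remaining / ratelimit-reset on
// successful responses (reset assumed to be seconds until the
// window rolls over).
type RateHeaders = {
  "retry-after"?: string;
  "ratelimit-remaining"?: string;
  "ratelimit-reset"?: string;
};

// Decide how long the NEXT request should wait, based on the most
// recent response.
function computeDelayMs(status: number, headers: RateHeaders): number {
  if (status === 429 && headers["retry-after"]) {
    // The server stated exactly how long to back off.
    return Number(headers["retry-after"]) * 1000;
  }
  const remaining = Number(headers["ratelimit-remaining"] ?? Infinity);
  const resetSec = Number(headers["ratelimit-reset"] ?? 0);
  if (remaining <= 0) {
    // Quota exhausted: wait out the rest of the window.
    return resetSec * 1000;
  }
  // Otherwise spread the remaining budget over the rest of the
  // window, so a 429 is rarely hit in the first place.
  return Number.isFinite(remaining) && resetSec > 0
    ? (resetSec * 1000) / remaining
    : 0;
}

// A minimal globally shared queue: all requests funnel through one
// promise chain, so the computed delay applies process-wide rather
// than per call site.
let chain: Promise<unknown> = Promise.resolve();
let nextDelayMs = 0;

function enqueue<T>(task: () => Promise<T>): Promise<T> {
  const run = chain.then(async () => {
    if (nextDelayMs > 0) await new Promise((r) => setTimeout(r, nextDelayMs));
    return task();
  });
  chain = run.catch(() => undefined); // keep the chain alive on errors
  return run;
}
```

In a real client, the wrapper around `fetch()` would call `computeDelayMs` with each response's status and headers and store the result in `nextDelayMs` before the next queued request runs.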

Screenshots

(screenshot attached; not reproduced here)

Which versions of Sanity are you using?

sanity 3.64.0 (latest: 3.67.1)

What operating system are you using?

N/A

Which versions of Node.js / npm are you running?

N/A

Additional context

N/A

Security issue?

No
