[BUG] Retry chroma-load upserts when rate limited #4485


Merged

jasonvigil merged 2 commits into main from jason/fix-cv-job-bugs on May 8, 2025

Conversation

jasonvigil (Contributor)

Description of changes

When an upsert operation is rate-limited, the "block" of 100 records in the operation is never inserted. Therefore, the cardinality of the test data set can never grow beyond the point of the failed (rate-limited) upsert.

Also, when starting a new workload, we need to reset the cardinality and heap for the data set, so that it starts again from a cardinality of 0.
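A minimal sketch of what that reset amounts to; the field names here are hypothetical stand-ins, not the PR's actual data-set structure:

```rust
// Hypothetical names for illustration only.
fn reset(&mut self) {
    self.cardinality = 0; // start again from a cardinality of 0
    self.heap.clear();    // discard state tied to the previous workload
}
```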

Test plan

Tested locally in tilt. Verified that the cardinality of the test data set continues to grow, even after hitting rate limit errors on previous upsert attempts. Also verified that the cardinality is reset to 0 when initializing a new workload.

@jasonvigil requested a review from rescrv on May 7, 2025 22:09

github-actions bot commented May 7, 2025

Reviewer Checklist

Please leverage this checklist to ensure your code review is thorough before approving

Testing, Bugs, Errors, Logs, Documentation

  • Can you think of any use case in which the code does not behave as intended? Have they been tested?
  • Can you think of any inputs or external events that could break the code? Is user input validated and safe? Have they been tested?
  • If appropriate, are there adequate property based tests?
  • If appropriate, are there adequate unit tests?
  • Should any logging, debugging, tracing information be added or removed?
  • Are error messages user-friendly?
  • Have all documentation changes needed been made?
  • Have all non-obvious changes been commented?

System Compatibility

  • Are there any potential impacts on other parts of the system or backward compatibility?
  • Does this change intersect with any items on our roadmap, and if so, is there a plan for fitting them together?

Quality

  • Is this code of an unexpectedly high quality? (Readability, Modularity, Intuitiveness)

propel-code-bot (bot) commented May 7, 2025

Fix Rate-Limited Upserts in Chroma-Load with Retry Mechanism

This PR addresses a bug where upsert operations that are rate-limited would prevent the test data set from growing beyond the point of failure. The solution implements a retry mechanism with exponential backoff for rate-limited upsert operations, and ensures cardinality is reset when starting a new workload.

Key Changes:
• Add retry mechanism with exponential backoff for rate-limited upsert operations
• Reset cardinality and cardinality heap when initializing a new workload
• Add error handling for rate-limited requests (HTTP 429)

Affected Areas:
• rust/load/src/data_sets.rs
• rust/load/Cargo.toml
• Cargo.lock

This summary was automatically generated by @propel-code-bot
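For orientation, here is a minimal, self-contained sketch of the retry-with-exponential-backoff pattern the summary describes. The helper function, its name, and the retry constants are illustrative only; the actual change inlines this logic around `collection.upsert`, as the excerpts below show:

```rust
use std::time::Duration;

/// Retry an async operation, backing off exponentially when it reports a
/// rate-limit error. Sketch only: the constants and the helper shape are
/// illustrative, not the PR's actual code.
async fn retry_rate_limited<T, E, F, Fut>(mut op: F) -> Result<T, E>
where
    E: std::fmt::Debug,
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    let mut delay = Duration::from_millis(100); // illustrative initial delay
    let max_delay = Duration::from_secs(10);    // illustrative cap
    loop {
        match op().await {
            Ok(v) => return Ok(v),
            // The diff detects rate limiting via "429" in the Debug output.
            Err(e) if format!("{e:?}").contains("429") => {
                tokio::time::sleep(delay).await;
                // Exponential backoff with a maximum delay. Note this sketch
                // retries indefinitely; production code might cap attempts.
                delay = std::cmp::min(delay * 2, max_delay);
            }
            Err(e) => return Err(e),
        }
    }
}
```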

```rust
};
let result = collection.upsert(entries, None).await;
if let Err(err) = result {
    if format!("{err:?}").contains("429") {
```
[BestPractice]

Using format!("{err:?}").contains("429") to check for rate limiting is fragile and depends on the specific error format. It would be more robust to check for specific error types or error codes. If the ChromaClient has a specific error type for rate limiting, consider matching on that instead.
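A small, self-contained illustration of the more robust pattern being suggested; `LoadError` and its variants are hypothetical stand-ins, since the actual error type exposed by the client is not shown in this PR:

```rust
// Hypothetical error enum for illustration; the real client's error
// type may look entirely different.
#[derive(Debug)]
enum LoadError {
    RateLimited,
    Other(String),
}

fn is_retryable(err: &LoadError) -> bool {
    // Matching on a variant is robust to changes in Debug formatting,
    // unlike format!("{err:?}").contains("429").
    matches!(err, LoadError::RateLimited)
}
```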

Comment on lines +1265 to +1270

```rust
let entries = CollectionEntries {
    ids: keys.clone(),
    metadatas: res.metadatas.clone(),
    documents: Some(documents.clone()),
    embeddings: Some(embeddings.clone()),
};
```
[PerformanceOptimization]

The clone() operations inside the loop can lead to unnecessary memory allocations on each retry. Consider moving the creation of entries outside the loop and only clone when necessary for retries.
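A sketch of the suggested restructuring, reusing the `CollectionEntries` shape from the excerpt above; the loop shape and `max_retries` are illustrative, and this assumes `CollectionEntries` implements `Clone`:

```rust
// Build the entries once, before the retry loop, so the per-field clones
// of keys, metadatas, documents, and embeddings happen only one time.
let entries = CollectionEntries {
    ids: keys.clone(),
    metadatas: res.metadatas.clone(),
    documents: Some(documents.clone()),
    embeddings: Some(embeddings.clone()),
};
for _attempt in 0..max_retries {
    // Clone only at the call site that consumes its argument; if the client
    // API took a reference instead, even this clone could be dropped.
    match collection.upsert(entries.clone(), None).await {
        Ok(_) => break,
        Err(err) if format!("{err:?}").contains("429") => continue, // backoff elided
        Err(err) => return Err(err.into()),
    }
}
```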


```rust
tokio::time::sleep(delay).await;

// Exponential backoff with max delay
```
[BestPractice]

Consider using rand to add jitter to your backoff delay to prevent thundering herd problems when multiple clients retry at the same time:

Suggested change

```diff
-tokio::time::sleep(delay).await;
-
-// Exponential backoff with max delay
+// Exponential backoff with jitter and max delay
+let jitter = rand::random::<f32>() * delay.as_millis() as f32 * 0.1;
+let jittered_delay = delay
+    .checked_add(std::time::Duration::from_millis(jitter as u64))
+    .unwrap_or(delay);
+tokio::time::sleep(jittered_delay).await;
 delay = std::cmp::min(delay * 2, max_delay);
```


@jasonvigil force-pushed the jason/fix-cv-job-bugs branch from 38abc51 to 1e6a027 on May 7, 2025 23:30
@jasonvigil merged commit 9816d32 into main on May 8, 2025
70 checks passed
@jasonvigil deleted the jason/fix-cv-job-bugs branch on May 8, 2025 16:36