retry fetching block data with decreasing batch size (#204)
### TL;DR
Implemented a retry mechanism for failed block fetches with automatic chunk splitting and added result sorting.
### What changed?
- Added a new `processChunkWithRetry` function that:
  - Processes blocks in chunks and detects failed requests
  - Splits failed blocks into smaller chunks and retries them recursively
  - Continues splitting until each request succeeds or a chunk shrinks to a single block
  - Sends successful results to the results channel as they're processed
- Modified the `Run` method to use the new retry mechanism
- Added sorting of results by block number before returning
- Removed the conditional sleep, applying a consistent sleep after each chunk is processed
### How to test?
1. Run the application with a mix of valid and invalid block numbers
2. Verify that all valid blocks are fetched successfully, even when part of a batch with invalid blocks
3. Confirm that the results are properly sorted by block number
4. Check logs for "Splitting failed blocks" messages to verify the retry mechanism is working
### Why make this change?
This change improves the reliability of block fetching by automatically retrying failed requests with smaller batch sizes. The previous implementation would fail an entire batch if any block in it failed, leading to missing data. The new approach maximizes data retrieval by isolating problematic blocks and ensuring the results are returned in a consistent order regardless of how they were processed.