
Support for large connection data sets. #146

Open
@Marsjohn-11

Description


When dealing with very large datasets (e.g., 20,000+ items), the current implementation loads the entire merged list into memory (or into storage), which can cause performance issues, out-of-memory errors, or SQLite record-size exceptions.

While Relay's cursor-based pagination makes this scenario unlikely, offset-based pagination (assuming we have some offset-based connection merge policy), local population, or alternative initial-sync-style back-fills might still produce these use cases.

Potential Solutions:
• Implement windowing in the cache so that only a subset of items is kept in memory
• Add support for pagination boundaries to limit the maximum number of items stored
• Implement a sliding window approach that discards items far from the current view (see the sketch after this list)
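A minimal sketch of what the windowing/sliding-window idea could look like at merge time. Everything here is illustrative: `Edge`, `mergeWindowed`, and the parameter names are hypothetical, not part of any existing API.

```kotlin
// Hypothetical sketch: a merge step that caps how many edges are kept after
// combining an incoming page with the cached list. `maxItems` acts as the
// pagination boundary; the window keeps the items closest to the page just
// fetched and discards the rest.
data class Edge(val cursor: String, val nodeKey: String)

fun mergeWindowed(
    existing: List<Edge>,
    incoming: List<Edge>,
    maxItems: Int = 1_000,     // assumed hard cap on stored items
    keepTail: Boolean = true,  // true = keep the most recently fetched end of the list
): List<Edge> {
    // De-duplicate by cursor, preferring the freshly fetched edge.
    val merged = LinkedHashMap<String, Edge>()
    existing.forEach { merged[it.cursor] = it }
    incoming.forEach { merged[it.cursor] = it }

    val all = merged.values.toList()
    if (all.size <= maxItems) return all

    // Sliding window: drop items farthest from the current view.
    return if (keepTail) all.takeLast(maxItems) else all.take(maxItems)
}
```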

Key to this (from what I see) is that paginated records are merged into the `edges`/`nodes` fields of the initial response's first record. It seems like we may benefit from some windowed record layer chained above the current caches to help manage this. Curious what others' thoughts are on this.
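As a rough sketch of the "windowed record chained above the current caches" idea: a wrapper that delegates to an underlying cache but trims oversized list fields on write. `RecordCache` and its methods are stand-ins for whatever the real cache layer exposes, not the library's actual interfaces.

```kotlin
// Hypothetical sketch of a windowed layer chained above an existing cache.
// The wrapper only intercepts writes, trimming any list field (e.g. a merged
// `edges`/`nodes` list) that exceeds the configured window before it reaches
// the underlying store.
interface RecordCache {
    fun read(key: String): Map<String, Any?>?
    fun write(key: String, fields: Map<String, Any?>)
}

class WindowedRecordCache(
    private val delegate: RecordCache,
    private val maxListSize: Int = 1_000,  // assumed window size
) : RecordCache {

    override fun read(key: String): Map<String, Any?>? = delegate.read(key)

    override fun write(key: String, fields: Map<String, Any?>) {
        val trimmed = fields.mapValues { (_, value) ->
            if (value is List<*> && value.size > maxListSize) value.takeLast(maxListSize) else value
        }
        delegate.write(key, trimmed)
    }
}
```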

I think you've already been looking at some of this concept in #121 (comment). Sorry if this is a dupe.
