
Reindex (and friends) consume unbounded memory #128618

Open
@DaveCTurner

Description

A user ran one of their nodes out of memory by executing a handful of reindex requests with `size: 1000` in each search. Their documents were in the region of 200 kiB each, so a single batch of 1000 of them takes up roughly 200 MiB of heap (possibly doubled or more by overheads), and it doesn't take many concurrent batches to wipe out a node with a 2 GiB heap.
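The arithmetic above can be sketched as a quick back-of-the-envelope calculation; the `overhead_factor` here is an assumption standing in for the "doubled or more because of overheads" hedge in the report, not a measured value:

```python
KIB = 1024
MIB = 1024 * KIB

def batch_heap_bytes(batch_size: int, doc_bytes: int, overhead_factor: float = 2.0) -> int:
    """Rough heap held by one in-flight reindex search batch.

    overhead_factor is a guess: decoded JSON, intermediate buffers, and
    object headers can double (or more) the raw document footprint.
    """
    return int(batch_size * doc_bytes * overhead_factor)

# Numbers from the report: 1000 docs of ~200 kiB each.
raw = batch_heap_bytes(1000, 200 * KIB, overhead_factor=1.0)
print(f"raw batch: {raw / MIB:.0f} MiB")            # ~195 MiB per batch
doubled = batch_heap_bytes(1000, 200 * KIB)
print(f"with 2x overhead: {doubled / MIB:.0f} MiB") # ~391 MiB per batch
```

Two or three such batches in flight at once already exceed a 2 GiB heap once the rest of the node's working set is accounted for.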

At the very least we should be failing these requests rather than killing the whole node, but ideally we'd apply some kind of throttling or backpressure to keep this resource usage under control.
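Until server-side limiting or backpressure exists, a client-side workaround is to cap the per-batch `size` in the reindex request so each scroll batch fits comfortably in heap. A minimal sketch of such a request body (index names are hypothetical; `source.size` is the reindex batch-size setting, which defaults to 1000):

```python
# Hypothetical mitigation: shrink the reindex batch so each search
# response holds ~100 docs (~20 MiB at 200 kiB/doc) instead of ~200 MiB.
reindex_body = {
    "source": {
        "index": "source-index",  # hypothetical name
        "size": 100,              # default is 1000
    },
    "dest": {"index": "dest-index"},  # hypothetical name
}
print(reindex_body["source"]["size"])
```

This only reduces the per-request footprint; it does nothing to bound the total across many concurrent reindex requests, which is the throttling/backpressure gap this issue is about.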
