A user ran one of their nodes out of memory by executing a handful of reindex requests, each with `size: 1000` in its search. Their documents were in the region of 200 kiB each, so a single batch of 1000 of them takes up around 200 MiB of heap (maybe doubled or more because of overheads), and it doesn't take many such batches in flight to wipe out a node with a 2 GiB heap.
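For illustration, a request of roughly this shape reproduces the problem: the `size` under `source` is the scroll batch size, so each batch pulls 1000 full documents onto the heap at once (index names here are hypothetical):

```json
POST _reindex
{
  "source": {
    "index": "source-index",
    "size": 1000
  },
  "dest": {
    "index": "dest-index"
  }
}
```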
At the very least we should fail these requests rather than kill the whole node, but ideally we'd apply some kind of throttling or backpressure to keep this resource usage under control.
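Until something like that lands, callers can reduce the footprint themselves, e.g. by shrinking the batch size and using the existing `requests_per_second` throttle; a sketch (again with hypothetical index names), not a fix, since it only lowers the per-batch cost rather than enforcing a limit:

```json
POST _reindex?requests_per_second=500
{
  "source": {
    "index": "source-index",
    "size": 100
  },
  "dest": {
    "index": "dest-index"
  }
}
```

With 200 kiB documents, a batch size of 100 keeps each scroll response around 20 MiB instead of 200 MiB.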