Description
I was running patch over a ~100 MB repodata file and noticed that it was slow because the entire object is copied and destroyed multiple times. I am quite curious whether the performance of destroy() could be improved. On my computer, parsing the file and creating the JSON object takes roughly 1 second, but destroying the JSON object takes another 0.5 seconds.
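For illustration only (this is not nlohmann/json internals, just an assumption about where the time goes): any node-based tree built from many small heap allocations pays a per-node cost on destruction, because every node must be visited and freed individually. A minimal self-contained sketch with standard containers:

```cpp
#include <chrono>
#include <map>
#include <string>
#include <vector>

// Returns the milliseconds spent destroying a container of many small
// heap-allocated nodes, loosely mimicking the object tree that a large
// JSON document parses into. Illustrative sketch, not nlohmann internals.
long long time_node_destruction(std::size_t n_objects)
{
    using clock = std::chrono::high_resolution_clock;

    auto* data = new std::vector<std::map<std::string, std::string>>(n_objects);
    for (std::size_t i = 0; i < n_objects; ++i)
        (*data)[i]["key_" + std::to_string(i)] = "value";

    // Destruction must walk the whole structure and free each node.
    auto t0 = clock::now();
    delete data;
    auto t1 = clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
}
```

The absolute numbers are machine-dependent; the point is that teardown time scales with the number of allocated nodes, not with the number of bytes.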
Reproduction steps
Download a large JSON file, such as https://conda.anaconda.org/conda-forge/linux-64/repodata.json (e.g. curl --compressed -O https://conda.anaconda.org/conda-forge/linux-64/repodata.json), and run the following code:
#include <chrono>
#include <fstream>
#include <iostream>
#include <memory>
#include <nlohmann/json.hpp>

int main()
{
    std::ifstream rdata("repodata.json");
    std::unique_ptr<nlohmann::json> j = std::make_unique<nlohmann::json>();
    {
        auto t0 = std::chrono::high_resolution_clock::now();
        rdata >> (*j);
        auto t1 = std::chrono::high_resolution_clock::now();
        std::cout << "parsing took " << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count() << " ms." << std::endl;
    }
    {
        auto t0 = std::chrono::high_resolution_clock::now();
        j.reset();
        auto t1 = std::chrono::high_resolution_clock::now();
        std::cout << "destruction took " << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count() << " ms." << std::endl;
    }
}
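As a possible mitigation rather than a fix (this is my assumption, not something the library offers): if the concern is blocking the calling thread, the expensive teardown can be handed off to a background thread. A hedged sketch, where destroy_in_background is a hypothetical helper:

```cpp
#include <memory>
#include <thread>

// Hypothetical workaround sketch: transfer ownership of an object with an
// expensive destructor to a detached thread, so the node-by-node teardown
// runs off the calling thread. Trade-off: the memory is reclaimed
// asynchronously, and the process must outlive the detached thread.
template <typename T>
void destroy_in_background(std::unique_ptr<T> p)
{
    // The lambda takes ownership via init-capture; p is destroyed when
    // the lambda finishes on the background thread.
    std::thread([q = std::move(p)]() mutable { q.reset(); }).detach();
}
```

With the reproduction above, one could call destroy_in_background(std::move(j)) instead of j.reset(); the measured "destruction" time on the main thread then drops to roughly zero, though the total work is unchanged.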
Expected vs. actual results
Expected: destruction should have minimal runtime cost. Actual: destroying the parsed object takes roughly 0.5 seconds, about half the time it took to parse the file in the first place.
Minimal code example
No response
Error messages
No response
Compiler and operating system
clang 12, macOS
Library version
3.10.5
Validation
- The bug also occurs if the latest version from the develop branch is used.
- I can successfully compile and run the unit tests.