Performance Optimization: Resolver, Benchmarking & Batch Operations #6419

Open · wants to merge 7 commits into main
30 changes: 30 additions & 0 deletions .github/workflows/ci.yaml
@@ -139,6 +139,36 @@ jobs:
        run: |
          pipenv run pytest -ra -n auto -v --fulltrace tests

  benchmark:
    name: Package Manager Benchmark
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install system dependencies
        run: |
          sudo apt-get update -qq
          sudo apt-get install -y libxmlsec1-dev librdkafka-dev
      - name: Install benchmark utilities
        run: pip install csv2md
      - name: Run benchmark suite
        working-directory: benchmarks
        run: python benchmark.py
      - name: Display benchmark results
        working-directory: benchmarks
        run: |
          if [ -f stats.csv ]; then
            csv2md stats.csv >> $GITHUB_STEP_SUMMARY
          fi
      - uses: actions/upload-artifact@v4
        with:
          name: pipenv-benchmark-stats
          path: benchmarks/stats.csv
          retention-days: 30

  build:
    name: Build Package
    needs: lint
9 changes: 9 additions & 0 deletions Makefile
@@ -180,3 +180,12 @@ reimport-pip-patch:
pypi-server: SERVER ?= gunicorn
pypi-server:
	pipenv run pypi-server run --server $(SERVER) -v --host=0.0.0.0 --port=8080 --hash-algo=sha256 --disable-fallback ./tests/pypi/ ./tests/fixtures

.PHONY: benchmark
benchmark:
	cd benchmarks && python benchmark.py

.PHONY: benchmark-clean
benchmark-clean:
	cd benchmarks && rm -f requirements.txt Pipfile.lock stats.csv
	cd benchmarks && rm -rf timings/
11 changes: 11 additions & 0 deletions benchmarks/.gitignore
@@ -0,0 +1,11 @@
# Generated files
requirements.txt
Pipfile.lock
stats.csv

# Timing data
timings/

# Virtual environments
.venv/
__pycache__/
12 changes: 12 additions & 0 deletions benchmarks/Pipfile
@@ -0,0 +1,12 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
# Dependencies will be added during benchmark import step

[dev-packages]

[requires]
python_version = "3.11"
77 changes: 77 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,77 @@
# Pipenv Package Manager Benchmark

This directory contains a benchmark suite for pipenv, based on the [python-package-manager-shootout](https://github.com/lincolnloop/python-package-manager-shootout) project.

## Purpose

These benchmarks help ensure that pipenv's performance does not regress over time by timing common package management operations against a real-world dependency set from [Sentry's requirements](https://github.com/getsentry/sentry/blob/main/requirements-base.txt).

## Operations Benchmarked

- **tooling** - Installing pipenv using the current development version
- **import** - Converting requirements.txt to Pipfile format
- **lock-cold** / **lock-warm** - Generating Pipfile.lock with an empty and with a populated cache
- **install-cold** - Installing packages with empty cache
- **install-warm** - Installing packages with populated cache
- **update-cold** / **update-warm** - Updating all packages to their latest versions with an empty and with a populated cache
- **add-package** - Adding a new package and updating the lock file (a rough timing sketch follows this list)

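The `benchmark.py` script itself is not part of this diff. As a rough illustration of the general pattern (each operation is run as a subprocess and its wall-clock time is written under `timings/`), here is a minimal, hypothetical sketch; the function and file names are not taken from the PR:

```python
import subprocess
import time
from pathlib import Path

# "timings/" matches the directory described in the Files section below.
TIMINGS_DIR = Path("timings")


def run_timed(name: str, cmd: list[str]) -> float:
    """Run one benchmark operation and record its wall-clock duration."""
    TIMINGS_DIR.mkdir(exist_ok=True)
    start = time.perf_counter()
    subprocess.run(cmd, check=True)  # e.g. ["pipenv", "lock"]
    elapsed = time.perf_counter() - start
    (TIMINGS_DIR / f"{name}.txt").write_text(f"{elapsed:.2f}\n")
    return elapsed


if __name__ == "__main__":
    # Hypothetical example: time the lock step with whatever cache is present.
    run_timed("lock-warm", ["pipenv", "lock"])
```
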
## Usage

### Local Testing

```bash
# Run all benchmark operations
make benchmark

# Clean benchmark artifacts
make benchmark-clean

# Run individual operations
cd benchmarks
python benchmark.py # Run full benchmark suite
python benchmark.py setup # Download requirements.txt
python benchmark.py tooling # Benchmark pipenv installation
python benchmark.py import # Benchmark requirements import
python benchmark.py lock-cold # Benchmark lock with cold cache
python benchmark.py lock-warm # Benchmark lock with warm cache
python benchmark.py install-cold # Benchmark install with cold cache
python benchmark.py install-warm # Benchmark install with warm cache
python benchmark.py update-cold # Benchmark update with cold cache
python benchmark.py update-warm # Benchmark update with warm cache
python benchmark.py add-package # Benchmark adding a package
python benchmark.py stats # Generate stats.csv
```

### CI Integration

The benchmarks run automatically in GitHub Actions on the `ubuntu-latest` runner as part of the CI pipeline. Results are:

- Displayed in the job summary with timing statistics
- Uploaded as artifacts for historical analysis
- Used to detect performance regressions (see the comparison sketch below)

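The workflow itself only displays the numbers and uploads `stats.csv`; comparing a run against an earlier baseline is left to whoever reviews the artifacts. A hypothetical comparison sketch, assuming `stats.csv` has `operation` and `elapsed` columns (the real column names may differ):

```python
import csv


def load_times(path: str) -> dict[str, float]:
    # Assumes "operation" and "elapsed" columns; the real stats.csv produced
    # by benchmark.py may use different headers.
    with open(path, newline="") as fh:
        return {row["operation"]: float(row["elapsed"]) for row in csv.DictReader(fh)}


baseline = load_times("baseline/stats.csv")  # e.g. downloaded from an earlier artifact
current = load_times("stats.csv")

for op, seconds in current.items():
    previous = baseline.get(op)
    if previous and seconds > previous * 1.25:  # flag anything 25% slower
        print(f"possible regression in {op}: {previous:.1f}s -> {seconds:.1f}s")
```
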
## Files

- `benchmark.py` - Main benchmark runner script
- `Pipfile` - Base Pipfile template (dependencies added during import)
- `requirements.txt` - Downloaded from Sentry's requirements-base.txt during setup
- `timings/` - Directory created during benchmarks to store timing data
- `stats.csv` - Generated CSV with benchmark results

## Dependencies

The benchmark uses Sentry's `requirements-base.txt` as a representative real-world dependency set. This includes packages like:

- Django and related packages
- Database connectors
- Serialization libraries
- HTTP clients
- Development tools

## Notes

- Benchmarks only run on Linux in CI to ensure consistent timing measurements
- System dependencies (libxmlsec1-dev, librdkafka-dev) are installed for Sentry requirements
- Cache clearing ensures cold/warm scenarios are properly tested (see the sketch below)
- Results include CPU time, memory usage, and I/O statistics
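
As a minimal sketch of the cold/warm distinction (not taken from `benchmark.py`), a cold run can clear pipenv's caches with `pipenv --clear` before installing, while a warm run reuses whatever a previous run cached; the real script also records CPU, memory, and I/O statistics, which this sketch omits:

```python
import subprocess


def clear_caches() -> None:
    # "pipenv --clear" drops pipenv's (and pip's) caches, simulating a cold start.
    subprocess.run(["pipenv", "--clear"], check=True)


def install(cold: bool) -> None:
    if cold:
        clear_caches()
    subprocess.run(["pipenv", "install"], check=True)


# Cold run first, then a warm run that reuses whatever the cold run cached.
install(cold=True)
install(cold=False)
```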