# Performance Analysis

This directory contains performance analysis documentation, benchmark scripts, and profiling results for the Energy Dependency Inspector application.

## Prerequisites

The performance analysis tools require additional dependencies:

```bash
# From the project root directory:
pip install .[performance-analysis]

# Or from this performance-analysis directory:
pip install ..[performance-analysis]
```

## Quick Start

```bash
# Run all detectors (default CLI interface)
./detector-benchmarks.sh

# Run specific detector type
./detector-benchmarks.sh host
./detector-benchmarks.sh npm
./detector-benchmarks.sh pip

# Run specific scenarios
./detector-benchmarks.sh --scenario skip-system host
./detector-benchmarks.sh --scenario small npm
./detector-benchmarks.sh --scenario mixed pip

# Run with programmatic interface
./detector-benchmarks.sh --interface programmatic
./detector-benchmarks.sh --interface programmatic host

# Combine options for targeted analysis
./detector-benchmarks.sh --scenario large --interface programmatic npm

# Run with profiling enabled
./detector-benchmarks.sh --profiling

# Run sequential vs parallel execution benchmark
./parallel-execution-benchmarks.sh

# Test different interfaces for parallel execution comparison
./parallel-execution-benchmarks.sh --interface programmatic
```
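
The `--interface programmatic` runs exercise the Energy Dependency Inspector through direct Python function calls rather than the CLI, which avoids per-run process and interpreter startup overhead. The sketch below only illustrates the difference in shape; the module and function names (`energy_dependency_inspector`, `resolve_dependencies`) are hypothetical placeholders, not the application's confirmed API:

```python
"""Sketch of CLI vs. programmatic invocation; names are placeholders."""
import subprocess
import time

# CLI interface: every run spawns a new process (interpreter startup included).
start = time.perf_counter()
subprocess.run(
    ["python", "-m", "energy_dependency_inspector"],  # hypothetical CLI module
    check=True,
    capture_output=True,
)
print(f"CLI run:          {time.perf_counter() - start:.2f}s")

# Programmatic interface: call the resolver in-process, no subprocess overhead.
from energy_dependency_inspector import resolve_dependencies  # hypothetical API

start = time.perf_counter()
result = resolve_dependencies()  # hypothetical entry point
print(f"Programmatic run: {time.perf_counter() - start:.2f}s")
```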

## Analysis Documents

- `performance-optimization-analysis.md` - Comprehensive performance analysis and optimization recommendations
- `docker-performance-comparison.md` - Docker environment performance comparison
- `dpkg-batch-optimization-benchmark.md` - DPKG batch optimization benchmarks

## Benchmark Scripts

### `detector-benchmarks.sh`

Main detector benchmark orchestrator with the following features:

- Prerequisites checking (py-spy, Docker, virtual environment)
- Selective detector execution (host, npm, pip, or all)
- Specific scenario selection (small, large, mixed, etc.)
- Interface selection (CLI or programmatic)
- Session-based result tracking with enhanced display
- Filtered historical results (shows only relevant scenario history)
- Profile cleanup options
- Colored output and progress indicators
- System information collection and CSV result appending

**Options:**

- `--interface TYPE`: Interface type - 'cli' or 'programmatic' (default: cli)
- `--scenario TYPE`: Specific scenario type - varies by detector (default: all)
- `--profiling`: Enable detailed profiling (slower, for analysis)
- `--clean`: Clean existing profiles before running
- `--skip-system-check`: Skip system dependency checks

**Specific scenario types by detector:**

- **host**: `skip-system`, `full-system`, `all` (default: all)
- **npm**: `small`, `large`, `mixed`, `all` (default: all)
- **pip**: `small`, `large`, `mixed`, `all` (default: all)

**Examples:**

```bash
# Run all detectors with default CLI interface
./detector-benchmarks.sh

# Run host detector with programmatic interface
./detector-benchmarks.sh --interface programmatic host

# Run specific scenario types
./detector-benchmarks.sh --scenario skip-system host
./detector-benchmarks.sh --scenario small npm
./detector-benchmarks.sh --scenario mixed pip

# Run all detectors with profiling enabled
./detector-benchmarks.sh --clean --profiling

# Combine options for targeted analysis
./detector-benchmarks.sh --scenario large --interface programmatic npm
```

### Detector-Specific Scripts

The detector-specific benchmark scripts live in the `detectors/` subdirectory and should **not** be executed directly; they are invoked automatically by `detector-benchmarks.sh` with the appropriate parameters.

#### `detectors/host-benchmarks.sh`

Profiles dependency resolution on the host system with interface selection support:

- Skip OS packages scenario (fast, ~40 packages)
- Full system scan (slow, ~2,700 packages)
- CLI and programmatic interface support
- System information collection (CPU, memory, versions)
- CSV result appending for historical tracking

#### `detectors/pip-benchmarks.sh`

Profiles Python dependency resolution at different package scales, with interface support:

- Small Python package sets (3 packages)
- Large Python package sets (25+ packages)
- Mixed environment (Python + Debian system packages via DPKG)
- CLI and programmatic interface support
- System information collection and CSV result appending

#### `detectors/npm-benchmarks.sh`

Specialized NPM profiling with different scenarios and interface support:

- Small package count (3 packages)
- Large package count (20 packages)
- Mixed environment (NPM + Debian system packages via DPKG)
- CLI and programmatic interface support
- System information collection and CSV result appending

**Note:** These detector scripts use a common library (`detectors/common.sh`) that provides shared functionality: color definitions, print utilities, argument parsing, environment initialization, CSV management, benchmarking functions, and container management. This eliminates code duplication and ensures consistent behavior across all detectors.

### `parallel-execution-benchmarks.sh`

Compares sequential vs. parallel execution performance, with support for both the CLI and programmatic interfaces:

- Sequential: Energy Dependency Inspector runs executed one after another
- Parallel: Energy Dependency Inspector runs executed concurrently using `ThreadPoolExecutor` (see the sketch below)
- Tests against a Docker container with Python/pip packages
- Measures total execution time and calculates the speedup/improvement
- Supports both the CLI interface (subprocess calls) and the programmatic interface (direct Python function calls)
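
A minimal sketch of that sequential-vs-parallel comparison, using the same `ThreadPoolExecutor` pattern; the `run_inspector` body here is a placeholder subprocess call standing in for one inspector run:

```python
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

RUNS = 10      # corresponds to --iterations
WORKERS = 4    # corresponds to --workers

def run_inspector() -> None:
    # Placeholder for one Energy Dependency Inspector run; the real
    # benchmark invokes either the CLI or the programmatic interface.
    subprocess.run(["echo", "inspector run"], check=True, capture_output=True)

# Sequential: runs executed one after another.
start = time.perf_counter()
for _ in range(RUNS):
    run_inspector()
sequential = time.perf_counter() - start

# Parallel: the same runs executed concurrently.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    futures = [pool.submit(run_inspector) for _ in range(RUNS)]
    for future in futures:
        future.result()  # propagate any failure
parallel = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, parallel: {parallel:.2f}s, "
      f"speedup: {sequential / parallel:.2f}x")
```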

**Options:**

- `--workers N`: Number of parallel workers (default: 4)
- `--interface TYPE`: Interface type - 'cli' or 'programmatic' (default: cli)
- `--iterations N`: Number of benchmark runs (default: 10)

**Examples:**

```bash
# Default CLI interface with 4 workers
./parallel-execution-benchmarks.sh

# Test programmatic interface with 2 workers
./parallel-execution-benchmarks.sh --workers 2 --interface programmatic

# Run quick test with fewer iterations
./parallel-execution-benchmarks.sh --iterations 5

# Compare interfaces with custom iterations
./parallel-execution-benchmarks.sh --interface cli --iterations 15
./parallel-execution-benchmarks.sh --interface programmatic --iterations 15
```

## Results and Data

### Profiling Results

All profiling results are saved to the `profiles/` directory:

- **Speedscope files** (`.json`): Interactive flame graphs that can be viewed at <https://www.speedscope.app/>
- **Legacy SVG files** (`.svg`): Static flame graphs that can be opened in any web browser

Upload the `.json` files to <https://www.speedscope.app/> for the best interactive profiling experience, with features like call-tree navigation, flamegraph zoom, and timeline views.
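
For a quick look at profiles without opening the web UI, the files can also be read directly. A small sketch, assuming the `.json` files follow the published speedscope file-format schema (which py-spy's speedscope output uses):

```python
import json
from pathlib import Path

# Print a one-line summary for each speedscope file in profiles/.
for path in sorted(Path("profiles").glob("*.json")):
    data = json.loads(path.read_text())
    frames = data.get("shared", {}).get("frames", [])
    profiles = data.get("profiles", [])
    print(f"{path.name}: {len(profiles)} profile(s), {len(frames)} distinct frames")
```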

### Timing Results

All benchmark timing results are saved to the `timing-results/` directory along with system information:

- **Individual CSV files**: `host_timing_results.csv`, `npm_timing_results.csv`, `pip_timing_results.csv`
- **System information**: Each result includes a timestamp, CPU cores, memory, Docker version, and Python version
- **Historical tracking**: Results are appended rather than overwritten, enabling trend analysis

**CSV Format:**

```csv
Scenario,Time(s),Packages,Timestamp,CPU_Cores,Memory_GB,Docker_Version,Python_Version
Host Full System (cli),45.23,2847,2024-01-15 14:30:22,8,16.0,24.0.6,3.11.2
```
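
The collection and appending itself is handled in bash by the benchmark scripts (via `detectors/common.sh`); the sketch below is only a rough Python equivalent of that step, showing how the fields above can be gathered and appended (the exact commands the scripts use may differ):

```python
import csv
import os
import platform
import subprocess
from datetime import datetime
from pathlib import Path

CSV_HEADER = ["Scenario", "Time(s)", "Packages", "Timestamp",
              "CPU_Cores", "Memory_GB", "Docker_Version", "Python_Version"]

def append_result(csv_path: str, scenario: str, seconds: float, packages: int) -> None:
    """Append one benchmark row, writing the header if the file is new."""
    docker = subprocess.run(
        ["docker", "version", "--format", "{{.Server.Version}}"],
        capture_output=True, text=True,
    ).stdout.strip() or "unknown"
    # Total physical memory in GB (Linux sysconf names).
    memory_gb = round(os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30, 1)
    path = Path(csv_path)
    path.parent.mkdir(exist_ok=True)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(CSV_HEADER)
        writer.writerow([scenario, f"{seconds:.2f}", packages,
                         datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
                         os.cpu_count(), memory_gb, docker, platform.python_version()])

append_result("timing-results/host_timing_results.csv",
              "Host Full System (cli)", 45.23, 2847)
```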

## Background

These scripts and analysis documents were created during a performance optimization effort to identify bottlenecks and optimization opportunities in the Energy Dependency Inspector application.