simple_function_benchmark-0.1.4.tar.gz

@@ -0,0 +1,135 @@
+ Metadata-Version: 2.4
+ Name: simple_function_benchmark
+ Version: 0.1.4
+ Summary: Simple benchmarking library for comparing algorithm runtime
+ Author-email: ControlAltPete <peter@petertheobald.com>
+ License-Expression: MIT
+ Classifier: Programming Language :: Python :: 3
+ Requires-Python: >=3.7
+ Description-Content-Type: text/markdown
+
+ # Benchmark
+
+ A simple, easy-to-use Python benchmarking library for comparing algorithm performance.
+ [GitHub page](https://github.com/PeterTheobald/HackerDojoPythonGroup/tree/main/PyPI)
+
+ ## Features
+
+ - 🚀 **Simple API** - Compare multiple algorithms with just a few lines of code
+ - 📊 **Detailed Results** - Get setup time, total time, average time, and performance comparisons
+ - 🔄 **Progress Tracking** - Real-time progress updates during long-running benchmarks
+ - 🛡️ **Error Handling** - Gracefully handles algorithm failures without stopping the entire benchmark
+ - 📈 **Performance Ratios** - Automatically shows how much slower each algorithm is compared to the best
+
+ ## Installation
+
+ ```bash
+ pip install simple_function_benchmark
+ ```
+
+ ## Quick Start
+
+ ```python
+ import benchmark
+
+ # A simple O(n^2) sort to compare against Python's built-in sorted()
+ def bubble_sort(data):
+     data = list(data)  # copy so the input is not mutated between runs
+     for i in range(len(data)):
+         for j in range(len(data) - 1 - i):
+             if data[j] > data[j + 1]:
+                 data[j], data[j + 1] = data[j + 1], data[j]
+     return data
+
+ # Define your algorithms to compare
+ algorithms = [
+     {
+         "title": "Bubble Sort",
+         "algorithm_fn": bubble_sort,
+         "setup_fn": lambda: [3, 1, 4, 1, 5, 9, 2, 6]
+     },
+     {
+         "title": "Python's sorted()",
+         "algorithm_fn": sorted,
+         "setup_fn": lambda: [3, 1, 4, 1, 5, 9, 2, 6]
+     }
+ ]
+
+ # Run the benchmark
+ results = benchmark.run(algorithms, REPEAT=1000)
+ ```
+
+ ## Usage
+
+ ### Basic Example
+
+ ```python
+ import benchmark
+
+ def algorithm1(data):
+     return sorted(data)
+
+ def algorithm2(data):
+     return list(reversed(sorted(data, reverse=True)))
+
+ algorithms = [
+     {
+         "title": "Standard sort",
+         "algorithm_fn": algorithm1,
+         "setup_fn": lambda: [5, 2, 8, 1, 9]
+     },
+     {
+         "title": "Reverse then reverse",
+         "algorithm_fn": algorithm2,
+         "setup_fn": lambda: [5, 2, 8, 1, 9]
+     }
+ ]
+
+ results = benchmark.run(algorithms, REPEAT=10000, verbose=True)
+ ```
+
+ ### Output Example
+
+ ```
+ [1/2] Running: Standard sort... Done (0.05s)
+ [2/2] Running: Reverse then reverse... Done (0.08s)
+
+ Benchmark Results:
+ Standard sort                            setup: 0.0000s total: 0.0500s avg: 5.00us <-- BEST
+ Reverse then reverse                     setup: 0.0000s total: 0.0800s avg: 8.00us (1.60x slower)
+ ```
+
+ ## API Reference
+
+ ### `benchmark.run(algorithms, REPEAT=1000, verbose=True)`
+
+ Run a benchmark comparing multiple algorithms.
+
+ **Parameters:**
+ - `algorithms` (List[Dict]): List of algorithm dictionaries with keys:
+   - `algorithm_fn` (Callable): The function to benchmark
+   - `title` (str): Display name for the algorithm
+   - `setup_fn` (Callable, optional): Function called once before timing to prepare test data
+ - `REPEAT` (int, default=1000): Number of times to run each algorithm
+ - `verbose` (bool, default=True): Whether to print progress and results
+
+ **Returns:**
+ - List[Dict]: Results for each algorithm, containing:
+   - `title`: Algorithm name
+   - `setup_time`: Time spent in setup
+   - `total_time`: Total execution time across all iterations
+   - `avg_time`: Average time per iteration
+   - `last_result`: Return value from the final iteration
+   - `total_perf`: Combined setup and execution time
+   - `error`: Error message if the algorithm failed, `None` otherwise
+
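+ As a quick sketch of consuming these results (using the result keys above; `algorithms` is assumed to be defined as in the Quick Start), sort by `total_perf` and skip failed entries:
+
+ ```python
+ results = benchmark.run(algorithms, REPEAT=1000, verbose=False)
+
+ # Keep only the runs that completed, then rank them fastest-first
+ ok = [r for r in results if r["error"] is None]
+ for r in sorted(ok, key=lambda r: r["total_perf"]):
+     print(f"{r['title']}: {r['avg_time'] * 1e6:.2f}us per call")
+ ```
+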
+ ## Examples
+
+ See the included demo in `benchmark.py`, which compares sorting algorithms:
+ - Bubble Sort
+ - Timsort (Python's built-in `sorted()`)
+ - Heap Sort
+ - Quicksort
+
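+ The error handling noted under Features means a failing entry is reported but does not stop the run. A minimal sketch (the `broken` function here is hypothetical, not part of the demo):
+
+ ```python
+ import benchmark
+
+ def broken(data):
+     raise ValueError("boom")
+
+ results = benchmark.run(
+     [
+         {"title": "Broken", "algorithm_fn": broken, "setup_fn": lambda: [1, 2, 3]},
+         {"title": "sorted()", "algorithm_fn": sorted, "setup_fn": lambda: [1, 2, 3]},
+     ],
+     REPEAT=100,
+ )
+ assert results[0]["error"] is not None  # failure is recorded, run still completes
+ ```
+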
+ ## Requirements
+
+ - Python >= 3.7
+
+ ## License
+
+ MIT License
+
+ ## Author
+
+ ControlAltPete (peter@petertheobald.com)
@@ -0,0 +1,15 @@
+ [build-system]
+ requires = ["setuptools>=61.0"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "simple_function_benchmark"
+ version = "0.1.4"
+ description = "Simple benchmarking library for comparing algorithm runtime"
+ authors = [{name = "ControlAltPete", email = "peter@petertheobald.com"}]
+ readme = "README.md"
+ license = "MIT"
+ requires-python = ">=3.7"
+ classifiers = [
+     "Programming Language :: Python :: 3",
+ ]
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
@@ -0,0 +1,8 @@
+ """
+ benchmark - Simple benchmarking library for comparing algorithm runtime
+ """
+
+ from .benchmark import run
+
+ __version__ = "0.1.4"
+ __all__ = ["run"]
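
Because `__init__.py` re-exports `run` at the package top level, both import styles work; a minimal sketch, assuming the package is installed:

```python
import benchmark
from benchmark import run  # same function, re-exported by __init__.py

assert run is benchmark.run
```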
@@ -0,0 +1,115 @@
+ # benchmark.py
+ # Simple benchmarking library for running several different algorithms and comparing
+ # their runtime
+ # ControlAltPete 2026
+
+ # ToDo: Add @decorator support for easy function benchmarking
+ #       instead of requiring dicts with 'algorithm_fn' and 'setup_fn' keys
+ # ToDo: Add correctness checking support and reporting (optional expected result input)
+ # ToDo: Add memory usage tracking
+ # ToDo: Add more detailed statistical analysis (min, max, stddev)
+
+ import time
+ from typing import Any, Dict, List
+
+
+ def run(
+     algorithms: List[Dict[str, Any]], REPEAT: int = 1000, verbose: bool = True
+ ) -> List[Dict[str, Any]]:
+     """
+     Run a benchmark on a list of algorithms.
+     Each algorithm is a dict with keys:
+         'algorithm_fn': function to benchmark (takes setup result as input)
+         'title': string title for reporting
+         'setup_fn': function to call before timing (no args, returns input for algorithm_fn)
+     """
+     results: List[Dict[str, Any]] = []
+     best_idx = None
+     best_total = float("inf")
+     num_algos = len(algorithms)
+
+     for idx, algo in enumerate(algorithms):
+         title = algo.get("title", "Untitled")
+         algorithm_fn = algo["algorithm_fn"]
+         setup_fn = algo.get("setup_fn", lambda: None)
+
+         if verbose:
+             print(f"[{idx+1}/{num_algos}] Running: {title}...", end="", flush=True)
+
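+         # Note: setup_fn runs once per algorithm; the same object is passed to
+         # every timed iteration, so in-place algorithms will see already-processed data.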
+         try:
+             # Setup timing
+             t_setup0 = time.perf_counter()
+             setup_data = setup_fn()
+             t_setup1 = time.perf_counter()
+             setup_time = t_setup1 - t_setup0
+             # Warmup (optional, not timed)
+             algorithm_fn(setup_data)
+             # Timing
+             result = None
+             t0 = time.perf_counter()
+
+             # Show progress every 10% or at key intervals
+             progress_interval = max(1, REPEAT // 10)
+             for i in range(REPEAT):
+                 result = algorithm_fn(setup_data)
+                 if verbose and (i + 1) % progress_interval == 0:
+                     percent = (i + 1) / REPEAT * 100
+                     print(
+                         f"\r[{idx+1}/{num_algos}] Running: {title}... {percent:.0f}%",
+                         end="",
+                         flush=True,
+                     )
+
+             t1 = time.perf_counter()
+             elapsed = t1 - t0
+             avg = elapsed / REPEAT
+             total_perf = setup_time + elapsed
+
+             if verbose:
+                 print(f"\r[{idx+1}/{num_algos}] Running: {title}... Done ({elapsed:.2f}s)")
+
+             results.append(
+                 {
+                     "title": title,
+                     "setup_time": setup_time,
+                     "total_time": elapsed,
+                     "avg_time": avg,
+                     "last_result": result,
+                     "total_perf": total_perf,
+                     "error": None,
+                 }
+             )
+             if total_perf < best_total:
+                 best_total = total_perf
+                 best_idx = idx
+         except Exception as e:
+             if verbose:
+                 print(f"\r[{idx+1}/{num_algos}] Running: {title}... ERROR: {e}")
+             results.append(
+                 {
+                     "title": title,
+                     "setup_time": 0,
+                     "total_time": float('inf'),
+                     "avg_time": float('inf'),
+                     "last_result": None,
+                     "total_perf": float('inf'),
+                     "error": str(e),
+                 }
+             )
+     if verbose:
+         print("\nBenchmark Results: ")
+         for i, res in enumerate(results):
+             if res['error']:
+                 print(f"{res['title']:40} ERROR: {res['error']}")
+             else:
+                 if i == best_idx:
+                     highlight = " <-- BEST"
+                 elif best_total > 0:
+                     ratio = res['total_perf'] / best_total
+                     highlight = f" ({ratio:.2f}x slower)"
+                 else:
+                     highlight = ""
+                 print(
+                     f"{res['title']:40} setup: {res['setup_time']:0.4f}s total: {res['total_time']:0.4f}s avg: {res['avg_time']*1e6:0.2f}us{highlight}"
+                 )
+     return results
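
The first ToDo at the top of `benchmark.py` asks for decorator support. A minimal sketch of what that could look like, assuming a hypothetical `benchmarked` decorator and module-level registry (not part of this release):

```python
import benchmark

_registry = []  # hypothetical: collects entries in the dict format run() expects

def benchmarked(title, setup_fn=None):
    """Hypothetical decorator that registers a function for benchmarking."""
    def wrap(fn):
        _registry.append({
            "title": title,
            "algorithm_fn": fn,
            "setup_fn": setup_fn or (lambda: None),
        })
        return fn
    return wrap

@benchmarked("Built-in sorted", setup_fn=lambda: [3, 1, 2])
def use_sorted(data):
    return sorted(data)

results = benchmark.run(_registry, REPEAT=100)
```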
@@ -0,0 +1,8 @@
+ README.md
+ pyproject.toml
+ src/benchmark/__init__.py
+ src/benchmark/benchmark.py
+ src/simple_function_benchmark.egg-info/PKG-INFO
+ src/simple_function_benchmark.egg-info/SOURCES.txt
+ src/simple_function_benchmark.egg-info/dependency_links.txt
+ src/simple_function_benchmark.egg-info/top_level.txt