perftester 0.7.0__tar.gz → 0.8.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. perftester-0.8.0/.gitignore +72 -0
  2. perftester-0.8.0/PKG-INFO +475 -0
  3. {perftester-0.7.0 → perftester-0.8.0}/README.md +446 -446
  4. perftester-0.8.0/config_perftester.py +10 -0
  5. perftester-0.8.0/docs/README.md +15 -0
  6. perftester-0.8.0/docs/benchmarking_against_another_function.md +163 -0
  7. perftester-0.8.0/docs/change_benchmarking_function.md +58 -0
  8. perftester-0.8.0/docs/most_basic_use_memory.md +148 -0
  9. perftester-0.8.0/docs/most_basic_use_time.md +122 -0
  10. perftester-0.8.0/docs/results_most_basic_memory.md +104 -0
  11. perftester-0.8.0/docs/use_case_raw_time_testing.md +68 -0
  12. perftester-0.8.0/docs/use_of_config.md +191 -0
  13. perftester-0.8.0/docs/use_of_pp.md +43 -0
  14. perftester-0.8.0/docs/use_perftester_as_CLI.md +64 -0
  15. {perftester-0.7.0 → perftester-0.8.0}/perftester/__init__.py +16 -16
  16. {perftester-0.7.0 → perftester-0.8.0}/perftester/__main__.py +178 -178
  17. {perftester-0.7.0 → perftester-0.8.0}/perftester/perftester.py +0 -0
  18. perftester-0.8.0/perftester.egg-info/PKG-INFO +475 -0
  19. perftester-0.8.0/perftester.egg-info/SOURCES.txt +34 -0
  20. {perftester-0.7.0 → perftester-0.8.0}/perftester.egg-info/dependency_links.txt +0 -0
  21. {perftester-0.7.0 → perftester-0.8.0}/perftester.egg-info/requires.txt +6 -3
  22. {perftester-0.7.0 → perftester-0.8.0}/perftester.egg-info/top_level.txt +0 -0
  23. perftester-0.8.0/pyproject.toml +47 -0
  24. perftester-0.8.0/run_black.sh +1 -0
  25. perftester-0.8.0/run_tests.bat +42 -0
  26. perftester-0.8.0/run_tests.sh +43 -0
  27. {perftester-0.7.0 → perftester-0.8.0}/setup.cfg +4 -4
  28. perftester-0.8.0/tests/README.md +16 -0
  29. perftester-0.8.0/tests/doctest_cache.md +86 -0
  30. perftester-0.8.0/tests/doctest_config.md +129 -0
  31. perftester-0.8.0/tests/doctest_function_floats.md +209 -0
  32. perftester-0.8.0/tests/doctest_function_str.md +104 -0
  33. perftester-0.8.0/tests/doctest_test_sum.md +72 -0
  34. perftester-0.8.0/tests/for_testing/perftester_for_testing_2.py +14 -0
  35. perftester-0.8.0/tests/perftester_for_testing.py +54 -0
  36. perftester-0.8.0/tests/results_of_floats.md +369 -0
  37. perftester-0.7.0/LICENSE +0 -21
  38. perftester-0.7.0/PKG-INFO +0 -462
  39. perftester-0.7.0/perftester/tmp_singleton.py +0 -35
  40. perftester-0.7.0/perftester.egg-info/PKG-INFO +0 -462
  41. perftester-0.7.0/perftester.egg-info/SOURCES.txt +0 -13
  42. perftester-0.7.0/perftester.egg-info/entry_points.txt +0 -3
  43. perftester-0.7.0/setup.py +0 -31
@@ -0,0 +1,72 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
venv_*/
venv-*/

# temporary files
*/tmp*.*

# virtual environments
**/venv*

# VS Code
**/.vscode/

# Log files
**/*.log
@@ -0,0 +1,475 @@
Metadata-Version: 2.4
Name: perftester
Version: 0.8.0
Summary: Lightweight performance testing in Python
Author-email: Nyggus <nyggus@gmail.com>
License-Expression: MIT
Project-URL: Homepage, https://github.com/nyggus/perftester/
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Operating System :: OS Independent
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: easycheck
Requires-Dist: rounder
Requires-Dist: memory_profiler
Provides-Extra: dev
Requires-Dist: wheel; extra == "dev"
Requires-Dist: black; extra == "dev"
Requires-Dist: pytest; extra == "dev"
Requires-Dist: mypy; extra == "dev"
Requires-Dist: setuptools; extra == "dev"
Requires-Dist: build; extra == "dev"

# `perftester`: Lightweight performance testing of Python functions

## Installation

Install using `pip`:

```shell
pip install perftester
```

The package has three external dependencies: [`memory_profiler`](https://pypi.org/project/memory-profiler/) ([repo](https://github.com/pythonprofilers/memory_profiler)), [`easycheck`](https://pypi.org/project/easycheck/) ([repo](https://github.com/nyggus/easycheck)), and [`rounder`](https://pypi.org/project/rounder/) ([repo](https://github.com/nyggus/rounder)).

> `perftester` is still under heavy testing. If you find anything that does not work as intended, please let me know via nyggus `<at>` gmail.com.

## Pre-introduction: TL;DR

At the most basic level, using `perftester` is simple. It offers you two functions for benchmarking (one for execution time and one for memory), and two functions for performance testing (likewise). Read below for a very short introduction to them. If you want to learn more, however, do not stop there, but read on.

### Benchmarking

You have the `time_benchmark()` and `memory_usage_benchmark()` functions:

```python
import perftester as pt
def foo(x, n): return [x] * n
pt.time_benchmark(foo, x=129, n=100)
```

This will print the results of the time benchmark. The raw results are similar to those that `timeit.repeat()` returns, but unlike it, `pt.time_benchmark()` reports the mean raw time per function run, not the overall time; in addition, you will see some summaries of the results.

The above call actually ran the `timeit.repeat()` function, with the default configuration of `Number=100_000` and `Repeat=5`. If you want to change either of these, you can use the `Number` and `Repeat` arguments, correspondingly:

```python
pt.time_benchmark(foo, x=129, n=100, Number=1000)
pt.time_benchmark(foo, x=129, n=100, Repeat=2)
pt.time_benchmark(foo, x=129, n=100, Number=1000, Repeat=2)
```

These calls do not change the default settings; you simply use the arguments' values on the fly. Later you will learn how to change the default settings and the settings for a particular function.

> Some of you may wonder why the `Number` and `Repeat` arguments violate what we can call the Pythonic style, by using a capital first letter for function arguments. The reason is simple: I wanted to minimize the risk of conflicts that would happen when benchmarking (or testing) a function with any of the arguments `Number` or `Repeat` (or both). The chance that a Python function will have a `Number` or a `Repeat` argument is rather small. If that happens, however, you can use `functools.partial()` to overcome the problem:

```python
from functools import partial

def bar(Number, Repeat): return [Number] * Repeat
bar_p = partial(bar, Number=129, Repeat=100)
pt.time_benchmark(bar_p, Number=100, Repeat=2)
```

Benchmarking RAM usage is similar:

```python
pt.memory_usage_benchmark(foo, x=129, n=100)
```

It uses the `memory_profiler.memory_usage()` function, which runs the function just once to measure its memory use. Almost always, there is no need to repeat it, unless there is great randomness in the function's memory usage. If you have good reasons to change this behavior (e.g., in the case of such randomness), you can request several calls of the function, using the `Repeat` argument:

```python
pt.memory_usage_benchmark(foo, x=129, n=100, Repeat=100)
```

You can learn more in the detailed description of the package below.

### Testing

The API of the `perftester` testing functions is similar to that of the benchmarking functions, the only difference being the limits you need to provide. You can determine those limits using the above benchmark functions. Here are examples of several performance tests using `perftester`:

```python
>>> import perftester as pt
>>> def foo(x, n): return [x] * n

# A raw test
>>> pt.time_test(foo, raw_limit=9.e-07, x=129, n=100)

# A relative test
>>> pt.time_test(foo, relative_limit=7, x=129, n=100)

# A raw test
>>> pt.memory_usage_test(foo, raw_limit=25, x=129, n=100)

# A relative test
>>> pt.memory_usage_test(foo, relative_limit=1.2, x=129, n=100)

```

You can, of course, use `Repeat` and `Number`:

```python
>>> pt.time_test(foo, relative_limit=7, x=129, n=100, Repeat=3, Number=1000)

```

Raw tests work with raw execution time. Relative tests work with execution time relative to a call of an empty function; that way, the test should be more or less independent of the machine you run it on, so a quick machine should give relative results similar to those of a slow machine.
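In practice, you can pick those limits from a benchmark and add some headroom. Here is a minimal sketch of that workflow; the result keys used below (`min`, `min_relative`, `max`, `max_relative`) are the ones shown in the benchmark output later in this README, and the margins are arbitrary:

```python
import perftester as pt

def foo(x, n): return [x] * n

# Benchmark first ...
time_results = pt.time_benchmark(foo, x=129, n=100)
memory_results = pt.memory_usage_benchmark(foo, x=129, n=100)

# ... and then test with some headroom over the benchmarked values.
pt.time_test(foo, raw_limit=2 * time_results["min"], x=129, n=100)
pt.time_test(foo, relative_limit=2 * time_results["min_relative"], x=129, n=100)
pt.memory_usage_test(foo, raw_limit=1.5 * memory_results["max"], x=129, n=100)
pt.memory_usage_test(foo, relative_limit=1.5 * memory_results["max_relative"], x=129, n=100)
```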

> Relative results, however, can differ between different operating systems.

You can use these testing functions in `pytest` or in dedicated `doctest` files. You can also use `perftester` as a separate performance testing framework. Read on to learn more about that. What's more, `perftester` offers more functionalities, including a `config` object that gives you much more control over testing.

That's all in this short introduction. If you're interested in more advanced use of `perftester`, read on for a far more detailed introduction. In addition, the files in the [docs](docs/) folder explain in detail particular functionalities that `perftester` offers.

## Introduction

`perftester` is a lightweight package for simple performance testing in Python. Here, performance refers to execution time and memory usage, so performance testing means testing whether a function performs quickly enough and does not use too much RAM. In addition, the module offers you simple functions for straightforward benchmarking, in terms of both execution time and memory.

Under the hood, `perftester` is a wrapper around two functions from other modules:

* `perftester.time_benchmark()` and `perftester.time_test()` use `timeit.repeat()`
* `perftester.memory_usage_benchmark()` and `perftester.memory_usage_test()` use `memory_profiler.memory_usage()`

What `perftester` offers is a testing framework with as simple a syntax as possible.

You can use `perftester` in three main ways:

* in an interactive session, for simple benchmarking of functions;
* as part of another testing framework, like `doctest` or `pytest`; and
* as an independent testing framework.

The first way is a different type of use from the other two. I use it to learn the behavior of the functions I am currently working on (in terms of execution time and memory use), so not for actual testing.

When it comes to actual testing, it's difficult to say which of the last two ways is better or more convenient: it may depend on how many performance tests you have, and how much time they take. If the tests do not take more than a couple of seconds, then you can combine them with unit tests. But if they take a lot of time, you should likely make them independent of unit tests, and run them from time to time.

## Using `perftester`

### Use it as a separate testing framework

To use `perftester` that way,

* Collect tests in Python modules whose names start with "perftester_"; for instance, "perftester_module1.py", "perftester_module2.py" and the like.
* Inside these modules, collect testing functions that start with "perftester_"; for instance, `def perftester_func_1()`, `def perftester_func_2()`, and the like (note how similar this approach is to the one that `pytest` uses);
* You can create a config_perftester.py file, in which you can change any configuration you want, using the `perftester.config` object. The file should be located in the folder from which you will run the CLI command `perftester`. If this file is not there, `perftester` will use its default configuration. Note that config_perftester.py is a Python module, so `perftester` configuration is done in actual Python code.
* Now you can run performance tests using `perftester` in your shell. You can do it in three ways:
  * `$ perftester` recursively collects all `perftester` modules from the directory in which the command was run, and from all its subdirectories; then it runs all the collected `perftester` tests;
  * `$ perftester path_to_dir` recursively collects all `perftester` modules from path_to_dir/ and runs all perftesters located in them.
  * `$ perftester path_to_file.py` runs all perftesters from the module given in the path.

Read more about using `perftester` that way [here](docs/use_perftester_as_CLI.md).
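To make this more concrete, below is a minimal sketch of such a layout. All names, limits, and settings are only examples (the tested function `f1` and its module are hypothetical), not something `perftester` prescribes:

```python
# config_perftester.py (optional; placed in the folder you run `perftester` from)
import perftester as pt

# Shorten all time tests run in this session.
pt.config.set_defaults("time", Number=10_000, Repeat=3)
```

```python
# perftester_my_module.py
import perftester as pt
from my_module import f1  # hypothetical function under test


def perftester_f1_time():
    # Raw time limit in seconds per run; an arbitrary value for illustration.
    pt.time_test(f1, raw_limit=1e-4, x="some text", y=10.0)


def perftester_f1_memory():
    # Relative memory limit against perftester's built-in benchmark function.
    pt.memory_usage_test(f1, relative_limit=1.2, x="some text", y=10.0)
```

Running `$ perftester` in that folder should then pick up both the configuration file and the test module.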

> It **does** make a difference how you do that. When you run the `perftester` command with each testing file independently, each file will be tested in a separate session, so with a new instance of the `pt.config` object. When you run the command for a directory, all the functions will be tested in one session. And when you run a bare `perftester` command, all your tests will be run in one session.

> There is no best approach; just remember to choose one that suits your needs.

### Use `perftester` inside `pytest`

This is a very simple approach, perhaps the simplest one: when you use `pytest`, you can simply add `perftester` tests to `pytest` testing functions. That way, both frameworks are combined, or rather the `pytest` framework runs the `perftester` tests. The amount of additional work is minimal.

For instance, you can write the following test function:

```python
import perftester as pt
from my_module import f1  # assume that f1 takes two arguments, a string (x) and a float (y)

def test_performance_of_f1():
    pt.time_test(
        f1,
        raw_limit=10, relative_limit=None,
        x="whatever string", y=10.002)
```

This will use either the settings for this particular function (if you set them in `pt.config`) or the default settings (also from `pt.config`). However, you can also use the `Number` and `Repeat` arguments to overwrite these settings (passed to `timeit.repeat()` as `number` and `repeat`, respectively) for this particular function call:

```python
import perftester as pt
from my_module import f1  # assume that f1 takes two arguments, a string (x) and a float (y)

def test_performance_of_f1():
    pt.time_test(
        f1,
        raw_limit=10, relative_limit=None,
        x="whatever string", y=10.002,
        Number=1_000_000, Repeat=20)
```

If you now run `pytest` and the test passes, nothing will happen, just like with a regular `pytest` test. If the test fails, however, a `perftester.TimeTestError` exception will be raised, with some additional information.

> `perftester`'s default behavior is to significantly shorten the traceback, but only during testing (that is, when you run `pt.time_test()` and `pt.memory_usage_test()`). You can extend this behavior to other situations with just one command: `pt.config.cut_traceback()`; to reverse it, use `pt.config.full_traceback()`. Do remember, though, that this does *not* mean the full traceback will be used during perftesting.

This is the easiest way to use `perftester`. Its only drawback is that if the performance tests take a lot of time, `pytest` will also take a lot of time, something usually to be avoided. You can then do some `pytest` tricks to not run the `perftester` tests, and run them only when you want, or you can simply use the above-described command-line `perftester` framework for performance testing.
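One such trick, sketched below, uses a standard `pytest` marker; nothing here is `perftester`-specific, and the marker name `perf` and the tested function are just examples:

```python
# test_performance.py
import pytest
import perftester as pt
from my_module import f1  # hypothetical function under test


@pytest.mark.perf  # register the "perf" marker in your pytest configuration
def test_performance_of_f1():
    pt.time_test(f1, raw_limit=10, x="whatever string", y=10.002)
```

With the marker registered, `pytest -m "not perf"` skips the performance tests, while `pytest -m perf` runs only them.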

### Use `perftester` inside `doctest`

In the same way, you can use `perftester` in `doctest`. You will find plenty of examples in the documentation here, and in the [tests/ folder](tests/).

> A great fan of `doctest`ing, I do **not** recommend using `perftester` in docstrings. For me, `doctest`s in docstrings should clarify things and explain how functions work, and adding a performance test to a function's docstring would decrease readability.

The best way, thus, is to write performance tests as separate `doctest` files, dedicated to performance testing. You can collect such files in a shell script that runs performance tests.
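For illustration, such a file could look more or less like this (a sketch only; the function and the limits are arbitrary). You could run it, for instance, with `python -m doctest` or from the shell script mentioned above:

```python
>>> import perftester as pt
>>> def sum_of_squares(n): return sum(i**2 for i in range(n))

>>> pt.time_test(sum_of_squares, raw_limit=1e-3, n=1000, Number=1000)
>>> pt.memory_usage_test(sum_of_squares, relative_limit=1.2, n=1000)

```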

## Basic use of `perftester`

### Simple benchmarking

To create a performance test for a function, you likely need to know how it behaves. You can run two simple benchmarking functions, `pt.memory_usage_benchmark()` and `pt.time_benchmark()`, which run memory and time benchmarks, respectively. First, we will decrease `number` (passed to `timeit.repeat`), in order to shorten the benchmarks (which here serve as `doctest`s):

```python
>>> import perftester as pt
>>> def f(n): return sum(map(lambda i: i**0.5, range(n)))
>>> pt.config.set(f, "time", Number=1000)
>>> b_100_time = pt.time_benchmark(f, n=100)
>>> b_100_memory = pt.memory_usage_benchmark(f, n=100)
>>> b_1000_time = pt.time_benchmark(f, n=1000)
>>> b_1000_memory = pt.memory_usage_benchmark(f, n=1000)

```

Remember also that you can overwrite (for a single benchmark) the settings from `pt.config.settings`, using `Number` (only for time testing) and `Repeat` (for both): `pt.time_benchmark(f, n=100, Number=1_000_000, Repeat=20)` and `pt.memory_usage_benchmark(f, n=1000, Repeat=10)`.

And this is it. You can use the `pt.pp()` function to pretty-print the results. On my machine, I got the following results (here, for the `b_100_*` results):

```python
# pt.pp(b_100_memory)
{'max': 16.66,
 'max_relative': 1.004,
 'max_result_per_run': [16.66],
 'max_result_per_run_relative': [1.004],
 'mean': 16.66,
 'mean_result_per_run': [16.66],
 'raw_results': [[16.66, 16.66, 16.66]],
 'relative_results': [[1.004, 1.004, 1.004]]}

# pt.pp(b_100_time)
{'max': 1.389e-05,
 'mean': 1.303e-05,
 'min': 1.168e-05,
 'min_relative': 129.5,
 'raw_times': [1.168e-05, 1.263e-05, 1.349e-05, 1.346e-05, 1.389e-05],
 'raw_times_relative': [129.5, 140.0, 149.5, 149.2, 154.0]}

```

For memory testing, the main result is `max`, while for time testing, it is `min`. For relative testing, we would look at `max_relative` and `min_relative`, respectively.

Surely, we should expect the function to be quicker with `n=100` than with `n=1000`:

```python
>>> b_100_time["min"] < b_1000_time["min"]
True

```

but memory use will be more or less the same:

```python
>>> import math
>>> math.isclose(b_100_memory["max"], b_1000_memory["max"], rel_tol=.01)
True

```

### Time testing

For time tests, we have the `pt.time_test()` function. First, a raw time test:

```python
>>> pt.time_test(f, raw_limit=2e-05, n=100)

```

> `raw_limit`, `relative_limit`, `Number` and `Repeat` are keyword-only arguments.

Like before, we can use the `Number` and `Repeat` arguments:

```python
>>> pt.time_test(func=f, raw_limit=3e-05, n=100, Number=10)

```

Now, let's define a relative time test:

```python
>>> pt.time_test(f, relative_limit=230, n=100)

```

We can also combine both:

```python
>>> pt.time_test(f, raw_limit=2e-05, relative_limit=230, n=100)

```

You can read more about relative testing [below](#raw-and-relative-performance-testing).

### Memory testing

Memory tests use the `pt.memory_usage_test()` function, which works in the same way as `pt.time_test()`:

```python
>>> pt.memory_usage_test(f, raw_limit=27, n=100) # test on raw memory
>>> pt.memory_usage_test(f, relative_limit=1.2, n=100) # relative memory test
>>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.2, n=100) # both

```

In a memory usage test, the function is called only once. You can change that (but do so only if you have solid reasons) using, for example, `pt.config.set(f, "memory", Repeat=2)`, which sets this setting for the function in the configuration (so it will be used for all subsequent calls for function `f()`). You can also do it just once (that is, without saving the setting in `pt.config.settings`), using the `Repeat` argument:

```python
>>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.2, n=100, Repeat=100)

```

(There is little sense in repeating this particular function, as you will get almost the same results in each repetition.)

Of course, memory tests may not be very useful for functions that do not allocate much memory, but as you will see in other documentation files of `perftester`, some functions do use a lot of memory, and for them such tests make quite a lot of sense.

## Configuration: `pt.config`

The whole configuration is stored in the `pt.config` object, which you can easily change. Here's a short example of how you can use it:

```python
>>> def f(n): return list(range(n))
>>> pt.config.set(f, "time", Number=10_000, Repeat=1)

```

but you can change much more using it. **You can read in detail about using `pt.config` [here](docs/use_of_config.md)**.

When you use `perftester` as a command-line tool, you can modify `pt.config` in the config_perftester.py module, for instance:

```python
# config_perftester.py
import perftester as pt

# shorten the tests
pt.config.set_defaults("time", Number=10_000, Repeat=3)

# log the results to a file (they will be printed in the console anyway)
pt.config.log_to_file = True
pt.config.log_file = "./perftester.log"

# increase the number of digits used for printing floating-point numbers
pt.config.digits_for_printing = 7

# use the regular traceback
pt.config.full_traceback()

```

and so on. You can also change settings in each testing file itself, preferably in `perftester_` functions.

When you use `perftester` in an interactive session, you simply update `pt.config` in the session. And when you use `perftester` inside `pytest`, you can do it in conftest.py and in each testing function.
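For instance, a conftest.py could adjust the defaults once for the whole `pytest` session; a minimal sketch (the values are arbitrary):

```python
# conftest.py
import perftester as pt


def pytest_configure(config):
    # Shorten the time tests for the whole session.
    pt.config.set_defaults("time", Number=10_000, Repeat=3)
    # Additionally, log perftester results to a file.
    pt.config.log_to_file = True
    pt.config.log_file = "./perftester.log"
```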

## Output

If a test fails, you will see something like this:

```shell
# for time test
TimeTestError in perftester_for_testing.perftester_f
Time test not passed for function f:
raw_limit = 0.011
minimum run time = 0.1007

# for memory test
MemoryTestError in perftester_for_testing.perftester_f2_time_and_memory
Memory test not passed for function f2:
memory_limit = 20
maximum memory usage = 20.04
```

Let's analyze what we see in this output:

* Whether it's an error from a time test (`TimeTestError`) or a memory test (`MemoryTestError`).
* `perftester_for_testing.perftester_f` provides the testing module (`perftester_for_testing`) and the `perftester_` function (`perftester_f`).
* `Memory test not passed for function f2:`: Here you see for which tested (not `perftester_`) function the test failed (here, `f2()`).
* `raw_limit` and `memory_limit`: these are the raw limits you provided; for relative tests, these could also be `relative_limit` and `relative_memory_limit`.
* `minimum run time` and `maximum memory usage` are the actual results from testing, and they were too high (higher than the limits set inside the testing function).

You can locate where a particular test failed using the module, the `perftester_` function, and the tested function. If a `perftester_` function combines several tests, then you can find the failed test using the limits.

> Like in `pytest`, a recommended approach is to use one performance test per `perftester_` function. This can save you some time and trouble, and it will also ensure that all tests are run.

### Summary output

At the end, you will see a simple summary of the results, something like this:

```shell
Out of 8 tests, 5 has passed and 3 has failed.

Passed tests:
perftester_for_testing.perftester_f2
perftester_for_testing.perftester_f2_2
perftester_for_testing.perftester_f2_3
perftester_for_testing.perftester_f3
perftester_for_testing_2.perftester_f

Failed tests:
perftester_for_testing.perftester_f
perftester_for_testing.perftester_f2_time_and_memory
perftester_for_testing.perftester_f_2
```

## Relative tests against another function

In the basic use, when you choose a relative benchmark, you compare the performance of your function with that of a built-in (empty) function, `pt.config.benchmark_function()`. In most cases, this is what you need. Sometimes, however, you may wish to benchmark against another function. For instance, you may want to build your own function that does the same thing as a Python built-in function, and you want to test (and show) that your function performs better. There are two ways of achieving this:

* you can use a simple trick; [see here](docs/benchmarking_against_another_function.md) (one possible approach is sketched below);
* you can overwrite the built-in benchmark functions; [see here](docs/change_benchmarking_function.md).
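As a rough illustration of the idea, you can also simply benchmark both functions and compare the results yourself (a sketch only; `my_sum` and its wrapper are hypothetical, and the details of the documented trick are in the linked file):

```python
import perftester as pt


def my_sum(iterable):
    # Your own implementation, for illustration only.
    total = 0
    for item in iterable:
        total += item
    return total


def builtin_sum(iterable):
    # A thin wrapper, so that both calls take the same keyword argument.
    return sum(iterable)


mine = pt.time_benchmark(my_sum, iterable=range(1_000), Number=10_000)
builtin = pt.time_benchmark(builtin_sum, iterable=range(1_000), Number=10_000)

# Compare the minimum per-run times of the two benchmarks.
print(mine["min"] / builtin["min"])
```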

## Raw and relative performance testing

Surely, any performance test is strongly environment-dependent, and you need to remember that when writing and running performance tests. `perftester`, however, offers a solution to this: you can define tests based on

* raw values: raw execution time and raw memory usage, and
* relative values: relative execution time and relative memory usage.

Above, _relative_ means benchmarking against a simple function built into `perftester`, which is actually an empty function (so it represents the overhead of running a function). Thus, you can, for instance, test whether your function is two times slower than this function. The benchmarking function itself does not matter, as it is just a benchmark. What matters is that your function should usually behave, _relative to this benchmarking function_, the same way on different machines. So, if it works two times slower than the benchmarking function on your machine, then it should work in a similar way on another machine, even if that machine is much faster than yours. Of course, this assumes linearity (so, two times slower here means two times slower everywhere), which does not always have to be true. Anyway, such tests will almost always be more representative, and more precise, than those based on raw times.

This does not mean, however, that raw tests are useless. In fact, in a production environment, you may wish to use raw tests. Imagine a client expects that an app never takes longer than an hour to perform a particular task (note that this strongly depends on what other processes are run in the production environment). You can create an automated test for that using `perftester`, in a very simple way: just several lines of code.
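Such a test could look more or less like this (a sketch; `run_the_task` is a hypothetical entry point of that job, the limit is in seconds, and the test is written as a `perftester_` function for the command-line framework described above):

```python
# perftester_production.py
import perftester as pt
from my_app import run_the_task  # hypothetical entry point of the job


def perftester_task_within_an_hour():
    # One run is enough for a long task; 3600 seconds is the one-hour limit.
    pt.time_test(run_the_task, raw_limit=3600, Number=1, Repeat=1)
```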

You can of course combine both types of tests, and you can do it in a very simple way. Then, the test is run once, but the results are checked against both the raw and the relative limits.

> Warning! Relative results can differ between operating systems.

## Other tools

Of course, Python comes with various powerful tools for profiling, benchmarking and testing. Here are some of them:

* [`cProfile` and `profile`](https://docs.python.org/3/library/profile.html), the built-in powerful tools for deterministic profiling
* [the built-in `timeit` module](https://docs.python.org/3/library/timeit.html), for benchmarking
* [`memory_profiler`](https://pypi.org/project/memory-profiler/), a powerful memory profiler (`memory_profiler` is utilized by `perftester`)

In fact, `perftester` is just a simple wrapper around `timeit` and `memory_profiler`, since `perftester` itself does not come with its own benchmarking solutions. It simply uses these modules and offers an easy-to-use API to benchmark and test memory and time performance.

## Manipulating the traceback

The default behavior of `perftester` is to **not** include the full traceback when a test does not pass. This is because when running performance tests, you're not interested in finding bugs, which is what a traceback is for. Instead, you want to see which test did not pass and how.

> This behavior does not affect any functions other than the two `perftester` testing functions: `pt.time_test()` and `pt.memory_usage_test()`. If you want to use this behavior for other functions, too, you can use `pt.config.cut_traceback()`; to reverse it, use `pt.config.full_traceback()`.

## Tracing full memory usage → Moved to `tracemem`

Starting with the `0.5.*` versions, `perftester` contained a beta version of a memory tracer that could be used to trace the full memory usage of a Python session.

Since `perftester` requires some memory to load, it over-measured session memory. To avoid this, the feature was moved to a separate Python package, called `tracemem`. You can install it from [PyPI](https://pypi.org/project/tracemem/), and you will find its Git repository [here](https://github.com/nyggus/tracemem).

## Caveats

* `perftester` does not work with multiple threads or processes.
* `perftester` is still in a beta version, so it is still under testing.
* Watch out when you're running the same test on different operating systems. Even relative tests can differ from OS to OS.

## Operating systems

The package is developed on Linux (actually, under WSL) and checked on Windows 10, so it works in both these environments.

## Support

Any contribution is welcome. You can submit an issue in the [repository](https://github.com/nyggus/perftester). You can also create your own pull requests.