perftester 0.4.0__tar.gz → 0.6.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: perftester
- Version: 0.4.0
+ Version: 0.6.0
  Summary: Lightweight performance testing in Python
  Home-page: https://github.com/nyggus/perftester
  Author: Nyggus
@@ -18,13 +18,12 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  The package has three external dependencies: [`memory_profiler`](https://pypi.org/project/memory-profiler/) ([repo](https://github.com/pythonprofilers/memory_profiler)), [`easycheck`](https://pypi.org/project/easycheck/) ([repo](https://github.com/nyggus/easycheck)), and [`rounder`](https://pypi.org/project/rounder/) ([repo](https://github.com/nyggus/rounder)).
 
- > `perftester` is still under heavy testing. If you find anything that does not work as intended, please let me know via nyggus <at> gmail.com.
+ > `perftester` is still under heavy testing. If you find anything that does not work as intended, please let me know via nyggus `<at>` gmail.com.
 
  ## Pre-introduction: TL;DR
 
  At the most basic level, using `perftester` is simple. It offers you two functions for benchmarking (one for execution time and one for memory) and two functions for performance testing (likewise). Read below for a very short introduction to them. If you want to learn more, however, do not stop there, but read on.
 
-
  ### Benchmarking
 
  You have the `time_benchmark()` and `memory_usage_benchmark()` functions:
@@ -34,9 +33,10 @@ Description: # `perftester`: Lightweight performance testing of Python functions
  def foo(x, n): return [x] * n
  pt.time_benchmark(foo, x=129, n=100)
  ```
+
  and this will print the results of the time benchmark, with raw results similar to those that `timeit.repeat()` returns; unlike it, however, `pt.time_benchmark()` returns the mean raw time per function run, not the overall time. In addition, you will see some summaries of the results.
 
- The above call actually ran the `timeit.repeat()` function, with the default configuration of `number=100_000` and `repeat=5`. If you want to change any of these, you can use the arguments `Number` and `Repeat`, correspondingly:
+ The above call actually ran the `timeit.repeat()` function, with the default configuration of `Number=100_000` and `Repeat=5`. If you want to change any of these, you can use the arguments `Number` and `Repeat`, correspondingly:
 
  ```python
  pt.time_benchmark(foo, x=129, n=100, Number=1000)
@@ -46,7 +46,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  These calls do not change the default settings; you use the arguments' values on the fly. Later you will learn how to change the default settings and the settings for a particular function.
 
- > Some of you may wonder why the `Number` and `Repeat` arguments violate what we can call the Pythonic style, by using a capital first letter for function arguments. The reason is simple: I wanted to minimize the risk of conflicts that could occur when benchmarking (or testing) a function with an argument named `number` or `repeat` (or both). The chance that a Python function will have a `Number` or a `Repeat` argument is rather small. If that happens, however, you can use `functools.partial()` to overcome the problem:
+ > Some of you may wonder why the `Number` and `Repeat` arguments violate what we can call the Pythonic style, by using a capital first letter for function arguments. The reason is simple: I wanted to minimize the risk of conflicts that could occur when benchmarking (or testing) a function with an argument named `Number` or `Repeat` (or both). The chance that a Python function will have a `Number` or a `Repeat` argument is rather small. If that happens, however, you can use `functools.partial()` to overcome the problem:
 
  ```python
  from functools import partial
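  # The rest of this example falls outside this hunk. A hedged sketch of the
  # idea (the function and the values below are illustrative, not from the
  # README): freeze the clashing argument with partial(), then benchmark the
  # resulting wrapper, which no longer exposes an argument named Number.
  def repeat_list(Number): return [0] * Number
  pt.time_benchmark(partial(repeat_list, Number=100))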
@@ -88,7 +88,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
  >>> pt.memory_usage_test(foo, raw_limit=25, x=129, n=100)
 
  # A relative test
- >>> pt.memory_usage_test(foo, relative_limit=1.01, x=129, n=100)
+ >>> pt.memory_usage_test(foo, relative_limit=1.2, x=129, n=100)
 
  ```
 
@@ -109,16 +109,17 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  ## Introduction
 
-
  `perftester` is a lightweight package for simple performance testing in Python. Here, performance refers to execution time and memory usage, so performance testing means testing whether a function performs quickly enough and does not use too much RAM. In addition, the module offers you simple functions for straightforward benchmarking, in terms of both execution time and memory.
 
  Under the hood, `perftester` is a wrapper around two functions from other modules (see the sketch after this list):
+
  * `perftester.time_benchmark()` and `perftester.time_test()` use `timeit.repeat()`
  * `perftester.memory_usage_benchmark()` and `perftester.memory_usage_test()` use `memory_profiler.memory_usage()`
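 
  To make this mapping concrete, here is a minimal sketch (not perftester's actual code) of roughly what the wrapped calls boil down to with the default settings of `Number=100_000` and `Repeat=5`:
 
  ```python
  import timeit
  from memory_profiler import memory_usage
 
  def foo(x, n): return [x] * n
 
  # Roughly what pt.time_benchmark(foo, x=129, n=100) wraps:
  raw = timeit.repeat(lambda: foo(x=129, n=100), number=100_000, repeat=5)
  per_run = [t / 100_000 for t in raw]  # perftester reports time per single run
 
  # Roughly what pt.memory_usage_benchmark(foo, x=129, n=100) wraps:
  mem = memory_usage((foo, (), {"x": 129, "n": 100}))  # MB samples over the call
  ```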
 
  What `perftester` offers is a testing framework with syntax as simple as possible.
 
  You can use `perftester` in three main ways:
+
  * in an interactive session, for simple benchmarking of functions;
  * as part of another testing framework, like `doctest` or `pytest`; and
  * as an independent testing framework.
@@ -127,7 +128,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  When it comes to actual testing, it's difficult to say which of the last two ways is better or more convenient: it may depend on how many performance tests you have and how much time they take. If the tests do not take more than a couple of seconds, you can combine them with unit tests. But if they take a lot of time, you should likely make them independent of unit tests and run them from time to time.
 
-
  ## Using `perftester`
 
  ### Use it as a separate testing framework
@@ -148,10 +148,9 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  > There is no best approach, but remember to choose one that suits your needs.
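 
  The body of this section falls outside this hunk. Judging from the summary output shown later (module `perftester_for_testing`, test functions named `perftester_f_*`), a test file for this mode plausibly looks like the following sketch; the tested function and the limits are illustrative assumptions, not taken from the README:
 
  ```python
  # perftester_for_testing.py: a module the perftester runner can pick up
  import perftester as pt
 
  def f(n): return sum(range(n))
 
  def perftester_f_1():
      pt.time_test(f, raw_limit=1e-4, n=1000)
 
  def perftester_f_2():
      pt.memory_usage_test(f, raw_limit=30, n=1000)
  ```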
 
-
  ### Use `perftester` inside `pytest`
 
- This is a very simple approach, perhaps the simplest one: when you use `pytest`, you can simply call `perftester` testing functions inside `pytest` test functions; that way, the two frameworks are combined, or rather, `pytest` runs the `perftester` tests. The amount of additional work is minimal.
+ This is a very simple approach, perhaps the simplest one: when you use `pytest`, you can simply call `perftester` testing functions inside `pytest` test functions; that way, the two frameworks are combined, or rather, `pytest` runs the `perftester` tests. The amount of additional work is minimal.
 
  For instance, you can write the following test function:
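 
  The example itself falls outside this hunk; a minimal sketch of such a test (the tested function and the limits are illustrative assumptions) could look like this:
 
  ```python
  import perftester as pt
 
  def f(n): return sum(range(n))
 
  def test_f_performance():
      # perftester test functions raise an error on failure, which pytest reports
      pt.time_test(f, raw_limit=1e-4, n=1000)
      pt.memory_usage_test(f, raw_limit=30, n=1000)
  ```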
 
@@ -184,19 +183,16 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  This is the easiest way to use `perftester`. Its only drawback is that if the performance tests take a lot of time, `pytest` will also take a lot of time, something usually to be avoided. You can then use some `pytest` tricks to skip the `perftester` tests and run them only when you want, or you can simply use the above-described command-line `perftester` framework for performance testing.
 
-
  ### Use `perftester` inside `doctest`
 
- In the same way, you can use `perftester` in `doctest`. You will find plenty of examples in the documentation here, and in the [tests/ folder](tests/).
+ In the same way, you can use `perftester` in `doctest`. You will find plenty of examples in the documentation here, and in the [tests/ folder](tests/).
 
- > Although a great fan of `doctest`ing, I do **not** recommend using `perftester` in docstrings. For me, `doctest`s in docstrings should clarify things and explain how functions work, and adding a performance test to a function's docstring would decrease readability.
+ > Although a great fan of `doctest`ing, I do **not** recommend using `perftester` in docstrings. For me, `doctest`s in docstrings should clarify things and explain how functions work, and adding a performance test to a function's docstring would decrease readability.
 
  The best way, thus, is to write performance tests as separate `doctest` files, dedicated to performance testing. You can collect such files in a shell script that runs performance tests.
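 
  For instance, a dedicated `doctest` file could contain little more than the test calls themselves; in this sketch, the module, the function, and the limits are illustrative assumptions:
 
  ```python
  >>> import perftester as pt
  >>> from mymodule import f
  >>> pt.time_test(f, raw_limit=1e-4, n=1000)
  >>> pt.memory_usage_test(f, raw_limit=30, n=1000)
 
  ```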
 
-
  ## Basic use of `perftester`
 
-
  ### Simple benchmarking
 
  To create a performance test for a function, you likely need to know how it behaves. You can run two simple benchmarking functions, `pt.memory_usage_benchmark()` and `pt.time_benchmark()`, which will run memory and time benchmarks, respectively. First, we will decrease `Number` (passed to `timeit.repeat()` as `number`), in order to shorten the benchmarks (which here serve as `doctest`s):
@@ -204,7 +200,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
  ```python
  >>> import perftester as pt
  >>> def f(n): return sum(map(lambda i: i**0.5, range(n)))
- >>> pt.config.set(f, "time", number=1000)
+ >>> pt.config.set(f, "time", Number=1000)
  >>> b_100_time = pt.time_benchmark(f, n=100)
  >>> b_100_memory = pt.memory_usage_benchmark(f, n=100)
  >>> b_1000_time = pt.time_benchmark(f, n=1000)
@@ -260,7 +256,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  For time tests, we have the `pt.time_test()` function. First, a raw time test:
 
-
  ```python
  >>> pt.time_test(f, raw_limit=2e-05, n=100)
 
@@ -289,8 +284,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  ```
 
- You can read about relative testing below, in the section on [raw and relative performance testing](#raw-and-relative-performance-testing).
-
+ You can read about relative testing below, in the section on [raw and relative performance testing](#raw-and-relative-performance-testing).
 
  ### Memory testing
 
@@ -298,15 +292,15 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  ```python
  >>> pt.memory_usage_test(f, raw_limit=27, n=100) # test on raw memory
- >>> pt.memory_usage_test(f, relative_limit=1.01, n=100) # relative memory test
- >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.01, n=100) # both
+ >>> pt.memory_usage_test(f, relative_limit=1.2, n=100) # relative memory test
+ >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.2, n=100) # both
 
  ```
 
  In a memory usage test, a function is called only once. You can change that — but do that only if you have solid reasons — using, for example, `pt.config.set(f, "time", Repeat=2)`, which will save this setting for the function in the configuration (so it will be used for all subsequent calls of `f()`). You can also do it just once (so, without saving the setting in `pt.config.settings`), using the `Repeat` argument:
 
  ```python
- >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.01, n=100, Repeat=100)
+ >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.2, n=100, Repeat=100)
 
  ```
 
@@ -314,14 +308,13 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  Of course, memory tests may not be very useful for functions that do not allocate much memory; but as you will see in other documentation files in `perftester`, some functions do use a lot of memory, and such tests make quite a lot of sense for them.
 
-
  ## Configuration: `pt.config`
 
  The whole configuration is stored in the `pt.config` object, which you can easily change. Here's a short example of how you can use it:
 
  ```python
  >>> def f(n): return list(range(n))
- >>> pt.config.set(f, "time", number=10_000, repeat=1)
+ >>> pt.config.set(f, "time", Number=10_000, Repeat=1)
 
  ```
 
@@ -334,7 +327,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
  import perftester as pt
 
  # shorten the tests
- pt.config.set_defaults("time", number=10_000, repeat=3)
+ pt.config.set_defaults("time", Number=10_000, Repeat=3)
 
  # log the results to file (they will be printed in the console anyway)
  pt.config.log_to_file = True
@@ -352,7 +345,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  When you use `perftester` in an interactive session, you update `pt.config` directly in the session. And when you use `perftester` inside `pytest`, you can do it in `conftest.py` and in individual test functions.
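 
  For instance, a `conftest.py` could apply the settings shown above once, before the whole `pytest` session (a sketch that uses only settings already presented in this README):
 
  ```python
  # conftest.py: executed by pytest before the test session starts
  import perftester as pt
 
  pt.config.set_defaults("time", Number=10_000, Repeat=3)  # shorten all time tests
  pt.config.log_to_file = True  # also log the results to a file
  ```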
 
-
  ## Output
 
  If a test fails, you will see something like this:
@@ -383,7 +375,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  > Like in `pytest`, a recommended approach is to use one performance test per `perftester_` function. This can save you some time and trouble, and it will also ensure that all tests are run.
 
-
  #### Summary output
 
  At the end, you will see a simple summary of the results, something like this:
@@ -404,7 +395,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
  perftester_for_testing.perftester_f_2
  ```
 
-
  ## Relative tests against another function
 
  In the basic use, when you choose a relative benchmark, you compare the performance of your function with that of a built-in (empty) function `pt.config.benchmark_function()`. In most cases, this is what you need. Sometimes, however, you may wish to benchmark against another function. For instance, you may want to build your own function that does the same thing as a Python built-in function, and you want to test (and show) that your function performs better. There are two ways of achieving this:
@@ -412,7 +402,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
  * you can use a simple trick; [see here](benchmarking_against_another_function.md);
  * you can overwrite the built-in benchmark functions; [see here](change_benchmarking_function.md).
 
-
  ## Raw and relative performance testing
 
  Of course, performance tests are strongly environment-dependent, something you need to remember when writing and running them. `perftester`, however, offers a solution to this: you can define tests based on
@@ -428,6 +417,99 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  > Warning! Relative results can be different between operating systems.
 
+ ## Tracing full memory usage
+
+ Currently, `perftester` contains a beta version (under heavy testing) of a new feature that can be used to trace full memory usage of a Python program.
+
+ > Warning: Backward compatibility of this feature is not guaranteed! It does not affect the main functionality of `perftester`, however, whose backward compatibility will be kept.
+
+ The feature works in the following way. When you import `perftester` — but you need to do it with `import perftester`, not via importing particular objects — you will be able to see new objects in the global space. One of them is `MEMLOGS`:
+
+ ```python-repl
+ >>> import perftester
+ >>> MEMLOGS[0].ID
+ 'perftester import'
+
+ ```
+
+ Right after the import, the list contains just this one point, created during the import of `perftester`. When you start tracing memory using `perftester`, this list will collect the subsequent measurements. You can take them in two ways: one is via the `MEMPOINT()` function, and another via the `MEMTRACE` decorator. They, too, are in the global scope, so you can use them in any module inside a session in which `perftester` was already imported.
+
+ The `MEMLOGS` list will contain elements being instances of `MemLog`, which is a `collections.namedtuple` data type with two attributes: `ID` and `memory`. This data type is imported with `perftester`, so if you want to use it, you can reach it as `perftester.MemLog`. You don't have to use it, though. Since it's a named tuple, you can treat it as a regular tuple.
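+
+ For instance, you can unpack a `MemLog` entry like any other tuple (a small sketch based on the two attributes just described):
+
+ ```python-repl
+ >>> id_, memory = MEMLOGS[0]
+ >>> id_
+ 'perftester import'
+
+ ```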
+
+ ### What sort of memory is measured?
+
+ The feature uses `pympler.asizeof.asizeof(all=True)` to measure the size of all current gc objects, including module, global and stack frame objects, minus the size of `MEMLOGS`. The memory is measured in MB.
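+
+ For reference, a rough sketch of such a measurement done with `pympler` directly; this is an assumption based on the description above, not perftester's actual code, and the MB conversion is illustrative:
+
+ ```python
+ from pympler import asizeof
+
+ total_bytes = asizeof.asizeof(all=True)  # size of all gc-tracked objects
+ print(f"{total_bytes / 1024 / 1024:.2f} MB")
+ ```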
+
+ ### Using `MEMPOINT()`
+
+ `MEMPOINT()` creates a point of full-memory measurement. It will be appended to `MEMLOGS`.
+
+ ```python-repl
+ >>> import perftester
+ >>> def foo(n):
+ ...     x = [i for i in range(n)]
+ ...     MEMPOINT()
+ ...     return x
+ >>> _ = foo(100)
+ >>> _ = foo(1_000_000)
+ >>> len(MEMLOGS)
+ 3
+ >>> MEMLOGS[2].memory > MEMLOGS[1].memory
+ True
+
+ ```
+
+ The last test checks whether the second measurement — that is, the one from the call with `n` of a million — uses more memory than the one from the call with `n` of a hundred. Makes sense, and indeed the test passes.
+
+ When creating a point, you can give it an ID, for instance, `MEMPOINT("from sth() function")`.
+
+ `MEMPOINT()` can be used to create a point anywhere inside the code. Nevertheless, if you want to trace memory for a function, you can use the `MEMTRACE` decorator:
+
+ ```python-repl
+ >>> @MEMTRACE
+ ... def bar(n):
+ ...     return [i for i in range(n)]
+ >>> _ = bar(1_000_000)
+ >>> MEMLOGS[-2].memory < MEMLOGS[-1].memory
+ True
+
+ ```
+
+ The decorator creates two points: one right before calling the function and another right after it returns.
+
+ The last line tests whether the memory before running the function is smaller than the memory after running it — and given such a big `n`, it should be.
+
+ Look here:
+
+ ```python-repl
+ >>> @MEMTRACE
+ ... def bar(n):
+ ...     x = [i for i in range(n)]
+ ...     y = [i/3 for i in x]
+ ...     z = [i/3 for i in y]
+ ...     MEMPOINT("with x, y, z")
+ ...     del x
+ ...     MEMPOINT("without x")
+ ...     del y
+ ...     MEMPOINT("without x and y")
+ ...     del z
+ ...     MEMPOINT("without x and y and z")
+ ...     return
+ >>> _ = bar(100_000)
+ >>> MEMLOGS[-3].memory > MEMLOGS[-2].memory > MEMLOGS[-1].memory
+ True
+
+ ```
+
+ ### Print `MEMLOGS`
+
+ You can do whatever you want with `MEMLOGS`. However, when you want to see this object nicely printed, use the `MEMPRINT()` function, also available from the global scope. It prints the results in a pretty way, with memory given in MB.
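+
+ If you prefer your own formatting, `MEMLOGS` is just a list of named tuples, so a plain-Python sketch like this works too (it relies only on the documented `ID` and `memory` attributes):
+
+ ```python
+ for i, memlog in enumerate(MEMLOGS):
+     print(f"{i:>3}  {memlog.memory:10.2f} MB  {memlog.ID}")
+ ```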
+
+ ### Why the global scope?
+
+ Since this feature of `perftester` is to be used to debug memory use from various modules, it'd be inconvenient to import the required objects in all these modules. That's why, for the moment, the required objects are kept in the global scope — but this can change in future versions.
+
+ If you have any comments about this, please share them via the package repository's Issues.
 
  ## Other tools
 
@@ -436,9 +518,8 @@ Description: # `perftester`: Lightweight performance testing of Python functions
  * [`cProfile` and `profile`](https://docs.python.org/3/library/profile.html), the powerful built-in tools for deterministic profiling
  * [the built-in `timeit` module](https://docs.python.org/3/library/timeit.html), for benchmarking
  * [`memory_profiler`](https://pypi.org/project/memory-profiler/), a powerful memory profiler (`memory_profiler` is utilized by `perftester`)
-
- In fact, `perftester` is just a simple wrapper around `timeit` and `memory_profiler`, since `perftester` itself does not come with its own solutions. It simply uses these functions and offers an easy-to-use API to benchmark and test memory and time performance.
 
+ In fact, `perftester` is just a simple wrapper around `timeit` and `memory_profiler`, since `perftester` itself does not come with its own solutions. It simply uses these functions and offers an easy-to-use API to benchmark and test memory and time performance.
 
  ## Manipulating the traceback
 
@@ -446,19 +527,16 @@ Description: # `perftester`: Lightweight performance testing of Python functions
 
  > This behavior will not affect any functions other than the two `perftester` testing functions: `pt.time_test()` and `pt.memory_usage_test()`. If you want to use this behavior for other functions, too, you can use `pt.config.cut_traceback()`; to reverse it, use `pt.config.full_traceback()`.
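 
  A quick sketch of toggling this behavior (only the two `pt.config` calls named above are from the README; the rest is illustrative):
 
  ```python
  import perftester as pt
 
  pt.config.cut_traceback()   # shortened tracebacks for other functions, too
  # ... run code or tests whose tracebacks should be shortened ...
  pt.config.full_traceback()  # restore full tracebacks
  ```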
 
-
  ## Caveats
 
  * `perftester` does not work with multiple threads or processes.
  * `perftester` is still in beta and thus still under testing.
  * Watch out when you're running the same test on different operating systems. Even relative tests can differ from OS to OS.
 
-
  ## Operating systems
 
  The package is developed on Linux (actually, under WSL) and checked on Windows 10, so it works in both environments.
 
-
  ## Support
 
  Any contribution is welcome. You can submit an issue in the [repository](https://github.com/nyggus/perftester). You can also create your own pull requests.