perftester 0.5.0__tar.gz → 0.6.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.1
2
2
  Name: perftester
3
- Version: 0.5.0
3
+ Version: 0.6.0
4
4
  Summary: Lightweight performance testing in Python
5
5
  Home-page: https://github.com/nyggus/perftester
6
6
  Author: Nyggus
@@ -18,13 +18,12 @@ Description: # `perftester`: Lightweight performance testing of Python functions
18
18
 
19
19
  The package has three external dependencies: [`memory_profiler`](https://pypi.org/project/memory-profiler/) ([repo](https://github.com/pythonprofilers/memory_profiler)), [`easycheck`](https://pypi.org/project/easycheck/) ([repo](https://github.com/nyggus/easycheck)), and [`rounder`](https://pypi.org/project/rounder/) ([repo](https://github.com/nyggus/rounder)).
20
20
 
21
- > `perftester` is still under heavy testing. If you find anything that does not work as intended, please let me know via nyggus <at> gmail.com.
21
+ > `perftester` is still under heavy testing. If you find anything that does not work as intended, please let me know via nyggus `<at>` gmail.com.
22
22
 
23
23
  ## Pre-introduction: TL;DR
24
24
 
25
25
  At the most basic level, using `perftester` is simple. It offers you two functions for benchmarking (one for execution time and one for memory), and two functions for performance testing (likewise). Read below for a very short introduction to them. If you want to learn more, however, do not stop there, but read on.
26
26
 
27
-
28
27
  ### Benchmarking
29
28
 
30
29
  You have the `time_benchmark()` and `memory_usage_benchmark()` functions:
@@ -34,6 +33,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
34
33
  def foo(x, n): return [x] * n
35
34
  pt.time_benchmark(foo, x=129, n=100)
36
35
  ```
36
+
37
37
  and this will print the results of the time benchmark, with raw results similar to those that `timeit.repeat()` returns; unlike it, however, `pt.time_benchmark()` reports the mean raw time per function run, not the overall time. In addition, you will see some summaries of the results.
38
38
 
39
39
  The above call actually ran the `timeit.repeat()` function, with the default configuration of `Number=100_000` and `Repeat=5`. If you want to change any of these, you can use the arguments `Number` and `Repeat`, correspondingly:
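A minimal sketch of such a one-off override (the values here are arbitrary, chosen only to make the run short):

```python
import perftester as pt

def foo(x, n):
    return [x] * n

# One-off override: 1000 runs per repeat and 3 repeats,
# instead of the defaults Number=100_000 and Repeat=5.
pt.time_benchmark(foo, x=129, n=100, Number=1_000, Repeat=3)
```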
@@ -88,7 +88,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
88
88
  >>> pt.memory_usage_test(foo, raw_limit=25, x=129, n=100)
89
89
 
90
90
  # A relative test
91
- >>> pt.memory_usage_test(foo, relative_limit=1.01, x=129, n=100)
91
+ >>> pt.memory_usage_test(foo, relative_limit=1.2, x=129, n=100)
92
92
 
93
93
  ```
94
94
 
@@ -109,16 +109,17 @@ Description: # `perftester`: Lightweight performance testing of Python functions
109
109
 
110
110
  ## Introduction
111
111
 
112
-
113
112
  `perftester` is a lightweight package for simple performance testing in Python. Here, performance refers to execution time and memory usage, so performance testing means testing if a function performs quickly enough and does not use too much RAM. In addition, the module offers you simple functions for straightforward benchmarking, in terms of both execution time and memory.
114
113
 
115
114
  Under the hood, `perftester` is a wrapper around two functions from other modules:
115
+
116
116
  * `perftester.time_benchmark()` and `perftester.time_test()` use `timeit.repeat()`
117
117
  * `perftester.memory_usage_benchmark()` and `perftester.memory_usage_test()` use `memory_profiler.memory_usage()`
118
118
 
119
119
  What `perftester` offers is a testing framework with syntax as simple as possible.
120
120
 
121
121
  You can use `perftester` in three main ways:
122
+
122
123
  * in an interactive session, for simple benchmarking of functions;
123
124
  * as part of another testing framework, like `doctest` or `pytest`; and
124
125
  * as an independent testing framework.
@@ -127,7 +128,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
127
128
 
128
129
  When it comes to actual testing, it's difficult to say which of the last two ways is better or more convenient: it may depend on how many performance tests you have and how much time they take. If the tests do not take more than a couple of seconds, then you can combine them with unit tests. But if they take a lot of time, you should likely make them independent of unit tests, and run them from time to time.
129
130
 
130
-
131
131
  ## Using `perftester`
132
132
 
133
133
  ### Use it as a separate testing framework
@@ -148,10 +148,9 @@ Description: # `perftester`: Lightweight performance testing of Python functions
148
148
 
149
149
  > There is no best approach, but remember to choose one that suits your needs.
150
150
 
151
-
152
151
  ### Use `perftester` inside `pytest`
153
152
 
154
- This is a very simple approach, perhaps the simplest one: When you use `pytest`, you can simply add `perftester` testing functions to `pytest` testing functions, and that way both frameworks will be combined, or rather the `pytest` framework will run `perftester` tests. The amount of additional work is minimal.
153
+ This is a very simple approach, perhaps the simplest one: When you use `pytest`, you can simply add `perftester` testing functions to `pytest` testing functions, and that way both frameworks will be combined, or rather the `pytest` framework will run `perftester` tests. The amount of additional work is minimal.
155
154
 
156
155
  For instance, you can write the following test function:
157
156
 
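(The function itself is elided by this diff hunk; below is an editor's minimal sketch consistent with the API shown in this README. `f` and the limits are placeholders; a failing `perftester` test raises an error, which `pytest` reports as a failed test.)

```python
import perftester as pt

def f(n):
    return sum(range(n))

def test_f_performance():
    # Both calls simply pass when the limits are met
    # and raise otherwise, failing the pytest test.
    pt.time_test(f, raw_limit=2e-05, n=100)
    pt.memory_usage_test(f, raw_limit=27, n=100)
```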
@@ -184,19 +183,16 @@ Description: # `perftester`: Lightweight performance testing of Python functions
184
183
 
185
184
  This is the easiest way to use `perftester`. Its only drawback is that if the performance tests take much time, `pytest` will also take much time, something usually to be avoided. You can then do some `pytest` tricks to not run `perftester` tests, and run them only when you want — or you can simply use the above-described command-line `perftester` framework for performance testing.
186
185
 
187
-
188
186
  ### Use `perftester` inside `doctest`
189
187
 
190
- In the same way, you can use `perftester` in `doctest`. You will find plenty of examples in the documentation here, and in the [tests/ folder](tests/).
188
+ In the same way, you can use `perftester` in `doctest`. You will find plenty of examples in the documentation here, and in the [tests/ folder](tests/).
191
189
 
192
- > A great fan of `doctest`ing, I do **not** recommend using `perftester` in docstrings. For me, `doctest`s in docstrings should clarify things and explain how functions work, and adding a performance test to a function's docstring would decrease readability.
190
+ > A great fan of `doctest`ing, I do **not** recommend using `perftester` in docstrings. For me, `doctest`s in docstrings should clarify things and explain how functions work, and adding a performance test to a function's docstring would decrease readability.
193
191
 
194
192
  The best way, thus, is to write performance tests as separate `doctest` files, dedicated to performance testing. You can collect such files in a shell script that runs performance tests.
195
193
 
196
-
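For instance, a standalone doctest file could look like this (an editor's sketch; `mymodule` and the limit are hypothetical), to be run with `python -m doctest`:

```python-repl
>>> import perftester as pt
>>> from mymodule import f
>>> pt.time_test(f, raw_limit=2e-05, n=100)
```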
197
194
  ## Basic use of `perftester`
198
195
 
199
-
200
196
  ### Simple benchmarking
201
197
 
202
198
  To create a performance test for a function, you likely need to know how it behaves. You can run two simple benchmarking functions, `pt.memory_usage_benchmark()` and `pt.time_benchmark()`, which will run memory and time benchmarks, respectively. First, we will decrease `number` (passed to `timeit.repeat`), in order to shorten the benchmarks (which here serve as `doctest`s):
@@ -260,7 +256,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
260
256
 
261
257
  For time tests, we have the `pt.time_test()` function. First, a raw time test:
262
258
 
263
-
264
259
  ```python
265
260
  >>> pt.time_test(f, raw_limit=2e-05, n=100)
266
261
 
@@ -289,8 +284,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
289
284
 
290
285
  ```
291
286
 
292
- You can read about relative testing below, [in section](#raw-and-relative-performance-testing).
293
-
287
+ You can read about relative testing below, [in section](#raw-and-relative-performance-testing).
294
288
 
295
289
  ### Memory testing
296
290
 
@@ -298,15 +292,15 @@ Description: # `perftester`: Lightweight performance testing of Python functions
298
292
 
299
293
  ```python
300
294
  >>> pt.memory_usage_test(f, raw_limit=27, n=100) # test on raw memory
301
- >>> pt.memory_usage_test(f, relative_limit=1.01, n=100) # relative time test
302
- >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.01, n=100) # both
295
+ >>> pt.memory_usage_test(f, relative_limit=1.2, n=100) # relative memory test
296
+ >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.2, n=100) # both
303
297
 
304
298
  ```
305
299
 
306
300
  In a memory usage test, a function is called only once. You can change that — but do that only if you have solid reasons — using, for example, `pt.config.set(f, "time", "repeat", 2)`, which will store this setting for the function in the configuration (so it will be used for all subsequent calls for function `f()`). You can also do it just once (that is, without saving the setting in `pt.config.settings`), using the `Repeat` argument:
307
301
 
308
302
  ```python
309
- >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.01, n=100, Repeat=100)
303
+ >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.2, n=100, Repeat=100)
310
304
 
311
305
  ```
312
306
 
@@ -314,7 +308,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
314
308
 
315
309
  Of course, memory tests may not be very useful for functions that do not allocate much memory, but as you will see in other documentation files in `perftester`, some functions do use a lot of memory, and such tests make quite a lot of sense for them.
316
310
 
317
-
318
311
  ## Configuration: `pt.config`
319
312
 
320
313
  The whole configuration is stored in the `pt.config` object, which you can easily change. Here's a short example of how you can use it:
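(The example itself is elided by this hunk; below is an editor's minimal sketch that uses only settings mentioned elsewhere in this document. The call to `pt.pp()` assumes that this exported helper pretty-prints objects, as its name suggests.)

```python
import perftester as pt

def f(n):
    return list(range(n))

# Store a per-function setting: two repeats for f's time measurements.
pt.config.set(f, "time", "repeat", 2)

# Change the number of significant digits used when printing results.
pt.config.digits_for_printing = 6

# Inspect the current settings.
pt.pp(pt.config.settings)
```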
@@ -352,7 +345,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
352
345
 
353
346
  When you use `perftester` in an interactive session, you update `pt.config` in a normal way, in the session. And when you use `perftester` inside `pytest`, you can do it in conftest.py and in each testing function.
354
347
 
355
-
356
348
  ## Output
357
349
 
358
350
  If a test fails, you will see something like this:
@@ -383,7 +375,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
383
375
 
384
376
  > Like in `pytest`, a recommended approach is to use one performance test per `perftester_` function. This can save you some time and trouble, and it will also ensure that all tests are run.
385
377
 
386
-
387
378
  #### Summary output
388
379
 
389
380
  At the end, you will see a simple summary of the results, something like this:
@@ -404,7 +395,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
404
395
  perftester_for_testing.perftester_f_2
405
396
  ```
406
397
 
407
-
408
398
  ## Relative tests against another function
409
399
 
410
400
  In the basic use, when you choose a relative benchmark, you compare the performance of your function with that of a built-in (empty) function `pt.config.benchmark_function()`. In most cases, this is what you need. Sometimes, however, you may wish to benchmark against another function. For instance, you may want to build your own function that does the same thing as a Python built-in function, and you want to test (and show) that your function performs better. There are two ways of achieving this:
@@ -412,7 +402,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
412
402
  * you can use a simple trick; [see here](benchmarking_against_another_function.md);
413
403
  * you can overwrite the built-in benchmark functions; [see here](change_benchmarking_function.md).
414
404
 
415
-
416
405
  ## Raw and relative performance testing
417
406
 
418
407
  Surely, any performance tests are strongly environment-dependent, and you need to remember that when writing and running them. `perftester`, however, offers a solution to this: You can define tests based on
@@ -428,6 +417,99 @@ Description: # `perftester`: Lightweight performance testing of Python functions
428
417
 
429
418
  > Warning! Relative results can be different between operating systems.
430
419
 
420
+ ## Tracing full memory usage
421
+
422
+ Currently, `perftester` contains a beta version (under heavy testing) of a new feature that can be used to trace full memory usage of a Python program.
423
+
424
+ > Warning: Backward compatibility of this feature is not guaranteed! It does not affect the main functionality of `perftester`, however, whose backward compatibility will be maintained.
425
+
426
+ The feature works in the following way. When you import `perftester` — but you need to do it with `import perftester`, not by importing particular objects — you will see new objects in the global scope. One of them is `MEMLOGS`:
427
+
428
+ ```python-repl
429
+ >>> import perftester
430
+ >>> MEMLOGS[0].ID
431
+ 'perftester import'
432
+
433
+ ```
434
+
435
+ Right after the import, the list contains a single entry, logged during the import itself. When you start tracing memory using `perftester`, this list will collect the subsequent measurements. You can take them in two ways: via the `MEMPOINT()` function or via the `MEMTRACE` decorator. They, too, are in the global scope, so you can use them in any module inside a session in which `perftester` was already imported.
436
+
437
+ The `MEMLOGS` list contains instances of `MemLog`, a named tuple (created with `collections.namedtuple`) with two attributes: `ID` and `memory`. This data type is imported with `perftester`, so if you want to use it, you can reach it as `perftester.MemLog`. You don't have to use it, though: since it's a named tuple, you can treat it as a regular tuple.
438
+
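A short sketch of both styles of access (a fresh session is assumed, so the only entry is the one logged at import):

```python-repl
>>> import perftester
>>> MEMLOGS[0].ID           # attribute access
'perftester import'
>>> ID, memory = MEMLOGS[0] # or plain tuple unpacking
>>> ID
'perftester import'
```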
439
+ ### What sort of memory is measured?
440
+
441
+ The feature uses `pympler.asizeof.asizeof(all=True)` to measure the size of all current gc objects, including module, global and stack frame objects, minus the size of `MEMLOGS`. The memory is reported in bytes; `MEMPRINT()` converts it to MB when printing.
442
+
443
+ ### Using `MEMPOINT()`
444
+
445
+ `MEMPOINT()` creates a point of full-memory measurement; the measurement is appended to `MEMLOGS`.
446
+
447
+ ```python-repl
448
+ >>> import perftester
449
+ >>> def foo(n):
450
+ ...     x = [i for i in range(n)]
451
+ ...     MEMPOINT()
452
+ ...     return x
453
+ >>> _ = foo(100)
454
+ >>> _ = foo(1_000_000)
455
+ >>> len(MEMLOGS)
456
+ 3
457
+ >>> MEMLOGS[2].memory > MEMLOGS[1].memory
458
+ True
459
+
460
+ ```
461
+
462
+ The last test checks whether the second measurement — that is, from the call with `n` of a million — uses more memory than the call with `n` of a hundred. Makes sense, and indeed the test passes.
463
+
464
+ When creating a point, you can use an ID, for instance, `MEMPOINT("from sth() function")`.
465
+
466
+ `MEMPOINT()` can be used to create a point anywhere inside the code. Alternatively, if you want to trace memory use of a whole function, you can use the `MEMTRACE` decorator:
467
+
468
+ ```python-repl
469
+ >>> @MEMTRACE
470
+ ... def bar(n):
471
+ ...     return [i for i in range(n)]
472
+ >>> _ = bar(1_000_000)
473
+ >>> MEMLOGS[-2].memory < MEMLOGS[-1].memory
474
+ True
475
+
476
+ ```
477
+
478
+ The decorator creates two points: one right before the function is called and another right after it returns.
479
+
480
+ The last line tests whether memory before running the function is smaller than that after running it — and given such a big `n`, it should be.
481
+
482
+ Look here:
483
+
484
+ ```python-repl
485
+ >>> @MEMTRACE
486
+ ... def bar(n):
487
+ ...     x = [i for i in range(n)]
488
+ ...     y = [i/3 for i in x]
489
+ ...     z = [i/3 for i in y]
490
+ ...     MEMPOINT("with x, y, z")
491
+ ...     del x
492
+ ...     MEMPOINT("without x")
493
+ ...     del y
494
+ ...     MEMPOINT("without x and y")
495
+ ...     del z
496
+ ...     MEMPOINT("without x and y and z")
497
+ ...     return
498
+ >>> _ = bar(100_000)
499
+ >>> MEMLOGS[-3].memory > MEMLOGS[-2].memory > MEMLOGS[-1].memory
500
+ True
501
+
502
+ ```
503
+
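As the implementation later in this diff shows, `MEMTRACE` also accepts custom point labels via its `ID_before` and `ID_after` parameters; with plain decorator syntax, one way to supply them is `functools.partial` (an editor's sketch; the function and labels are illustrative):

```python
from functools import partial

import perftester  # the import makes MEMTRACE available globally

@partial(MEMTRACE, ID_before="start of work()", ID_after="end of work()")
def work(n):
    return [i ** 2 for i in range(n)]

_ = work(100_000)  # logs the two custom-labelled points into MEMLOGS
```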
504
+ ### Print `MEMLOGS`
505
+
506
+ You can do whatever you want with `MEMLOGS`. However, when you want to see this object nicely printed, use the `MEMPRINT()` function, available from the global scope, too. You will see the results printed in a pretty way, with memory provided in MB.
507
+
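A sketch of the output format (the memory figures below are illustrative, not real measurements):

```python-repl
>>> MEMPRINT()
0    11.6   → perftester import
1    19.2   → with x, y, z
```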
508
+ ### Why the global scope?
509
+
510
+ Since this feature of `perftester` is meant for debugging memory use from various modules, it would be inconvenient to import the required objects in all these modules. That's why, for the moment, the required objects are kept in the global scope — but this can change in future versions.
511
+
512
+ If you have any comments about this, please share them via the Issues page of the package's repository.
431
513
 
432
514
  ## Other tools
433
515
 
@@ -436,9 +518,8 @@ Description: # `perftester`: Lightweight performance testing of Python functions
436
518
  * [`cProfile` and `profile`](https://docs.python.org/3/library/profile.html), the built-in powerful tools for deterministic profiling
437
519
  * [the built-in `timeit` module](https://docs.python.org/3/library/timeit.html), for benchmarking
438
520
  * [`memory_profiler`](https://pypi.org/project/memory-profiler/), a powerful memory profiler (`memory_profiler` is utilized by `perftester`)
439
-
440
- In fact, `perftester` is just a simple wrapper around `timeit` and `memory_profiler`, since `perftester` itself does not come with its own solutions. It simply uses these functions and offers an easy-to-use API to benchmark and test memory and time performance.
441
521
 
522
+ In fact, `perftester` is just a simple wrapper around `timeit` and `memory_profiler`, since `perftester` itself does not come with its own solutions. It simply uses these functions and offers an easy-to-use API to benchmark and test memory and time performance.
442
523
 
443
524
  ## Manipulating the traceback
444
525
 
@@ -446,19 +527,16 @@ Description: # `perftester`: Lightweight performance testing of Python functions
446
527
 
447
528
  > This behavior will not affect any other function than the two `perftester` testing functions: `pt.time_test()` and `pt.memory_usage_test()`. If you want to use this behavior for other functions, too, you can use `pt.config.cut_traceback()`; to reverse, use `pt.config.full_traceback()`.
448
529
 
449
-
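A minimal sketch of toggling this behavior globally:

```python
import perftester as pt

# Extend the shortened-traceback behavior to other functions, too...
pt.config.cut_traceback()

# ...and restore full tracebacks, for example while debugging.
pt.config.full_traceback()
```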
450
530
  ## Caveats
451
531
 
452
532
  * `perftester` does not work with multiple threads or processes.
453
533
  * `perftester` is still in a beta version and so is still under testing.
454
534
  * Watch out when you're running the same test in different operating systems. Even relative tests can differ from OS to OS.
455
535
 
456
-
457
536
  ## Operating systems
458
537
 
459
538
  The package is developed on Linux (actually, under WSL) and tested on Windows 10, so it works in both these environments.
460
539
 
461
-
462
540
  ## Support
463
541
 
464
542
  Any contribution will be welcome. You can submit an issue in the [repository](https://github.com/nyggus/perftester). You can also create your own pull requests.
@@ -10,13 +10,12 @@ pip install perftester
10
10
 
11
11
  The package has three external dependencies: [`memory_profiler`](https://pypi.org/project/memory-profiler/) ([repo](https://github.com/pythonprofilers/memory_profiler)), [`easycheck`](https://pypi.org/project/easycheck/) ([repo](https://github.com/nyggus/easycheck)), and [`rounder`](https://pypi.org/project/rounder/) ([repo](https://github.com/nyggus/rounder)).
12
12
 
13
- > `perftester` is still under heavy testing. If you find anything that does not work as intended, please let me know via nyggus <at> gmail.com.
13
+ > `perftester` is still under heavy testing. If you find anything that does not work as intended, please let me know via nyggus `<at>` gmail.com.
14
14
 
15
15
  ## Pre-introduction: TL;DR
16
16
 
17
17
  At the most basic level, using `perftester` is simple. It offers you two functions for benchmarking (one for execution time and one for memory), and two functions for performance testing (likewise). Read below for a very short introduction to them. If you want to learn more, however, do not stop there, but read on.
18
18
 
19
-
20
19
  ### Benchmarking
21
20
 
22
21
  You have the `time_benchmark()` and `memory_usage_benchmark()` functions:
@@ -26,6 +25,7 @@ import perftester as pt
26
25
  def foo(x, n): return [x] * n
27
26
  pt.time_benchmark(foo, x=129, n=100)
28
27
  ```
28
+
29
29
  and this will print the results of the time benchmark, with raw results similar to those that `timeit.repeat()` returns; unlike it, however, `pt.time_benchmark()` reports the mean raw time per function run, not the overall time. In addition, you will see some summaries of the results.
30
30
 
31
31
  The above call actually ran the `timeit.repeat()` function, with the default configuration of `Number=100_000` and `Repeat=5`. If you want to change any of these, you can use the arguments `Number` and `Repeat`, correspondingly:
@@ -80,7 +80,7 @@ The API of `perftester` testing functions is similar to that of benchmarking fun
80
80
  >>> pt.memory_usage_test(foo, raw_limit=25, x=129, n=100)
81
81
 
82
82
  # A relative test
83
- >>> pt.memory_usage_test(foo, relative_limit=1.01, x=129, n=100)
83
+ >>> pt.memory_usage_test(foo, relative_limit=1.2, x=129, n=100)
84
84
 
85
85
  ```
86
86
 
@@ -101,16 +101,17 @@ That's all in this short introduction. If you're interested in more advanced use
101
101
 
102
102
  ## Introduction
103
103
 
104
-
105
104
  `perftester` is a lightweight package for simple performance testing in Python. Here, performance refers to execution time and memory usage, so performance testing means testing if a function performs quickly enough and does not use too much RAM. In addition, the module offers you simple functions for straightforward benchmarking, in terms of both execution time and memory.
106
105
 
107
106
  Under the hood, `perftester` is a wrapper around two functions from other modules:
107
+
108
108
  * `perftester.time_benchmark()` and `perftester.time_test()` use `timeit.repeat()`
109
109
  * `perftester.memory_usage_benchmark()` and `perftester.memory_usage_test()` use `memory_profiler.memory_usage()`
110
110
 
111
111
  What `perftester` offers is a testing framework with syntax as simple as possible.
112
112
 
113
113
  You can use `perftester` in three main ways:
114
+
114
115
  * in an interactive session, for simple benchmarking of functions;
115
116
  * as part of another testing framework, like `doctest` or `pytest`; and
116
117
  * as an independent testing framework.
@@ -119,7 +120,6 @@ The first way is a different type of use from the other two. I use it to learn t
119
120
 
120
121
  When it comes to actual testing, it's difficult to say which of the last two ways is better or more convenient: it may depend on how many performance tests you have and how much time they take. If the tests do not take more than a couple of seconds, then you can combine them with unit tests. But if they take a lot of time, you should likely make them independent of unit tests, and run them from time to time.
121
122
 
122
-
123
123
  ## Using `perftester`
124
124
 
125
125
  ### Use it as a separate testing framework
@@ -140,10 +140,9 @@ Read more about using perftester that way [here](docs/use_perftester_as_CLI.md).
140
140
 
141
141
  > There is no best approach, but remember to choose one that suits your needs.
142
142
 
143
-
144
143
  ### Use `perftester` inside `pytest`
145
144
 
146
- This is a very simple approach, perhaps the simplest one: When you use `pytest`, you can simply add `perftester` testing functions to `pytest` testing functions, and that way both frameworks will be combined, or rather the `pytest` framework will run `perftester` tests. The amount of additional work is minimal.
145
+ This is a very simple approach, perhaps the simplest one: When you use `pytest`, you can simply add `perftester` testing functions to `pytest` testing functions, and that way both frameworks will be combined, or rather the `pytest` framework will run `perftester` tests. The amount of additional work is minimal.
147
146
 
148
147
  For instance, you can write the following test function:
149
148
 
@@ -176,19 +175,16 @@ If you now run `pytest` and the test passes, nothing will happen — just like w
176
175
 
177
176
  This is the easiest way to use `perftester`. Its only drawback is that if the performance tests take much time, `pytest` will also take much time, something usually to be avoided. You can then do some `pytest` tricks to not run `perftester` tests, and run them only when you want — or you can simply use the above-described command-line `perftester` framework for performance testing.
178
177
 
179
-
180
178
  ### Use `perftester` inside `doctest`
181
179
 
182
- In the same way, you can use `perftester` in `doctest`. You will find plenty of examples in the documentation here, and in the [tests/ folder](tests/).
180
+ In the same way, you can use `perftester` in `doctest`. You will find plenty of examples in the documentation here, and in the [tests/ folder](tests/).
183
181
 
184
- > A great fan of `doctest`ing, I do **not** recommend using `perftester` in docstrings. For me, `doctest`s in docstrings should clarify things and explain how functions work, and adding a performance test to a function's docstring would decrease readability.
182
+ > A great fan of `doctest`ing, I do **not** recommend using `perftester` in docstrings. For me, `doctest`s in docstrings should clarify things and explain how functions work, and adding a performance test to a function's docstring would decrease readability.
185
183
 
186
184
  The best way, thus, is to write performance tests as separate `doctest` files, dedicated to performance testing. You can collect such files in a shell script that runs performance tests.
187
185
 
188
-
189
186
  ## Basic use of `perftester`
190
187
 
191
-
192
188
  ### Simple benchmarking
193
189
 
194
190
  To create a performance test for a function, you likely need to know how it behaves. You can run two simple benchmarking functions, `pt.memory_usage_benchmark()` and `pt.time_benchmark()`, which will run memory and time benchmarks, respectively. First, we will decrease `number` (passed to `timeit.repeat`), in order to shorten the benchmarks (which here serve as `doctest`s):
@@ -252,7 +248,6 @@ True
252
248
 
253
249
  For time tests, we have the `pt.time_test()` function. First, a raw time test:
254
250
 
255
-
256
251
  ```python
257
252
  >>> pt.time_test(f, raw_limit=2e-05, n=100)
258
253
 
@@ -281,8 +276,7 @@ We also can combine both:
281
276
 
282
277
  ```
283
278
 
284
- You can read about relative testing below, [in section](#raw-and-relative-performance-testing).
285
-
279
+ You can read about relative testing below, [in section](#raw-and-relative-performance-testing).
286
280
 
287
281
  ### Memory testing
288
282
 
@@ -290,15 +284,15 @@ Memory tests use `pt.memory_usage_test()` function, which is used in the same wa
290
284
 
291
285
  ```python
292
286
  >>> pt.memory_usage_test(f, raw_limit=27, n=100) # test on raw memory
293
- >>> pt.memory_usage_test(f, relative_limit=1.01, n=100) # relative time test
294
- >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.01, n=100) # both
287
+ >>> pt.memory_usage_test(f, relative_limit=1.2, n=100) # relative memory test
288
+ >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.2, n=100) # both
295
289
 
296
290
  ```
297
291
 
298
292
  In a memory usage test, a function is called only once. You can change that — but do that only if you have solid reasons — using, for example, `pt.config.set(f, "time", "repeat", 2)`, which will store this setting for the function in the configuration (so it will be used for all subsequent calls for function `f()`). You can also do it just once (that is, without saving the setting in `pt.config.settings`), using the `Repeat` argument:
299
293
 
300
294
  ```python
301
- >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.01, n=100, Repeat=100)
295
+ >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.2, n=100, Repeat=100)
302
296
 
303
297
  ```
304
298
 
@@ -306,7 +300,6 @@ In a memory usage test, a function is called only once. You can change that —
306
300
 
307
301
  Of course, memory tests may not be very useful for functions that do not allocate much memory, but as you will see in other documentation files in `perftester`, some functions do use a lot of memory, and such tests make quite a lot of sense for them.
308
302
 
309
-
310
303
  ## Configuration: `pt.config`
311
304
 
312
305
  The whole configuration is stored in the `pt.config` object, which you can easily change. Here's a short example of how you can use it:
@@ -344,7 +337,6 @@ and so on. You can also change settings in each testing file itself, preferably
344
337
 
345
338
  When you use `perftester` in an interactive session, you update `pt.config` in a normal way, in the session. And when you use `perftester` inside `pytest`, you can do it in conftest.py and in each testing function.
346
339
 
347
-
348
340
  ## Output
349
341
 
350
342
  If a test fails, you will see something like this:
@@ -375,7 +367,6 @@ You can locate where a particular test failed, using the module, `perftester_` f
375
367
 
376
368
  > Like in `pytest`, a recommended approach is to use one performance test per `perftester_` function. This can save you some time and trouble, and it will also ensure that all tests are run.
377
369
 
378
-
379
370
  #### Summary output
380
371
 
381
372
  At the end, you will see a simple summary of the results, something like this:
@@ -396,7 +387,6 @@ perftester_for_testing.perftester_f2_time_and_memory
396
387
  perftester_for_testing.perftester_f_2
397
388
  ```
398
389
 
399
-
400
390
  ## Relative tests against another function
401
391
 
402
392
  In the basic use, when you choose a relative benchmark, you compare the performance of your function with that of a built-in (empty) function `pt.config.benchmark_function()`. In most cases, this is what you need. Sometimes, however, you may wish to benchmark against another function. For instance, you may want to build your own function that does the same thing as a Python built-in function, and you want to test (and show) that your function performs better. There are two ways of achieving this:
@@ -404,7 +394,6 @@ In the basic use, when you choose a relative benchmark, you compare the performa
404
394
  * you can use a simple trick; [see here](benchmarking_against_another_function.md);
405
395
  * you can overwrite the built-in benchmark functions; [see here](change_benchmarking_function.md).
406
396
 
407
-
408
397
  ## Raw and relative performance testing
409
398
 
410
399
  Surely, any performance tests are strongly environment-dependent, and you need to remember that when writing and running them. `perftester`, however, offers a solution to this: You can define tests based on
@@ -420,6 +409,99 @@ You can of course combine both types of tests, and you can do it in a very simpl
420
409
 
421
410
  > Warning! Relative results can be different between operating systems.
422
411
 
412
+ ## Tracing full memory usage
413
+
414
+ Currently, `perftester` contains a beta version (under heavy testing) of a new feature that can be used to trace full memory usage of a Python program.
415
+
416
+ > Warning: Backward compatibility of this feature is not guaranteed! It does not affect the main functionality of `perftester`, however, whose backward compatibility will be maintained.
417
+
418
+ The feature works in the following way. When you import `perftester` — but you need to do it with `import perftester`, not by importing particular objects — you will see new objects in the global scope. One of them is `MEMLOGS`:
419
+
420
+ ```python-repl
421
+ >>> import perftester
422
+ >>> MEMLOGS[0].ID
423
+ 'perftester import'
424
+
425
+ ```
426
+
427
+ Right after the import, the list contains a single entry, logged during the import itself. When you start tracing memory using `perftester`, this list will collect the subsequent measurements. You can take them in two ways: via the `MEMPOINT()` function or via the `MEMTRACE` decorator. They, too, are in the global scope, so you can use them in any module inside a session in which `perftester` was already imported.
428
+
429
+ The `MEMLOGS` list contains instances of `MemLog`, a named tuple (created with `collections.namedtuple`) with two attributes: `ID` and `memory`. This data type is imported with `perftester`, so if you want to use it, you can reach it as `perftester.MemLog`. You don't have to use it, though: since it's a named tuple, you can treat it as a regular tuple.
430
+
431
+ ### What sort of memory is measured?
432
+
433
+ The feature uses `pympler.asizeof.asizeof(all=True)` to measure the size of all current gc objects, including module, global and stack frame objects, minus the size of `MEMLOGS`. The memory is reported in bytes; `MEMPRINT()` converts it to MB when printing.
434
+
435
+ ### Using `MEMPOINT()`
436
+
437
+ `MEMPOINT()` creates a point of full-memory measurement; the measurement is appended to `MEMLOGS`.
438
+
439
+ ```python-repl
440
+ >>> import perftester
441
+ >>> def foo(n):
442
+ ...     x = [i for i in range(n)]
443
+ ...     MEMPOINT()
444
+ ...     return x
445
+ >>> _ = foo(100)
446
+ >>> _ = foo(1_000_000)
447
+ >>> len(MEMLOGS)
448
+ 3
449
+ >>> MEMLOGS[2].memory > MEMLOGS[1].memory
450
+ True
451
+
452
+ ```
453
+
454
+ The last test checks whether the second measurement — that is, from the call with `n` of a million — uses more memory than the call with `n` of a hundred. Makes sense, and indeed the test passes.
455
+
456
+ When creating a point, you can use an ID, for instance, `MEMPOINT("from sth() function")`.
457
+
458
+ `MEMPOINT()` can be used to create a point anywhere inside the code. Alternatively, if you want to trace memory use of a whole function, you can use the `MEMTRACE` decorator:
459
+
460
+ ```python-repl
461
+ >>> @MEMTRACE
462
+ ... def bar(n):
463
+ ...     return [i for i in range(n)]
464
+ >>> _ = bar(1_000_000)
465
+ >>> MEMLOGS[-2].memory < MEMLOGS[-1].memory
466
+ True
467
+
468
+ ```
469
+
470
+ The decorator creates two points: one right before the function is called and another right after it returns.
471
+
472
+ The last line tests whether memory before running the function is smaller than that after running it — and given such a big `n`, it should be.
473
+
474
+ Look here:
475
+
476
+ ```python-repl
477
+ >>> @MEMTRACE
478
+ ... def bar(n):
479
+ ...     x = [i for i in range(n)]
480
+ ...     y = [i/3 for i in x]
481
+ ...     z = [i/3 for i in y]
482
+ ...     MEMPOINT("with x, y, z")
483
+ ...     del x
484
+ ...     MEMPOINT("without x")
485
+ ...     del y
486
+ ...     MEMPOINT("without x and y")
487
+ ...     del z
488
+ ...     MEMPOINT("without x and y and z")
489
+ ...     return
490
+ >>> _ = bar(100_000)
491
+ >>> MEMLOGS[-3].memory > MEMLOGS[-2].memory > MEMLOGS[-1].memory
492
+ True
493
+
494
+ ```
495
+
496
+ ### Print `MEMLOGS`
497
+
498
+ You can do whatever you want with `MEMLOGS`. However, when you want to see this object nicely printed, use the `MEMPRINT()` function, available from the global scope, too. You will see the results printed in a pretty way, with memory provided in MB.
499
+
500
+ ### Why the global scope?
501
+
502
+ Since this feature of `perftester` is meant for debugging memory use from various modules, it would be inconvenient to import the required objects in all these modules. That's why, for the moment, the required objects are kept in the global scope — but this can change in future versions.
503
+
504
+ If you have any comments about this, please share them via the Issues page of the package's repository.
423
505
 
424
506
  ## Other tools
425
507
 
@@ -428,9 +510,8 @@ Of course, Python comes with various powerful tools for profiling, benchmarking
428
510
  * [`cProfile` and `profile`](https://docs.python.org/3/library/profile.html), the built-in powerful tools for deterministic profiling
429
511
  * [the built-in `timeit` module](https://docs.python.org/3/library/timeit.html), for benchmarking
430
512
  * [`memory_profiler`](https://pypi.org/project/memory-profiler/), a powerful memory profiler (`memory_profiler` is utilized by `perftester`)
431
-
432
- In fact, `perftester` is just a simple wrapper around `timeit` and `memory_profiler`, since `perftester` itself does not come with its own solutions. It simply uses these functions and offers an easy-to-use API to benchmark and test memory and time performance.
433
513
 
514
+ In fact, `perftester` is just a simple wrapper around `timeit` and `memory_profiler`, since `perftester` itself does not come with its own solutions. It simply uses these functions and offers an easy-to-use API to benchmark and test memory and time performance.
434
515
 
435
516
  ## Manipulating the traceback
436
517
 
@@ -438,19 +519,16 @@ The default behavior of `perftester` is to **not** include the full traceback wh
438
519
 
439
520
  > This behavior will not affect any other function than the two `perftester` testing functions: `pt.time_test()` and `pt.memory_usage_test()`. If you want to use this behavior for other functions, too, you can use `pt.config.cut_traceback()`; to reverse, use `pt.config.full_traceback()`.
440
521
 
441
-
442
522
  ## Caveats
443
523
 
444
524
  * `perftester` does not work with multiple threads or processes.
445
525
  * `perftester` is still in a beta version and so is still under testing.
446
526
  * Watch out when you're running the same test in different operating systems. Even relative tests can differ from OS to OS.
447
527
 
448
-
449
528
  ## Operating systems
450
529
 
451
530
  The package is developed on Linux (actually, under WSL) and tested on Windows 10, so it works in both these environments.
452
531
 
453
-
454
532
  ## Support
455
533
 
456
534
  Any contribution will be welcome. You can submit an issue in the [repository](https://github.com/nyggus/perftester). You can also create your own pull requests.
@@ -6,10 +6,11 @@ from .perftester import (
6
6
  CLIPathError,
7
7
  LogFilePathError,
8
8
  Config,
9
+ MemLog,
9
10
  config,
10
11
  memory_usage_benchmark,
11
12
  memory_usage_test,
12
13
  time_benchmark,
13
14
  time_test,
14
- pp,
15
+ pp
15
16
  )
@@ -26,6 +26,7 @@ You can change this behavior, however:
26
26
  Let's return to previous settings:
27
27
  >>> pt.config.digits_for_printing = 4
28
28
  """
29
+ import builtins
29
30
  import copy
30
31
  import os
31
32
  import rounder
@@ -42,9 +43,11 @@ from easycheck import (
42
43
  check_if_paths_exist,
43
44
  assert_instance,
44
45
  )
46
+ from functools import wraps
45
47
  from memory_profiler import memory_usage
46
48
  from pathlib import Path
47
49
  from pprint import pprint
50
+ from pympler.asizeof import asizeof
48
51
  from statistics import mean
49
52
 
50
53
 
@@ -842,6 +845,60 @@ def _add_func_to_config(func):
842
845
  )
843
846
 
844
847
 
848
+ # Full memory measurement
849
+ 
850
+ builtins.__dict__["MEMLOGS"] = []
851
+ 
852
+ 
853
+ MemLog = namedtuple("MemLog", "ID memory")
854
+ 
855
+ 
856
+ def MEMPRINT():
857
+     """Pretty-print MEMLOGS."""
858
+     for i, memlog in enumerate(MEMLOGS):  # type: ignore
859
+         ID = memlog.ID if memlog.ID else ""
860
+         print(f"{i:<4} "
861
+               f"{round(memlog.memory / 1024 / 1024, 1):<6} → "
862
+               f"{ID}")
863
+ 
864
+ 
865
+ def MEMPOINT(ID=None):
866
+     """Global function to measure full memory and log it into MEMLOGS.
867
+ 
868
+     The function is available from any module of a session. It logs into
869
+     MEMLOGS, also available from any module.
870
+ 
871
+     Memory is collected using pympler.asizeof.asizeof(), and reported in
872
+     bytes. So, the function measures the size of all current gc objects,
873
+     including module, global and stack frame objects, minus the size
874
+     of `MEMLOGS`.
875
+     """
876
+     MEMLOGS.append(MemLog(  # type: ignore
877
+         ID,
878
+         (asizeof(all=True) - asizeof(MEMLOGS)))  # type: ignore
879
+     )
880
+ 
881
+ 
882
+ def MEMTRACE(func, ID_before=None, ID_after=None):
883
+     """Decorator to log memory before and after running a function."""
884
+     @wraps(func)
885
+     def inner(*args, **kwargs):
886
+         before = ID_before if ID_before else f"Before {func.__name__}()"
887
+         MEMPOINT(before)
888
+         f = func(*args, **kwargs)
889
+         after = ID_after if ID_after else f"After {func.__name__}()"
890
+         MEMPOINT(after)
891
+         return f
892
+     return inner
893
+ 
894
+ 
895
+ builtins.__dict__["MEMPOINT"] = MEMPOINT
896
+ builtins.__dict__["MEMPRINT"] = MEMPRINT
897
+ builtins.__dict__["MEMTRACE"] = MEMTRACE
898
+ 
899
+ MEMPOINT("perftester import")
900
+ 
901
+ 
845
902
  if __name__ == "__main__":
846
903
  import doctest
847
904
 
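To illustrate the mechanism implemented above (an editor's sketch; the point label is arbitrary):

```python
import perftester  # needed for its side effect: it populates builtins

MEMPOINT("after setup")  # appends a MemLog entry to MEMLOGS
MEMPRINT()               # pretty-prints all collected points
```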
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.1
2
2
  Name: perftester
3
- Version: 0.5.0
3
+ Version: 0.6.0
4
4
  Summary: Lightweight performance testing in Python
5
5
  Home-page: https://github.com/nyggus/perftester
6
6
  Author: Nyggus
@@ -18,13 +18,12 @@ Description: # `perftester`: Lightweight performance testing of Python functions
18
18
 
19
19
  The package has three external dependencies: [`memory_profiler`](https://pypi.org/project/memory-profiler/) ([repo](https://github.com/pythonprofilers/memory_profiler)), [`easycheck`](https://pypi.org/project/easycheck/) ([repo](https://github.com/nyggus/easycheck)), and [`rounder`](https://pypi.org/project/rounder/) ([repo](https://github.com/nyggus/rounder)).
20
20
 
21
- > `perftester` is still under heavy testing. If you find anything that does not work as intended, please let me know via nyggus <at> gmail.com.
21
+ > `perftester` is still under heavy testing. If you find anything that does not work as intended, please let me know via nyggus `<at>` gmail.com.
22
22
 
23
23
  ## Pre-introduction: TL;DR
24
24
 
25
25
  At the most basic level, using `perftester` is simple. It offers you two functions for benchmarking (one for execution time and one for memory), and two functions for performance testing (likewise). Read below for a very short introduction to them. If you want to learn more, however, do not stop there, but read on.
26
26
 
27
-
28
27
  ### Benchmarking
29
28
 
30
29
  You have the `time_benchmark()` and `memory_usage_benchmark()` functions:
@@ -34,6 +33,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
34
33
  def foo(x, n): return [x] * n
35
34
  pt.time_benchmark(foo, x=129, n=100)
36
35
  ```
36
+
37
37
  and this will print the results of the time benchmark, with raw results similar to those that `timeit.repeat()` returns; unlike it, however, `pt.time_benchmark()` reports the mean raw time per function run, not the overall time. In addition, you will see some summaries of the results.
38
38
 
39
39
  The above call actually ran the `timeit.repeat()` function, with the default configuration of `Number=100_000` and `Repeat=5`. If you want to change any of these, you can use the arguments `Number` and `Repeat`, correspondingly:
@@ -88,7 +88,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
88
88
  >>> pt.memory_usage_test(foo, raw_limit=25, x=129, n=100)
89
89
 
90
90
  # A relative test
91
- >>> pt.memory_usage_test(foo, relative_limit=1.01, x=129, n=100)
91
+ >>> pt.memory_usage_test(foo, relative_limit=1.2, x=129, n=100)
92
92
 
93
93
  ```
94
94
 
@@ -109,16 +109,17 @@ Description: # `perftester`: Lightweight performance testing of Python functions
109
109
 
110
110
  ## Introduction
111
111
 
112
-
113
112
  `perftester` is a lightweight package for simple performance testing in Python. Here, performance refers to execution time and memory usage, so performance testing means testing if a function performs quickly enough and does not use too much RAM. In addition, the module offers you simple functions for straightforward benchmarking, in terms of both execution time and memory.
114
113
 
115
114
  Under the hood, `perftester` is a wrapper around two functions from other modules:
115
+
116
116
  * `perftester.time_benchmark()` and `perftester.time_test()` use `timeit.repeat()`
117
117
  * `perftester.memory_usage_benchmark()` and `perftester.memory_usage_test()` use `memory_profiler.memory_usage()`
118
118
 
119
119
  What `perftester` offers is a testing framework with syntax as simple as possible.
120
120
 
121
121
  You can use `perftester` in three main ways:
122
+
122
123
  * in an interactive session, for simple benchmarking of functions;
123
124
  * as part of another testing framework, like `doctest` or `pytest`; and
124
125
  * as an independent testing framework.
@@ -127,7 +128,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
127
128
 
128
129
  When it comes to actual testing, it's difficult to say which of the last two ways is better or more convenient: it may depend on how many performance tests you have and how much time they take. If the tests do not take more than a couple of seconds, then you can combine them with unit tests. But if they take a lot of time, you should likely make them independent of unit tests, and run them from time to time.
129
130
 
130
-
131
131
  ## Using `perftester`
132
132
 
133
133
  ### Use it as a separate testing framework
@@ -148,10 +148,9 @@ Description: # `perftester`: Lightweight performance testing of Python functions
148
148
 
149
149
  > There is no best approach, but remember to choose one that suits your needs.
150
150
 
151
-
152
151
  ### Use `perftester` inside `pytest`
153
152
 
154
- This is a very simple approach, perhaps the simplest one: When you use `pytest`, you can simply add `perftester` testing functions to `pytest` testing functions, and that way both frameworks will be combined, or rather the `pytest` framework will run `perftester` tests. The amount of additional work is minimal.
153
+ This is a very simple approach, perhaps the simplest one: When you use `pytest`, you can simply add `perftester` testing functions to `pytest` testing functions, and that way both frameworks will be combined, or rather the `pytest` framework will run `perftester` tests. The amount of additional work is minimal.
155
154
 
156
155
  For instance, you can write the following test function:
157
156
 
@@ -184,19 +183,16 @@ Description: # `perftester`: Lightweight performance testing of Python functions
184
183
 
185
184
  This is the easiest way to use `perftester`. Its only drawback is that if the performance tests take much time, `pytest` will also take much time, something usually to be avoided. You can then do some `pytest` tricks to not run `perftester` tests, and run them only when you want — or you can simply use the above-described command-line `perftester` framework for performance testing.
186
185
 
187
-
188
186
  ### Use `perftester` inside `doctest`
189
187
 
190
- In the same way, you can use `perftester` in `doctest`. You will find plenty of examples in the documentation here, and in the [tests/ folder](tests/).
188
+ In the same way, you can use `perftester` in `doctest`. You will find plenty of examples in the documentation here, and in the [tests/ folder](tests/).
191
189
 
192
- > A great fan of `doctest`ing, I do **not** recommend using `perftester` in docstrings. For me, `doctest`s in docstrings should clarify things and explain how functions work, and adding a performance test to a function's docstring would decrease readability.
190
+ > A great fan of `doctest`ing, I do **not** recommend using `perftester` in docstrings. For me, `doctest`s in docstrings should clarify things and explain how functions work, and adding a performance test to a function's docstring would decrease readability.
193
191
 
194
192
  The best way, thus, is to write performance tests as separate `doctest` files, dedicated to performance testing. You can collect such files in a shell script that runs performance tests.
195
193
 
196
-
197
194
  ## Basic use of `perftester`
198
195
 
199
-
200
196
  ### Simple benchmarking
201
197
 
202
198
  To create a performance test for a function, you likely need to know how it behaves. You can run two simple benchmarking functions, `pt.memory_usage_benchmark()` and `pt.time_benchmark()`, which will run memory and time benchmarks, respectively. First, we will decrease `number` (passed to `timeit.repeat`), in order to shorten the benchmarks (which here serve as `doctest`s):
@@ -260,7 +256,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
260
256
 
261
257
  For time tests, we have the `pt.time_test()` function. First, a raw time test:
262
258
 
263
-
264
259
  ```python
265
260
  >>> pt.time_test(f, raw_limit=2e-05, n=100)
266
261
 
@@ -289,8 +284,7 @@ Description: # `perftester`: Lightweight performance testing of Python functions
289
284
 
290
285
  ```
291
286
 
292
- You can read about relative testing below, [in section](#raw-and-relative-performance-testing).
293
-
287
+ You can read about relative testing below, [in section](#raw-and-relative-performance-testing).
294
288
 
295
289
  ### Memory testing
296
290
 
@@ -298,15 +292,15 @@ Description: # `perftester`: Lightweight performance testing of Python functions
298
292
 
299
293
  ```python
300
294
  >>> pt.memory_usage_test(f, raw_limit=27, n=100) # test on raw memory
301
- >>> pt.memory_usage_test(f, relative_limit=1.01, n=100) # relative time test
302
- >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.01, n=100) # both
295
+ >>> pt.memory_usage_test(f, relative_limit=1.2, n=100) # relative memory test
296
+ >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.2, n=100) # both
303
297
 
304
298
  ```
305
299
 
306
300
  In a memory usage test, a function is called only once. You can change that — but do that only if you have solid reasons — using, for example, `pt.config.set(f, "time", "repeat", 2)`, which will store this setting for the function in the configuration (so it will be used for all subsequent calls for function `f()`). You can also do it just once (that is, without saving the setting in `pt.config.settings`), using the `Repeat` argument:
307
301
 
308
302
  ```python
309
- >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.01, n=100, Repeat=100)
303
+ >>> pt.memory_usage_test(f, raw_limit=27, relative_limit=1.2, n=100, Repeat=100)
310
304
 
311
305
  ```
312
306
 
@@ -314,7 +308,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
314
308
 
315
309
  Of course, memory tests may not be very useful for functions that do not allocate much memory, but as you will see in other documentation files in `perftester`, some functions do use a lot of memory, and such tests make quite a lot of sense for them.
316
310
 
317
-
318
311
  ## Configuration: `pt.config`
319
312
 
320
313
  The whole configuration is stored in the `pt.config` object, which you can easily change. Here's a short example of how you can use it:
@@ -352,7 +345,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
352
345
 
353
346
  When you use `perftester` in an interactive session, you update `pt.config` in a normal way, in the session. And when you use `perftester` inside `pytest`, you can do it in conftest.py and in each testing function.
354
347
 
355
-
356
348
  ## Output
357
349
 
358
350
  If a test fails, you will see something like this:
@@ -383,7 +375,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
383
375
 
384
376
  > Like in `pytest`, a recommended approach is to use one performance test per `perftester_` function. This can save you some time and trouble, and it will also ensure that all tests are run.
385
377
 
386
-
387
378
  #### Summary output
388
379
 
389
380
  At the end, you will see a simple summary of the results, something like this:
@@ -404,7 +395,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
404
395
  perftester_for_testing.perftester_f_2
405
396
  ```
406
397
 
407
-
408
398
  ## Relative tests against another function
409
399
 
410
400
  In the basic use, when you choose a relative benchmark, you compare the performance of your function with that of a built-in (empty) function `pt.config.benchmark_function()`. In most cases, this is what you need. Sometimes, however, you may wish to benchmark against another function. For instance, you may want to build your own function that does the same thing as a Python built-in function, and you want to test (and show) that your function performs better. There are two ways of achieving this:
@@ -412,7 +402,6 @@ Description: # `perftester`: Lightweight performance testing of Python functions
412
402
  * you can use a simple trick; [see here](benchmarking_against_another_function.md);
413
403
  * you can overwrite the built-in benchmark functions; [see here](change_benchmarking_function.md).
414
404
 
415
-
416
405
  ## Raw and relative performance testing
417
406
 
418
407
  Surely, any performance tests are strongly environment-dependent, and you need to remember that when writing and running them. `perftester`, however, offers a solution to this: You can define tests based on
@@ -428,6 +417,99 @@ Description: # `perftester`: Lightweight performance testing of Python functions
428
417
 
429
418
  > Warning! Relative results can be different between operating systems.
430
419
 
420
+ ## Tracing full memory usage
421
+
422
+ Currently, `perftester` contains a beta version (under heavy testing) of a new feature that can be used to trace full memory usage of a Python program.
423
+
424
+ > Warning: Backward compatibility of this feature is not guaranteed! It does not affect the main functionality of `perftester`, however, whose backward compatibility will be maintained.
425
+
426
+ The feature works in the following way. When you import `perftester` — but you need to do it with `import perftester`, not by importing particular objects — you will see new objects in the global scope. One of them is `MEMLOGS`:
427
+
428
+ ```python-repl
429
+ >>> import perftester
430
+ >>> MEMLOGS[0].ID
431
+ 'perftester import'
432
+
433
+ ```
434
+
435
+ Right after the import, the list contains a single entry, logged during the import itself. When you start tracing memory using `perftester`, this list will collect the subsequent measurements. You can take them in two ways: via the `MEMPOINT()` function or via the `MEMTRACE` decorator. They, too, are in the global scope, so you can use them in any module inside a session in which `perftester` was already imported.
436
+
437
+ The `MEMLOGS` list contains instances of `MemLog`, a named tuple (created with `collections.namedtuple`) with two attributes: `ID` and `memory`. This data type is imported with `perftester`, so if you want to use it, you can reach it as `perftester.MemLog`. You don't have to use it, though: since it's a named tuple, you can treat it as a regular tuple.
438
+
439
+ ### What sort of memory is measured?
440
+
441
+ The feature uses `pympler.asizeof.asizeof(all=True)` to measure the size of all current gc objects, including module, global and stack frame objects, minus the size of `MEMLOGS`. The memory is reported in bytes; `MEMPRINT()` converts it to MB when printing.
442
+
443
+ ### Using `MEMPOINT()`
444
+
445
+ `MEMPOINT()` creates a point of full-memory measurement; the measurement is appended to `MEMLOGS`.
446
+
447
+ ```python-repl
448
+ >>> import perftester
449
+ >>> def foo(n):
450
+ ...     x = [i for i in range(n)]
451
+ ...     MEMPOINT()
452
+ ...     return x
453
+ >>> _ = foo(100)
454
+ >>> _ = foo(1_000_000)
455
+ >>> len(MEMLOGS)
456
+ 3
457
+ >>> MEMLOGS[2].memory > MEMLOGS[1].memory
458
+ True
459
+
460
+ ```
461
+
462
+ The last test checks whether the second measurement — that is, from the call with `n` of a million — uses more memory than the call with `n` of a hundred. Makes sense, and indeed the test passes.
463
+
464
+ When creating a point, you can use an ID, for instance, `MEMPOINT("from sth() function")`.
465
+
466
+ `MEMPOINT()` can be used to create a point anywhere inside the code. Alternatively, if you want to trace memory use of a whole function, you can use the `MEMTRACE` decorator:
467
+
468
+ ```python-repl
469
+ >>> @MEMTRACE
470
+ ... def bar(n):
471
+ ...     return [i for i in range(n)]
472
+ >>> _ = bar(1_000_000)
473
+ >>> MEMLOGS[-2].memory < MEMLOGS[-1].memory
474
+ True
475
+
476
+ ```
477
+
478
+ The decorator creates two points: one right before the function is called and another right after it returns.
479
+
480
+ The last line tests whether memory before running the function is smaller than that after running it — and given such a big `n`, it should be.
481
+
482
+ Look here:
483
+
484
+ ```python-repl
485
+ >>> @MEMTRACE
486
+ ... def bar(n):
487
+ ...     x = [i for i in range(n)]
488
+ ...     y = [i/3 for i in x]
489
+ ...     z = [i/3 for i in y]
490
+ ...     MEMPOINT("with x, y, z")
491
+ ...     del x
492
+ ...     MEMPOINT("without x")
493
+ ...     del y
494
+ ...     MEMPOINT("without x and y")
495
+ ...     del z
496
+ ...     MEMPOINT("without x and y and z")
497
+ ...     return
498
+ >>> _ = bar(100_000)
499
+ >>> MEMLOGS[-3].memory > MEMLOGS[-2].memory > MEMLOGS[-1].memory
500
+ True
501
+
502
+ ```
503
+
504
+ ### Print `MEMLOGS`
505
+
506
+ You can do whatever you want with `MEMLOGS`. However, when you want to see this object nicely printed, use the `MEMPRINT()` function, available from the global scope, too. You will see the results printed in a pretty way, with memory provided in MB.
507
+
508
+ ### Why the global scope?
509
+
510
+ Since this feature of `perftester` is meant for debugging memory use from various modules, it would be inconvenient to import the required objects in all these modules. That's why, for the moment, the required objects are kept in the global scope — but this can change in future versions.
511
+
512
+ If you have any comments about this, please share them via the Issues page of the package's repository.
431
513
 
432
514
  ## Other tools
433
515
 
@@ -436,9 +518,8 @@ Description: # `perftester`: Lightweight performance testing of Python functions
436
518
  * [`cProfile` and `profile`](https://docs.python.org/3/library/profile.html), the built-in powerful tools for deterministic profiling
437
519
  * [the built-in `timeit` module](https://docs.python.org/3/library/timeit.html), for benchmarking
438
520
  * [`memory_profiler`](https://pypi.org/project/memory-profiler/), a powerful memory profiler (`memory_profiler` is utilized by `perftester`)
439
-
440
- In fact, `perftester` is just a simple wrapper around `timeit` and `memory_profiler`, since `perftester` itself does not come with its own solutions. It simply uses these functions and offers an easy-to-use API to benchmark and test memory and time performance.
441
521
 
522
+ In fact, `perftester` is just a simple wrapper around `timeit` and `memory_profiler`, since `perftester` itself does not come with its own solutions. It simply uses these functions and offers an easy-to-use API to benchmark and test memory and time performance.
442
523
 
443
524
  ## Manipulating the traceback
444
525
 
@@ -446,19 +527,16 @@ Description: # `perftester`: Lightweight performance testing of Python functions
446
527
 
447
528
  > This behavior will not affect any other function than the two `perftester` testing functions: `pt.time_test()` and `pt.memory_usage_test()`. If you want to use this behavior for other functions, too, you can use `pt.config.cut_traceback()`; to reverse, use `pt.config.full_traceback()`.
448
529
 
449
-
450
530
  ## Caveats
451
531
 
452
532
  * `perftester` does not work with multiple threads or processes.
453
533
  * `perftester` is still in a beta version and so is still under testing.
454
534
  * Watch out when you're running the same test in different operating systems. Even relative tests can differ from OS to OS.
455
535
 
456
-
457
536
  ## Operating systems
458
537
 
459
538
  The package is developed on Linux (actually, under WSL) and tested on Windows 10, so it works in both these environments.
460
539
 
461
-
462
540
  ## Support
463
541
 
464
542
  Any contribution will be welcome. You can submit an issue in the [repository](https://github.com/nyggus/perftester). You can also create your own pull requests.
@@ -0,0 +1,8 @@
1
+ easycheck
2
+ memory_profiler
3
+ pympler
4
+ rounder
5
+
6
+ [dev]
7
+ black
8
+ wheel
@@ -4,12 +4,12 @@ with open("README.md", "r") as fh:
4
4
  long_description = fh.read()
5
5
 
6
6
  extras_requirements = {
7
- "dev": ["wheel==0.37.1", "black"],
7
+ "dev": ["wheel", "black"],
8
8
  }
9
9
 
10
10
  setuptools.setup(
11
11
  name="perftester",
12
- version="0.5.0",
12
+ version="0.6.0",
13
13
  author="Nyggus",
14
14
  author_email="nyggus@gmail.com",
15
15
  description="Lightweight performance testing in Python",
@@ -22,7 +22,7 @@ setuptools.setup(
22
22
  "License :: OSI Approved :: MIT License",
23
23
  "Operating System :: OS Independent",
24
24
  ],
25
- install_requires=["easycheck", "rounder", "memory_profiler==0.60.0"],
25
+ install_requires=["easycheck", "rounder", "memory_profiler", "pympler"],
26
26
  python_requires=">=3.8",
27
27
  extras_require=extras_requirements,
28
28
  entry_points={"console_scripts": ["perftester = perftester.__main__:main"]},
@@ -1,7 +0,0 @@
1
- easycheck
2
- memory_profiler==0.60.0
3
- rounder
4
-
5
- [dev]
6
- black
7
- wheel==0.37.1