mapFolding 0.12.0-py3-none-any.whl → 0.12.2-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- mapFolding/__init__.py +42 -18
- mapFolding/_theSSOT.py +137 -0
- mapFolding/basecamp.py +28 -18
- mapFolding/beDRY.py +21 -19
- mapFolding/dataBaskets.py +170 -18
- mapFolding/datatypes.py +109 -1
- mapFolding/filesystemToolkit.py +38 -33
- mapFolding/oeis.py +209 -93
- mapFolding/someAssemblyRequired/RecipeJob.py +120 -9
- mapFolding/someAssemblyRequired/__init__.py +35 -38
- mapFolding/someAssemblyRequired/_toolIfThis.py +80 -18
- mapFolding/someAssemblyRequired/_toolkitContainers.py +123 -45
- mapFolding/someAssemblyRequired/infoBooth.py +37 -2
- mapFolding/someAssemblyRequired/makeAllModules.py +712 -0
- mapFolding/someAssemblyRequired/makeJobTheorem2Numba.py +111 -48
- mapFolding/someAssemblyRequired/toolkitNumba.py +171 -19
- mapFolding/someAssemblyRequired/transformationTools.py +93 -49
- mapfolding-0.12.2.dist-info/METADATA +167 -0
- mapfolding-0.12.2.dist-info/RECORD +53 -0
- {mapfolding-0.12.0.dist-info → mapfolding-0.12.2.dist-info}/WHEEL +1 -1
- tests/__init__.py +28 -44
- tests/conftest.py +66 -61
- tests/test_computations.py +39 -82
- tests/test_filesystem.py +25 -1
- tests/test_oeis.py +30 -1
- tests/test_other.py +27 -0
- tests/test_tasks.py +31 -1
- mapFolding/someAssemblyRequired/Z0Z_makeAllModules.py +0 -433
- mapFolding/theSSOT.py +0 -34
- mapfolding-0.12.0.dist-info/METADATA +0 -184
- mapfolding-0.12.0.dist-info/RECORD +0 -53
- {mapfolding-0.12.0.dist-info → mapfolding-0.12.2.dist-info}/entry_points.txt +0 -0
- {mapfolding-0.12.0.dist-info → mapfolding-0.12.2.dist-info}/licenses/LICENSE +0 -0
- {mapfolding-0.12.0.dist-info → mapfolding-0.12.2.dist-info}/top_level.txt +0 -0
tests/conftest.py
CHANGED
@@ -1,62 +1,30 @@
-"""
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-    - `syntheticDispatcherFixture`: Tests with generated Numba-optimized implementation
-
-2. **Test Data Fixtures**:
-    - `oeisID`, `oeisID_1random`: Provide OEIS sequence identifiers for testing
-    - `listDimensionsTestCountFolds`: Provides dimensions suitable for algorithm testing
-    - `listDimensionsTestParallelization`: Provides dimensions suitable for parallel testing
-    - `mapShapeTestFunctionality`: Provides map shapes suitable for functional testing
-
-3. **Path Fixtures**:
-    - `pathTmpTesting`: Creates a temporary directory for test-specific files
-    - `pathFilenameTmpTesting`: Creates a temporary file with appropriate extension
-    - `pathCacheTesting`: Creates a temporary OEIS cache directory
-
-### Standardized Test Utilities
-
-The module provides utilities that create consistent test outputs:
-
-- `standardizedEqualToCallableReturn`: Core utility that handles testing function return values
-  or exceptions with uniform error messages
-- `standardizedSystemExit`: Tests code that should exit the program with specific status codes
-- `uniformTestMessage`: Creates consistent error messages for test failures
-
-## Using These Fixtures for Custom Tests
-
-The most important fixtures for testing custom implementations are:
-
-1. `syntheticDispatcherFixture`: Creates and patches a Numba-optimized module from a recipe
-2. `pathTmpTesting`: Provides a clean temporary directory for test files
-3. `standardizedEqualToCallableReturn`: Simplifies test assertions with clear error messages
-
-These can be adapted by copying and modifying them to test custom recipes and jobs.
-See the examples in `test_computations.py` for guidance on adapting these fixtures.
+"""Test framework infrastructure and shared fixtures for mapFolding.
+
+This module serves as the foundation for the entire test suite, providing standardized
+fixtures, temporary file management, and testing utilities. It implements the Single
+Source of Truth principle for test configuration and establishes consistent patterns
+that make the codebase easier to extend and maintain.
+
+The testing framework is designed for multiple audiences:
+- Contributors who need to understand the test patterns
+- AI assistants that help maintain or extend the codebase
+- Users who want to test custom modules they create
+- Future maintainers who need to debug or modify tests
+
+Key Components:
+- Temporary file management with automatic cleanup
+- Standardized assertion functions with uniform error messages
+- Test data generation from OEIS sequences for reproducible results
+- Mock objects for external dependencies and timing-sensitive operations
+
+The module follows Domain-Driven Design principles, organizing test concerns around
+the mathematical concepts of map folding rather than technical implementation details.
+This makes tests more meaningful and easier to understand in the context of the
+research domain.
 """
 
 from collections.abc import Callable, Generator, Sequence
-from mapFolding import getLeavesTotal,
+from mapFolding import getLeavesTotal, makeDataContainer, validateListDimensions
 from mapFolding.oeis import oeisIDsImplemented, settingsOEIS
 from pathlib import Path
 from typing import Any
@@ -68,7 +36,7 @@ import unittest.mock
 import uuid
 
 # SSOT for test data paths and filenames
-pathDataSamples = Path("tests/dataSamples").absolute()
+pathDataSamples: Path = Path("tests/dataSamples").absolute()
 pathTmpRoot: Path = pathDataSamples / "tmp"
 pathTmpRoot.mkdir(parents=True, exist_ok=True)
 
@@ -102,7 +70,7 @@ def setupTeardownTmpObjects() -> Generator[None, None, None]:
 @pytest.fixture
 def pathTmpTesting(request: pytest.FixtureRequest) -> Path:
     # "Z0Z_" ensures the directory name does not start with a number, which would make it an invalid Python identifier
-    pathTmp = pathTmpRoot / ("Z0Z_" + str(uuid.uuid4().hex))
+    pathTmp: Path = pathTmpRoot / ("Z0Z_" + str(uuid.uuid4().hex))
     pathTmp.mkdir(parents=True, exist_ok=False)
 
     registrarRecordsTmpObject(pathTmp)
@@ -153,7 +121,15 @@ def setupWarningsAsErrors() -> Generator[None, Any, None]:
 @pytest.fixture
 def oneTestCuzTestsOverwritingTests(oeisID_1random: str) -> tuple[int, ...]:
     """For each `oeisID_1random` from the `pytest.fixture`, returns `listDimensions` from `valuesTestValidation`
-    if `validateListDimensions` approves. Each `listDimensions` is suitable for testing counts."""
+    if `validateListDimensions` approves. Each `listDimensions` is suitable for testing counts.
+
+    This fixture provides a single test case to avoid issues with tests that write to the same
+    output files. It's particularly useful when testing code generation or file output functions
+    where multiple concurrent tests could interfere with each other.
+
+    The returned map shape is guaranteed to be computationally feasible for testing purposes,
+    avoiding cases that would take excessive time to complete during test runs.
+    """
     while True:
         n = random.choice(settingsOEIS[oeisID_1random]['valuesTestValidation'])
         if n < 2:
@@ -235,13 +211,42 @@ def oeisID_1random() -> str:
     return random.choice(oeisIDsImplemented)
 
 def uniformTestMessage(expected: Any, actual: Any, functionName: str, *arguments: Any) -> str:
-    """Format assertion message for any test comparison."""
+    """Format assertion message for any test comparison.
+
+    Creates standardized, machine-parsable error messages that clearly display
+    what was expected versus what was received. This uniform formatting makes
+    test failures easier to debug and maintains consistency across the entire
+    test suite.
+
+    Parameters
+        expected: The value or exception type that was expected
+        actual: The value or exception type that was actually received
+        functionName: Name of the function being tested
+        arguments: Arguments that were passed to the function
+
+    Returns
+        formattedMessage: A formatted string showing the test context and comparison
+    """
     return (f"\nTesting: `{functionName}({', '.join(str(parameter) for parameter in arguments)})`\n"
         f"Expected: {expected}\n"
        f"Got: {actual}")
 
 def standardizedEqualToCallableReturn(expected: Any, functionTarget: Callable[..., Any], *arguments: Any) -> None:
-    """Use with callables that produce a return or an error."""
+    """Use with callables that produce a return or an error.
+
+    This is the primary testing function for validating both successful returns
+    and expected exceptions. It provides consistent error messaging and handles
+    the comparison logic that most tests in the suite rely on.
+
+    When testing a function that should raise an exception, pass the exception
+    type as the `expected` parameter. For successful returns, pass the expected
+    return value.
+
+    Parameters
+        expected: Expected return value or exception type
+        functionTarget: The function to test
+        arguments: Arguments to pass to the function
+    """
     if type(expected) is type[Exception]:
         messageExpected = expected.__name__
     else:
tests/test_computations.py
CHANGED
@@ -1,88 +1,26 @@
-"""
-Core Algorithm and Module Generation Testing
-
-This module provides tests for validating algorithm correctness and testing
-code generation functionality. It's designed not only to test the package's
-functionality but also to serve as a template for users testing their own
-custom implementations.
-
-## Key Testing Categories
-
-1. Algorithm Validation Tests
-    - `test_algorithmSourceParallel` - Tests the source algorithm in parallel mode
-    - `test_algorithmSourceSequential` - Tests the source algorithm in sequential mode
-    - `test_aOFn_calculate_value` - Tests OEIS sequence value calculations
-
-2. Synthetic Module Tests
-    - `test_syntheticParallel` - Tests generated Numba-optimized code in parallel mode
-    - `test_syntheticSequential` - Tests generated Numba-optimized code in sequential mode
-
-3. Job Testing
-    - `test_writeJobNumba` - Tests job-specific module generation and execution
-
-## How to Test Your Custom Implementations
-
-### Testing Custom Recipes (RecipeSynthesizeFlow):
-
-1. Copy the `syntheticDispatcherFixture` from conftest.py
-2. Modify it to use your custom recipe configuration
-3. Copy and adapt `test_syntheticParallel` and `test_syntheticSequential`
-
-Example:
-
-```python
-@pytest.fixture
-def myCustomRecipeFixture(useThisDispatcher, pathTmpTesting):
-    # Create your custom recipe configuration
-    myRecipe = RecipeSynthesizeFlow(
-        pathPackage=PurePosixPath(pathTmpTesting.absolute()),
-        # Add your custom configuration
-    )
-
-    # Generate the module
-    makeNumbaFlow(myRecipe)
+"""Core computational verification and algorithm validation tests.
 
-
-
+This module validates the mathematical correctness of map folding computations and
+serves as the primary testing ground for new computational approaches. It's the most
+important module for users who create custom folding algorithms or modify existing ones.
 
-
+The tests here verify that different computational flows produce identical results,
+ensuring mathematical consistency across implementation strategies. This is critical
+for maintaining confidence in results as the codebase evolves and new optimization
+techniques are added.
 
-
-
-
-
-
-        listDimensionsTestParallelization,
-        None,
-        'maximum'
-    )
-```
+Key Testing Areas:
+- Flow control validation across different algorithmic approaches
+- OEIS sequence value verification against known mathematical results
+- Code generation and execution for dynamically created computational modules
+- Numerical accuracy and consistency checks
 
-
+For users implementing new computational methods: use the `test_flowControl` pattern
+as a template. It demonstrates how to validate that your algorithm produces results
+consistent with the established mathematical foundation.
 
-
-
-
-Example:
-
-```python
-def test_myCustomJob(oneTestCuzTestsOverwritingTests, pathFilenameTmpTesting):
-    # Create your custom job configuration
-    myJob = RecipeJob(
-        state=makeInitializedComputationState(validateListDimensions(oneTestCuzTestsOverwritingTests)),
-        # Add your custom configuration
-    )
-
-    spices = SpicesJobNumba()
-    # Customize spices if needed
-
-    # Generate and test the job
-    makeJobNumba(myJob, spices)
-    # Test execution similar to test_writeJobNumba
-```
-
-All tests leverage standardized utilities like `standardizedEqualToCallableReturn`
-that provide consistent, informative error messages and simplify test validation.
+The `test_writeJobNumba` function shows how to test dynamically generated code,
+which is useful if you're working with the code synthesis features of the package.
 """
 
 from mapFolding import countFolds, getFoldsTotalKnown, oeisIDfor_n
@@ -91,7 +29,7 @@ from mapFolding.oeis import settingsOEIS
 from mapFolding.someAssemblyRequired.RecipeJob import RecipeJobTheorem2Numba
 from mapFolding.syntheticModules.initializeCount import initializeGroupsOfFolds
 from pathlib import Path, PurePosixPath
-from tests.conftest import
+from tests.conftest import registrarRecordsTmpObject, standardizedEqualToCallableReturn
 from typing import Literal
 import importlib.util
 import multiprocessing
@@ -104,6 +42,15 @@ if __name__ == '__main__':
 
 @pytest.mark.parametrize('flow', ['daoOfMapFolding', 'theorem2', 'theorem2Trimmed', 'theorem2numba'])
 def test_flowControl(mapShapeTestCountFolds: tuple[int, ...], flow: Literal['daoOfMapFolding'] | Literal['theorem2'] | Literal['theorem2numba']) -> None:
+    """Validate that different computational flows produce identical results.
+
+    This is the primary test for ensuring mathematical consistency across different
+    algorithmic implementations. When adding a new computational approach, include
+    it in the parametrized flow list to verify it produces correct results.
+
+    The test compares the output of each flow against known correct values from
+    OEIS sequences, ensuring that optimization techniques don't compromise accuracy.
+    """
     standardizedEqualToCallableReturn(getFoldsTotalKnown(mapShapeTestCountFolds), countFolds, None, None, None, None, mapShapeTestCountFolds, None, None, flow)
 
 def test_aOFn_calculate_value(oeisID: str) -> None:
@@ -112,8 +59,18 @@ def test_aOFn_calculate_value(oeisID: str) -> None:
 
 @pytest.mark.parametrize('pathFilenameTmpTesting', ['.py'], indirect=True)
 def test_writeJobNumba(oneTestCuzTestsOverwritingTests: tuple[int, ...], pathFilenameTmpTesting: Path) -> None:
-
+    """Test dynamic code generation and execution for computational modules.
+
+    This test validates the package's ability to generate, compile, and execute
+    optimized computational code at runtime. It's essential for users working with
+    the code synthesis features or implementing custom optimization strategies.
+
+    The test creates a complete computational module, executes it, and verifies
+    that the generated code produces mathematically correct results. This pattern
+    can be adapted for testing other dynamically generated computational approaches.
+    """
     from mapFolding.someAssemblyRequired.makeJobTheorem2Numba import makeJobNumba
+    from mapFolding.someAssemblyRequired.toolkitNumba import SpicesJobNumba
     mapShape = oneTestCuzTestsOverwritingTests
     state = MapFoldingState(mapShape)
     state = initializeGroupsOfFolds(state)
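
The new `test_flowControl` docstring tells implementers to add their flow to the parametrized list. A sketch of that pattern, with the `countFolds` argument order copied verbatim from the hunk above; `'myNewFlow'` is a hypothetical placeholder, not a flow shipped with the package:

```python
# Sketch: registering a hypothetical new flow alongside the existing ones.
# The countFolds call is copied verbatim from the hunk above.
import pytest
from mapFolding import countFolds, getFoldsTotalKnown
from tests.conftest import standardizedEqualToCallableReturn

@pytest.mark.parametrize('flow', ['daoOfMapFolding', 'theorem2', 'theorem2Trimmed', 'theorem2numba', 'myNewFlow'])
def test_flowControl(mapShapeTestCountFolds: tuple[int, ...], flow: str) -> None:
    standardizedEqualToCallableReturn(getFoldsTotalKnown(mapShapeTestCountFolds), countFolds, None, None, None, None, mapShapeTestCountFolds, None, None, flow)
```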
tests/test_filesystem.py
CHANGED
@@ -1,5 +1,29 @@
+"""File system operations and path management validation.
+
+This module tests the package's interaction with the file system, ensuring that
+results are correctly saved, paths are properly constructed, and fallback mechanisms
+work when file operations fail. These tests are essential for maintaining data
+integrity during long-running computations.
+
+The file system abstraction allows the package to work consistently across different
+operating systems and storage configurations. These tests verify that abstraction
+works correctly and handles edge cases gracefully.
+
+Key Testing Areas:
+- Filename generation following consistent naming conventions
+- Path construction and directory creation
+- Fallback file creation when primary save operations fail
+- Cross-platform path handling
+
+Most users won't need to modify these tests unless they're changing how the package
+stores computational results or adding new file formats.
+"""
+
 from contextlib import redirect_stdout
-from mapFolding import
+from mapFolding import (
+    getFilenameFoldsTotal, getPathFilenameFoldsTotal, getPathRootJobDEFAULT, saveFoldsTotal,
+    validateListDimensions,
+)
 from pathlib import Path
 import io
 import pytest
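
The docstring lists "filename generation following consistent naming conventions" as a testing area. A hypothetical sketch of such a check follows; it assumes `getFilenameFoldsTotal` accepts a map shape and returns a bare, deterministic filename, which is not shown in this diff, so verify the real signature before reusing it:

```python
# Hypothetical naming-convention check in the spirit of the docstring above.
# Assumes getFilenameFoldsTotal(mapShape) returns a deterministic, path-free
# filename; the real signature is not shown in this diff.
from mapFolding import getFilenameFoldsTotal

def test_filenameConventionExample() -> None:
    filename = getFilenameFoldsTotal((2, 3))
    # Deterministic: the same shape always yields the same name.
    assert filename == getFilenameFoldsTotal((2, 3))
    # A bare filename, safe to join onto any results directory.
    assert '/' not in filename and '\\' not in filename
```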
tests/test_oeis.py
CHANGED
@@ -1,5 +1,34 @@
+"""OEIS (Online Encyclopedia of Integer Sequences) integration testing.
+
+This module validates the package's integration with OEIS, ensuring that sequence
+identification, value retrieval, and caching mechanisms work correctly. The OEIS
+connection provides the mathematical foundation that validates computational results
+against established mathematical knowledge.
+
+These tests verify both the technical aspects of OEIS integration (network requests,
+caching, error handling) and the mathematical correctness of sequence identification
+and value mapping.
+
+Key Testing Areas:
+- OEIS sequence ID validation and normalization
+- Network request handling and error recovery
+- Local caching of sequence data for offline operation
+- Command-line interface for OEIS sequence queries
+- Mathematical consistency between local computations and OEIS values
+
+The caching tests are particularly important for users working in environments with
+limited network access, as they ensure the package can operate effectively offline
+once sequence data has been retrieved.
+
+Network error handling tests verify graceful degradation when OEIS is unavailable,
+which is crucial for maintaining package reliability in production environments.
+"""
+
 from contextlib import redirect_stdout
-from mapFolding.oeis import
+from mapFolding.oeis import (
+    clearOEIScache, getOEISids, getOEISidValues, OEIS_for_n, oeisIDfor_n, oeisIDsImplemented,
+    settingsOEIS, validateOEISid,
+)
 from pathlib import Path
 from tests.conftest import standardizedEqualToCallableReturn, standardizedSystemExit
 from typing import Any, NoReturn
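
One way to read the "sequence ID validation and normalization" area: every implemented ID should survive a round trip through the validator. This sketch assumes `validateOEISid` returns the normalized ID when given a known one; that behavior is inferred, not shown in this diff:

```python
# Hedged sketch of ID validation using names imported in the hunk above.
# Assumes validateOEISid returns the normalized ID for a known sequence.
from mapFolding.oeis import oeisIDsImplemented, validateOEISid
from tests.conftest import standardizedEqualToCallableReturn

def test_validateKnownIDsExample() -> None:
    for oeisID in oeisIDsImplemented:
        standardizedEqualToCallableReturn(oeisID, validateOEISid, oeisID)
```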
tests/test_other.py
CHANGED
@@ -1,3 +1,30 @@
+"""Foundational utilities and data validation testing.
+
+This module tests the core utility functions that support the mathematical
+computations but aren't specific to any particular algorithm. These are the
+building blocks that ensure data integrity and proper parameter handling
+throughout the package.
+
+The tests here validate fundamental operations like dimension validation,
+processor limit configuration, and basic mathematical utilities. These
+functions form the foundation that other modules build upon.
+
+Key Testing Areas:
+- Input validation and sanitization for map dimensions
+- Processor limit configuration for parallel computations
+- Mathematical utility functions from helper modules
+- Edge case handling for boundary conditions
+- Type system validation and error propagation
+
+For users extending the package: these tests demonstrate proper input validation
+patterns and show how to handle edge cases gracefully. The parametrized tests
+provide examples of comprehensive boundary testing that you can adapt for your
+own functions.
+
+The integration with external utility modules (Z0Z_tools) shows how to test
+dependencies while maintaining clear separation of concerns.
+"""
+
 from collections.abc import Callable
 from mapFolding import getLeavesTotal, setProcessorLimit, validateListDimensions
 from tests.conftest import standardizedEqualToCallableReturn
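
The docstring recommends the module's parametrized tests as templates for boundary testing. A sketch of that pattern built on `standardizedEqualToCallableReturn`; each expected outcome below is an assumption about `validateListDimensions`, so treat the real tests in `test_other.py` as authoritative:

```python
# Illustrative boundary-testing pattern. The expected outcomes are assumptions
# about validateListDimensions, labeled per case; adjust to match the package.
import pytest
from mapFolding import validateListDimensions
from tests.conftest import standardizedEqualToCallableReturn

@pytest.mark.parametrize("dimensions, expected", [
    ([2, 3], (2, 3)),       # assumed: a valid shape is returned as a tuple
    ([], ValueError),       # assumed: an empty list is rejected
    ([2, -3], ValueError),  # assumed: negative dimensions are rejected
])
def test_validateListDimensionsBoundariesExample(dimensions, expected) -> None:
    standardizedEqualToCallableReturn(expected, validateListDimensions, dimensions)
```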
tests/test_tasks.py
CHANGED
@@ -1,5 +1,35 @@
+"""Parallel processing and task distribution validation.
+
+This module tests the package's parallel processing capabilities, ensuring that
+computations can be effectively distributed across multiple processors while
+maintaining mathematical accuracy. These tests are crucial for performance
+optimization and scalability.
+
+The task distribution system allows large computational problems to be broken
+down into smaller chunks that can be processed concurrently. These tests verify
+that the distribution logic works correctly and that results remain consistent
+regardless of how the work is divided.
+
+Key Testing Areas:
+- Task division strategies for different computational approaches
+- Processor limit configuration and enforcement
+- Parallel execution consistency and correctness
+- Resource management and concurrency control
+- Error handling in multi-process environments
+
+For users working with large-scale computations: these tests demonstrate how to
+configure and validate parallel processing setups. The concurrency limit tests
+show how to balance performance with system resource constraints.
+
+The multiprocessing configuration (spawn method) is essential for cross-platform
+compatibility and proper resource isolation between test processes.
+"""
+
 from collections.abc import Callable
-from mapFolding import
+from mapFolding import (
+    countFolds, getFoldsTotalKnown, getLeavesTotal, getTaskDivisions, setProcessorLimit,
+    validateListDimensions,
+)
 from tests.conftest import standardizedEqualToCallableReturn
 from typing import Literal
 from Z0Z_tools.pytestForYourUse import PytestFor_defineConcurrencyLimit