genlayer-test 0.9.0__py3-none-any.whl → 0.10.1__py3-none-any.whl

This diff reflects the changes between two publicly released versions of the package as they appear in their public registry, and is provided for informational purposes only.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: genlayer-test
- Version: 0.9.0
+ Version: 0.10.1
  Summary: GenLayer Testing Suite
  Author: GenLayer
  License-Expression: MIT
@@ -270,31 +270,7 @@ $ gltest --default-wait-interval <default_wait_interval>
  $ gltest --default-wait-retries <default_wait_retries>
  ```

- 10. Run tests with mocked LLM responses (localnet only)
- ```bash
- $ gltest --test-with-mocks
- ```
- The `--test-with-mocks` flag enables mocking of LLM responses when creating validators. This is particularly useful for:
- - Testing without actual LLM API calls
- - Ensuring deterministic test results
- - Faster test execution
- - Testing specific edge cases with controlled responses
-
- When using this flag with the `setup_validators` fixture, you can provide custom mock responses:
- ```python
- def test_with_mocked_llm(setup_validators):
-     # Setup validators with a specific mock response
-     mock_response = {"result": "This is a mocked LLM response"}
-     setup_validators(mock_response=mock_response)
-
-     # Your LLM-based contract will receive the mocked response
-     contract = factory.deploy()
-     result = contract.llm_method() # Will use the mocked response
- ```
-
- Note: This feature is only available when running tests on localnet.
-
- 11. Run tests with leader-only mode enabled
+ 10. Run tests with leader-only mode enabled
  ```bash
  $ gltest --leader-only
  ```
@@ -475,7 +451,7 @@ def test_write_methods():
      ).transact(
          value=0, # Optional: amount of native currency to send
          consensus_max_rotations=3, # Optional: max consensus rotations
-         wait_interval=1, # Optional: seconds between status checks
+         wait_interval=1000, # Optional: milliseconds between status checks
          wait_retries=10, # Optional: max number of retries
          transaction_context=None, # Optional: custom transaction context
      )
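The hunk above documents a unit change: `wait_interval` is now expressed in milliseconds rather than seconds, consistent with `DEFAULT_WAIT_INTERVAL = 3000` in `gltest_cli/config/constants.py` further down in this diff. A minimal sketch of what that means for the worst-case polling time, assuming the client sleeps `wait_interval` milliseconds before each of `wait_retries` status checks (an assumption about the polling loop, not code taken from the package):

```python
# Hypothetical helper, not part of gltest: estimates the worst-case time a
# .transact(...) call may spend polling, assuming one sleep of wait_interval
# milliseconds before each of wait_retries status checks.
def max_polling_seconds(wait_interval_ms: int, wait_retries: int) -> float:
    return wait_interval_ms * wait_retries / 1000.0

# With the values shown in the hunk above: 1000 ms * 10 retries = 10 s.
print(max_polling_seconds(1000, 10))  # 10.0
# A value carried over unchanged from 0.9.0 (wait_interval=1) would now mean
# 1 ms per check, i.e. ~0.01 s in total.
print(max_polling_seconds(1, 10))     # 0.01
```

Under that assumption, test suites migrating from 0.9.0 should multiply their old second-based `wait_interval` values by 1000 to keep the same timeout budget.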
@@ -538,7 +514,6 @@ The following fixtures are available in `gltest.fixtures`:
  - **`gl_client`** (session scope) - GenLayer client instance for network operations
  - **`default_account`** (session scope) - Default account for testing and deployments
  - **`accounts`** (session scope) - List of test accounts for multi-account scenarios
- - **`setup_validators`** (function scope) - Function to create test validators for LLM operations

  ##### 1. `gl_client` (session scope)
  Provides a GenLayer PY client instance that's created once per test session. This is useful for operations that interact directly with the GenLayer network.
@@ -573,26 +548,6 @@ def test_multiple_accounts(accounts):
      contract.transfer(args=[receiver.address, 100], account=sender)
  ```

- ##### 4. `setup_validators` (function scope)
- Creates test validators for localnet environment. This fixture is particularly useful for testing LLM-based contract methods and consensus behavior. It yields a function that allows you to configure validators with custom settings.
-
- ```python
- def test_with_validators(setup_validators):
-     # Setup validators with default configuration
-     setup_validators()
-
-     # Or setup with custom mock responses for testing
-     mock_response = {"result": "mocked LLM response"}
-     setup_validators(mock_response=mock_response, n_validators=3)
-
-     # Now test your LLM-based contract methods
-     contract = factory.deploy()
-     result = contract.llm_based_method()
- ```
-
- Parameters for `setup_validators`:
- - `mock_response` (dict, optional): Mock validator response when using `--test-with-mocks` flag
- - `n_validators` (int, optional): Number of validators to create (default: 5)

  #### Using Fixtures in Your Tests

@@ -602,9 +557,7 @@ To use these fixtures, simply import them and include them as parameters in your
  from gltest import get_contract_factory
  from gltest.assertions import tx_execution_succeeded

- def test_complete_workflow(gl_client, default_account, accounts, setup_validators):
-     # Setup validators for LLM operations
-     setup_validators()
+ def test_complete_workflow(gl_client, default_account, accounts):

      # Deploy contract with default account
      factory = get_contract_factory("MyContract")
@@ -650,7 +603,7 @@ def test_analyze_method():
      print(f"Reliability score: {analysis.reliability_score:.2f}%")
      print(f"Unique states: {analysis.unique_states}")
      print(f"Execution time: {analysis.execution_time:.1f}s")
-
+
      # The analysis returns a MethodStatsSummary object with:
      # - method: The contract method name
      # - args: Arguments passed to the method
@@ -668,127 +621,6 @@ The `.analyze()` method helps you:
  - Identify edge cases and failure patterns
  - Benchmark performance across multiple runs

- ### Mock LLM Responses
-
- The Mock LLM system allows you to simulate Large Language Model responses in GenLayer tests. This is essential for creating deterministic tests by providing predefined responses instead of relying on actual LLM calls.
-
- #### Basic Structure
-
- The mock system consists of a response dictionary that maps GenLayer methods to their mocked responses:
-
- ```python
- mock_response = {
-     "response": {}, # Optional: mocks gl.nondet.exec_prompt
-     "eq_principle_prompt_comparative": {}, # Optional: mocks gl.eq_principle.prompt_comparative
-     "eq_principle_prompt_non_comparative": {} # Optional: mocks gl.eq_principle.prompt_non_comparative
- }
-
- setup_validators(mock_response)
- ```
-
- #### Method Mappings
-
- | Mock Key | GenLayer Method |
- |----------|----------------|
- | `"response"` | `gl.nondet.exec_prompt` |
- | `"eq_principle_prompt_comparative"` | `gl.eq_principle.prompt_comparative` |
- | `"eq_principle_prompt_non_comparative"` | `gl.eq_principle.prompt_non_comparative` |
-
- #### How It Works
-
- The mock system works by pattern matching against the user message that gets built internally. When a GenLayer method is called:
-
- 1. A user message is constructed internally (`<user_message>`)
- 2. The mock system searches for strings within that message
- 3. If a matching string is found in the mock dictionary, the associated response is returned
-
- ##### String Matching Rules
-
- The system performs **substring matching** on the user message. The key in your mock dictionary must be contained within the actual user message.
-
- #### Examples
-
- ##### Basic Example
-
- ```python
- # Mock setup
- mock_response = {
-     "eq_principle_prompt_comparative": {
-         "The value of give_coin has to match": True
-     }
- }
- setup_validators(mock_response)
-
- # In your contract
- result = gl.eq_principle.prompt_comparative(
-     get_wizard_answer,
-     "The value of give_coin has to match" # This string will be matched
- )
- # result will be True
- ```
-
- ##### Substring Matching Examples
-
- ✅ **Will work** - Partial match:
- ```python
- "eq_principle_prompt_comparative": {
-     "The value of give_coin": True # Substring of the full message
- }
- ```
-
- ❌ **Won't work** - Extra words break the match:
- ```python
- "eq_principle_prompt_comparative": {
-     "The good value of give_coin": True # "good" is not in the actual message
- }
- ```
-
- ##### Complete Example
-
- ```python
- from gltest import get_contract_factory
- from gltest.fixtures import setup_validators
-
- def test_with_mocked_llm(setup_validators):
-     # Define mock responses
-     mock_response = {
-         "response": {
-             "What is the weather?": "It's sunny today",
-             "Calculate 2+2": "4"
-         },
-         "eq_principle_prompt_comparative": {
-             "values must be equal": True,
-             "amounts should match": False
-         },
-         "eq_principle_prompt_non_comparative": {
-             "Is this valid?": True
-         }
-     }
-
-     # Initialize the mock system
-     setup_validators(mock_response)
-
-     # Deploy and test your contract
-     factory = get_contract_factory("MyLLMContract")
-     contract = factory.deploy()
-
-     # Your LLM methods will use the mocked responses
-     result = contract.check_weather() # Uses mocked response
- ```
-
- #### Best Practices
-
- 1. **Be specific with match strings**: Use unique substrings that won't accidentally match other prompts
- 2. **Test your matches**: Verify that your mock strings actually appear in the generated user messages
- 3. **Keep mocks simple**: Mock responses should be minimal and focused on the test case
- 4. **Document your mocks**: Comment why specific responses are mocked for future reference
- 5. **Use with `--test-with-mocks` flag**: Enable mocking when running tests: `gltest --test-with-mocks`
-
- #### Notes
-
- - Mock responses are only available when running tests on localnet
- - The `setup_validators` fixture handles the mock setup when provided with a mock_response
- - Mocking is particularly useful for CI/CD pipelines where deterministic results are required

  ### Custom Transaction Context

@@ -839,17 +671,58 @@ def test_with_custom_transaction_context():
  )
  ```

+ ### Mock LLM Responses
+
+ The Mock LLM system allows you to simulate Large Language Model responses in GenLayer tests. This is essential for creating deterministic tests by providing predefined responses instead of relying on actual LLM calls.
+
+ #### Basic Structure
+
+ The mock system consists of a response dictionary that maps GenLayer methods to their mocked responses:
+
+ ```python
+ from gltest.types import MockedLLMResponse
+
+ mock_response: MockedLLMResponse = {
+     "nondet_exec_prompt": {}, # Optional: mocks gl.nondet.exec_prompt
+     "eq_principle_prompt_comparative": {}, # Optional: mocks gl.eq_principle.prompt_comparative
+     "eq_principle_prompt_non_comparative": {} # Optional: mocks gl.eq_principle.prompt_non_comparative
+ }
+ ```
+
+ #### Method Mappings
+
+ | Mock Key | GenLayer Method |
+ |----------|----------------|
+ | `"nondet_exec_prompt"` | `gl.nondet.exec_prompt` |
+ | `"eq_principle_prompt_comparative"` | `gl.eq_principle.prompt_comparative` |
+ | `"eq_principle_prompt_non_comparative"` | `gl.eq_principle.prompt_non_comparative` |
+
+ #### How It Works
+
+ The mock system works by pattern matching against the user message that gets built internally. When a GenLayer method is called:
+
+ 1. A user message is constructed internally (`<user_message>`)
+ 2. The mock system searches for strings within that message
+ 3. If a matching string is found in the mock dictionary, the associated response is returned
+
+ ##### String Matching Rules
+
+ The system performs **substring matching** on the user message. The key in your mock dictionary must be contained within the actual user message.
+
+
  #### Mock Validators with Transaction Context

  Combine mock validators with custom datetime for fully deterministic tests:

  ```python
+ from gltest.types import MockedLLMResponse
+
  def test_with_mocked_context():
      factory = get_contract_factory("LLMContract")
      validator_factory = get_validator_factory()

      # Define mock LLM responses
-     mock_response = {
+     mock_response: MockedLLMResponse = {
          "nondet_exec_prompt": {
              "analyze this": "positive sentiment"
          },
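The 0.10.1 text added above keeps the description of the matching rule, but the portion of the new section visible in this diff no longer shows the worked matching examples that the removed 0.9.0 section had. A small self-contained sketch of the documented substring rule (the `resolve_mock` helper is hypothetical and only illustrates the stated behavior; it is not gltest's internal matcher):

```python
from typing import Any, Dict, Optional

# Hypothetical illustration of the documented substring rule; not gltest code.
def resolve_mock(mocks: Dict[str, Any], user_message: str) -> Optional[Any]:
    """Return the mocked value whose key appears inside the user message."""
    for needle, mocked_value in mocks.items():
        if needle in user_message:
            return mocked_value
    return None  # no mock key matched the message

comparative_mocks = {"The value of give_coin has to match": True}
# Key is contained in the message -> the mocked value is returned.
print(resolve_mock(comparative_mocks, "Principle: The value of give_coin has to match."))  # True
# Message does not contain the full key -> no match.
print(resolve_mock(comparative_mocks, "The good value of give_coin"))  # None
```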
@@ -1041,7 +914,7 @@ def test_validator_cloning():
      tx_receipt = contract.set_value(
          args=["new_value"],
      ).transact(
-         wait_interval=2, # Increase wait interval between status checks
+         wait_interval=2000, # Increase wait interval between status checks
          wait_retries=20, # Increase number of retry attempts
      )
  ```
@@ -1058,7 +931,7 @@ def test_validator_cloning():
      # For critical operations, use more conservative settings
      contract = factory.deploy(
          consensus_max_rotations=10, # More rotations for better reliability
-         wait_interval=3, # Longer wait between checks
+         wait_interval=3000, # Longer wait between checks
          wait_retries=30 # More retries for consensus
      )
  ```
@@ -1,10 +1,10 @@
- genlayer_test-0.9.0.dist-info/licenses/LICENSE,sha256=che_H4vE0QUx3HvWrAa1_jDEVInift0U6VO15-QqEls,1064
+ genlayer_test-0.10.1.dist-info/licenses/LICENSE,sha256=che_H4vE0QUx3HvWrAa1_jDEVInift0U6VO15-QqEls,1064
  gltest/__init__.py,sha256=49112x2CLdYwvCbBZ1laJmMk0NQ7S3u5YUbxPefqhrk,454
  gltest/accounts.py,sha256=HUmWguJMolggQaZNRPw-LGlRlQCjLLdUanKRowMv6pI,812
  gltest/assertions.py,sha256=0dEk0VxcHK4I7GZPHxJmz-2jaA60V499gOSR74rZbfM,1748
  gltest/clients.py,sha256=1dX6wmG3QCevQRLbSaFlHymZSb-sJ5aYwet1IoX2nbA,1554
  gltest/exceptions.py,sha256=deJPmrTe5gF33qkkKF2IVJY7lc_knI7Ql3N7jZ8aLZs,510
- gltest/fixtures.py,sha256=EJXmqcC3LD03v07mepacFl58lAdhbLj6bP5rtALYISk,2507
+ gltest/fixtures.py,sha256=omVjLh1kXXUyL7Oo8zzdooJBbSX-Qk-Aa-prp5MqvFc,833
  gltest/logging.py,sha256=jAkHsuMm-AEx1Xu1srU6W-0YzTwXJB48mCK-OVzAiN4,342
  gltest/types.py,sha256=H32fHrU5aFMaPHXgEWcHAmLWOZ9pBFVp8PK_ncpVOgM,940
  gltest/utils.py,sha256=-gHhjrS7i_GhDG3sKOq2qsTtYBt4HHgXHEXh-3RB_rI,573
@@ -25,14 +25,14 @@ gltest/validators/validator_factory.py,sha256=fpb-YyAKuWo4-pXBjrZ_TApYLsm6HHa6kG
  gltest_cli/logging.py,sha256=WXVhfq9vT6FtV_jxDqGEGia1ZWSIUKAfmWRnZd_gWQk,1266
  gltest_cli/main.py,sha256=Ti2-0Ev1x5_cM0D1UKqdgaDt80CDHEQGtdRne2qLm4M,53
  gltest_cli/config/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- gltest_cli/config/constants.py,sha256=llQD_yiJDwpohMBWufBKcmY8BXLkBeLe5eTj0QUOxSU,603
+ gltest_cli/config/constants.py,sha256=z7njbU8WXYhTnp1NYrW4YI2jN-p114BP2UDSYF0qAJE,571
  gltest_cli/config/general.py,sha256=ezpoGsT8grO9zQH6RugV14b1GzeFt-htYToHQBJhNvY,186
- gltest_cli/config/plugin.py,sha256=o-tfm0KmbMe4iVLWGWUws5Wg2pwOEnfnASQv4ovXEXQ,7109
+ gltest_cli/config/plugin.py,sha256=rySUyo7OSG1SOiUx2Xrxiv-SfjaxJSWpT5KKKNRk4Mk,6719
  gltest_cli/config/pytest_context.py,sha256=Ze8JSkrwMTCE8jIhpzU_71CEXg92SiEPvSgNTp-gbS4,243
- gltest_cli/config/types.py,sha256=oKRRFvCQESt3oca6T79RATHl9BtLNrVCqGc9tB5hovY,10083
- gltest_cli/config/user.py,sha256=jtqhEkp2pEh67Pk4xcshXNcApeCBRjLesZgpqJQuCYg,13625
- genlayer_test-0.9.0.dist-info/METADATA,sha256=09bCBWnQ0MR5FYhKFqZWgx7-nThb5RgR8xfFhl0nxMs,40707
- genlayer_test-0.9.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- genlayer_test-0.9.0.dist-info/entry_points.txt,sha256=RWPcSArBpz_G4BYioh5L8Q8hyClRbSgzLimjcWMp-BQ,94
- genlayer_test-0.9.0.dist-info/top_level.txt,sha256=GSdrnQbiLcZssmtCpbDgBTygsc8Bt_TPeYjwm0FmpdA,18
- genlayer_test-0.9.0.dist-info/RECORD,,
+ gltest_cli/config/types.py,sha256=nGKm2qTwo99M8DbGchj14IdIQ4lcNTDODZFHXdsi5rY,9506
+ gltest_cli/config/user.py,sha256=JeclpIVv4BT5COMW2xrEbTspI6e1RAYIOvd8ruPc3gM,12887
+ genlayer_test-0.10.1.dist-info/METADATA,sha256=TAZaEuDPIKWu3kDlIfASJ_M0BGPsFFV1TLMKPivnm68,36341
+ genlayer_test-0.10.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ genlayer_test-0.10.1.dist-info/entry_points.txt,sha256=RWPcSArBpz_G4BYioh5L8Q8hyClRbSgzLimjcWMp-BQ,94
+ genlayer_test-0.10.1.dist-info/top_level.txt,sha256=GSdrnQbiLcZssmtCpbDgBTygsc8Bt_TPeYjwm0FmpdA,18
+ genlayer_test-0.10.1.dist-info/RECORD,,
gltest/fixtures.py CHANGED
@@ -4,9 +4,8 @@ These fixtures can be imported and used in test files.
  """

  import pytest
- from gltest.clients import get_gl_client, get_gl_provider
+ from gltest.clients import get_gl_client
  from gltest.accounts import get_accounts, get_default_account
- from gltest_cli.config.general import get_general_config


  @pytest.fixture(scope="session")
@@ -37,51 +36,3 @@ def accounts():
      Scope: session - created once per test session
      """
      return get_accounts()
-
-
- @pytest.fixture(scope="function")
- def setup_validators():
-     """
-     Creates test validators for localnet environment.
-
-     Args:
-         mock_response (dict, optional): Mock validator response when using --test-with-mocks flag
-         n_validators (int, optional): Number of validators to create (default: 5)
-
-     Scope: function - created fresh for each test
-     """
-     general_config = get_general_config()
-     provider = get_gl_provider()
-
-     def _setup(mock_response=None, n_validators=5):
-         if not general_config.check_local_rpc():
-             return
-         if general_config.get_test_with_mocks():
-             for _ in range(n_validators):
-                 provider.make_request(
-                     method="sim_createValidator",
-                     params=[
-                         8,
-                         "openai",
-                         "gpt-4o",
-                         {"temperature": 0.75, "max_tokens": 500},
-                         "openai-compatible",
-                         {
-                             "api_key_env_var": "OPENAIKEY",
-                             "api_url": "https://api.openai.com",
-                             "mock_response": mock_response if mock_response else {},
-                         },
-                     ],
-                 )
-         else:
-             provider.make_request(
-                 method="sim_createRandomValidators",
-                 params=[n_validators, 8, 12],
-             )
-
-     yield _setup
-
-     if not general_config.check_local_rpc():
-         return
-
-     provider.make_request(method="sim_deleteAllValidators", params=[])
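With the `setup_validators` fixture removed above, test suites that depended on it need another way to provision localnet validators. Below is a rough replacement sketched only from the calls visible in the removed code, assuming `gltest.clients.get_gl_provider` is still exported (the `clients.py` hash is unchanged in the RECORD diff above) and that the localnet still accepts the `sim_createRandomValidators` / `sim_deleteAllValidators` methods; the `check_local_rpc()` guard from the removed fixture is omitted for brevity:

```python
import pytest

from gltest.clients import get_gl_provider  # assumed still available in 0.10.1


@pytest.fixture(scope="function")
def local_validators():
    """Editor's sketch of a hand-rolled stand-in for the removed fixture."""
    provider = get_gl_provider()
    provider.make_request(
        method="sim_createRandomValidators",
        params=[5, 8, 12],  # same defaults as the removed fixture: 5 validators
    )
    yield
    # Mirror the removed fixture's teardown: remove all validators afterwards.
    provider.make_request(method="sim_deleteAllValidators", params=[])
```

Whether the mocked-validator path (`sim_createValidator` with a `mock_response` payload) still works in 0.10.1 cannot be inferred from this diff, so it is left out of the sketch.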
gltest_cli/config/constants.py CHANGED
@@ -15,5 +15,4 @@ DEFAULT_NETWORK_ID = 61999
  # Defaults per network
  DEFAULT_WAIT_INTERVAL = 3000
  DEFAULT_WAIT_RETRIES = 50
- DEFAULT_TEST_WITH_MOCKS = False
  DEFAULT_LEADER_ONLY = False
gltest_cli/config/plugin.py CHANGED
@@ -15,7 +15,6 @@ from gltest_cli.config.pytest_context import _pytest_context
  from gltest_cli.config.constants import (
      DEFAULT_WAIT_INTERVAL,
      DEFAULT_WAIT_RETRIES,
-     DEFAULT_TEST_WITH_MOCKS,
      DEFAULT_LEADER_ONLY,
      CHAINS,
  )
@@ -65,13 +64,6 @@ def pytest_addoption(parser):
          help="Target network (defaults to 'localnet' if no config file)",
      )

-     group.addoption(
-         "--test-with-mocks",
-         action="store_true",
-         default=DEFAULT_TEST_WITH_MOCKS,
-         help="Test with mocks",
-     )
-
      group.addoption(
          "--leader-only",
          action="store_true",
@@ -120,7 +112,6 @@ def pytest_configure(config):
      default_wait_retries = config.getoption("--default-wait-retries")
      rpc_url = config.getoption("--rpc-url")
      network = config.getoption("--network")
-     test_with_mocks = config.getoption("--test-with-mocks")
      leader_only = config.getoption("--leader-only")
      chain_type = config.getoption("--chain-type")

@@ -135,7 +126,6 @@ def pytest_configure(config):
      plugin_config.default_wait_retries = int(default_wait_retries)
      plugin_config.rpc_url = rpc_url
      plugin_config.network_name = network
-     plugin_config.test_with_mocks = test_with_mocks
      plugin_config.leader_only = leader_only
      plugin_config.chain_type = chain_type

@@ -175,7 +165,6 @@ def pytest_sessionstart(session):
      logger.info(
          f" Default wait retries: {general_config.get_default_wait_retries()}"
      )
-     logger.info(f" Test with mocks: {general_config.get_test_with_mocks()}")

      if (
          general_config.get_leader_only()
gltest_cli/config/types.py CHANGED
@@ -7,7 +7,6 @@ from gltest_cli.config.constants import PRECONFIGURED_NETWORKS
  from gltest_cli.config.constants import (
      DEFAULT_WAIT_INTERVAL,
      DEFAULT_WAIT_RETRIES,
-     DEFAULT_TEST_WITH_MOCKS,
      DEFAULT_LEADER_ONLY,
      CHAINS,
  )
@@ -21,7 +20,6 @@ class PluginConfig:
      default_wait_interval: Optional[int] = None
      default_wait_retries: Optional[int] = None
      network_name: Optional[str] = None
-     test_with_mocks: bool = False
      leader_only: bool = False
      chain_type: Optional[str] = None

@@ -35,7 +33,6 @@ class NetworkConfigData:
      leader_only: bool = False
      default_wait_interval: Optional[int] = None
      default_wait_retries: Optional[int] = None
-     test_with_mocks: bool = False
      chain_type: Optional[str] = None

      def __post_init__(self):
@@ -60,8 +57,6 @@ class NetworkConfigData:
              self.default_wait_retries, int
          ):
              raise ValueError("default_wait_retries must be an integer")
-         if not isinstance(self.test_with_mocks, bool):
-             raise TypeError("test_with_mocks must be a boolean")
          if self.chain_type is not None and not isinstance(self.chain_type, str):
              raise ValueError("chain_type must be a string")

@@ -223,15 +218,6 @@ class GeneralConfig:
              return self.plugin_config.network_name
          return self.user_config.default_network

-     def get_test_with_mocks(self) -> bool:
-         if self.plugin_config.test_with_mocks:
-             return True
-         network_name = self.get_network_name()
-         if network_name in self.user_config.networks:
-             network_config = self.user_config.networks[network_name]
-             return network_config.test_with_mocks
-         return DEFAULT_TEST_WITH_MOCKS
-
      def get_leader_only(self) -> bool:
          if self.plugin_config.leader_only:
              return True
@@ -245,4 +231,4 @@ class GeneralConfig:
          return self.get_chain_type() == "localnet"

      def check_studio_based_rpc(self) -> bool:
-         return self.get_chain_type() == "studionet"
+         return self.get_chain_type() in ("studionet", "localnet")
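One easy-to-miss behavioral change in the hunk above: `check_studio_based_rpc()` now also returns `True` when the chain type is `localnet`, so code paths gated on it additionally apply to local RPC runs. A tiny sketch of the before/after truth table for the three preconfigured chain types (editor's illustration, not package code):

```python
# Editor's sketch of the behavior change above, not code from the package.
def studio_based_old(chain_type: str) -> bool:
    return chain_type == "studionet"                  # 0.9.0 behavior

def studio_based_new(chain_type: str) -> bool:
    return chain_type in ("studionet", "localnet")    # 0.10.1 behavior

for chain in ("localnet", "studionet", "testnet_asimov"):
    print(chain, studio_based_old(chain), studio_based_new(chain))
# localnet        False  True   <- the only case that changes
# studionet       True   True
# testnet_asimov  False  False
```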
gltest_cli/config/user.py CHANGED
@@ -15,7 +15,6 @@ from gltest_cli.config.constants import (
      PRECONFIGURED_NETWORKS,
      DEFAULT_WAIT_INTERVAL,
      DEFAULT_WAIT_RETRIES,
-     DEFAULT_TEST_WITH_MOCKS,
      DEFAULT_LEADER_ONLY,
      CHAINS,
  )
@@ -31,7 +30,6 @@ VALID_NETWORK_KEYS = [
      "leader_only",
      "default_wait_interval",
      "default_wait_retries",
-     "test_with_mocks",
      "chain_type",
  ]
  VALID_PATHS_KEYS = ["contracts", "artifacts"]
@@ -51,7 +49,6 @@ def get_default_user_config() -> UserConfig:
              leader_only=DEFAULT_LEADER_ONLY,
              default_wait_interval=DEFAULT_WAIT_INTERVAL,
              default_wait_retries=DEFAULT_WAIT_RETRIES,
-             test_with_mocks=DEFAULT_TEST_WITH_MOCKS,
              chain_type="localnet",
          ),
          "studionet": NetworkConfigData(
@@ -62,7 +59,6 @@ def get_default_user_config() -> UserConfig:
              leader_only=DEFAULT_LEADER_ONLY,
              default_wait_interval=DEFAULT_WAIT_INTERVAL,
              default_wait_retries=DEFAULT_WAIT_RETRIES,
-             test_with_mocks=DEFAULT_TEST_WITH_MOCKS,
              chain_type="studionet",
          ),
          "testnet_asimov": NetworkConfigData(
@@ -73,7 +69,6 @@ def get_default_user_config() -> UserConfig:
              leader_only=DEFAULT_LEADER_ONLY,
              default_wait_interval=DEFAULT_WAIT_INTERVAL,
              default_wait_retries=DEFAULT_WAIT_RETRIES,
-             test_with_mocks=DEFAULT_TEST_WITH_MOCKS,
              chain_type="testnet_asimov",
          ),
      }
@@ -163,11 +158,6 @@ def validate_network_config(network_name: str, network_config: dict):
              f"network {network_name} default_wait_retries must be an integer"
          )

-     if "test_with_mocks" in network_config and not isinstance(
-         network_config["test_with_mocks"], bool
-     ):
-         raise ValueError(f"network {network_name} test_with_mocks must be a boolean")
-
      if "chain_type" in network_config:
          if not isinstance(network_config["chain_type"], str):
              raise ValueError(f"network {network_name} chain_type must be a string")
@@ -294,10 +284,6 @@ def _get_overridden_networks(raw_config: dict) -> tuple[dict, str]:
              networks_config[network_name].default_wait_retries = network_config[
                  "default_wait_retries"
              ]
-             if "test_with_mocks" in network_config:
-                 networks_config[network_name].test_with_mocks = network_config[
-                     "test_with_mocks"
-                 ]
              if "chain_type" in network_config:
                  networks_config[network_name].chain_type = network_config["chain_type"]
              continue
@@ -313,7 +299,6 @@ def _get_overridden_networks(raw_config: dict) -> tuple[dict, str]:
          default_wait_retries = network_config.get(
              "default_wait_retries", DEFAULT_WAIT_RETRIES
          )
-         test_with_mocks = network_config.get("test_with_mocks", DEFAULT_TEST_WITH_MOCKS)
          chain_type = network_config["chain_type"] # Required for custom networks
          networks_config[network_name] = NetworkConfigData(
              id=network_id,
@@ -323,7 +308,6 @@ def _get_overridden_networks(raw_config: dict) -> tuple[dict, str]:
              leader_only=leader_only,
              default_wait_interval=default_wait_interval,
              default_wait_retries=default_wait_retries,
-             test_with_mocks=test_with_mocks,
              chain_type=chain_type,
          )
      return networks_config, user_default_network