execution-agent 0.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. execution_agent/__init__.py +8 -0
  2. execution_agent/__main__.py +5 -0
  3. execution_agent/agent.py +955 -0
  4. execution_agent/commands_interface.json +7 -0
  5. execution_agent/config.py +21 -0
  6. execution_agent/context.py +1565 -0
  7. execution_agent/docker_helpers_static.py +593 -0
  8. execution_agent/env.py +61 -0
  9. execution_agent/exceptions.py +17 -0
  10. execution_agent/exit_artifacts.py +350 -0
  11. execution_agent/main.py +1234 -0
  12. execution_agent/prompt_files/c_guidelines +481 -0
  13. execution_agent/prompt_files/command_stuck +7 -0
  14. execution_agent/prompt_files/cpp_guidelines +481 -0
  15. execution_agent/prompt_files/cycle_instruction +51 -0
  16. execution_agent/prompt_files/java_guidelines +37 -0
  17. execution_agent/prompt_files/javascript_guidelines +69 -0
  18. execution_agent/prompt_files/latest_containter_technology +7 -0
  19. execution_agent/prompt_files/python_guidelines +48 -0
  20. execution_agent/prompt_files/remove_progress_bars +1 -0
  21. execution_agent/prompt_files/rust_guidelines +53 -0
  22. execution_agent/prompt_files/search_workflows_summary +121 -0
  23. execution_agent/prompt_files/steps_list.json +32 -0
  24. execution_agent/prompt_files/summarize_cycle +13 -0
  25. execution_agent/prompt_files/tools_list +99 -0
  26. execution_agent/prompt_logging.py +311 -0
  27. execution_agent/repetition.py +39 -0
  28. execution_agent/shared_utils.py +507 -0
  29. execution_agent/state_persistence.py +286 -0
  30. execution_agent/tools.py +1611 -0
  31. execution_agent/trace_to_bash.py +281 -0
  32. execution_agent-0.1.0.dist-info/METADATA +231 -0
  33. execution_agent-0.1.0.dist-info/RECORD +37 -0
  34. execution_agent-0.1.0.dist-info/WHEEL +5 -0
  35. execution_agent-0.1.0.dist-info/entry_points.txt +2 -0
  36. execution_agent-0.1.0.dist-info/licenses/LICENSE.md +46 -0
  37. execution_agent-0.1.0.dist-info/top_level.txt +1 -0
@@ -0,0 +1,481 @@ execution_agent/prompt_files/c_guidelines

## General Guidelines:
**General C/C++ Project Guidelines**

1. **Read README**

* Contains install, usage, and project-specific notes.

2. **Check Dependencies**

* Look in README, `CMakeLists.txt`, `Makefile`, or `vcpkg.json`.
* Install the required compiler and “-dev” packages.

3. **Identify Build Tool**

* Find `Makefile` (Make) or `CMakeLists.txt` (CMake).

4. **Build**

* **Make**:

```bash
make
```

* **CMake** (out-of-source):

```bash
mkdir -p build && cd build
cmake ..          # add -DCMAKE_BUILD_TYPE=Debug/Release or -G Ninja as needed
make -j$(nproc)
```

5. **Configuration**

* Check for `.conf` or `config.h`.
* Pass paths/flags if needed, e.g. `-DFoo_DIR=/path`.

6. **Run Tests**

* **CTest**:

```bash
ctest --output-on-failure
```

* Or run the test executables directly.

7. **Run Executable**

* Follow the README (e.g., `./myapp` or server start commands).

8. **Troubleshoot**

* Search GitHub issues or the web.
* Rebuild clean, enable verbose output (`make VERBOSE=1`, `ninja -v`), grep for “error:”/“warning:”.

9. **Documentation**

* Read Doxygen/API docs or inline comments for structure and usage.

---

**Make/CMake-Specific Guide**

### 1. Basic Workflow

1. **Locate Build Files**

* CMake: top-level `CMakeLists.txt`.
* Make: root or subdirectory `Makefile`.

2. **Prepare Build Directory**

```bash
mkdir -p build && cd build
# If it already exists, clear stale CMake state:
rm -rf CMakeCache.txt CMakeFiles/*
```

3. **Configure (CMake)**

```bash
cmake ..                          # default
cmake -DCMAKE_BUILD_TYPE=Debug ..
cmake -G Ninja ..                 # if using Ninja
```

4. **Build**

* **Make**:

```bash
make -j$(nproc)   # parallel
make -j1          # fail-fast
```

* **Ninja**:

```bash
ninja      # stops on first error
ninja -v   # verbose
```

5. **Run Tests**

```bash
ctest -j$(nproc) --output-on-failure
```

Or:

```bash
make test   # or make check
```

Custom runners: follow the README or look in `tests/`.

6. **Check Exit Codes**

* A nonzero exit code from `make`/`ninja`/`ctest` means failure; inspect the logs or increase verbosity.

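The exit-code check can be sketched as follows; `build_step` is a hypothetical stand-in for the real `make`/`ninja`/`ctest` invocation:

```shell
# `build_step` is a hypothetical stand-in for a real build command
# such as `make -j$(nproc)` or `ninja`; here it simply fails with code 2.
build_step() { return 2; }

build_step; status=$?
if [ "$status" -ne 0 ]; then
    echo "build failed with exit code $status"   # prints: build failed with exit code 2
fi
```

Capturing `$?` into a named variable immediately after the command avoids it being clobbered by a later test or echo.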
---

### 2. Common CMake Issues

1. **Cannot Find Package X**

* **Symptom**:

```
CMake Error: find_package(Foo) didn't find Foo
```

* **Fix**:

* Install “foo-dev” (e.g. `sudo apt-get install libfoo-dev`).
* Or point CMake at the install location:

```bash
cmake -DFoo_DIR=/path/to/foo/cmake ..
cmake -DCMAKE_PREFIX_PATH=/opt/foo ..
```

2. **Stale Cache / Persisting Options**

* **Symptom**: Changes not applied; headers still missing despite being installed.
* **Fix**:

```bash
cd build
rm -rf CMakeCache.txt CMakeFiles
cmake ..
```

Or override with `-DVAR=…` on the command line.

3. **Missing/Incorrect Include Directories**

* **Symptom**:

```
fatal error: bar.h: No such file or directory
```

* **Fix**:

* Check `target_include_directories(...)`:

```bash
grep -R "target_include_directories" -n ../CMakeLists.txt
```

* Build verbosely to inspect `-I` flags:

```bash
make VERBOSE=1   # or ninja -v
```

* Test manually:

```bash
g++ -I/path/to/bar/include -c foo.cpp
```

* Then add, e.g.:

```cmake
target_include_directories(myTarget PUBLIC /path/to/bar/include)
```

4. **Undefined Reference (Linker)**

* **Symptom**:

```
undefined reference to `Bar::baz()`
```

* **Fix**:

* Ensure `target_link_libraries(foo PRIVATE BarLib)` in CMake.
* For static libs, put the library after the objects that use it:

```bash
g++ foo.o -o foo libbar.a
```

* Avoid circular static-library dependencies; split them or use shared libs.

5. **No Tests Found / CTest Shows 0 Tests**

* **Symptom**:

```
No tests were found!!!
```

* **Fix**:

* In `CMakeLists.txt`, enable tests if they are behind an option:

```cmake
option(ENABLE_TESTS "Enable tests" OFF)
if(ENABLE_TESTS)
  add_subdirectory(tests)
endif()
```

Then:

```bash
cmake -DENABLE_TESTS=ON ..
make && ctest --output-on-failure
```

* Verify the test executables exist in `build/tests/`.

---

### 3. Common Make Issues

1. **Wrong Compiler Flags**

* **Symptom**:

```
gcc: error: unrecognized command line option ‘-std=c++17’
```

* **Fix**:

* Edit the Makefile:

```makefile
CXX := g++
CXXFLAGS := -Wall -Wextra -std=c++17
```

* Or override on the command line:

```bash
make CXX=clang++ CXXFLAGS="-std=c++17 -O2"
```

* Ensure toolchain consistency:

```bash
export CC=gcc CXX=g++
make clean && make
```

2. **Stale Builds / Missing Dependencies**

* **Symptom**: A header change doesn’t recompile the objects that depend on it.
* **Fix**:

* Add auto-generated `.d` dependency files:

```makefile
SRCS := $(wildcard src/*.cpp)
DEPS := $(SRCS:.cpp=.d)
OBJS := $(SRCS:.cpp=.o)

-include $(DEPS)

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -MMD -MF $(@:.o=.d) -c $< -o $@

myapp: $(OBJS)
	$(CXX) $(OBJS) -o $@ $(LDFLAGS)
```

* If the Makefile can’t be modified, force a clean rebuild:

```bash
make clean && make -j$(nproc)
```

or delete the objects:

```bash
find . -name '*.o' -delete && make
```

3. **Parallel Race Conditions**

* **Symptom**:

```
No rule to make target 'moduleA/libbar.a'
```

* **Fix**:

* Confirm with a serial build:

```bash
make -j1
```

* Add the missing dependency in the Makefile, e.g.:

```makefile
moduleB/foo.o: ../moduleA/libbar.a
```

* If you have no write access, build with `-j1`.

---

### 4. Spotting Errors Quickly

1. **Grep for Errors/Warnings**

```bash
make -j$(nproc) 2>&1 | tee build.log | grep --color -i "error:\|warning:"
grep -n "error:" build.log
grep -n "warning:" build.log
```
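To illustrate the pattern on a fabricated log (the two diagnostic lines below are invented for the example):

```shell
# Create a fabricated build.log standing in for real compiler output.
printf '%s\n' \
  'main.cpp:10:5: warning: unused variable x' \
  'util.cpp:42:1: error: expected declaration' > build.log

grep -n "error:" build.log     # shows the error line with its line number
grep -c "warning:" build.log   # counts warning lines; prints: 1
```

`grep -n` keeps the file's line numbers, which makes it easy to jump to the failure in an editor.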

2. **Enable Verbose Mode**

* **Make**: `make VERBOSE=1`
* **Ninja**: `ninja -v`

3. **Fail-Fast Builds**

* **Make**:

```bash
make -j1             # stops on first error
make -k -j$(nproc)   # continues despite errors
```

* **Ninja**: stops on first error by default.

4. **CTest Output**

```bash
ctest --output-on-failure
```

---

### 5. Common Pitfalls & Prevention

1. **Mixing Build Artifacts with Source**

* **Issue**: `.o` or generated files in the source tree lead to clutter and stale artifacts.
* **Tip**: Always do out-of-source builds:

```bash
mkdir build && cd build && cmake ../ && make
```

If an in-source build is unavoidable, run `make clean` or delete the files manually.

2. **Silent Failures in Scripts/Tests**

* **Issue**: Test scripts hide exit codes (e.g., `set -e` is missing).
* **Tip**:

```bash
./run_tests.sh 2>&1 | tee test_run.log
grep -i "fail" test_run.log
```
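A minimal sketch of why `set -e` matters; the `run_tests.sh` written here is hypothetical:

```shell
# Hypothetical test script: with `set -e`, the script's exit code
# reflects the first failing command instead of being swallowed.
cat > run_tests.sh <<'EOF'
#!/bin/sh
set -e
false                      # a failing test command
echo "all tests passed"    # never reached
EOF

sh run_tests.sh || echo "tests failed with exit code $?"
```

Without `set -e`, the script would run to the final `echo` and exit 0, hiding the failure from the caller.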

3. **Mismatched Compiler Versions/ABI**

* **Issue**: The project expects GCC 8 but the system has GCC 5.
* **Tip**:

```bash
cd build
grep "CMAKE_CXX_COMPILER" CMakeCache.txt
export CC=gcc-9 CXX=g++-9
cmake ..
make
```

For Makefiles:

```bash
make CC=gcc-9 CXX=g++-9
```

4. **Circular/Missing Submodule Dependencies**

* **Issue**: `moduleA` needs `moduleB` but the build scripts omit the linkage.
* **Tip**:

* **Make**: add `moduleB/libB.a` as a prerequisite.
* **CMake**: `target_link_libraries(moduleA PUBLIC moduleB)`.
* If unmodifiable, build sequentially:

```bash
(cd moduleB && make)
(cd moduleA && make)
```

5. **Outdated CMakeLists (Using `file(GLOB ...)`)**

* **Issue**: New `.cpp` files aren’t detected until CMake reruns.
* **Tip**:

```bash
cd build
rm -rf CMakeCache.txt CMakeFiles
cmake ..
make -j$(nproc)
```

---

### 6. Quick Error-Fix Recipes

1. **“Could not find package XYZ”**

```bash
sudo apt-get update
sudo apt-get install libxyz-dev
cd build
rm -rf CMakeCache.txt CMakeFiles
cmake ..
make -j$(nproc)
# If installed in a custom prefix:
cmake -DCMAKE_PREFIX_PATH=/opt/xyz ..
```

2. **Missing Headers (“No such file or directory”)**

```bash
cd build
make VERBOSE=1 2>&1 | tee verbose.log
grep -n "fatal error: .* No such file or directory" verbose.log
# Add the missing include via CMakeLists or the Makefile once identified.
```

3. **“Undefined Reference …” (Linker)**

```bash
cd build
make VERBOSE=1 -j1 2>&1 | tee link.log
grep -n "undefined reference" link.log
# Ensure the library supplying the symbol appears after the objects that use it;
# adjust CMakeLists and rebuild.
```

4. **Tests Fail Silently (Exit Code 0 But Wrong Behavior)**

```bash
cd build
ctest --output-on-failure | tee test.log
grep -i "fail" test.log
grep -i "error" test.log
# If nothing shows, run the test binaries manually:
./tests/test_moduleA
valgrind ./tests/test_moduleA
```

5. **CMake Not Picking Up New Files (`file(GLOB)`)**

```bash
cd build
rm -rf CMakeCache.txt CMakeFiles
cmake ..
make -j$(nproc)
```

---

### 7. Test Suite Results

When running a test suite, it is normal for some test cases to fail. If fewer than 20% of the test cases fail, the result is considered acceptable and further investigation is not necessary.
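The threshold can be checked with plain shell arithmetic; the counts below are illustrative, not from a real run:

```shell
total=50    # illustrative: total test cases
failed=8    # illustrative: failing test cases
fail_pct=$(( 100 * failed / total ))
if [ "$fail_pct" -lt 20 ]; then
    echo "acceptable: ${fail_pct}% failing"   # prints: acceptable: 16% failing
else
    echo "investigate: ${fail_pct}% failing"
fi
```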

---
END OF LANGUAGE SPECIFIC GUIDELINES SECTION
---
@@ -0,0 +1,51 @@ execution_agent/prompt_files/cycle_instruction

Determine exactly one command to execute next, drawing on:
- The high-level goals and subgoals you’ve been given,
- The full history of commands, their outcomes, and reported working directories,
- Any errors or feedback observed in previous iterations,
- Your own expertise regarding robust build and debugging practices.

Before choosing, perform a thorough, step-by-step analysis:
1. **Recall the last command executed and its working directory**: State the exact command you ran most recently, note the shell’s reported current working directory afterward (e.g., “/home/user/Project”), and summarize its result (success, failure, files created, etc.).
2. **Interpret that outcome**: What concrete information did it provide? Did it reveal missing files, dependencies, or configuration issues? How does it push progress toward the next subgoal?
3. **Update your mental model, including cwd context**: Describe how this latest result and working directory fit into the overall context. Which goals have been satisfied so far, and which remain? Are there new blockers or unknowns?
4. **Enumerate possible next commands**: List two or three plausible actions (e.g., “run cmake with out-of-source flags,” “inspect CMakeLists.txt,” “list a directory,” “install a missing library”). For each, state:
   - **Expected outcome**: What you expect to learn or achieve,
   - **Why it differs from previous attempts**: Why this choice avoids repeating past failures,
   - **Potential pitfalls**: What could still go wrong given the current cwd.
5. **Select the single best command**: Explain why this one action is the most logical and likely to succeed now, referencing the exact files, error messages, or missing components you’ve observed, and the current working directory. Emphasize how it prevents repeating mistakes (e.g., avoiding in-source builds, using the correct cwd).
6. **Describe how to run it robustly**: Include precise flags or arguments (e.g., `-S .. -B .`, disabling formatting checks, choosing Debug vs. Release) and explicitly confirm or change into the correct working directory so that this step anticipates and avoids prior pitfalls.

Then, output only a JSON object matching this schema:

```ts
interface Response {
  thoughts: string;              // Include the full reasoning from steps 1–6 in detailed prose, explicitly noting the cwd at each relevant point. (REQUIRED)
  command: {                     // REQUIRED
    name: string;                // The tool name, e.g., "linux_terminal" or "read_file". (REQUIRED)
    args: Record<string, any>;   // The exact arguments for that tool, including any cd to change directory. (REQUIRED)
  };
}
```

Here is an example of a command call that you can output (this is a fictional example, shown only to illustrate the expected shape of the response):

```json
{
  "thoughts": "1. Recall of last command executed and working directory: The previous command was `python3 -m venv venv`, executed in `/home/user/AwesomeApp`. After it completed, the shell's prompt showed the cwd still as `/home/user/AwesomeApp`, and the `venv/` directory was created there.\n\n2. Interpretation of that outcome: The creation of `venv/` in `/home/user/AwesomeApp` confirms that the virtual environment was built correctly in the project root. No errors were reported, but no dependencies have yet been installed inside this environment.\n\n3. Updated mental model, including cwd context: We are currently located in `/home/user/AwesomeApp`. The repository is cloned, and the venv is set up. The `requirements.txt` file is in `/home/user/AwesomeApp/requirements.txt`. With the virtual environment directory present, our next subgoal is to install the packages listed in `requirements.txt` into the venv so that the application and tests can run.\n\n4. Possible next commands:\n • `cd /home/user/AwesomeApp && source venv/bin/activate && pip install -r requirements.txt`:\n - Expected outcome: Activates the venv in `/home/user/AwesomeApp`, then installs dependencies from `requirements.txt` into the virtual environment. This avoids missing-module errors when running the app or tests. Previous attempts did not have the venv activated, so pip would have installed globally or failed; now it will succeed in the correct environment.\n - Potential pitfalls: If the `requirements.txt` references a package with native extensions (e.g., `psycopg2`), pip may fail due to missing system libraries. In that case, we will see explicit error messages indicating which Ubuntu packages need to be installed.\n\n • `cd /home/user/AwesomeApp && read_file requirements.txt`:\n - Expected outcome: Displays the contents of `requirements.txt`, allowing manual verification of required package names and versions. Since we already know its location and trust its format, this is lower priority and just delays installing.\n - Potential pitfalls: None significant, but it does not move us toward actually installing the dependencies.\n\n • `cd /home/user/AwesomeApp && source venv/bin/activate && pip install --upgrade pip setuptools`:\n - Expected outcome: Ensures pip and setuptools are up to date inside the venv, reducing the chance of installation failures. This could avoid package build errors.\n - Potential pitfalls: If pip and setuptools are already current, it's redundant. If `requirements.txt` contains packages requiring newer pip, a direct install with `-r` might upgrade pip automatically anyway.\n\n5. Selecting the single best command: Running `cd /home/user/AwesomeApp && source venv/bin/activate && pip install -r requirements.txt` is the most direct next step. We are already in `/home/user/AwesomeApp` (confirmed by cwd), and installing dependencies is required before any tests or application code can run. Inspecting the file is redundant since its location is known, and upgrading pip separately is unnecessary because pip will upgrade itself if needed during the install process. This combined command addresses the immediate blocker (missing packages) and will reveal any system package requirements.\n\n6. How to run it robustly: We explicitly `cd /home/user/AwesomeApp` to guarantee the cwd is correct. Then `source venv/bin/activate` ensures we are using the venv's pip. Finally, `pip install -r requirements.txt` installs exactly the dependencies listed. If any package fails, pip will output a clear error (e.g., missing headers), which we can address by installing a system package and rerunning the same command rather than redoing earlier steps incorrectly.\n",
  "command": {
    "name": "linux_terminal",
    "args": {
      "command": "cd /home/user/AwesomeApp && source venv/bin/activate && pip install -r requirements.txt"
    }
  }
}
```

**IMPORTANT REMINDER**: When using linux_terminal, provide commands directly WITHOUT shell wrappers like `bash -c`, `bash -lc`, `sh -c`, or similar. The system handles shell execution automatically. For example, use `ls -la`, not `bash -lc 'ls -la'`.

**IMPORTANT NOTE TO THE ASSISTANT: DO NOT OUTPUT ANY OTHER TEXT AROUND YOUR JSON RESPONSE.**

**SUCCESS CRITERIA:**
When evaluating test results, remember:
- Your primary goal is to get the test suite running, not to achieve a 100% pass rate.
- If ~80% or more of the tests pass, the task is considered successful.
- Having a few failing tests or errors is acceptable and expected.
- Once a substantial majority of tests pass (~80%+), declare the goals accomplished.
- Do NOT waste cycles trying to fix the last few failing tests; focus on getting most tests working.
@@ -0,0 +1,37 @@ execution_agent/prompt_files/java_guidelines

## General Guidelines:
**General Guidelines for Java Projects**

1. **Read the README**
   Start by reading the project's README file on GitHub. It often contains important instructions for installation, usage, and any project-specific details.

2. **Check Dependencies**
   Look for any dependencies listed in the README or in configuration files like `pom.xml` (for Maven) or `build.gradle` (for Gradle). Ensure you have the required JDK version installed.

3. **Build Tool**
   Identify which build tool the project is using: Maven or Gradle. This information should be available in the README or from the project configuration files (`pom.xml` for Maven, `build.gradle` for Gradle).

4. **Build the Project**
   Use the appropriate command for the build tool:
   - For Maven:
     ```bash
     mvn clean install
     ```
   - For Gradle:
     ```bash
     gradle build
     ```

5. **Configuration**
   Check if the project requires any configuration files (e.g., property files, YAML files) and set them up accordingly.

6. **Run Tests (if available)**
   If the project provides tests, it’s a good idea to run them to ensure everything is working correctly.

7. **Run the Project**
   Follow the instructions in the README to run the project. This could involve running a specific class, starting a server, or executing a specific command.

8. **Troubleshooting**
   If you encounter any issues during installation or while running the project, refer to the project’s issue tracker on GitHub or search for similar issues others may have encountered.

9. **Test Suite Results**
   When running a test suite, it is normal for some test cases to fail. If fewer than 20% of the test cases fail, the result is considered acceptable, and further investigation is usually not necessary.
@@ -0,0 +1,69 @@ execution_agent/prompt_files/javascript_guidelines

## General Guidelines:
**General Guidelines for JavaScript/Node.js Projects**

1. **Read the README**
   Start by reading the project's README file on GitHub. It often contains important instructions for installation, usage, and any project-specific details.

2. **Check Dependencies**
   Look for dependencies listed in the README or in the `package.json` file. Ensure you have Node.js and npm (or Yarn) installed to manage these dependencies.

3. **Install Dependencies**
   Run the following command to install project dependencies:
   ```bash
   npm install
   ```
   or, if the project uses Yarn:
   ```bash
   yarn install
   ```

4. **Build the Project**
   If the project requires a build step, refer to the `scripts` section in the `package.json` file. Common build commands include:
   ```bash
   npm run build
   ```
   or
   ```bash
   yarn build
   ```
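One quick way to see which build and test targets a project defines is to inspect the `scripts` block; the `package.json` written below is a hypothetical example:

```shell
# Hypothetical package.json, for illustration only.
cat > package.json <<'EOF'
{
  "scripts": {
    "build": "tsc",
    "test": "jest"
  }
}
EOF

grep -A 3 '"scripts"' package.json   # lists the available `npm run` targets
```

Any name under `scripts` can be run with `npm run <name>` (or `yarn <name>`).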

5. **Configuration**
   Check if the project requires any configuration files (e.g., `.env` files, JSON configuration files) and set them up accordingly. The README or project documentation should provide details on this.

6. **Run Tests (if available)**
   If the project provides tests, it’s a good idea to run them to ensure everything is working correctly. Common test commands include:
   ```bash
   npm test
   ```
   or
   ```bash
   yarn test
   ```

7. **Run the Project**
   Follow the instructions in the README to run the project. Common commands might include:
   ```bash
   npm start
   ```
   or
   ```bash
   yarn start
   ```

8. **Troubleshooting**
   If you encounter any issues during installation or while running the project, refer to the project’s issue tracker on GitHub or search for similar issues others may have encountered. Checking for error messages in the terminal can also provide clues.

9. **Code Linting and Formatting**
   Use linters and formatters to ensure code quality and consistency. Common tools include ESLint for linting and Prettier for formatting. You can typically run these with:
   ```bash
   npm run lint
   npm run format
   ```
   or
   ```bash
   yarn lint
   yarn format
   ```

10. **Test Suite Results**
    When running a test suite, it is normal for some test cases to fail. If fewer than 20% of the test cases fail, the result is considered acceptable, and further investigation is not necessary.
@@ -0,0 +1,7 @@ execution_agent/prompt_files/latest_containter_technology

The best containerization option for installing and running a project is Docker. Here's why Docker stands out:

- Ease of use: Docker simplifies the containerization process, making it easy to create, deploy, and manage containers. It has a robust ecosystem with extensive community support and documentation.
- Isolation: Docker ensures that the project runs in a fully isolated environment, minimizing conflicts with other software on your system. This is useful for development, testing, and deployment.
- Cross-platform compatibility: Docker containers can run on any platform that supports Docker, making your project highly portable.
- Efficiency: Docker containers are lightweight compared to virtual machines, sharing the host OS kernel, which makes them faster to start and more resource-efficient.
- Ecosystem and tool integration: Docker integrates well with a variety of CI/CD tools, registries (like DockerHub), and orchestration systems (like Kubernetes), enabling smooth workflows from development to production.
@@ -0,0 +1,48 @@ execution_agent/prompt_files/python_guidelines

## General Guidelines:
**General Guidelines for Python Projects**

1. **Read the README**
   Always start by reading the project's README file on GitHub. It usually contains important instructions for installation, usage, and any project-specific details. Some projects include a `Dockerfile`, which you can review and reuse as needed.

2. **Check Dependencies**
   Look for dependencies listed in the README or in a `requirements.txt` file. Ensure you have the required versions of Python and any other libraries/packages.

3. **Virtual Environment**
   It’s good practice to create a virtual environment for each Python project to avoid conflicts with system-wide packages. Use the command:
   ```bash
   python3.X -m venv .venv
   ```
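A quick way to confirm the environment was created and activated correctly; this sketch assumes a generic `python3` on PATH (substitute the pinned `python3.X` as needed):

```shell
python3 -m venv .venv
. .venv/bin/activate
# The interpreter prefix should now point inside .venv:
python -c 'import sys; print(sys.prefix)'
```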

4. **Configuration**
   Check if the project requires any configuration files (e.g., `.env` files) and set them up accordingly.

5. **Build the Project (rare cases)**
   Some projects might require building before usage, especially if they include C extensions or require compiling assets.

6. **Run Tests (if available)**
   If the project provides tests, it’s a good idea to run them to ensure everything is working correctly. Some projects include a `tox.ini` file, which allows you to run tests with `tox`. Install `tox` first using:
   ```bash
   pip install tox
   ```

7. **Run the Project**
   Follow the instructions in the README to run the project. This could involve running a script, starting a server, or executing a specific command.

8. **Troubleshooting**
   If you encounter issues during installation or while running the project, refer to the project’s issue tracker on GitHub or search for similar issues others may have encountered.

9. **Test Suite Results**
   When running a test suite, it is normal for some test cases to fail. If fewer than 20% of the test cases fail, the result is considered acceptable, and further investigation is not necessary.

10. **Shell Compatibility**
    In some shells, the `source` command may not work. In such cases, replace `source` with `.` (a single dot). For example:
    ```bash
    . .venv/bin/activate
    ```
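The two forms are interchangeable: `.` is simply the POSIX-portable spelling of `source`, reading and executing a file in the current shell. A tiny self-contained illustration (the `env_vars.sh` file is hypothetical):

```shell
echo 'GREETING=hello' > env_vars.sh
. ./env_vars.sh          # same effect as: source ./env_vars.sh
echo "$GREETING"         # prints: hello
```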

11. **Avoid Using Conda**
    Instead of Conda, prefer the following commands to set up your virtual environment:
    ```bash
    python -m venv .venv
    source .venv/bin/activate
    ```