scriptgini-0.1.2.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (39)
  1. scriptgini-0.1.2/PKG-INFO +381 -0
  2. scriptgini-0.1.2/README.md +367 -0
  3. scriptgini-0.1.2/app/__init__.py +0 -0
  4. scriptgini-0.1.2/app/agents/__init__.py +0 -0
  5. scriptgini-0.1.2/app/agents/prompts.py +147 -0
  6. scriptgini-0.1.2/app/agents/script_gini_agent.py +342 -0
  7. scriptgini-0.1.2/app/config.py +59 -0
  8. scriptgini-0.1.2/app/database.py +23 -0
  9. scriptgini-0.1.2/app/llm/__init__.py +0 -0
  10. scriptgini-0.1.2/app/llm/provider.py +192 -0
  11. scriptgini-0.1.2/app/main.py +76 -0
  12. scriptgini-0.1.2/app/models/__init__.py +0 -0
  13. scriptgini-0.1.2/app/models/bulk_job.py +67 -0
  14. scriptgini-0.1.2/app/models/generated_script.py +39 -0
  15. scriptgini-0.1.2/app/models/project.py +46 -0
  16. scriptgini-0.1.2/app/models/script_run.py +32 -0
  17. scriptgini-0.1.2/app/models/test_case.py +34 -0
  18. scriptgini-0.1.2/app/routers/__init__.py +0 -0
  19. scriptgini-0.1.2/app/routers/analytics.py +73 -0
  20. scriptgini-0.1.2/app/routers/bulk_jobs.py +277 -0
  21. scriptgini-0.1.2/app/routers/demo.py +86 -0
  22. scriptgini-0.1.2/app/routers/projects.py +51 -0
  23. scriptgini-0.1.2/app/routers/scripts.py +549 -0
  24. scriptgini-0.1.2/app/routers/test_cases.py +64 -0
  25. scriptgini-0.1.2/app/schemas/__init__.py +0 -0
  26. scriptgini-0.1.2/app/schemas/analytics.py +27 -0
  27. scriptgini-0.1.2/app/schemas/bulk_job.py +48 -0
  28. scriptgini-0.1.2/app/schemas/generated_script.py +50 -0
  29. scriptgini-0.1.2/app/schemas/project.py +36 -0
  30. scriptgini-0.1.2/app/schemas/test_case.py +34 -0
  31. scriptgini-0.1.2/app/services/git_export.py +133 -0
  32. scriptgini-0.1.2/pyproject.toml +28 -0
  33. scriptgini-0.1.2/scriptgini.egg-info/PKG-INFO +381 -0
  34. scriptgini-0.1.2/scriptgini.egg-info/SOURCES.txt +37 -0
  35. scriptgini-0.1.2/scriptgini.egg-info/dependency_links.txt +1 -0
  36. scriptgini-0.1.2/scriptgini.egg-info/top_level.txt +1 -0
  37. scriptgini-0.1.2/setup.cfg +4 -0
  38. scriptgini-0.1.2/tests/test_api.py +196 -0
  39. scriptgini-0.1.2/tests/test_coverage.py +2171 -0
@@ -0,0 +1,381 @@
Metadata-Version: 2.4
Name: scriptgini
Version: 0.1.2
Summary: Agentic AI system that converts functional test cases into automation test scripts.
Author: ScriptGini Team
License: Proprietary
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Framework :: FastAPI
Requires-Python: >=3.11
Description-Content-Type: text/markdown

# ScriptGini

> **Enterprise-grade Agentic AI system that converts functional test cases into high-quality, review-ready automation test scripts.**

---

## What is ScriptGini?

ScriptGini is an AI-powered test automation engine built for Quality Engineering teams. You feed it a functional test case and an Application Under Test (AUT) URL — it returns a production-ready automation script in your chosen framework, generated by a multi-step LangGraph agent that reasons about test intent before writing a single line of code.

---

## Features

- **Agentic 3-node LangGraph pipeline** — Intent analysis → Script generation → Quality review
- **Multi-provider LLM support** — OpenAI, Ollama (local), OpenRouter, Google Gemini, AWS Bedrock
- **Framework-agnostic output** — Playwright Python, Selenium Python, UFT VBScript, Cypress JS
- **Intelligent selector strategy** — Role → Label → data-testid → CSS → XPath (last resort)
- **Project & AUT management** — Store multiple projects, each with its own base URL and defaults
- **Full test case history** — Every generated script is stored in SQLite with status and token usage
- **Execution history persistence** — Every run is stored in `script_runs` with stdout/stderr, exit code, and duration
- **Hardened execution sandbox** — Script runs use isolated Python mode, static safety validation, and restricted environment variables
- **Bulk job orchestration** — Project-level bulk generate and bulk run with pollable job status
- **Run analytics dashboard** — Project-level pass/fail/timeout metrics and recent failure feed
- **Richer test case intake** — Import `.txt`, `.md`, `.json`, `.csv`, `.feature`, `.yml`/`.yaml`, and `.xlsx`
- **Import preview mapping** — Preview parsed scenarios in the UI before creating a project workspace
- **REST API** — FastAPI with auto-generated Swagger UI
- **Alembic migrations** — Safe, versioned schema management over SQLite

---

## Tech Stack

| Layer | Technology |
|---|---|
| API | FastAPI + Uvicorn |
| Agentic AI | LangGraph + LangChain |
| LLM Providers | OpenAI, Ollama, OpenRouter, Gemini, Bedrock |
| Database | SQLite |
| ORM | SQLAlchemy 2.0 |
| Migrations | Alembic |
| Config | Pydantic Settings (.env) |

---

## Quick Start

### Windows

```bat
start.bat
```

### Linux / macOS

```bash
chmod +x start.sh
./start.sh
```

The script will:

1. Create a Python virtual environment
2. Install all dependencies
3. Copy `.env.example` → `.env` if missing (edit it before re-running)
4. Run Alembic migrations
5. Start the server and open Swagger UI in your browser

To load a ready-made sample workspace, use the `Load Demo Project` button in the web UI or call `POST /api/v1/demo/load`.

---

## Configuration

Copy `.env.example` to `.env` and fill in the values you need:

```bash
cp .env.example .env
```

```env
# Choose your default provider
DEFAULT_LLM_PROVIDER=openrouter   # openai | ollama | openrouter | gemini | bedrock

# OpenAI
OPENAI_API_KEY=your_openai_api_key_here

# Ollama (local — no key needed)
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3
OLLAMA_NUM_PREDICT=700

# Generation latency controls
LLM_REQUEST_TIMEOUT_SECONDS=45
SCRIPT_GENERATION_TIMEOUT_SECONDS=180
SKIP_REVIEW_FOR_OLLAMA=true

# OpenRouter
OPENROUTER_API_KEY=your_openrouter_api_key_here
OPENROUTER_MODEL=openai/gpt-4o

# Google Gemini
GOOGLE_API_KEY=your_google_api_key_here

# AWS Bedrock
AWS_ACCESS_KEY_ID=your_aws_access_key_id
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
AWS_REGION_NAME=us-east-1
```

> `.env` is git-ignored. Never commit real API keys.

If local generation feels slow, reduce `OLLAMA_NUM_PREDICT`, keep `SKIP_REVIEW_FOR_OLLAMA=true`, or switch to a smaller/faster Ollama model.
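
The actual settings class lives in `app/config.py` and is built on Pydantic Settings, so its exact fields and validation differ. As a rough, dependency-free illustration of the shape these values take once loaded (key names mirror the `.env` block above; everything else is assumed):

```python
import os

# Illustrative defaults mirroring the .env keys above. The real
# app/config.py uses Pydantic Settings, so types and names may differ.
DEFAULTS = {
    "DEFAULT_LLM_PROVIDER": "openrouter",
    "OLLAMA_BASE_URL": "http://localhost:11434",
    "OLLAMA_MODEL": "llama3",
    "OLLAMA_NUM_PREDICT": "700",
    "LLM_REQUEST_TIMEOUT_SECONDS": "45",
    "SCRIPT_GENERATION_TIMEOUT_SECONDS": "180",
    "SKIP_REVIEW_FOR_OLLAMA": "true",
}

def load_settings(environ=os.environ):
    """Merge the process environment over defaults and coerce basic types."""
    raw = {key: environ.get(key, default) for key, default in DEFAULTS.items()}
    return {
        "provider": raw["DEFAULT_LLM_PROVIDER"],
        "ollama_base_url": raw["OLLAMA_BASE_URL"],
        "ollama_model": raw["OLLAMA_MODEL"],
        "ollama_num_predict": int(raw["OLLAMA_NUM_PREDICT"]),
        "llm_timeout_s": int(raw["LLM_REQUEST_TIMEOUT_SECONDS"]),
        "generation_timeout_s": int(raw["SCRIPT_GENERATION_TIMEOUT_SECONDS"]),
        "skip_review_for_ollama": raw["SKIP_REVIEW_FOR_OLLAMA"].lower() == "true",
    }

# Overriding a single key, as a .env edit would:
settings = load_settings({"OLLAMA_NUM_PREDICT": "300"})
```

The two timeout settings interact: `LLM_REQUEST_TIMEOUT_SECONDS` bounds a single provider call, while `SCRIPT_GENERATION_TIMEOUT_SECONDS` bounds the whole generation run.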

---

## API Reference

Once running, visit:

| URL | Description |
|---|---|
| `http://localhost:8000/docs` | Swagger UI (interactive) |
| `http://localhost:8000/redoc` | ReDoc |
| `http://localhost:8000/health` | Health check |

### Core Workflow

#### 1. Create a Project (AUT)

```http
POST /api/v1/projects/
```

```json
{
  "name": "My Web App",
  "aut_base_url": "https://example.com",
  "default_framework": "playwright_python",
  "selector_preference": "role",
  "auth_hints": "Login with admin/admin on /login"
}
```

#### 2. Add a Test Case

```http
POST /api/v1/projects/{project_id}/test-cases/
```

```json
{
  "title": "TC-001 Successful Login",
  "format": "step_based",
  "content": "Step 1: Navigate to /login\nStep 2: Enter username 'admin'\nStep 3: Enter password 'admin123'\nStep 4: Click Login button\nExpected: User is redirected to /dashboard and sees 'Welcome' message",
  "preconditions": "User account exists in the system",
  "test_data_hints": "username=admin, password=admin123"
}
```

#### 3. Generate a Script

```http
POST /api/v1/projects/{project_id}/test-cases/{tc_id}/scripts/generate
```

```json
{
  "llm_provider": "openrouter",
  "llm_model": "openai/gpt-4o",
  "framework": "playwright_python"
}
```

Returns `202 Accepted` immediately. The agent runs in the background.

#### 4. Poll for the Result

```http
GET /api/v1/projects/{project_id}/test-cases/{tc_id}/scripts/{script_id}
```

Status values: `pending` → `generating` → `completed` | `failed`
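
A client can poll this endpoint until a terminal status appears. A minimal sketch of that loop — the fetch callable is injected, so in a real client it would be something like `lambda: requests.get(url).json()` against the `GET` endpoint above:

```python
import time

def poll_script(fetch, interval_s=1.0, max_attempts=30):
    """Poll until the script record reaches a terminal status.

    `fetch` is any callable returning the script record as a dict;
    in a real client it wraps an HTTP GET of the endpoint above.
    """
    for _ in range(max_attempts):
        record = fetch()
        if record["status"] in ("completed", "failed"):
            return record
        time.sleep(interval_s)
    raise TimeoutError("script generation did not finish in time")

# Simulated responses standing in for successive GETs:
responses = iter([
    {"status": "pending"},
    {"status": "generating"},
    {"status": "completed", "script": "..."},
])
result = poll_script(lambda: next(responses), interval_s=0.0)
```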

#### 5. Run a Generated Playwright Script

```http
POST /api/v1/projects/{project_id}/test-cases/{tc_id}/scripts/{script_id}/run
```

Returns a persisted run record with:

- `status` (`completed` | `failed` | `timed_out`)
- `stdout`, `stderr`
- `exit_code`, `duration_seconds`

Execution safeguards:

- Script content is statically validated before execution.
- Unsafe imports and unsafe builtin calls are rejected and persisted as failed runs.
- Runtime uses Python isolated mode with a restricted environment.
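
The static validation step can be done with the standard-library `ast` module. The deny-lists and function name below are illustrative, not ScriptGini's actual validator, but they show the general technique of rejecting unsafe imports and builtin calls before anything runs:

```python
import ast

# Illustrative deny-lists; the real validator's lists may differ.
BLOCKED_IMPORTS = {"os", "subprocess", "shutil", "socket"}
BLOCKED_CALLS = {"eval", "exec", "open", "__import__"}

def find_violations(source: str) -> list[str]:
    """Statically scan a script and report unsafe imports and builtin calls."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            violations += [a.name for a in node.names
                           if a.name.split(".")[0] in BLOCKED_IMPORTS]
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] in BLOCKED_IMPORTS:
                violations.append(node.module)
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BLOCKED_CALLS:
                violations.append(node.func.id)
    return violations

bad = find_violations("import subprocess\neval('1+1')")
ok = find_violations("from playwright.sync_api import sync_playwright")
```

A non-empty result would be persisted as a failed run instead of being executed.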

#### 6. List Script Run History

```http
GET /api/v1/projects/{project_id}/test-cases/{tc_id}/scripts/{script_id}/runs
```

#### 7. Bulk Generate Scripts (Project-level)

```http
POST /api/v1/projects/{project_id}/scripts/bulk-generate
```

```json
{
  "llm_provider": "openrouter",
  "llm_model": "openai/gpt-4o",
  "framework": "playwright_python",
  "test_case_ids": [1, 2, 3]
}
```

#### 8. Bulk Run Latest Completed Scripts

```http
POST /api/v1/projects/{project_id}/scripts/bulk-run
```

#### 9. Poll Bulk Job Status

```http
GET /api/v1/projects/{project_id}/scripts/bulk-jobs/{job_id}
```

#### 10. Get Run Analytics (Project-level)

```http
GET /api/v1/projects/{project_id}/analytics/runs
```

Returns aggregate execution metrics and latest failure details.

---

## LangGraph Agent Pipeline

```
┌─────────────────┐     ┌─────────────────┐     ┌────────────────┐
│ parse_intent    │────▶│ generate_script │────▶│ review_script  │
│                 │     │                 │     │                │
│ Extracts:       │     │ Produces full   │     │ QA checks:     │
│ • Business goal │     │ framework-      │     │ • Assertions   │
│ • Actions list  │     │ specific script │     │ • TODO markers │
│ • Assertions    │     │                 │     │ • Rewrites if  │
│ • Preconditions │     │                 │     │   needed       │
└─────────────────┘     └─────────────────┘     └────────────────┘
```
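
The real graph is defined with LangGraph in `app/agents/script_gini_agent.py` and calls an LLM at each node. As a dependency-free sketch of the state-passing shape only — node names mirror the diagram, node bodies are placeholders:

```python
# Each node takes the shared state dict and returns an updated copy;
# the linear loop below stands in for LangGraph's edge wiring.
def parse_intent(state: dict) -> dict:
    steps = [line for line in state["test_case"].splitlines()
             if line.startswith("Step")]
    return {**state, "intent": {"actions": steps}}

def generate_script(state: dict) -> dict:
    # Placeholder: the real node prompts the LLM for a full script.
    body = "\n".join(f"# {action}" for action in state["intent"]["actions"])
    return {**state, "script": body}

def review_script(state: dict) -> dict:
    # Placeholder QA gate: the real node checks assertions and TODO markers.
    return {**state, "approved": "TODO" not in state["script"]}

def run_pipeline(test_case: str) -> dict:
    state = {"test_case": test_case}
    for node in (parse_intent, generate_script, review_script):
        state = node(state)
    return state

result = run_pipeline("Step 1: Navigate to /login\nStep 2: Click Login")
```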

---

## Supported Frameworks

| Key | Framework |
|---|---|
| `playwright_python` | Playwright for Python (default) |
| `selenium_python` | Selenium WebDriver Python |
| `uft_vbscript` | UFT / QTP VBScript |
| `cypress_js` | Cypress JavaScript |

---

## Project Structure

```
scriptgini/
├── app/
│   ├── main.py                  # FastAPI application
│   ├── config.py                # Settings loaded from .env
│   ├── database.py              # SQLAlchemy engine + session
│   ├── models/
│   │   ├── project.py           # Project / AUT model
│   │   ├── test_case.py         # Test case model
│   │   └── generated_script.py  # Script history model
│   ├── schemas/                 # Pydantic request/response schemas
│   ├── routers/
│   │   ├── projects.py          # CRUD — projects
│   │   ├── test_cases.py        # CRUD — test cases
│   │   └── scripts.py           # Generate + history
│   ├── agents/
│   │   ├── script_gini_agent.py # LangGraph graph definition
│   │   └── prompts.py           # All prompt templates
│   └── llm/
│       └── provider.py          # LLM provider factory
├── alembic/                     # Database migration scripts
├── alembic.ini
├── requirements.txt
├── .env.example                 # Template — copy to .env
├── start.bat                    # Windows launcher
└── start.sh                     # Linux / macOS launcher
```

---

## Database Migrations

Migrations are handled automatically by `start.bat` / `start.sh`.

To run them manually:

```bash
# Apply all pending migrations
alembic upgrade head

# Create a new migration after model changes
alembic revision --autogenerate -m "description"

# Roll back one step
alembic downgrade -1
```

---

## Quality Gate Policy

Every check-in is expected to pass:

1. Unit tests
2. **100%** coverage on `app/`
3. `pip-audit`
4. Trivy filesystem scan

Local commands:

```bat
test.bat
audit.bat
trivy.bat
```

```bash
./test.sh
./audit.sh
./trivy.sh
```

A CI gate in `.github/workflows/quality-gate.yml` enforces the same checks on push/PR.

---

## Adding a New LLM Provider

1. Add config keys to `app/config.py`
2. Add a new `_provider()` function in `app/llm/provider.py`
3. Register it in `get_llm()` and the `LLMProvider` type alias
4. Add the corresponding key to `.env.example`
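
The steps above amount to a registry pattern. The real factories in `app/llm/provider.py` return LangChain chat models (ChatOpenAI, ChatOllama, …); in this sketch the factory names and return values are illustrative stand-ins showing only where step 3's registration happens:

```python
from typing import Callable

# Hypothetical factory names; the real ones build LangChain chat models.
def _openai_provider(model: str):
    return f"openai:{model}"

def _ollama_provider(model: str):
    return f"ollama:{model}"

_PROVIDERS: dict[str, Callable[[str], object]] = {
    "openai": _openai_provider,
    "ollama": _ollama_provider,
    # Step 3: register your new provider's factory here.
}

def get_llm(provider: str, model: str):
    """Resolve a provider key to a configured LLM instance."""
    try:
        return _PROVIDERS[provider](model)
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {provider}") from None

llm = get_llm("ollama", "llama3")
```

An unregistered key fails fast with a `ValueError`, which is how a typo in `DEFAULT_LLM_PROVIDER` would surface.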

---

## Security Notes

- `.env` is git-ignored — never commit API keys
- The API has no authentication by default — add API-key middleware before exposing it to a network
- UI validation only — the agent never makes live requests to the AUT

---

## License

MIT