aigent-team 0.1.0 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,46 @@
# Skill: Parallel Agent Orchestration

**Trigger**: When implementing a feature that requires multiple specialist agents working simultaneously.

## Steps

1. **Map the dependency graph**:
   - List all subtasks and which agent owns each
   - Identify dependencies: which tasks must complete before others can start
   - Group independent tasks that can run in parallel
   - Example:

     ```
     BA (specs) → [FE (UI) + BE (API)] → QA (integration tests) → DevOps (deploy config)
     ```

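The grouping in step 1 can be sketched as a small topological leveling pass over the subtasks. The task and agent names below are hypothetical, a minimal sketch rather than the orchestrator's actual data model:

```typescript
// Hypothetical task records: each task names the tasks it depends on.
type Task = { id: string; agent: string; dependsOn: string[] };

// Group tasks into waves: every task in a wave depends only on earlier waves,
// so all tasks within one wave can run in parallel.
function planWaves(tasks: Task[]): string[][] {
  const done = new Set<string>();
  const waves: string[][] = [];
  let remaining = tasks;
  while (remaining.length > 0) {
    const ready = remaining.filter(t => t.dependsOn.every(d => done.has(d)));
    if (ready.length === 0) throw new Error('Cycle in dependency graph');
    waves.push(ready.map(t => t.id));
    ready.forEach(t => done.add(t.id));
    remaining = remaining.filter(t => !ready.includes(t));
  }
  return waves;
}

const waves = planWaves([
  { id: 'specs', agent: 'BA', dependsOn: [] },
  { id: 'ui', agent: 'FE', dependsOn: ['specs'] },
  { id: 'api', agent: 'BE', dependsOn: ['specs'] },
  { id: 'integration-tests', agent: 'QA', dependsOn: ['ui', 'api'] },
]);
// waves → [['specs'], ['ui', 'api'], ['integration-tests']]
```

The middle wave is exactly the `[FE (UI) + BE (API)]` bracket from the example above: both become ready the moment specs complete.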
2. **Define the API contract first** (if FE + BE are both involved):
   - Have BA produce the API contract specification
   - Both FE and BE must acknowledge the contract before starting
   - Contract includes: endpoints, request/response schemas, error codes, auth requirements

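One way to pin down step 2's contract is as a typed artifact both sides acknowledge. The shape and the endpoint below are illustrative assumptions, not a fixed schema:

```typescript
// Sketch of a per-endpoint contract record covering the four items above:
// endpoint, request/response schemas, error codes, auth requirements.
interface EndpointContract {
  method: 'GET' | 'POST' | 'PUT' | 'DELETE';
  path: string;
  auth: 'none' | 'user' | 'admin';
  request?: Record<string, string>;  // field name → type description
  response: Record<string, string>;
  errorCodes: number[];
}

// Hypothetical example contract for an order-creation endpoint.
const createOrderContract: EndpointContract = {
  method: 'POST',
  path: '/api/orders',
  auth: 'user',
  request: { productId: 'string', quantity: 'number' },
  response: { orderId: 'string', status: 'string' },
  errorCodes: [400, 401, 409],
};
```

Keeping the contract in a file both agents read makes "acknowledge before starting" a concrete, checkable step.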
3. **Spawn parallel agents with full context** — each agent gets:
   - Task description, relevant specs, file scope, and acceptance criteria
   - The API contract (if applicable)
   - Constraints and deadlines

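The per-agent context from step 3 can be captured in one template object so nothing is forgotten at spawn time. The field names below are an assumption for the sketch, not a fixed schema:

```typescript
// One record per spawned agent, mirroring the checklist above.
interface AgentContext {
  task: string;
  specs: string[];              // paths or excerpts of relevant specs
  fileScope: string[];          // files/directories the agent may touch
  acceptanceCriteria: string[];
  apiContract?: string;         // included only when FE/BE are involved
  constraints: string[];
  deadline?: string;
}

function buildContext(base: AgentContext, overrides: Partial<AgentContext> = {}): AgentContext {
  return { ...base, ...overrides };
}

// Hypothetical FE assignment built from the template.
const feContext = buildContext({
  task: 'Build the order form UI',
  specs: ['specs/orders.md'],
  fileScope: ['src/components/orders/'],
  acceptanceCriteria: ['Form validates quantity > 0'],
  constraints: ['No new dependencies'],
}, { apiContract: 'contracts/orders.json' });
```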
4. **Monitor and coordinate**:
   - Check agent outputs at each milestone
   - If one agent's output changes the contract, pause and realign all affected agents
   - Resolve conflicts immediately — don't let agents proceed on divergent assumptions

5. **Integration checkpoint**:
   - After parallel work completes, verify FE and BE outputs match the same contract
   - Have QA write integration tests that exercise the full flow
   - Run all unit tests from all agents together

6. **Final review**:
   - Review combined diff for consistency
   - Verify no duplicate code or conflicting patterns across agent outputs
   - Confirm all acceptance criteria are met

## Expected Output

- Dependency graph (which tasks depend on which)
- Agent assignments with full context templates
- Integration verification results
- Combined review summary

@@ -0,0 +1,38 @@
# Skill: Sprint Review

**Trigger**: When reviewing completed work at the end of a sprint or milestone, assessing quality and completeness.

## Steps

1. **Gather deliverables** — Collect all outputs from the sprint:
   - List all PRs merged or in review
   - List all tasks completed and their acceptance criteria status
   - Identify any tasks that were carried over (incomplete)

2. **Quality assessment** — For each deliverable:
   - Does it meet the acceptance criteria defined by BA?
   - Has it passed code review by the relevant specialist agent?
   - Are tests written and passing (unit, integration, E2E as appropriate)?
   - Are there any open review comments or unresolved discussions?

3. **Cross-team alignment check**:
   - FE and BE APIs match — no mismatched contracts
   - Shared components modified by one team don't break another
   - Database changes are compatible with both current and previous app versions

4. **Technical debt assessment**:
   - Were any shortcuts taken that need follow-up tickets?
   - Are there TODO comments that should be tracked?
   - Were any rules or constraints violated with justification?

5. **Produce sprint summary**:
   - Completed: what was delivered and working
   - Carried over: what wasn't finished and why
   - Risks: what might cause issues in the next sprint
   - Improvements: what went well, what should change

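The summary in step 5 can be kept as a small structured record rather than free text, which makes the Expected Output sections mechanical to produce. The shape and sample values below are one possible sketch:

```typescript
// Mirrors the four summary sections above.
interface SprintSummary {
  completed: string[];                              // delivered and working
  carriedOver: { item: string; reason: string }[];  // unfinished, with why
  risks: string[];
  improvements: { keep: string[]; change: string[] };
}

// Hypothetical example.
const summary: SprintSummary = {
  completed: ['Order API', 'Order form UI'],
  carriedOver: [{ item: 'Export to CSV', reason: 'blocked on schema decision' }],
  risks: ['Contract churn between FE and BE'],
  improvements: { keep: ['contract-first workflow'], change: ['earlier QA involvement'] },
};
```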
## Expected Output

- Sprint summary document with completed/carried/risks sections
- Quality scorecard per deliverable
- Follow-up tickets for technical debt or unfinished work

@@ -0,0 +1,27 @@
# Rules (Hard Constraints)

## Scope Rules
- **DO NOT** modify production source code — only test files, test utilities, and test configuration
- **DO NOT** modify infrastructure or deployment configs
- You may read any source file to understand behavior, but only write test code

## Action Rules
- **NEVER** commit `.only` or `.skip` on tests — all tests must run in CI
- **NEVER** use `sleep()` or fixed delays in tests — use explicit waits and polling
- **NEVER** disable or delete existing tests without documenting the reason
- **NEVER** mock what you can test against a real implementation (prefer integration over mocking)
- **DO NOT** write tests that depend on execution order or shared mutable state
- **DO NOT** assert on implementation details (internal state, private methods, CSS classes)

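The "no `sleep()`" rule in practice means polling a condition with a timeout. Test frameworks such as Testing Library ship their own `waitFor`; the generic helper below is only a sketch of the idea, not a prescribed utility:

```typescript
// Poll `condition` every `intervalMs` until it returns true or `timeoutMs` elapses.
async function waitFor(
  condition: () => boolean | Promise<boolean>,
  timeoutMs = 2000,
  intervalMs = 50,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}

// Usage: instead of a fixed `await sleep(500)` before asserting, wait on the
// actual condition, e.g. `await waitFor(() => store.getState().loaded)`.
```

Unlike a fixed delay, this fails fast when the condition is already true and fails loudly (with a timeout error) when it never becomes true.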
## Escalation Rules — Stop and Ask
- Test coverage would drop below the project threshold after changes
- Flaky test requires infrastructure fix (timing, race condition, external dependency)
- Cannot write meaningful test because the source code has no testable interface
- Security test reveals an actual vulnerability — report immediately, don't just log it
- Performance test shows regression > 20% from baseline

## Output Rules
- Every test must have a clear description that reads as a behavior specification
- No empty test bodies or placeholder tests — every `it()` must assert something
- Test data must be self-contained — no dependency on external fixtures or seed data that isn't in the test
- E2E tests must clean up after themselves (created records, uploaded files, etc.)

@@ -0,0 +1,50 @@
# Skill: Diagnose Flaky Test

**Trigger**: When a test is intermittently failing in CI or locally, passing on retry but failing inconsistently.

## Steps

1. **Reproduce the flake** — Run the test in a loop to confirm:

   ```bash
   # Run 20 times, stop on first failure
   for i in $(seq 1 20); do npx vitest run path/to/test.ts || break; done
   # Or with Jest (which has no built-in repeat flag, so loop the same way)
   for i in $(seq 1 20); do npx jest --forceExit --runInBand path/to/test.ts || break; done
   ```

2. **Classify the flake** — Common root causes:
   - **Timing**: test depends on setTimeout, animation frames, or network delays
   - **Shared state**: tests modify global/module state that leaks between runs
   - **Order dependency**: test passes alone but fails when run with others
   - **Race condition**: async operations complete in unpredictable order
   - **External dependency**: test hits a real API, database, or file system
   - **Resource exhaustion**: port conflicts, file descriptor leaks, memory pressure

3. **Isolate** — Narrow down the cause:

   ```bash
   # Run the failing test alone
   npx vitest run path/to/test.ts --testNamePattern "specific test name"
   # Shuffle file order to surface order dependencies
   npx vitest run --sequence.shuffle path/to/test.ts
   # Disable worker isolation to surface shared state between files
   npx vitest run --no-isolate path/to/test.ts
   ```

4. **Fix by category**:
   - **Timing**: Replace `sleep()` with explicit waits (`waitFor`, `waitForElement`, polling)
   - **Shared state**: Add proper `beforeEach`/`afterEach` cleanup, avoid global mutations
   - **Order dependency**: Make each test fully self-contained with its own setup
   - **Race condition**: Use proper async/await, avoid fire-and-forget promises
   - **External dependency**: Mock the external service, use test containers for databases

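For the shared-state category, the core of the fix is a reset that runs before every test, which is exactly what `beforeEach` hooks provide. A framework-free sketch of the idea, where the module-level `cache` is a hypothetical stand-in for whatever state leaks:

```typescript
// Hypothetical module-level cache: shared mutable state that leaks between tests.
const cache = new Map<string, string>();

// What a beforeEach hook would do: return the state to a known-empty baseline.
function resetCache(): void {
  cache.clear();
}

// Each "test" resets first, so it passes regardless of execution order.
function testCachesUser(): boolean {
  resetCache();
  cache.set('u1', 'Ada');
  return cache.get('u1') === 'Ada';
}

function testStartsEmpty(): boolean {
  resetCache();
  return cache.size === 0; // without resetCache(), fails when run after testCachesUser
}
```

In a real suite the `resetCache()` call moves into `beforeEach`, and anything the test changes globally (timers, mocks, env vars) gets undone in `afterEach`.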
5. **Verify the fix** — Run the loop again to confirm stability:

   ```bash
   for i in $(seq 1 50); do npx vitest run path/to/test.ts || { echo "FAILED on run $i"; exit 1; }; done
   echo "All 50 runs passed"
   ```

## Expected Output

- Root cause classification (timing, shared state, race condition, etc.)
- Specific fix applied with explanation
- Verification that the test passes 50 consecutive runs

@@ -0,0 +1,56 @@
# Skill: Generate Test Data

**Trigger**: When setting up test fixtures, creating seed data for E2E tests, or testing edge cases and boundary conditions.

## Steps

1. **Analyze data requirements**:
   - Read the schema/types for entities involved
   - Identify required fields, constraints (unique, foreign keys, enums)
   - Identify edge cases: empty strings, max-length values, unicode, special characters

2. **Create factory functions** — Use the project's factory pattern:

   ```typescript
   // Example with factory pattern
   function createUser(overrides?: Partial<User>): User {
     return {
       id: crypto.randomUUID(),
       name: `Test User ${Date.now()}`,
       email: `test-${Date.now()}@example.com`,
       role: 'user',
       createdAt: new Date(),
       ...overrides,
     };
   }
   ```

3. **Generate edge case data sets**:
   - **Boundary values**: empty string, 1 char, max length
   - **Unicode**: emoji, CJK characters, RTL text, zero-width characters
   - **Numeric edges**: 0, -1, MAX_SAFE_INTEGER, NaN, Infinity
   - **Date edges**: epoch, far future, timezone boundaries, DST transitions
   - **Null/undefined**: every nullable field tested with null

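Step 3's categories can live in one reusable module so every suite draws from the same pools. The values below are a starting sketch, not an exhaustive set:

```typescript
// Reusable edge-case pools, organized by the categories above.
const edgeCases = {
  strings: ['', 'a', 'x'.repeat(255), '👩‍👩‍👧‍👦', '日本語', 'مرحبا', 'a\u200Bb'],
  numbers: [0, -1, Number.MAX_SAFE_INTEGER, Number.NaN, Number.POSITIVE_INFINITY],
  dates: [
    new Date(0),                       // epoch
    new Date('2099-12-31T23:59:59Z'),  // far future
    new Date('2024-03-10T07:30:00Z'),  // around a US DST transition
  ],
  nullish: [null, undefined],
};

// Example: sweep every string edge case through the same check, e.g. a
// hypothetical validateName() in a real suite.
const stringLengths = edgeCases.strings.map(s => s.length);
```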
4. **Create relationship data** — For entities with foreign keys:
   - Build parent entities before children
   - Create both happy-path and orphan scenarios
   - Test cascade delete/update behavior

5. **Seed data for E2E**:

   ```typescript
   async function seedTestDatabase() {
     const admin = await createUser({ role: 'admin' });
     const users = await Promise.all(
       Array.from({ length: 10 }, () => createUser())
     );
     // Create related entities...
     return { admin, users };
   }
   ```

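Seeding pairs naturally with a cleanup that removes exactly what it created, so E2E runs stay repeatable. A minimal sketch, using an in-memory map as a hypothetical stand-in for the test database:

```typescript
// Hypothetical in-memory stand-in for the E2E test database.
const db = new Map<string, { role: string }>();

async function createUserRecord(role: string): Promise<string> {
  const id = `${db.size + 1}`;
  db.set(id, { role });
  return id;
}

// Cleanup mirrors seeding: delete everything the seed created, in reverse
// order so dependent records go before their parents.
async function cleanup(createdIds: string[]): Promise<void> {
  for (const id of [...createdIds].reverse()) {
    db.delete(id);
  }
}

async function demo(): Promise<number> {
  const ids = [await createUserRecord('admin'), await createUserRecord('user')];
  await cleanup(ids);
  return db.size; // 0 after cleanup: the run left no residue
}
```

Tracking created ids during seeding (rather than truncating tables) keeps cleanup safe when multiple E2E runs share one database.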
## Expected Output

- Factory functions for each entity involved
- Edge case data sets organized by category
- Seed script for E2E test environment
- Cleanup function to reset test data after runs