ma-agents 1.4.0 → 1.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +2 -2
- package/skills/README.md +25 -0
- package/skills/logging-best-practices/SKILL.md +46 -0
- package/skills/logging-best-practices/examples/cpp.md +36 -0
- package/skills/logging-best-practices/examples/csharp.md +49 -0
- package/skills/logging-best-practices/examples/javascript.md +77 -0
- package/skills/logging-best-practices/examples/python.md +57 -0
- package/skills/logging-best-practices/references/logging-standards.md +29 -0
- package/skills/logging-best-practices/skill.json +13 -0
- package/skills/test-accompanied-development/SKILL.md +39 -0
- package/skills/test-accompanied-development/skill.json +12 -0
package/package.json
CHANGED

```diff
@@ -1,6 +1,6 @@
 {
   "name": "ma-agents",
-  "version": "1.4.0",
+  "version": "1.5.0",
   "description": "NPX tool to install skills for AI coding agents (Claude Code, Gemini, Copilot, Kilocode, Cline, Cursor)",
   "main": "index.js",
   "bin": {
@@ -32,4 +32,4 @@
   "engines": {
     "node": ">=14.0.0"
   }
-}
+}
```
package/skills/README.md
CHANGED

```diff
@@ -191,6 +191,31 @@ Creates production-ready hardened Docker configurations following security best
 
 ---
 
+### 5. Test-Accompanied Development (TAD)
+**Directory:** `test-accompanied-development/`
+
+Enforces a "Test-Alongside" policy where every public method is accompanied by a corresponding unit test. It ensures high code quality by mandating that all public interfaces are verified by automated tests at the moment of creation.
+
+**Key Features:**
+- ✅ **Automatic Enforcement**: Mandates tests for every new public method.
+- ✅ **Integration**: References `test-generator` for implementation standards.
+- ✅ **Workflow Guardrail**: Prevents "test debt" by requiring tests alongside code.
+
+---
+
+### 6. Logging Best Practices
+**Directory:** `logging-best-practices/`
+
+Standardizes structured logging (JSON) across Backend, Frontend, Realtime, and Algorithmic domains. Enforces mandatory exception logging and OpenTelemetry-aligned context fields.
+
+**Key Features:**
+- ✅ **Structured Logging**: Mandates JSON format for machine-readability.
+- ✅ **Domain-Specific**: Tailored guidance for Web, Server, and High-Performance Algorithm domains.
+- ✅ **Mandatory Exceptions**: Every catch block must log full stack traces.
+- ✅ **Context-Rich**: Standardizes fields like `trace_id`, `placement`, and `process_name`.
+
+---
+
 ## Requirements
 
 ### All Skills
```
package/skills/logging-best-practices/SKILL.md
ADDED

```diff
@@ -0,0 +1,46 @@
+# Logging Best Practices
+
+Enforce structured, context-rich logging according to market standards (OpenTelemetry) across all application domains.
+
+## Policy
+
+**All logs must be structured (preferably JSON) and include mandatory context. Every exception MUST be logged with its full stack trace.**
+
+## Core Mandatory Fields
+
+Every log entry must contain:
+- `datetime`: ISO 8601 timestamp with timezone.
+- `severity`: Standard level (DEBUG, INFO, WARN, ERROR, CRITICAL).
+- `message`: Clear, concise description of the event.
+- `placement`: File name and line number where the log was triggered.
+- `process_name`: Name of the service or application.
+- `container_id`: (If applicable) Docker/K8s container identifier.
+- `trace_id` / `span_id`: For distributed tracing and request correlation.
+
+## Domain-Specific Requirements
+
+### 1. Backend Systems
+- **Log**: Incoming/outgoing requests (method, status, duration).
+- **Log**: Database query latencies and connection states.
+- **Mandatory**: Full exception details in catch blocks.
+
+### 2. Frontend Applications
+- **Log**: Client-side errors (JS runtime, UI crashes).
+- **Log**: User interaction context (last clicked component, breadcrumbs).
+- **Context**: Browser version, OS, Resolution.
+
+### 3. Realtime & Algorithmic Work
+- **Log**: Iteration throughput and step-by-step latency.
+- **Log**: Mathematical anomalies or convergence failures.
+- **Mandatory**: Timeout exceptions and resource exhaustion warnings.
+
+## Rules
+
+- **No PII/Secrets**: Never log passwords, keys, or private user data.
+- **Asynchronous**: Prefer non-blocking logging to maintain performance.
+- **Traceability**: Always include `trace_id` in logs that are part of a request flow.
+- **Exception Policy**: Use the `ERROR` level for caught exceptions that affect flow, and `CRITICAL` for system-wide failures.
+
+## Reference
+
+See [logging-standards.md](./references/logging-standards.md) for detailed field definitions and level guidance.
```
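The mandatory-field list in the SKILL.md above can be illustrated with a minimal sketch. This is not part of the package; the helper name `make_log_entry` and the `demo-service` process name are illustrative assumptions:

```python
import json
import os
import sys
from datetime import datetime, timezone

def make_log_entry(severity, message, trace_id=None):
    """Build one structured log record carrying the skill's mandatory fields."""
    frame = sys._getframe(1)  # caller's frame, used to fill `placement`
    return {
        "datetime": datetime.now(timezone.utc).isoformat(),  # ISO 8601 with timezone
        "severity": severity,  # DEBUG / INFO / WARN / ERROR / CRITICAL
        "message": message,
        "placement": f"{os.path.basename(frame.f_code.co_filename)}:{frame.f_lineno}",
        "process_name": "demo-service",  # assumed service name
        "container_id": os.getenv("HOSTNAME", "unknown"),
        "trace_id": trace_id or "none",
    }

entry = make_log_entry("INFO", "Request handled", trace_id="abc-123")
print(json.dumps(entry))  # one JSON object per line, as the policy prescribes
```

Emitting the record with `json.dumps` on a single line keeps it machine-parseable by log shippers.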
package/skills/logging-best-practices/examples/cpp.md
ADDED

```diff
@@ -0,0 +1,36 @@
+# C++ Logging Examples
+
+## Structured Logging with `spdlog`
+
+```cpp
+#include "spdlog/spdlog.h"
+#include "spdlog/sinks/stdout_color_sinks.h"
+#include <exception>
+
+void run_realtime_loop() {
+    auto logger = spdlog::get("realtime_logger");
+
+    try {
+        // Realtime/Algorithmic domain logging
+        logger->info("Computation step started. Input size: {}. Placement: {}:{}",
+                     1024, __FILE__, __LINE__);
+
+        if (check_anomaly()) {
+            logger->warn("Numerical anomaly detected! Severity: WARN");
+        }
+
+    } catch (const std::exception& e) {
+        // Mandatory Exception Logging
+        logger->critical("Critical failure in realtime loop! Error: {}. File: {}. Line: {}",
+                         e.what(), __FILE__, __LINE__);
+        throw;
+    }
+}
+
+// Global setup for JSON output
+void setup_logging() {
+    // Note: spdlog requires a custom formatter or sink for pure JSON output
+    // to match OTel standards perfectly.
+    spdlog::set_pattern("{\"datetime\":\"%Y-%m-%dT%H:%M:%SZ\",\"severity\":\"%l\",\"message\":\"%v\",\"process_name\":\"engine_v1\"}");
+}
+```
```
package/skills/logging-best-practices/examples/csharp.md
ADDED

```diff
@@ -0,0 +1,49 @@
+# C# Logging Examples
+
+## Structured Logging with Serilog
+
+```csharp
+using Serilog;
+using System;
+
+public class DataService
+{
+    private readonly ILogger _logger = Log.ForContext<DataService>();
+
+    public void ProcessAlgorithm(double[] data)
+    {
+        try
+        {
+            _logger.Information("Algorithm iteration started. Data points: {Count}. Placement: {Placement}",
+                data.Length, "DataService.cs:45");
+
+            // Realtime/Algorithmic specific logging
+            var startTime = DateTime.UtcNow;
+            RunComplexMath(data);
+            var duration = (DateTime.UtcNow - startTime).TotalMilliseconds;
+
+            _logger.Information("Iteration complete. Latency: {Latency}ms", duration);
+        }
+        catch (Exception ex)
+        {
+            // Mandatory Exception Logging
+            _logger.Error(ex, "Algorithm execution failed at {Placement}. Container: {ContainerId}",
+                "DataService.cs:55", Environment.GetEnvironmentVariable("HOSTNAME"));
+        }
+    }
+}
+```
+
+## Microsoft.Extensions.Logging (JSON Console)
+
+```csharp
+// In Program.cs
+builder.Logging.AddJsonConsole(options => {
+    options.TimestampFormat = "yyyy-MM-ddTHH:mm:ssZ ";
+    options.JsonWriterOptions = new JsonWriterOptions { Indented = true };
+});
+
+// Usage
+_logger.LogError(exception, "Request failed at {Placement}. TraceId: {TraceId}",
+    "OrderController.cs:120", HttpContext.TraceIdentifier);
+```
```
package/skills/logging-best-practices/examples/javascript.md
ADDED

```diff
@@ -0,0 +1,77 @@
+# JavaScript/TypeScript Logging Examples
+
+## Backend (Node.js with `pino` or `winston`)
+
+```typescript
+import pino from 'pino';
+
+const logger = pino({
+  level: process.env.LOG_LEVEL || 'info',
+  formatters: {
+    level: (label) => {
+      return { severity: label.toUpperCase() };
+    },
+  },
+  base: {
+    process_name: 'api-gateway',
+    container_id: process.env.HOSTNAME || 'unknown'
+  }
+});
+
+async function handleRequest(req, res) {
+  const traceId = req.headers['x-trace-id'];
+  try {
+    logger.info({
+      msg: 'Handling incoming request',
+      trace_id: traceId,
+      path: req.path,
+      placement: 'router.ts:12'
+    });
+
+    // ... logic
+  } catch (error) {
+    // Mandatory Exception Logging
+    logger.error({
+      msg: 'Request handler failed',
+      trace_id: traceId,
+      err: error, // Pino automatically formats the stack trace
+      severity: 'ERROR',
+      placement: 'router.ts:25'
+    });
+    res.status(500).send('Internal Server Error');
+  }
+}
+```
+
+## Frontend (Browser)
+
+```javascript
+const logToServer = async (logEntry) => {
+  try {
+    await fetch('/api/logs', {
+      method: 'POST',
+      body: JSON.stringify({
+        ...logEntry,
+        datetime: new Date().toISOString(),
+        browser: navigator.userAgent,
+        process_name: 'frontend-spa'
+      })
+    });
+  } catch (e) {
+    console.error('Failed to ship logs', e);
+  }
+};
+
+// Global error handler
+window.onerror = function(msg, url, lineNo, columnNo, error) {
+  logToServer({
+    severity: 'ERROR',
+    message: msg,
+    placement: `${url}:${lineNo}`,
+    exception: {
+      message: error?.message,
+      stack: error?.stack
+    }
+  });
+};
+```
```
package/skills/logging-best-practices/examples/python.md
ADDED

```diff
@@ -0,0 +1,57 @@
+# Python Logging Examples
+
+## Structured Logging with `structlog`
+
+```python
+import structlog
+import os
+
+logger = structlog.get_logger()
+
+def process_data(data):
+    try:
+        # Algorithmic step logging
+        logger.info("calculation_step_started",
+                    step="matrix_multiplication",
+                    data_size=len(data),
+                    placement="processor.py:45")
+
+        result = perform_complex_math(data)
+        return result
+    except Exception as e:
+        # Mandatory Exception Logging
+        logger.error("calculation_failed",
+                     exception_type=type(e).__name__,
+                     exception_msg=str(e),
+                     stack_trace=True,  # structlog captures this
+                     severity="ERROR",
+                     placement="processor.py:52",
+                     container_id=os.getenv("HOSTNAME"))
+        raise
+```
+
+## Standard Library with JSON Formatter
+
+```python
+import logging
+import json
+from datetime import datetime
+
+class JsonFormatter(logging.Formatter):
+    def format(self, record):
+        log_entry = {
+            "datetime": datetime.utcnow().isoformat(),
+            "severity": record.levelname,
+            "message": record.getMessage(),
+            "placement": f"{record.filename}:{record.lineno}",
+            "process_name": record.processName,
+            "trace_id": getattr(record, 'trace_id', 'none')
+        }
+        if record.exc_info:
+            log_entry["exception"] = self.formatException(record.exc_info)
+        return json.dumps(log_entry)
+
+# usage
+logger = logging.getLogger("backend_service")
+logger.error("Database connection failed", exc_info=True)
+```
```
package/skills/logging-best-practices/references/logging-standards.md
ADDED

```diff
@@ -0,0 +1,29 @@
+# Logging Standards Reference
+
+This document defines the semantic conventions and levels used in the Logging Best Practices skill.
+
+## Log Levels
+
+| Level | Usage |
+| :--- | :--- |
+| **TRACE** | Fine-grained informational events (mostly for debugging logic flows). |
+| **DEBUG** | Detailed information for developer troubleshooting. |
+| **INFO** | Regular operational events (startup, shutdown, successful requests). |
+| **WARN** | Potential issues or degraded states that don't stop the service. |
+| **ERROR** | Operational failures that affect a specific request or operation. |
+| **CRITICAL** | System-wide failures requiring immediate attention. |
+
+## OpenTelemetry Semantic Conventions
+
+To ensure interoperability, use the following field names where possible:
+
+- `timestamp`: The time when the event occurred.
+- `severity_text`: The string representation of the log level.
+- `body`: The primary log message.
+- `attributes.service.name`: The value of `process_name`.
+- `attributes.container.id`: The value of `container_id`.
+- `attributes.code.filepath`: Path to the source file.
+- `attributes.code.lineno`: Line number in the source file.
+- `attributes.exception.type`: Class name of the exception.
+- `attributes.exception.message`: Message from the exception.
+- `attributes.exception.stacktrace`: Full stack trace.
```
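The field-name mapping defined in the reference above can be expressed as a small rename table. This sketch is illustrative (the dict and function names are not part of the package); it covers only the flat fields, not the nested `attributes.exception.*` entries:

```python
# Map the skill's mandatory field names to the OTel semantic-convention names
# listed in logging-standards.md.
FIELD_TO_OTEL = {
    "datetime": "timestamp",
    "severity": "severity_text",
    "message": "body",
    "process_name": "attributes.service.name",
    "container_id": "attributes.container.id",
}

def to_otel(record):
    """Rename a skill-style log record into OTel semantic-convention keys.

    Unknown keys (e.g. trace_id) pass through unchanged.
    """
    return {FIELD_TO_OTEL.get(key, key): value for key, value in record.items()}

otel = to_otel({"datetime": "2024-01-01T00:00:00+00:00",
                "severity": "ERROR",
                "message": "Database connection failed"})
```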
package/skills/logging-best-practices/skill.json
ADDED

```diff
@@ -0,0 +1,13 @@
+{
+  "name": "Logging Best Practices",
+  "description": "Standardizes structured logging across Backend, Frontend, Realtime, and Algorithmic domains with mandatory exception handling.",
+  "version": "1.0.0",
+  "author": "Antigravity",
+  "tags": [
+    "logging",
+    "observability",
+    "json",
+    "opentelemetry",
+    "quality"
+  ]
+}
```
package/skills/test-accompanied-development/SKILL.md
ADDED

```diff
@@ -0,0 +1,39 @@
+# Test-Accompanied Development (TAD)
+
+Enforce a "Test-Alongside" policy where every public method is accompanied by a corresponding unit test.
+
+## Purpose
+
+To ensure high code quality and maintainability by mandating that all public interfaces are verified by automated tests at the moment of creation.
+
+## Policy
+
+**Every new public method/function you write MUST be accompanied by at least one unit test.**
+
+## When to Use
+
+- Use this skill **every time** you are about to write a new public method or function.
+- This skill should be active during the coding phase of any feature or bug fix.
+
+## Instructions
+
+1. **Identify Public Exports**: When preparing to write a new class, module, or function, identify which methods will be public/exported.
+2. **Plan the Test**: Before or immediately after writing the method signature, plan the corresponding test cases (Happy Path, Edge Cases, Error Cases).
+3. **Write the Method**: Implement the public method.
+4. **Write the Tests**: Immediately write the unit tests for the method.
+   - Refer to the `test-generator` skill for best practices on how to structure these tests (AAA pattern, Mocking, etc.).
+   - Ensure tests are placed in the appropriate test directory of the project.
+5. **Verify**: Run the tests to ensure they pass before considering the method "done".
+
+## Rules
+
+- **No Public Method without Tests**: Do not consider a public method complete until its corresponding test file exists and passes.
+- **Refer to Test Generator**: Use the `test-generator` skill as the standard for test quality and structure.
+- **Traceability**: Mention the test file location when adding the public method.
+
+## Example Workflow
+
+1. **Agent**: "I am adding a `calculateTotal` method to the `InvoiceService`. I will also create `InvoiceService.test.ts` to verify it."
+2. **Agent**: [Writes `calculateTotal` in `InvoiceService.ts`]
+3. **Agent**: [Writes tests in `InvoiceService.test.ts` using `test-generator` patterns]
+4. **Agent**: "Method and tests are complete. Running tests now..."
```
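The TAD workflow above names TypeScript files; a minimal sketch of the same policy in Python might pair the method and its tests like this. `InvoiceService` and `calculate_total` are hypothetical ports of the names in the example workflow, not code from the package:

```python
import unittest

class InvoiceService:
    """Hypothetical service mirroring the InvoiceService from the workflow above."""

    def calculate_total(self, line_items):
        # Public method: per the TAD policy it ships with the tests below.
        if any(qty < 0 or price < 0 for qty, price in line_items):
            raise ValueError("negative quantity or price")
        return sum(qty * price for qty, price in line_items)

class TestInvoiceService(unittest.TestCase):
    """Happy path, edge case, and error case, as step 2 of the instructions asks."""

    def test_happy_path(self):
        self.assertEqual(InvoiceService().calculate_total([(2, 5.0), (1, 3.0)]), 13.0)

    def test_empty_invoice(self):
        self.assertEqual(InvoiceService().calculate_total([]), 0)

    def test_negative_rejected(self):
        with self.assertRaises(ValueError):
            InvoiceService().calculate_total([(-1, 5.0)])

# Run the accompanying tests (step 5: verify before the method is "done")
suite = unittest.TestLoader().loadTestsFromTestCase(TestInvoiceService)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```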
package/skills/test-accompanied-development/skill.json
ADDED

```diff
@@ -0,0 +1,12 @@
+{
+  "name": "Test-Accompanied Development",
+  "description": "Enforces writing unit tests for every new public method created by the agent.",
+  "version": "1.0.0",
+  "author": "Antigravity",
+  "tags": [
+    "testing",
+    "quality",
+    "policy",
+    "tdd"
+  ]
+}
```