codeharness 0.25.0 → 0.25.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,53 @@
# Node.js OTLP Auto-Instrumentation

## Packages to Install

These packages are loaded at runtime, so install them as regular dependencies (not `--save-dev`):

```bash
npm install --save \
  @opentelemetry/auto-instrumentations-node \
  @opentelemetry/exporter-metrics-otlp-http \
  @opentelemetry/exporter-logs-otlp-http \
  @opentelemetry/exporter-trace-otlp-http
```

## Start Script Modification

Add `--require @opentelemetry/auto-instrumentations-node/register` to the project's start script in `package.json`.

Before:
```json
"scripts": {
  "start": "node dist/index.js"
}
```

After:
```json
"scripts": {
  "start": "node --require @opentelemetry/auto-instrumentations-node/register dist/index.js"
}
```

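If editing `package.json` is not an option (for example, when a process manager owns the start command), the same hook can be injected through the `NODE_OPTIONS` environment variable. This is a sketch of that alternative, relying on Node.js's standard handling of `NODE_OPTIONS`:

```shell
# Inject the auto-instrumentation hook without editing package.json.
export NODE_OPTIONS="--require @opentelemetry/auto-instrumentations-node/register"
echo "NODE_OPTIONS=${NODE_OPTIONS}"
# node dist/index.js   # start as usual; Node picks the flag up from the environment
```

This is convenient in containers, where the variable can be set once in the image or deployment spec.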
## Environment Variables

Set these in the project's environment or `.env` file:

```
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_SERVICE_NAME={project_name}
OTEL_TRACES_EXPORTER=otlp
OTEL_METRICS_EXPORTER=otlp
OTEL_LOGS_EXPORTER=otlp
```

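As a quick pre-flight check, the endpoint can be resolved with the same fallback the configuration above assumes (`http://localhost:4318` as the local collector default):

```shell
# Resolve the OTLP endpoint, falling back to the local collector default.
OTEL_EXPORTER_OTLP_ENDPOINT="${OTEL_EXPORTER_OTLP_ENDPOINT:-http://localhost:4318}"
echo "exporting telemetry to ${OTEL_EXPORTER_OTLP_ENDPOINT}"
```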
## Verification

After starting the application, verify instrumentation:

```bash
# Check logs are flowing
curl 'localhost:9428/select/logsql/query?query=*&limit=5'

# Check metrics are flowing
curl 'localhost:8428/api/v1/query?query=up'
```
@@ -0,0 +1,51 @@
# Python OTLP Auto-Instrumentation

## Packages to Install

```bash
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install
```

## Start Script Modification

Wrap the application start command with `opentelemetry-instrument`.

Before:
```bash
python -m myapp
```

After:
```bash
opentelemetry-instrument python -m myapp
```

If using a framework runner (gunicorn, uvicorn):
```bash
opentelemetry-instrument gunicorn myapp:app
```

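The wrapper can also take its configuration as command-line flags instead of environment variables. A sketch using the standard `opentelemetry-instrument` options (`myapp` is a placeholder for the real module):

```bash
opentelemetry-instrument \
  --traces_exporter otlp \
  --metrics_exporter otlp \
  --service_name myapp \
  gunicorn myapp:app
```

Environment variables take effect the same way; flags are just handier for one-off local runs.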
## Environment Variables

Set these in the project's environment or `.env` file:

```
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_SERVICE_NAME={project_name}
OTEL_TRACES_EXPORTER=otlp
OTEL_METRICS_EXPORTER=otlp
OTEL_LOGS_EXPORTER=otlp
```

## Verification

After starting the application, verify instrumentation:

```bash
# Check logs are flowing
curl 'localhost:9428/select/logsql/query?query=*&limit=5'

# Check metrics are flowing
curl 'localhost:8428/api/v1/query?query=up'
```
@@ -0,0 +1,80 @@
# Rust OTLP Instrumentation

## Packages to Install

The setup code below uses the SDK crate, so add `opentelemetry_sdk` as well, and enable the HTTP export features on `opentelemetry-otlp`:

```bash
cargo add opentelemetry opentelemetry_sdk tracing-opentelemetry tracing-subscriber
cargo add opentelemetry-otlp --features http-proto,reqwest-blocking-client
```

Note that `tracing-opentelemetry` releases track specific `opentelemetry` versions, so keep the crate versions aligned.

+
9
+ ## Setup Code
10
+
11
+ Add the following to your `main.rs` to initialize the OTLP tracing pipeline:
12
+
13
+ ```rust
14
+ use opentelemetry::trace::TracerProvider;
15
+ use opentelemetry_otlp::WithExportConfig;
16
+ use tracing_subscriber::layer::SubscriberExt;
17
+ use tracing_subscriber::util::SubscriberInitExt;
18
+
19
+ fn init_tracing() {
20
+ let otlp_exporter = opentelemetry_otlp::SpanExporter::builder()
21
+ .with_http()
22
+ .with_endpoint(
23
+ std::env::var("OTEL_EXPORTER_OTLP_ENDPOINT")
24
+ .unwrap_or_else(|_| "http://localhost:4318".into()),
25
+ )
26
+ .build()
27
+ .expect("failed to create OTLP exporter");
28
+
29
+ let provider = opentelemetry::sdk::trace::TracerProvider::builder()
30
+ .with_batch_exporter(otlp_exporter)
31
+ .build();
32
+
33
+ let tracer = provider.tracer("app");
34
+ let otel_layer = tracing_opentelemetry::layer().with_tracer(tracer);
35
+
36
+ tracing_subscriber::registry()
37
+ .with(otel_layer)
38
+ .with(tracing_subscriber::fmt::layer())
39
+ .init();
40
+ }
41
+ ```
42
+
43
+ Call `init_tracing()` at the start of `main()`.
44
+
## Function Instrumentation

Use the `#[tracing::instrument]` attribute to trace individual functions:

```rust
#[tracing::instrument]
fn process_request(id: u64, payload: &str) -> Result<(), Error> {
    tracing::info!("processing request");
    // ...
    Ok(())
}
```

## Environment Variables

Set these in `.env.codeharness` or your environment:

```
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_SERVICE_NAME={project_name}
OTEL_TRACES_EXPORTER=otlp
OTEL_METRICS_EXPORTER=otlp
OTEL_LOGS_EXPORTER=otlp
```

## Verification

After starting the application, verify telemetry is being exported:

```bash
# Check logs are flowing
curl 'localhost:9428/select/logsql/query?query=*&limit=5'

# Check metrics are flowing
curl 'localhost:8428/api/v1/query?query=up'
```
@@ -0,0 +1,37 @@
You are an autonomous coding agent executing a sprint for the codeharness project.

## Your Mission

Run the `/harness-run` command to execute the next story in the sprint.

## Instructions

1. **Run `/harness-run`** — this is the sprint execution skill that:
   - Reads sprint-status.yaml at `{{sprintStatusPath}}` to find the next story
   - Picks the first story whose status is not `done` (handles `backlog`, `ready-for-dev`, `in-progress`, `review`, and `verified`)
   - Executes the appropriate BMAD workflow for the story's current status

2. **Follow all BMAD workflows** — the /harness-run skill handles this, but if prompted:
   - Use `/bmad-dev-story` for implementation
   - Use the code-review workflow for quality checks
   - Ensure tests pass and coverage meets targets

3. **Do not skip verification** — every story must pass its verification gates (tests, coverage, showboat proof) before being marked done.

## Verification Gates

After completing a story, run `codeharness verify --story <id>` to verify it.
If verification fails, fix the issues and re-verify. The story is not done
until verification passes.

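That fix-and-re-verify cycle can be sketched as a bounded retry loop (illustrative only: the attempt limit is arbitrary, and the actual fixing happens between attempts):

```bash
# Illustrative retry loop around the verification gate.
STORY_ID="${1:?usage: verify-loop.sh <story-id>}"

for attempt in 1 2 3; do
  if codeharness verify --story "$STORY_ID"; then
    echo "story ${STORY_ID} verified on attempt ${attempt}"
    exit 0
  fi
  echo "verification failed (attempt ${attempt}); fix the reported issues, then retry" >&2
done
exit 1
```
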
## Project Context

- **Project directory:** `{{projectDir}}`
- **Sprint status:** `{{sprintStatusPath}}`

## Important

- Do NOT implement your own task-picking logic. Let /harness-run handle it.
- Do NOT write to sprint-state.json or sprint-status.yaml. The orchestrator owns all status writes.
- Focus on one story per session. Ralph will spawn a new session for the next story.