te.js 2.1.0 → 2.1.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +197 -196
- package/auto-docs/analysis/handler-analyzer.js +58 -58
- package/auto-docs/analysis/source-resolver.js +101 -101
- package/auto-docs/constants.js +37 -37
- package/auto-docs/docs-llm/index.js +7 -7
- package/auto-docs/docs-llm/prompts.js +222 -222
- package/auto-docs/docs-llm/provider.js +132 -132
- package/auto-docs/index.js +146 -146
- package/auto-docs/openapi/endpoint-processor.js +277 -277
- package/auto-docs/openapi/generator.js +107 -107
- package/auto-docs/openapi/level3.js +131 -131
- package/auto-docs/openapi/spec-builders.js +244 -244
- package/auto-docs/ui/docs-ui.js +186 -186
- package/auto-docs/utils/logger.js +17 -17
- package/auto-docs/utils/strip-usage.js +10 -10
- package/cli/docs-command.js +315 -315
- package/cli/fly-command.js +71 -71
- package/cli/index.js +56 -56
- package/cors/index.js +71 -0
- package/database/index.js +165 -165
- package/database/mongodb.js +146 -146
- package/database/redis.js +201 -201
- package/docs/README.md +36 -36
- package/docs/ammo.md +362 -362
- package/docs/api-reference.md +490 -490
- package/docs/auto-docs.md +216 -216
- package/docs/cli.md +152 -152
- package/docs/configuration.md +275 -275
- package/docs/database.md +390 -390
- package/docs/error-handling.md +438 -438
- package/docs/file-uploads.md +333 -333
- package/docs/getting-started.md +214 -214
- package/docs/middleware.md +355 -355
- package/docs/rate-limiting.md +393 -393
- package/docs/routing.md +302 -302
- package/lib/llm/client.js +73 -0
- package/lib/llm/index.js +7 -0
- package/lib/llm/parse.js +89 -0
- package/package.json +64 -62
- package/rate-limit/algorithms/fixed-window.js +141 -141
- package/rate-limit/algorithms/sliding-window.js +147 -147
- package/rate-limit/algorithms/token-bucket.js +115 -115
- package/rate-limit/base.js +165 -165
- package/rate-limit/index.js +147 -147
- package/rate-limit/storage/base.js +104 -104
- package/rate-limit/storage/memory.js +101 -101
- package/rate-limit/storage/redis.js +88 -88
- package/server/ammo/body-parser.js +220 -220
- package/server/ammo/dispatch-helper.js +103 -103
- package/server/ammo/enhancer.js +57 -57
- package/server/ammo.js +454 -415
- package/server/endpoint.js +97 -74
- package/server/error.js +9 -9
- package/server/errors/code-context.js +125 -125
- package/server/errors/llm-error-service.js +140 -140
- package/server/files/helper.js +33 -33
- package/server/files/uploader.js +143 -143
- package/server/handler.js +158 -119
- package/server/target.js +185 -175
- package/server/targets/middleware-validator.js +22 -22
- package/server/targets/path-validator.js +21 -21
- package/server/targets/registry.js +160 -160
- package/server/targets/shoot-validator.js +21 -21
- package/te.js +428 -402
- package/utils/auto-register.js +17 -17
- package/utils/configuration.js +64 -64
- package/utils/errors-llm-config.js +84 -84
- package/utils/request-logger.js +43 -43
- package/utils/status-codes.js +82 -82
- package/utils/tejas-entrypoint-html.js +18 -18
package/docs/auto-docs.md
CHANGED
# Auto-Documentation

Tejas can automatically generate an OpenAPI 3.0 specification from your registered targets. An LLM analyzes your handler source code to produce accurate summaries, request/response schemas, and descriptions — then you can serve interactive API docs with a single line of code.

## Quick Start

```bash
# Generate an OpenAPI spec interactively
npx tejas generate:docs
```

```javascript
// Serve the generated docs in your app
app.serveDocs({ specPath: './openapi.json' });
app.takeoff();
```

Visit `http://localhost:1403/docs` to see the interactive Scalar API reference.

## How It Works

```
Target files → Handler analysis → LLM enhancement → OpenAPI 3.0 spec → Scalar UI
```

1. **Handler analysis** — Tejas reads each handler's source code and detects which HTTP methods it handles (`ammo.GET`, `ammo.POST`, etc.). Handlers without method checks are treated as accepting all methods.
2. **LLM enhancement** — The handler source (and optionally its dependencies) is sent to an LLM, which generates summaries, parameter descriptions, request/response schemas, and tags.
3. **Spec generation** — Results are assembled into a valid OpenAPI 3.0 document.
4. **Optional level-3 post-processing** — Tags are reordered by importance and an `API_OVERVIEW.md` page is generated.
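The method detection in step 1 can be pictured roughly like this. This is a simplified illustration only, not the actual analyzer (which lives in `auto-docs/analysis/handler-analyzer.js`); `detectMethods` is a hypothetical name:

```javascript
// Simplified sketch: scan the handler source for ammo.<METHOD> checks.
const HTTP_METHODS = ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'HEAD', 'OPTIONS'];

function detectMethods(handlerSource) {
  const found = HTTP_METHODS.filter((method) =>
    new RegExp(`\\bammo\\.${method}\\b`).test(handlerSource)
  );
  // No explicit checks: the handler is documented as accepting all methods.
  return found.length > 0 ? found : [...HTTP_METHODS];
}

const source = `(ammo) => {
  if (ammo.GET) return ammo.fire(userService.list());
  if (ammo.POST) return ammo.fire(201, userService.create(ammo.payload));
  ammo.notAllowed();
}`;

console.log(detectMethods(source)); // → [ 'GET', 'POST' ]
```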

## Enhancement Levels

The `level` option controls how much context the LLM receives and how much work it does:

| Level | Name | Context Sent to LLM | Output |
|-------|------|---------------------|--------|
| **1** | Moderate | Handler source code only (~hundreds of tokens per endpoint) | Summaries, schemas, tags |
| **2** | High | Handler + full dependency chain from imports (~thousands of tokens per endpoint) | More accurate schemas and descriptions |
| **3** | Comprehensive | Same as level 2, plus post-processing | Everything from level 2, plus: reordered tags by importance, `API_OVERVIEW.md` page |

Higher levels produce better documentation but use more LLM tokens.
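For example, a minimal `tejas.config.json` fragment opting into the full level-3 pipeline (both keys are described in the Configuration Reference):

```json
{
  "docs": {
    "level": 3,
    "overviewPath": "./API_OVERVIEW.md"
  }
}
```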

## Endpoint Metadata

You can provide explicit metadata when registering endpoints. This metadata is used directly in the OpenAPI spec and takes priority over LLM-generated content:

```javascript
const users = new Target('/users');

users.register('/', {
  summary: 'User operations',
  description: 'Create and list users',
  methods: ['GET', 'POST'],
  request: {
    name: { type: 'string', required: true },
    email: { type: 'string', required: true }
  },
  response: {
    200: { description: 'Success' },
    201: { description: 'User created' },
    400: { description: 'Validation error' }
  }
}, (ammo) => {
  if (ammo.GET) return ammo.fire(userService.list());
  if (ammo.POST) return ammo.fire(201, userService.create(ammo.payload));
  ammo.notAllowed();
});
```

The metadata object is optional. When omitted, the LLM infers everything from the handler source.
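The priority rule amounts to a shallow merge where your explicit fields overwrite whatever the LLM inferred. A hypothetical sketch (`mergeEndpointDocs` is not a real Tejas API):

```javascript
// Fields declared in the metadata object win over LLM-generated fields.
function mergeEndpointDocs(llmGenerated, explicitMetadata = {}) {
  return { ...llmGenerated, ...explicitMetadata };
}

const fromLLM = { summary: 'Inferred from source', tags: ['users'] };
const explicit = { summary: 'User operations' };

console.log(mergeEndpointDocs(fromLLM, explicit));
// → { summary: 'User operations', tags: [ 'users' ] }
```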

## LLM Provider Configuration

Tejas uses an OpenAI-compatible API for LLM calls. This works with OpenAI, OpenRouter, Ollama, and any provider that implements the OpenAI chat completions endpoint.

### Via `tejas.config.json`

```json
{
  "docs": {
    "llm": {
      "baseURL": "https://api.openai.com/v1",
      "apiKey": "sk-...",
      "model": "gpt-4o-mini"
    }
  }
}
```

### Via Environment Variables

```bash
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4o-mini
```

### Using Ollama (Local)

```json
{
  "docs": {
    "llm": {
      "baseURL": "http://localhost:11434/v1",
      "model": "llama3"
    }
  }
}
```

No API key is required for local providers.

## Configuration Reference

All options live under the `docs` key in `tejas.config.json`:

```json
{
  "docs": {
    "dirTargets": "targets",
    "output": "./openapi.json",
    "title": "My API",
    "version": "1.0.0",
    "description": "API description",
    "level": 1,
    "llm": {
      "baseURL": "https://api.openai.com/v1",
      "apiKey": "sk-...",
      "model": "gpt-4o-mini"
    },
    "overviewPath": "./API_OVERVIEW.md",
    "productionBranch": "main"
  }
}
```

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `dirTargets` | string | `"targets"` | Directory containing `.target.js` files |
| `output` | string | `"./openapi.json"` | Output file path for the generated spec |
| `title` | string | `"API"` | API title in the OpenAPI `info` block |
| `version` | string | `"1.0.0"` | API version in the OpenAPI `info` block |
| `description` | string | `""` | API description |
| `level` | number | `1` | Enhancement level (1–3) |
| `llm` | object | — | LLM provider configuration (see above) |
| `overviewPath` | string | `"./API_OVERVIEW.md"` | Path for the generated overview page (level 3 only) |
| `productionBranch` | string | `"main"` | Branch that triggers `docs:on-push` |

## Serving API Docs

Use `serveDocs()` to serve an interactive [Scalar](https://scalar.com) API reference UI:

```javascript
import Tejas from 'te.js';

const app = new Tejas();

app.serveDocs({ specPath: './openapi.json' });

app.takeoff();
```

This registers two routes:

| Route | Description |
|-------|-------------|
| `GET /docs` | Interactive Scalar API reference UI |
| `GET /docs/openapi.json` | Raw OpenAPI spec JSON |

### serveDocs Options

```javascript
app.serveDocs({
  specPath: './openapi.json',  // Path to the spec file (relative to cwd)
  scalarConfig: {              // Scalar UI configuration
    layout: 'modern',          // 'modern' or 'classic'
    theme: 'default',
    showSidebar: true,
    hideTestRequestButton: false
  }
});
```

See the [Scalar configuration reference](https://scalar.com/products/api-references/configuration) for all available UI options.

## CLI Commands

| Command | Description |
|---------|-------------|
| `tejas generate:docs` | Interactive OpenAPI generation |
| `tejas generate:docs --ci` | Non-interactive mode (for CI/CD) |
| `tejas docs:on-push` | Generate docs when pushing to the production branch |

See the [CLI Reference](./cli.md) for full details.

## Workflow Example

A typical workflow for maintaining API docs:

```bash
# 1. Generate docs during development
npx tejas generate:docs

# 2. Serve docs in your app
# (add app.serveDocs({ specPath: './openapi.json' }) to your entry file)

# 3. Auto-regenerate on push to main
# (add tejas docs:on-push to your pre-push hook)
```
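Step 3 can be wired up as a plain git hook. A sketch of a `.git/hooks/pre-push` file (adapt to your hook manager; per the CLI table, `docs:on-push` only regenerates when the push targets the configured production branch):

```bash
#!/bin/sh
# .git/hooks/pre-push — regenerate docs when pushing to the production branch
npx tejas docs:on-push
```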

## Next Steps

- [CLI Reference](./cli.md) — Detailed CLI command documentation
- [Configuration](./configuration.md) — Full framework configuration reference
- [Routing](./routing.md) — Learn about endpoint metadata