heymark 1.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +220 -0
- package/package.json +52 -0
- package/rules/ai-behavior.md +87 -0
- package/rules/code-conventions.md +98 -0
- package/rules/readme-writing.md +69 -0
- package/rules/research-workflow.md +79 -0
- package/rules/rule-writing.md +71 -0
- package/scripts/lib/config.js +62 -0
- package/scripts/lib/parser.js +82 -0
- package/scripts/lib/repo.js +76 -0
- package/scripts/sync.js +237 -0
- package/scripts/tools/claude.js +42 -0
- package/scripts/tools/codex.js +24 -0
- package/scripts/tools/copilot.js +48 -0
- package/scripts/tools/cursor.js +41 -0
package/README.md
ADDED
@@ -0,0 +1,220 @@
# Heymark

A system that centralizes the conventions for AI coding tools and automatically converts them into each tool's native format.

1. [Overview](#overview)
2. [Features](#features)
3. [Tech Stack](#tech-stack)
4. [Publishing](#publishing)
5. [Integration](#integration)
6. [Getting Started](#getting-started)
7. [Tool Support](#tool-support)

## Overview

Writing separate rule files for each AI tool in every project fragments maintenance.
Following the Single Source of Truth principle, this system takes rules written in one place and converts them automatically into the formats of multiple AI tools.
Write a rule once and use it immediately in Cursor, Claude Code, GitHub Copilot, OpenAI Codex, and more.

## Features

- **Single-source management**: manage the rules for every AI tool in one set of Markdown files
- **Automatic format conversion**: convert into the native formats of four AI tools (YAML frontmatter, AGENTS.md, etc.)
- **Selective conversion**: convert for specific tools only
- **NPM package distribution**: public distribution via the NPM registry for easy installation and version management
- **Plugin architecture**: support new tools simply by adding a converter module

## Tech Stack

- **Runtime**: Node.js
- **Language**: JavaScript
- **Core**: File system API, YAML frontmatter parsing

## Publishing

Guide for package maintainers (regular users: see [Integration](#integration)).

### Initial Setup (once only)

```bash
# Log in to NPM
npm login
# Username: your-npm-username
# Email: your-email@example.com
```

### Release Process

```bash
# 1. Test after editing rules
node scripts/sync.js --preview

# 2. Bump the version (creates a Git tag automatically)
npm version patch  # or minor, major

# 3. Push to GitHub
git push --follow-tags  # push the commit and its tag together

# 4. Publish to NPM
npm publish
```

**Versioning:**

- `patch` (1.0.0 → 1.0.1): bug fixes, typo fixes
- `minor` (1.0.0 → 1.1.0): new rules, feature improvements
- `major` (1.0.0 → 2.0.0): breaking changes

**About Git tags:**

- The `npm version` command creates a Git tag automatically
- `git push --follow-tags`: pushes regular commits and their tags together (recommended)
- Or `git push && git push --tags`: pushes commits, then all tags

## Integration

### Installation

```bash
# Install the package
npm install --save-dev heymark
```

### Initial Setup (rule source = remote GitHub repository)

Rules are read from a **remote GitHub repository (public or private)**.
This setup is needed only once.

```bash
# Configure the rule source (creates .heymark.json in the project root)
npx heymark init <GitHub-repository-URL>
```

Examples:

```bash
# HTTPS (private repos require Git credentials/token)
npx heymark init https://github.com/org/my-rules.git

# SSH (recommended for private repositories)
npx heymark init git@github.com:org/my-rules.git

# When the .md files live in a subdirectory of the repository
npx heymark init https://github.com/org/my-rules.git --dir rules --branch main
```

The configuration is stored in `.heymark.json` in the project root. Subsequent `heymark` runs clone/pull that repository and convert its `.md` files.

### Usage with npx

Run directly without installing, or with npx after installing.
Rules are fetched from the **remote GitHub repository**, and the Markdown files inside it are converted into each AI tool's format within the current project.

```bash
# Fetch from the rule repository configured in .heymark.json and convert to all
# tool formats (previously generated files are deleted, then regenerated)
npx heymark

# Use a different external repository just this once (ignores .heymark.json)
npx heymark --source https://github.com/org/other-rules.git

# Convert for specific tools only
npx heymark -t cursor,claude

# Preview (show conversion results without writing files)
npx heymark --preview

# Delete previously generated tool files
npx heymark --clean

# Help
npx heymark --help
```

**Run without installing:**

```bash
# Run the latest version without installing
npx heymark@latest
```

**Updating the package:**

```bash
npm update heymark
npx heymark
```

**How it works:**

- By default, previously generated files are deleted before new ones are created
- This keeps the project cleanly in sync, with no stale files from earlier versions

## Getting Started

How to write rules and test them locally.

### Writing Rules

Write Markdown files in the rule source directory (the external repository). Define metadata with YAML frontmatter:

```markdown
---
description: "AI assistant behavior guidelines"
globs: "**/*.ts,**/*.tsx"
alwaysApply: true
---

# Rule Title

Rule content...
```

### Local Testing

```bash
# Configure the rule source (once, GitHub repository URL)
node scripts/sync.js init https://github.com/org/my-rules.git

# Convert to all tool formats
node scripts/sync.js

# Use a different repository just this once
node scripts/sync.js --source https://github.com/org/other-rules.git

# Convert for specific tools only
node scripts/sync.js -t cursor,claude

# Preview (no files written)
node scripts/sync.js --preview

# Delete generated files
node scripts/sync.js --clean
```

## Tool Support

### Supported Tools

| Tool           | Output Format               | Key Features                    |
| :------------- | :-------------------------- | :------------------------------ |
| Cursor         | `.cursor/rules/*.mdc`       | YAML frontmatter, glob matching |
| Claude Code    | `.claude/skills/*/SKILL.md` | Skill directory structure       |
| GitHub Copilot | `.github/instructions/*.md` | applyTo pattern matching        |
| OpenAI Codex   | `AGENTS.md`                 | Single merged file (Agent Rules v1) |

### Adding New Tools

Converter modules are recognized automatically when added. Required export interface:

```javascript
module.exports = {
  name: "Tool Name",
  output: "output/path/pattern",
  generate(rules, projectRoot) {
    /* ... */
  },
  clean(ruleNames, projectRoot) {
    /* ... */
  },
};
```
package/package.json
ADDED
@@ -0,0 +1,52 @@
{
  "name": "heymark",
  "version": "1.0.1",
  "description": "Centralized AI coding tool conventions with auto-conversion to multiple formats",
  "main": "scripts/sync.js",
  "bin": {
    "heymark": "scripts/sync.js"
  },
  "scripts": {
    "sync": "node scripts/sync.js",
    "init": "node scripts/sync.js init",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "ai",
    "coding",
    "conventions",
    "cursor",
    "claude",
    "copilot",
    "codex",
    "rules"
  ],
  "author": {
    "name": "i2na",
    "email": "yena.e121@gmail.com",
    "url": "https://github.com/i2na"
  },
  "contributors": [
    {
      "name": "yezzero",
      "email": "yeyeonggim06@gmail.com",
      "url": "https://github.com/yezzero"
    }
  ],
  "license": "MIT",
  "engines": {
    "node": ">=14.0.0"
  },
  "files": [
    "scripts/",
    "rules/",
    "README.md"
  ],
  "repository": {
    "type": "git",
    "url": "git+https://github.com/MosslandOpenDevs/heymark.git"
  },
  "publishConfig": {
    "access": "public"
  }
}
package/rules/ai-behavior.md
ADDED

@@ -0,0 +1,87 @@
---
description: "AI assistant behavior: surgical code changes, communication style, and commenting philosophy"
alwaysApply: true
---

# AI Behavior

## Code Modification

- Modify **only** lines directly causing the problem; change the **minimum** required
- **Never** refactor, reorganize, or reformat unrelated code
- Maintain original file structure, patterns, and coding style
- Implement only when **100% certain**; ask clarifying questions if insufficient context
- Present multiple options with trade-offs when several valid solutions exist
- Provide **full, exact code** for all changed lines
- Use `// ... existing code` only for truly unchanged sections
- Never use placeholders (`// TODO: implement`) or omit error handling
- Favor clarity over cleverness; use early returns for error conditions

## Commenting

Code must be self-documenting. Add comments **only** with approved tags:

| Tag             | Usage                                        |
| :-------------- | :------------------------------------------- |
| `@note`         | Critical context or non-obvious behavior     |
| `@todo(owner):` | Future work with assigned owner (mandatory)  |
| `@wip`          | Temporary code, remove before merge          |
| `@deprecated`   | Marked for removal, must specify alternative |

- English only, lowercase after tags, one line max
- **Prohibited:** decorative separators (`// ===`, `// ---`), obvious statements, chatty explanations, uncommented dead code

## Communication

| Context                 | Language |
| :---------------------- | :------- |
| User-facing             | Korean   |
| Code, comments, commits | English  |

- **Tone:** Direct, concise. No filler, no hedging ("maybe", "possibly"), no pleasantries
- **Structure:** Root cause (1-2 sentences) → Solution (code) → Reasoning (only if non-obvious)

**Bad:** "안녕하세요! 도와드리겠습니다. 이 문제는 여러 원인이 있을 수 있는데..." ("Hello! Happy to help. This problem could have several causes...")
**Good:** "`userId`가 `number | undefined`인데 `string`을 기대합니다. 타입 가드를 추가합니다." ("`userId` is `number | undefined` but `string` is expected. Adding a type guard.")

## Anti-Defensive Coding

Write for the **expected successful flow**. Trust TypeScript compiler-enforced types. Let errors propagate naturally unless catching is required.

**Avoid:**

- Redundant null checks: `if (data && data.user && data.user.name)`
- Unnecessary fallbacks: `const name = user?.name || "Unknown"`
- Blanket try-catch around every function
- Runtime type checks for TS-enforced types: `if (typeof id === 'number')`

**Exceptions** — defensive code IS appropriate at:

- Public API boundaries (user input, external API responses)
- Security or financial operations
- Known unreliable sources (legacy systems, third-party APIs)

```typescript
// ❌ Over-defensive
function getUserEmail(userId: number): string {
  try {
    if (!userId || typeof userId !== "number") return "";
    const user = database.getUser(userId);
    if (!user || !user.email || typeof user.email !== "string") return "";
    return user.email || "no-email@example.com";
  } catch {
    return "";
  }
}

// ✅ Happy path
function getUserEmail(userId: number): string {
  return database.getUser(userId).email;
}
```

## Decision Priority

When rules conflict: **Security** > **User requirements** > **This guide** > **Language conventions** > **Preference**

When uncertain: ask clarifying questions, state assumptions explicitly, admit knowledge gaps.
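As a small illustration of the approved tag format above (the code and the owner name are hypothetical):

```typescript
// @note cache is keyed by user id, not session id
const userEmailCache = new Map<number, string>();

// @todo(i2na): invalidate entries when the user's role changes
function cacheUserEmail(userId: number, email: string): void {
  userEmailCache.set(userId, email);
}
```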
package/rules/code-conventions.md
ADDED

@@ -0,0 +1,98 @@
---
description: "TypeScript and React naming conventions, file structure, and type patterns"
globs: "**/*.ts,**/*.tsx,**/*.js,**/*.jsx"
alwaysApply: true
---

# TypeScript & React Conventions

## File Naming

| Type              | Convention               | Example                |
| :---------------- | :----------------------- | :--------------------- |
| Utility / Service | `kebab-case.ts`          | `user-service.ts`      |
| React Component   | `PascalCase.tsx`         | `DashboardLayout.tsx`  |
| CSS Module        | `PascalCase.module.scss` | `Button.module.scss`   |
| Test              | `*.test.ts(x)`           | `user-service.test.ts` |
| Type Definition   | `*.types.ts`             | `api.types.ts`         |
| Constants         | `*.constants.ts`         | `routes.constants.ts`  |

## Code Naming

| Element        | Convention             | Example                  |
| :------------- | :--------------------- | :----------------------- |
| Variable       | `camelCase`            | `userName`, `isActive`   |
| Function       | `camelCase`            | `getUserData()`          |
| Constant       | `UPPER_SNAKE_CASE`     | `API_BASE_URL`           |
| Private member | `_camelCase`           | `_internalCache`         |
| Boolean        | `is/has/should` prefix | `isLoading`, `hasAccess` |
| Component      | `PascalCase`           | `UserCard`               |
| Hook           | `use` prefix           | `useAuth()`              |
| HOC            | `with` prefix          | `withAuth()`             |
| Event handler  | `handle` prefix        | `handleSubmit()`         |
| Render helper  | `render` prefix        | `renderHeader()`         |

## Types & Interfaces

| Type       | Convention              | Example                     |
| :--------- | :---------------------- | :-------------------------- |
| Interface  | `I` + PascalCase        | `IUser`, `IApiResponse`     |
| Type alias | `T` + PascalCase        | `TConfig`, `TRequestBody`   |
| Enum       | `E` + PascalCase        | `EUserRole`, `EStatus`      |
| Props      | ComponentName + `Props` | `ButtonProps`, `ModalProps` |
| Generic    | Single uppercase        | `T`, `K`, `V`               |

Usage: **Interface** → object shapes, **Type alias** → unions/intersections, **Enum** → fixed value sets (string values in `UPPER_CASE`)

```typescript
interface IUser {
  id: number;
  email: string;
  role: EUserRole;
}

type TApiResponse<T> = TSuccessResponse<T> | TErrorResponse;

enum EUserRole {
  Admin = "ADMIN",
  User = "USER",
  Guest = "GUEST",
}
```

- Avoid `any`; use `unknown` or generics
- Explicit return types for public API functions
- Use type guards for runtime narrowing

## File Structure

Import order (separate each group with a blank line):

1. React / framework (`react`, `next`)
2. External libraries (`axios`, `lodash`)
3. Internal aliases (`@/components`, `@/hooks`)
4. Relative imports (`./`, `../`)
5. Type-only imports (`import type`)
6. Styles (`.module.scss`)

Within-file order: **Imports → Types → Constants → Private helpers → Public exports**

## Component Pattern

```typescript
// Named function export (preferred over React.FC)
export function UserCard({ userId }: UserCardProps) {
  // 1. hooks
  // 2. state (descriptive names: isLoading, userData — not loading, data)
  // 3. effects (all deps in dependency array)
  // 4. handlers (handleXxx)
  // 5. render helpers (renderXxx)
  // 6. return JSX
}
```

## Formatting

- Max **100 characters** per line
- Prefer destructuring: `const { email, role } = user`
- Optional chaining `?.` max 2 levels deep
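A short runnable sketch pulling the naming and type conventions above together (all names and the response shape are hypothetical):

```typescript
enum EUserRole {
  Admin = "ADMIN",
  User = "USER",
}

interface IUser {
  id: number;
  email: string;
  role: EUserRole;
}

// Type alias for a union, per the usage guidance above.
type TApiResponse<T> = { ok: true; data: T } | { ok: false; error: string };

// Type guard for runtime narrowing, instead of scattered inline checks.
function isSuccess<T>(res: TApiResponse<T>): res is { ok: true; data: T } {
  return res.ok;
}

function getUserEmail(res: TApiResponse<IUser>): string | null {
  return isSuccess(res) ? res.data.email : null;
}
```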
package/rules/readme-writing.md
ADDED

@@ -0,0 +1,69 @@
---
description: "README document writing guide: structure, style, and format conventions"
globs: "README.md"
alwaysApply: false
---

# README Writing Guide

## Structure

A README follows this order:

1. **Title** — a simple English name representing the project
2. **Intro** — a 1-2 line project introduction in Korean
3. **Table of Contents** — a numbered Markdown list
4. **Overview** — project background and core value (3-5 sentences)
5. **Features** — core features only (no abstract descriptions)
6. **Tech Stack** — core technologies only (omit minor libraries)
7. **Getting Started** — installation and run steps (minimal)

## Format

### Language

- Titles and headings: **English**
- Body text: **Korean**
- Code and commands: English

### Style

- No emojis
- No verbose explanations; concise, formal tone
- Do not mention file names, variable names, or paths that change easily
- Minimize HTML tags

### Table of Contents

```markdown
1. [Overview](#overview)
2. [Features](#features)
3. [Tech Stack](#tech-stack)
4. [Getting Started](#getting-started)
```

### Features

- List only genuinely **distinguishable, representative features**
- One-line description per feature
- No abstract phrasing: ~~"powerful performance"~~, ~~"innovative architecture"~~

### Tech Stack

- Frameworks, languages, and core infrastructure only
- Exclude utility libraries and development tools
- Table or simple list

### Getting Started

- Prerequisites (only if needed)
- Install commands (code block)
- Run commands (code block)

## Anti-Patterns

- Verbose READMEs with seven or more sections
- Badges, emojis, decorative elements
- Abstract feature descriptions
- Mentions of frequently changing file/variable names
- Unnecessary Contributing, License, or Changelog sections
package/rules/research-workflow.md
ADDED

@@ -0,0 +1,79 @@
---
description: "Technical research workflow for generating in-depth reports with web search and analysis"
globs: ""
alwaysApply: false
---

# Research Workflow

Activate when user explicitly requests research ("research X", "investigate Y", "compare Z"). Not for simple coding questions, quick fact-checks, debugging, or routine implementation.

## Principles

- Base conclusions on **official docs, technical blogs, papers, whitepapers**
- **Never** speculate or present assumptions as facts
- Cite all sources with direct URLs
- No emojis in research documents
- Explain **how** things work, not just features; identify limitations and edge cases
- Use clear technical language; avoid marketing language ("blazingly fast", "revolutionary")

## Process

### Phase 1: Multi-Angle Search (5+ queries)

1. Official documentation (include version numbers and year)
2. Technical comparisons and benchmarks
3. Implementation guides and best practices
4. Known issues and limitations
5. Alternative solutions

Start broad, narrow based on findings.

### Phase 2: Deep Extraction

Fetch full content of 3-5 critical URLs:

- API specs, technical references
- Architecture and system design docs
- Benchmark data, performance analyses
- Security advisories

### Phase 3: Synthesis

- Identify patterns and contradictions across sources
- Evaluate trade-offs between approaches
- Assess feasibility for the specific use case
- Form architectural recommendations

## Report Format

```
# [Topic]

**Date:** YYYY-MM-DD
**Scope:** [Brief description]
**Key Finding:** [One sentence]

## Executive Summary
3-5 sentences: what, why, key findings, constraints.

## [Thematic Sections]
Organize by theme, not by source:
- Technical Feasibility
- Architecture Considerations
- Performance Characteristics
- Alternative Approaches

## Conclusion
Recommendations, next steps, risks/unknowns.

## Sources
1. [Title](URL) - Description
```

## Output

- **Filename:** descriptive English, kebab-case (`redis-vs-memcached-comparison.md`)
- **Location:** `/research/`, `/docs/research/`, or project root
- **Format:** Markdown with code blocks, mermaid diagrams, comparison tables
- **Language:** English, present tense, active voice, define acronyms on first use
- Include minimal runnable code examples where relevant
package/rules/rule-writing.md
ADDED

@@ -0,0 +1,71 @@
---
description: "Guide for creating rule documents: structure, token efficiency, and writing principles"
globs: "rules/*.md"
alwaysApply: false
---

# Rule Writing Guide

Rules in `rules/` are consumed by AI coding assistants. Every token loads per request — **brevity is critical**.

## Frontmatter

```yaml
---
description: "One-line summary for AI relevance judgment"
globs: "**/*.ts,**/*.tsx" # File patterns for auto-attachment (empty if N/A)
alwaysApply: true # true = always loaded, false = conditional
---
```

- `description`: the summary the AI consults when deciding whether to auto-load the rule
- `globs`: auto-applied when working on files matching these patterns
- `alwaysApply`: whether the rule is always loaded

## Structure

1. **Title** (`# Rule Name`) — short, descriptive
2. **Scope** (optional) — one sentence on when this rule activates
3. **Sections** — organized by theme
4. No redundant sections (stated rules should not be restated as checklists)

## Writing Principles

### Token Efficiency

- Every line must be an **actionable instruction**
- Remove meta-sentences: ~~"This rule defines how the AI should..."~~
- Remove knowledge AI already possesses
- **Tables** for mappings (most token-efficient format)
- **Bullet lists** for constraints, not paragraphs
- One code example per concept max; omit if a table already conveys it

### Clarity

- **Imperative voice:** "Use camelCase" not "You should use camelCase"
- **Bold** key terms and constraints
- Concrete examples over abstract descriptions
- No hedging: use "always", "never", "must" — not "try to", "consider"

### Completeness

- Cover all conventions without redundancy
- Include ✅/❌ examples only when the rule is counterintuitive
- Add inline context only when the rule needs explanation

## Anti-Patterns

- Chatty introductions: ~~"Welcome! This document will help you..."~~
- Redundant checklists restating earlier rules
- Over-documenting _why_ a rule exists (unless non-obvious)
- Emojis, excessive separators, decorative formatting
- Stale references to specific file names or paths that change frequently

## Quality Test

A well-written rule document:

- Can be followed by any AI without ambiguity
- Contains **zero** filler sentences
- Uses ≤50% tokens of a verbose equivalent
- Handles edge cases inline, not in separate sections
package/scripts/lib/config.js
ADDED

@@ -0,0 +1,62 @@
"use strict";

const fs = require("fs");
const path = require("path");

const CONFIG_FILENAME = ".heymark.json";

/**
 * rulesSource: GitHub repository URL (https://github.com/org/repo or git@github.com:org/repo.git)
 * branch: branch name (default "main")
 * rulesSourceDir: subdirectory inside the repository containing the .md files (default "" = root)
 * @typedef {{ rulesSource: string, branch?: string, rulesSourceDir?: string }} RuleBookConfig
 */

/**
 * Reads .heymark.json from the project root.
 * @param {string} projectRoot - project root (usually process.cwd())
 * @returns {RuleBookConfig | null}
 */
function loadConfig(projectRoot) {
  const configPath = path.join(projectRoot, CONFIG_FILENAME);
  if (!fs.existsSync(configPath)) return null;

  try {
    const raw = fs.readFileSync(configPath, "utf8");
    const data = JSON.parse(raw);
    if (!data || typeof data.rulesSource !== "string" || !data.rulesSource.trim()) {
      return null;
    }
    return {
      rulesSource: data.rulesSource.trim(),
      branch: typeof data.branch === "string" && data.branch.trim() ? data.branch.trim() : "main",
      rulesSourceDir: typeof data.rulesSourceDir === "string" ? data.rulesSourceDir.trim() : "",
    };
  } catch {
    return null;
  }
}

/**
 * Creates the initial config file (using a remote GitHub repository URL).
 * @param {string} projectRoot
 * @param {RuleBookConfig} config - { rulesSource: repoUrl, branch?, rulesSourceDir? }
 */
function writeConfig(projectRoot, config) {
  const configPath = path.join(projectRoot, CONFIG_FILENAME);
  const toWrite = {
    rulesSource: config.rulesSource,
    branch: config.branch || "main",
  };
  if (config.rulesSourceDir) {
    toWrite.rulesSourceDir = config.rulesSourceDir;
  }
  fs.writeFileSync(configPath, JSON.stringify(toWrite, null, 2), "utf8");
  return configPath;
}

module.exports = {
  CONFIG_FILENAME,
  loadConfig,
  writeConfig,
};
package/scripts/lib/parser.js
ADDED

@@ -0,0 +1,82 @@
"use strict";

const fs = require("fs");
const path = require("path");

const GENERATED_MARKER = "<!-- @generated by sync-rules - DO NOT EDIT -->";

function parseFrontmatter(content) {
  const match = content.match(/^---\r?\n([\s\S]+?)\r?\n---\r?\n?([\s\S]*)$/);
  if (!match) return { metadata: {}, body: content.trim() };

  const metadata = {};
  match[1].split(/\r?\n/).forEach((line) => {
    const idx = line.indexOf(":");
    if (idx === -1) return;
    const key = line.slice(0, idx).trim();
    let value = line.slice(idx + 1).trim();
    if (
      (value.startsWith('"') && value.endsWith('"')) ||
      (value.startsWith("'") && value.endsWith("'"))
    ) {
      value = value.slice(1, -1);
    }
    if (value === "true") value = true;
    else if (value === "false") value = false;
    metadata[key] = value;
  });

  return { metadata, body: match[2].trim() };
}

function loadRules(rulesDir) {
  if (!fs.existsSync(rulesDir)) {
    console.error(`[Error] Rules directory not found: ${rulesDir}`);
    console.error("  Ensure 'rules/' exists relative to the script location.");
    process.exit(1);
  }

  const files = fs
    .readdirSync(rulesDir)
    .filter((f) => f.endsWith(".md"))
    .sort();

  if (files.length === 0) {
    console.error(`[Error] No .md files found in: ${rulesDir}`);
    process.exit(1);
  }

  return files.map((file) => {
    const raw = fs.readFileSync(path.join(rulesDir, file), "utf8");
    const { metadata, body } = parseFrontmatter(raw);
    const baseName = path.basename(file, ".md");

    return {
      fileName: file,
      name: metadata.name || baseName,
      description: metadata.description || baseName,
      globs: metadata.globs || "",
      alwaysApply: metadata.alwaysApply === true,
      metadata,
      body,
    };
  });
}

function mergeRuleBodies(rules) {
  return rules.map((r) => r.body).join("\n\n---\n\n");
}

function writeMergedFile(filePath, rules) {
  fs.mkdirSync(path.dirname(filePath), { recursive: true });
  const content = `${GENERATED_MARKER}\n\n${mergeRuleBodies(rules)}\n`;
  fs.writeFileSync(filePath, content);
}

module.exports = {
  GENERATED_MARKER,
  parseFrontmatter,
  loadRules,
  mergeRuleBodies,
  writeMergedFile,
};
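To show what `parseFrontmatter` returns, the same logic is restated inline here with a small input so it runs on its own (the sample document is illustrative):

```javascript
"use strict";

// Inline restatement of parseFrontmatter above, for a runnable demonstration.
function parseFrontmatter(content) {
  const match = content.match(/^---\r?\n([\s\S]+?)\r?\n---\r?\n?([\s\S]*)$/);
  if (!match) return { metadata: {}, body: content.trim() };

  const metadata = {};
  match[1].split(/\r?\n/).forEach((line) => {
    const idx = line.indexOf(":");
    if (idx === -1) return;
    const key = line.slice(0, idx).trim();
    let value = line.slice(idx + 1).trim();
    if (
      (value.startsWith('"') && value.endsWith('"')) ||
      (value.startsWith("'") && value.endsWith("'"))
    ) {
      value = value.slice(1, -1);
    }
    if (value === "true") value = true;
    else if (value === "false") value = false;
    metadata[key] = value;
  });

  return { metadata, body: match[2].trim() };
}

// A minimal rule document: quoted strings are unquoted, booleans are coerced.
const doc = '---\ndescription: "Example"\nalwaysApply: true\n---\n\n# Title\n\nBody';
const { metadata, body } = parseFrontmatter(doc);
```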
package/scripts/lib/repo.js
ADDED

@@ -0,0 +1,76 @@
"use strict";

const fs = require("fs");
const path = require("path");
const { execSync } = require("child_process");

const CACHE_DIR_NAME = ".heymark-cache";

/**
 * Extracts a string from a repository URL that can serve as a cache folder name.
 * https://github.com/org/repo -> org-repo
 * git@github.com:org/repo.git -> org-repo
 */
function sanitizeRepoName(url) {
  let s = url.trim();
  if (s.endsWith(".git")) s = s.slice(0, -4);
  const match = s.match(/github\.com[:/]([^/]+\/[^/]+?)(?:\/|$)/) || s.match(/([^/]+\/[^/]+?)(?:\/|$)/);
  if (match) {
    return match[1].replace(/\//g, "-");
  }
  return s.replace(/[^a-zA-Z0-9._-]/g, "-") || "repo";
}

/**
 * Clones or pulls the remote repository and returns the absolute path of the
 * local directory containing the rule .md files.
 * Private repos must be accessible via the user's git credentials (SSH key or token).
 * @param {string} projectRoot - current project root
 * @param {{ rulesSource: string, branch?: string, rulesSourceDir?: string }} config
 * @returns {string} - absolute path of the directory containing the .md files
 */
function getRulesDirFromRepo(projectRoot, config) {
  const url = config.rulesSource;
  const branch = config.branch || "main";
  const subDir = config.rulesSourceDir || "";

  const cacheBase = path.join(projectRoot, CACHE_DIR_NAME);
  const repoName = sanitizeRepoName(url);
  const clonePath = path.join(cacheBase, repoName);

  if (!fs.existsSync(clonePath)) {
    fs.mkdirSync(cacheBase, { recursive: true });
    try {
      execSync(`git clone --depth 1 --branch "${branch}" "${url}" "${clonePath}"`, {
        stdio: "inherit",
        cwd: projectRoot,
      });
    } catch (err) {
      console.error("[Error] Failed to clone rules repository.");
      console.error("  For private repos, ensure you have access (SSH key or HTTPS token).");
      console.error("  Example: heymark init https://github.com/org/repo.git");
      process.exit(1);
    }
  } else {
    try {
      execSync("git fetch origin && git checkout --quiet . && git pull --quiet origin " + branch, {
        stdio: "pipe",
        cwd: clonePath,
      });
    } catch (err) {
      // If the pull fails (network issues, etc.), proceed with the existing clone.
    }
  }

  const rulesDir = subDir ? path.join(clonePath, subDir) : clonePath;
  if (!fs.existsSync(rulesDir) || !fs.statSync(rulesDir).isDirectory()) {
    console.error(`[Error] Rules directory not found in repo: ${subDir || "(root)"}`);
    process.exit(1);
  }
  return rulesDir;
}

module.exports = {
  CACHE_DIR_NAME,
  getRulesDirFromRepo,
  sanitizeRepoName,
};
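The cache-folder naming above can be checked directly. This is a self-contained copy of `sanitizeRepoName` for illustration, exercised against the two URL shapes documented in its comment:

```javascript
// Self-contained copy of sanitizeRepoName from repo.js, showing how
// repository URLs map to folder names under .heymark-cache/.
function sanitizeRepoName(url) {
  let s = url.trim();
  if (s.endsWith(".git")) s = s.slice(0, -4);
  const match = s.match(/github\.com[:/]([^/]+\/[^/]+?)(?:\/|$)/) || s.match(/([^/]+\/[^/]+?)(?:\/|$)/);
  if (match) {
    return match[1].replace(/\//g, "-");
  }
  return s.replace(/[^a-zA-Z0-9._-]/g, "-") || "repo";
}

console.log(sanitizeRepoName("https://github.com/org/repo"));  // org-repo
console.log(sanitizeRepoName("git@github.com:org/repo.git"));  // org-repo
```

Both HTTPS and SSH remotes for the same repository therefore share one cache directory, so switching the remote form does not trigger a fresh clone.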
package/scripts/sync.js
ADDED

@@ -0,0 +1,237 @@
#!/usr/bin/env node
"use strict";

const fs = require("fs");
const path = require("path");
const { loadRules } = require("./lib/parser");
const { CONFIG_FILENAME, loadConfig, writeConfig } = require("./lib/config");
const { getRulesDirFromRepo } = require("./lib/repo");

const SCRIPT_DIR = __dirname;
const PROJECT_ROOT = process.cwd();

function discoverTools() {
  const toolsDir = path.join(SCRIPT_DIR, "tools");
  const registry = {};

  fs.readdirSync(toolsDir)
    .filter((f) => f.endsWith(".js"))
    .sort()
    .forEach((file) => {
      const key = path.basename(file, ".js");
      registry[key] = require(path.join(toolsDir, file));
    });

  return registry;
}

function parseArgs(availableTools) {
  const args = process.argv.slice(2);
  const config = {
    tools: Object.keys(availableTools),
    clean: false,
    preview: false,
    help: false,
    source: null,
  };

  for (let i = 0; i < args.length; i++) {
    const arg = args[i];

    if (arg === "--tools" || arg === "-t") {
      const val = args[++i];
      if (!val) {
        console.error("[Error] --tools requires a comma-separated list.");
        process.exit(1);
      }
      config.tools = val.split(",").map((t) => t.trim().toLowerCase());
    } else if (arg === "--clean" || arg === "-c") {
      config.clean = true;
    } else if (arg === "--preview" || arg === "-p") {
      config.preview = true;
    } else if (arg === "--source" || arg === "-s") {
      const val = args[++i];
      if (!val) {
        console.error("[Error] --source requires a GitHub repository URL.");
        process.exit(1);
      }
      config.source = val.trim();
    } else if (arg === "--help" || arg === "-h") {
      config.help = true;
    } else {
      console.error(`[Error] Unknown option: ${arg}`);
      console.error("  Use --help for usage information.");
      process.exit(1);
    }
  }

  const invalid = config.tools.filter((t) => !availableTools[t]);
  if (invalid.length > 0) {
    console.error(`[Error] Unknown tool(s): ${invalid.join(", ")}`);
    console.error(`  Available: ${Object.keys(availableTools).join(", ")}`);
    process.exit(1);
  }

  return config;
}

function showHelp(tools) {
  const toolLines = Object.entries(tools)
    .map(([key, t]) => `  ${key.padEnd(10)} ${t.name.padEnd(16)} -> ${t.output}`)
    .join("\n");

  console.log(`
AI Coding Tool Convention Sync

Reads *.md from a GitHub repository (public or private) and generates
tool-specific configuration files for various AI coding assistants.
Same rules everywhere: machine A, machine B, same remote repo.

Usage:
  heymark init <repo-url>   Set rules source (creates ${CONFIG_FILENAME})
  heymark [options]         Sync from configured or --source repo

Options:
  --source, -s <url>   GitHub repo URL for this run (overrides ${CONFIG_FILENAME})
  --tools, -t <list>   Comma-separated tool names (default: all)
  --clean, -c          Remove all generated files
  --preview, -p        Preview what will be generated without writing
  --help, -h           Show this help message

Rules source (in order):
  1. --source <repo-url>
  2. ${CONFIG_FILENAME} (set via 'heymark init <repo-url>')
  Private repos: use SSH (git@github.com:org/repo.git) or HTTPS with token.

Available tools:
${toolLines}

Examples:
  heymark init https://github.com/org/my-rules.git
  heymark init https://github.com/org/my-rules.git --dir rules --branch main
  heymark
  heymark -s https://github.com/org/other-rules.git
  heymark -t cursor,claude
  heymark -c
  heymark -p
`);
}

function runInit(initArgs) {
  const args = initArgs.slice(0);
  const repoUrl = args[0];
  if (!repoUrl || repoUrl.startsWith("--")) {
    console.error("[Error] init requires a GitHub repository URL.");
    console.error("  Example: heymark init https://github.com/org/my-rules.git");
    console.error("  Example: heymark init git@github.com:org/my-rules.git");
    console.error("  Optional: --branch <branch> --dir <subdir> (e.g. --dir rules)");
    process.exit(1);
  }
  let branch = "main";
  let rulesSourceDir = "";
  for (let i = 1; i < args.length; i++) {
    if ((args[i] === "--branch" || args[i] === "-b") && args[i + 1]) {
      branch = args[++i].trim();
    } else if ((args[i] === "--dir" || args[i] === "-d") && args[i + 1]) {
      rulesSourceDir = args[++i].trim();
    }
  }
  const config = { rulesSource: repoUrl.trim(), branch, rulesSourceDir };
  const configPath = writeConfig(PROJECT_ROOT, config);
  console.log(`[Init] Rules source saved to ${path.relative(PROJECT_ROOT, configPath) || configPath}`);
  console.log(`  rulesSource: ${config.rulesSource}`);
  if (branch !== "main") console.log(`  branch: ${branch}`);
  if (rulesSourceDir) console.log(`  rulesSourceDir: ${rulesSourceDir}`);
  console.log("");
  console.log("Run 'heymark' to fetch rules from the repo and generate tool configs.");
}

function resolveRulesDir(config) {
  const repoConfig = config.source
    ? { rulesSource: config.source, branch: "main", rulesSourceDir: "" }
    : loadConfig(PROJECT_ROOT);
  if (!repoConfig) return null;
  return getRulesDirFromRepo(PROJECT_ROOT, repoConfig);
}

function main() {
  const args = process.argv.slice(2);
  const subcommand = args[0];
  if (subcommand === "init") {
    runInit(args.slice(1));
    return;
  }

  const tools = discoverTools();
  const config = parseArgs(tools);

  if (config.help) {
    showHelp(tools);
    return;
  }

  const RULES_SRC_DIR = resolveRulesDir(config);
  if (!RULES_SRC_DIR) {
    console.error("[Error] Rules source not set.");
    console.error("  Run: heymark init <github-repo-url>");
    console.error("  Example: heymark init https://github.com/org/my-rules.git");
    console.error("  Or use: heymark --source <repo-url>");
    process.exit(1);
  }

  const rulesRelPath = path.relative(PROJECT_ROOT, RULES_SRC_DIR) || ".";

  console.log("[Sync] Starting convention sync...");
  console.log(`  Source: ${rulesRelPath} (from remote repo)`);
  console.log(`  Target: ${PROJECT_ROOT}`);
  console.log(`  Tools: ${config.tools.join(", ")}`);
  console.log("");

  const rules = loadRules(RULES_SRC_DIR);
  console.log(`[Load] ${rules.length} rule(s): ${rules.map((r) => r.name).join(", ")}`);
  console.log("");

  if (config.clean) {
    console.log("[Clean] Removing generated files...");
    const ruleNames = rules.map((r) => r.name);
    for (const key of config.tools) {
      const cleaned = tools[key].clean(ruleNames, PROJECT_ROOT);
      cleaned.forEach((p) => console.log(`  Deleted: ${p}`));
    }
    console.log("");
    console.log(`[Done] Cleaned ${config.tools.length} tool(s) successfully.`);
    return;
  }

  if (config.preview) {
    console.log("[Preview] Would generate:");
    for (const key of config.tools) {
      const t = tools[key];
      console.log(`  ${t.name.padEnd(16)} -> ${t.output} (${rules.length} rules)`);
    }
    return;
  }

  // Clean existing files before generating (for fresh sync)
  console.log("[Clean] Removing existing generated files...");
  const ruleNames = rules.map((r) => r.name);
  for (const key of config.tools) {
    const cleaned = tools[key].clean(ruleNames, PROJECT_ROOT);
    if (cleaned.length > 0) {
      cleaned.forEach((p) => console.log(`  Deleted: ${p}`));
    }
  }
  console.log("");

  console.log("[Generate]");
  for (const key of config.tools) {
    const t = tools[key];
    const count = t.generate(rules, PROJECT_ROOT);
    console.log(`  ${t.name.padEnd(16)} -> ${t.output} (${count} rules)`);
  }

  console.log("");
  console.log(`[Done] ${config.tools.length} tool(s) synced successfully.`);
}

main();
package/scripts/tools/claude.js
ADDED

@@ -0,0 +1,42 @@
"use strict";

const fs = require("fs");
const path = require("path");

module.exports = {
  name: "Claude Code",
  output: ".claude/skills/*/SKILL.md",

  generate(rules, projectRoot) {
    for (const rule of rules) {
      const skillDir = path.join(projectRoot, ".claude", "skills", rule.name);
      fs.mkdirSync(skillDir, { recursive: true });

      const lines = [
        "---",
        `name: ${rule.name}`,
        `description: "${rule.description}"`,
        "---",
      ];

      const content = lines.join("\n") + "\n\n" + rule.body + "\n";
      fs.writeFileSync(path.join(skillDir, "SKILL.md"), content);
    }

    return rules.length;
  },

  clean(ruleNames, projectRoot) {
    const cleaned = [];

    for (const name of ruleNames) {
      const skillDir = path.join(projectRoot, ".claude", "skills", name);
      if (fs.existsSync(skillDir)) {
        fs.rmSync(skillDir, { recursive: true });
        cleaned.push(path.join(".claude", "skills", name));
      }
    }

    return cleaned;
  },
};
package/scripts/tools/codex.js
ADDED

@@ -0,0 +1,24 @@
"use strict";

const fs = require("fs");
const path = require("path");
const { writeMergedFile } = require("../lib/parser");

module.exports = {
  name: "OpenAI Codex",
  output: "AGENTS.md",

  generate(rules, projectRoot) {
    writeMergedFile(path.join(projectRoot, "AGENTS.md"), rules);
    return rules.length;
  },

  clean(_ruleNames, projectRoot) {
    const filePath = path.join(projectRoot, "AGENTS.md");
    if (fs.existsSync(filePath)) {
      fs.unlinkSync(filePath);
      return ["AGENTS.md"];
    }
    return [];
  },
};
package/scripts/tools/copilot.js
ADDED

@@ -0,0 +1,48 @@
"use strict";

const fs = require("fs");
const path = require("path");

module.exports = {
  name: "GitHub Copilot",
  output: ".github/instructions/*.instructions.md",

  generate(rules, projectRoot) {
    const destDir = path.join(projectRoot, ".github", "instructions");
    fs.mkdirSync(destDir, { recursive: true });

    for (const rule of rules) {
      const globs = rule.globs
        ? rule.globs.split(",").map((g) => g.trim())
        : ["**"];

      const applyToLines = globs.map((g) => `  - "${g}"`).join("\n");
      const header = `---\napplyTo:\n${applyToLines}\n---`;

      const content = header + "\n\n" + rule.body + "\n";
      fs.writeFileSync(
        path.join(destDir, `${rule.name}.instructions.md`),
        content
      );
    }

    return rules.length;
  },

  clean(ruleNames, projectRoot) {
    const cleaned = [];
    const destDir = path.join(projectRoot, ".github", "instructions");

    for (const name of ruleNames) {
      const filePath = path.join(destDir, `${name}.instructions.md`);
      if (fs.existsSync(filePath)) {
        fs.unlinkSync(filePath);
        cleaned.push(
          path.join(".github", "instructions", `${name}.instructions.md`)
        );
      }
    }

    return cleaned;
  },
};
package/scripts/tools/cursor.js
ADDED

@@ -0,0 +1,41 @@
"use strict";

const fs = require("fs");
const path = require("path");

module.exports = {
  name: "Cursor",
  output: ".cursor/rules/*.mdc",

  generate(rules, projectRoot) {
    const destDir = path.join(projectRoot, ".cursor", "rules");
    fs.mkdirSync(destDir, { recursive: true });

    for (const rule of rules) {
      const lines = ["---", `description: "${rule.description}"`];
      if (rule.globs) lines.push(`globs: "${rule.globs}"`);
      lines.push(`alwaysApply: ${rule.alwaysApply}`);
      lines.push("---");

      const content = lines.join("\n") + "\n\n" + rule.body + "\n";
      fs.writeFileSync(path.join(destDir, `${rule.name}.mdc`), content);
    }

    return rules.length;
  },

  clean(ruleNames, projectRoot) {
    const cleaned = [];
    const destDir = path.join(projectRoot, ".cursor", "rules");

    for (const name of ruleNames) {
      const filePath = path.join(destDir, `${name}.mdc`);
      if (fs.existsSync(filePath)) {
        fs.unlinkSync(filePath);
        cleaned.push(path.join(".cursor", "rules", `${name}.mdc`));
      }
    }

    return cleaned;
  },
};
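As a sketch of what the Cursor generator emits, here is the same frontmatter assembly applied to a sample rule object; the rule name, description, and body are illustrative stand-ins for whatever `loadRules()` returns:

```javascript
// Sketch of the .mdc content the Cursor generator builds, using a sample
// rule object (all values here are illustrative).
const rule = {
  name: "code-conventions",
  description: "Code conventions",
  globs: "",
  alwaysApply: true,
  body: "# Code Conventions\nUse strict mode.",
};

// Same line-building logic as cursor.js: globs is omitted when empty.
const lines = ["---", `description: "${rule.description}"`];
if (rule.globs) lines.push(`globs: "${rule.globs}"`);
lines.push(`alwaysApply: ${rule.alwaysApply}`);
lines.push("---");

const content = lines.join("\n") + "\n\n" + rule.body + "\n";
```

The resulting `.cursor/rules/code-conventions.mdc` starts with a YAML frontmatter block (`description`, `alwaysApply`, optionally `globs`) followed by the untouched rule body.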