@dynokostya/just-works 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (46)
  1. package/.claude/agents/csharp-code-writer.md +32 -0
  2. package/.claude/agents/diagrammer.md +49 -0
  3. package/.claude/agents/frontend-code-writer.md +36 -0
  4. package/.claude/agents/prompt-writer.md +38 -0
  5. package/.claude/agents/python-code-writer.md +32 -0
  6. package/.claude/agents/swift-code-writer.md +32 -0
  7. package/.claude/agents/typescript-code-writer.md +32 -0
  8. package/.claude/commands/git-sync.md +96 -0
  9. package/.claude/commands/project-docs.md +287 -0
  10. package/.claude/settings.json +112 -0
  11. package/.claude/settings.json.default +15 -0
  12. package/.claude/skills/csharp-coding/SKILL.md +368 -0
  13. package/.claude/skills/ddd-architecture-python/SKILL.md +288 -0
  14. package/.claude/skills/feature-driven-architecture-python/SKILL.md +302 -0
  15. package/.claude/skills/gemini-3-prompting/SKILL.md +483 -0
  16. package/.claude/skills/gpt-5-2-prompting/SKILL.md +295 -0
  17. package/.claude/skills/opus-4-6-prompting/SKILL.md +315 -0
  18. package/.claude/skills/plantuml-diagramming/SKILL.md +758 -0
  19. package/.claude/skills/python-coding/SKILL.md +293 -0
  20. package/.claude/skills/react-coding/SKILL.md +264 -0
  21. package/.claude/skills/rest-api/SKILL.md +421 -0
  22. package/.claude/skills/shadcn-ui-coding/SKILL.md +454 -0
  23. package/.claude/skills/swift-coding/SKILL.md +401 -0
  24. package/.claude/skills/tailwind-css-coding/SKILL.md +268 -0
  25. package/.claude/skills/typescript-coding/SKILL.md +464 -0
  26. package/.claude/statusline-command.sh +34 -0
  27. package/.codex/prompts/plan-reviewer.md +162 -0
  28. package/.codex/prompts/project-docs.md +287 -0
  29. package/.codex/skills/ddd-architecture-python/SKILL.md +288 -0
  30. package/.codex/skills/feature-driven-architecture-python/SKILL.md +302 -0
  31. package/.codex/skills/gemini-3-prompting/SKILL.md +483 -0
  32. package/.codex/skills/gpt-5-2-prompting/SKILL.md +295 -0
  33. package/.codex/skills/opus-4-6-prompting/SKILL.md +315 -0
  34. package/.codex/skills/plantuml-diagramming/SKILL.md +758 -0
  35. package/.codex/skills/python-coding/SKILL.md +293 -0
  36. package/.codex/skills/react-coding/SKILL.md +264 -0
  37. package/.codex/skills/rest-api/SKILL.md +421 -0
  38. package/.codex/skills/shadcn-ui-coding/SKILL.md +454 -0
  39. package/.codex/skills/tailwind-css-coding/SKILL.md +268 -0
  40. package/.codex/skills/typescript-coding/SKILL.md +464 -0
  41. package/AGENTS.md +57 -0
  42. package/CLAUDE.md +98 -0
  43. package/LICENSE +201 -0
  44. package/README.md +114 -0
  45. package/bin/cli.mjs +291 -0
  46. package/package.json +39 -0
@@ -0,0 +1,483 @@
---
name: gemini-3-prompting
description: Apply when creating or editing prompts targeting Gemini 3. Covers three-layer prompt organization, context-first pattern, constraint placement, thinking_level awareness, few-shot examples, persona conflicts, long-context grounding, prompt decomposition, and migration from Gemini 2.5.
---

# Gemini 3 Prompting

## When to Use

- Creating or editing system prompts targeting Gemini 3
- Writing few-shot examples for classification or extraction tasks
- Structuring long-context prompts with multiple sources
- Writing agentic instructions for Gemini 3 tool-use workflows
- Decomposing complex prompts into chainable sub-prompts
- Migrating prompt text from Gemini 2.5

## Overview

Gemini 3 responds best to direct, concise instructions. Verbose prompt engineering techniques from older models (Gemini 2.5 and earlier) cause over-analysis and degrade output quality. The model has native thinking capabilities controlled by a `thinking_level` parameter -- do not write manual chain-of-thought instructions.

<context>
Key characteristics to design around:

- **Conciseness Over Verbosity**: Direct prompts outperform over-specified ones. Remove filler instructions and meta-commentary.
- **Context-First Anchoring**: The model anchors reasoning on what it read most recently. Place source material before instructions.
- **End-Loaded Constraints**: Critical restrictions placed at the END of the prompt are followed most reliably.
- **Native Thinking**: The `thinking_level` parameter (high/low/medium/minimal) replaces manual CoT prompting -- do not write "Let's think step by step."
- **Temperature Stays at 1.0**: Lowering temperature causes looping and degraded reasoning. Write prompts assuming temperature=1.0.
- **Persona Adherence**: The model takes assigned personas seriously and may ignore other instructions to maintain persona. Review potential conflicts.
- **Default Directness**: Gemini 3 defaults to efficient, direct responses. Request conversational tone explicitly if needed.
</context>

## Core Prompt Structure

### Three-Layer Organization

Gemini 3 prompts perform best with a three-layer structure. Place critical constraints at the end -- the model weights final instructions most heavily.

**Layer 1 -- Context and source material:**
```
<context>
{{ source_documents }}
</context>
```

**Layer 2 -- Main task instructions:**
```
Based on the information above, {{ task_instruction }}.
```

**Layer 3 -- Negative, formatting, and quantitative constraints:**
```
Constraints:
- Respond in {{ output_format }} format.
- Do not include information from outside the provided context.
- Limit your response to {{ max_items }} items.
```

Full template:

```jinja
{# Three-layer Gemini 3 prompt #}
<context>
{{ context_data }}
</context>

{{ main_instruction }}

Constraints:
- Respond in {{ output_format }} format.
{% for constraint in constraints %}
- {{ constraint }}
{% endfor %}
```
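The three layers can also be assembled programmatically. A minimal sketch in plain Python -- `build_prompt` is an illustrative helper, not part of any Gemini SDK:

```python
def build_prompt(context: str, instruction: str, constraints: list[str]) -> str:
    """Assemble a three-layer Gemini 3 prompt: context first,
    task instruction in the middle, constraints last."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"<context>\n{context}\n</context>\n\n"
        f"{instruction}\n\n"
        f"Constraints:\n{constraint_lines}"
    )

prompt = build_prompt(
    context="Q3 revenue was $1.2M, up 8% from Q2.",
    instruction="Based on the information above, summarize the quarter.",
    constraints=[
        "Respond in plain-text format.",
        "Do not include information from outside the provided context.",
        "Limit your response to 3 items.",
    ],
)
print(prompt)
```

Because the layers are positional, the helper guarantees that constraints always land at the end of the prompt, where they are weighted most heavily.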
### Context-First Principle

Place large context blocks (documents, data, conversation history) before your questions and instructions. Use bridging phrases to connect context to the task:

```
Based on the information above, ...
Using only the provided documents, ...
Given the context above, ...
Based on the entire document above, provide a comprehensive answer to: ...
```

The last phrasing is especially effective when synthesizing from multiple sources -- it anchors the model to the full input rather than just the most recent section.

### Conciseness

Remove filler that does not change model behavior:

```
Before: "I would like you to carefully analyze the following text and provide
a detailed summary of the key points, making sure to capture all the
important information."

After: "Summarize the key points from the text above."
```

## Thinking and Reasoning

The `thinking_level` parameter controls how deeply the model reasons. It replaces manual chain-of-thought prompting entirely.

| Level | Availability | Use Case |
|-------|-------------|----------|
| `high` (default) | All models | Complex reasoning, analysis, math, multi-step problems |
| `low` | All models | Simple tasks where latency matters |
| `medium` | Flash only | Balanced approach for moderate complexity |
| `minimal` | Flash only | Chat, quick Q&A |

**Prompt implications:**

- Remove "Let's think step by step", "Think carefully", and similar CoT triggers from all prompts.
- If you need visible reasoning steps in the output (not just internal reasoning), request it explicitly:

```
Analyze this data. Show your reasoning step by step, then provide your final answer.
```

- For lower latency, combine `thinking_level: low` with the system instruction "think silently" -- this reduces visible reasoning overhead while keeping basic internal reasoning active.
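The availability rules in the table above can be encoded as a small client-side validation helper. This is an illustrative sketch only: the model names are hypothetical placeholders, and the real API enforces these constraints itself.

```python
# Encodes the thinking_level table: medium/minimal are Flash-only.
FLASH_ONLY_LEVELS = {"medium", "minimal"}
ALL_LEVELS = {"high", "low", "medium", "minimal"}

def validate_thinking_level(level: str, model: str) -> str:
    """Reject thinking_level values not supported by the target model."""
    if level not in ALL_LEVELS:
        raise ValueError(f"Unknown thinking_level: {level!r}")
    if level in FLASH_ONLY_LEVELS and "flash" not in model.lower():
        raise ValueError(f"thinking_level={level!r} is Flash-only")
    return level

validate_thinking_level("high", "gemini-3-pro")       # valid on all models
validate_thinking_level("minimal", "gemini-3-flash")  # valid on Flash only
```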
## Constraint Writing

### Avoid Overly Broad Negatives

Broad negations like "do not infer" or "do not assume" cause the model to become overly conservative and refuse reasonable deductions.

```
Avoid: "Do not infer any information."

Better: "Use the provided additional information or context for deductions
and avoid using outside knowledge."

Avoid: "Never make assumptions."

Better: "When the document does not address a topic, state that the
information is not available rather than speculating."
```

### Grounding to Provided Context

When the model should not use training data, be explicit about the source of truth:

```
The provided context is the only source of truth for the current session.
Do not supplement answers with information from your training data.
If the context does not contain relevant information, say so.
```

This is particularly important for hypothetical scenarios, fictional settings, or domain-specific data that contradicts general knowledge.

### Quantitative Constraints

Gemini 3 follows quantitative constraints reliably. Use them instead of vague qualifiers:

```
Avoid: "Keep it short."
Better: "Respond in 2-3 sentences."

Avoid: "List some examples."
Better: "List exactly 5 examples."
```

## Few-Shot Examples

Few-shot examples remain effective for classification, extraction, and formatting tasks. The model reproduces patterns it sees -- every example should reflect exactly the behavior you want.

- Include 2-5 diverse examples demonstrating the desired pattern
- Use consistent semantic prefixes (Input:, Output:)
- Show correct patterns only, not anti-patterns
- Place examples before the final input (context-first principle)

```jinja
{% for example in few_shot_examples %}
Input: {{ example.input }}
Output: {{ example.output }}

{% endfor %}
Input: {{ current_input }}
Output:
```

For classification with structured output:

```jinja
Classify each message into one of these categories: {{ categories | join(", ") }}.

{% for example in examples %}
Message: {{ example.message }}
Category: {{ example.category }}
Confidence: {{ example.confidence }}

{% endfor %}
Message: {{ input_message }}
Category:
```
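A few-shot block following these rules can be generated without a template engine. A plain-Python sketch -- `few_shot_prompt` is an illustrative name, not an SDK function:

```python
def few_shot_prompt(examples: list[dict], current_input: str) -> str:
    """Build a few-shot prompt with consistent Input:/Output: prefixes,
    examples first, and the unanswered input last."""
    blocks = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    blocks.append(f"Input: {current_input}\nOutput:")
    return "\n\n".join(blocks)

print(few_shot_prompt(
    [{"input": "great product!", "output": "positive"},
     {"input": "broke after a day", "output": "negative"}],
    "arrived on time",
))
```

The trailing `Output:` with no value is what cues the model to complete the pattern.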
## Persona and Tone

### Persona Conflicts

Gemini 3 takes assigned personas seriously and may prioritize persona adherence over other instructions. Before assigning a persona, check for conflicts:

```
{# Potential conflict: persona says "be friendly" but constraints say "be terse" #}
You are a friendly customer support agent.
...
Respond in 1-2 words only.

{# Resolution: align persona with constraints #}
You are a concise customer support agent who values brevity.
...
Respond in 1-2 words only.
```

Review potential conflicts between the persona description and:
- Output length constraints
- Tone requirements elsewhere in the prompt
- Domain restrictions (e.g., a "creative writer" persona asked to stick to facts)

### Conversational Tone

Gemini 3 defaults to direct, efficient responses. If you need a warmer or more conversational tone, request it explicitly:

```
Explain this as a friendly, talkative assistant. Use casual language
and occasional humor where appropriate.
```

Without this, responses will be professional and to-the-point.

## Long-Context and Multi-Source

### Multi-Source Synthesis

When the prompt includes multiple documents or data sources, anchor the model to the full input:

```jinja
<document id="1">
{{ document_1 }}
</document>

<document id="2">
{{ document_2 }}
</document>

{% if documents|length > 2 %}
{% for doc in documents[2:] %}
<document id="{{ loop.index + 2 }}">
{{ doc }}
</document>

{% endfor %}
{% endif %}

Based on the entire set of documents above, provide a comprehensive answer
to the following question. Reference specific documents by ID when citing
information.

Question: {{ question }}
```
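The same document wrapping can be done in code, which avoids the branching template logic entirely. A minimal sketch assuming plain-string documents (`multi_doc_prompt` is illustrative):

```python
def multi_doc_prompt(documents: list[str], question: str) -> str:
    """Wrap each document in an id-tagged block, then append the
    full-input anchoring instruction and the question."""
    parts = [f'<document id="{i}">\n{doc}\n</document>'
             for i, doc in enumerate(documents, start=1)]
    parts.append(
        "Based on the entire set of documents above, provide a comprehensive "
        "answer to the following question. Reference specific documents by ID "
        "when citing information.\n\nQuestion: " + question
    )
    return "\n\n".join(parts)
```

Numbering the documents at build time keeps the IDs consistent no matter how many sources are passed in.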
### Knowledge Cutoff Declaration

When the model needs to be aware of its knowledge boundaries, include the cutoff in system instructions:

```
Your knowledge cutoff date is January 2025. For events or information
after this date, rely only on the provided context.
```

### Grounding Hypothetical Scenarios

For fictional, counterfactual, or simulation-based prompts, establish the context as the sole source of truth:

```
You are operating in a simulated environment. The provided context describes
the current state of this environment. Treat it as the only source of truth.
Do not reference real-world information that contradicts the simulation state.
```

## Prompt Decomposition

### Breaking Complex Prompts

When a single prompt tries to do too much, split it into focused sub-prompts and chain outputs:

```
{# Instead of one massive prompt, decompose into stages #}

Stage 1 -- Extract:
"Extract all dates, names, and monetary amounts from the contract above.
Respond in JSON format."

Stage 2 -- Analyze (receives Stage 1 output):
"Given the extracted data above, identify any clauses where the effective
date is more than 90 days from the signing date."

Stage 3 -- Summarize (receives Stage 2 output):
"Summarize the flagged clauses in plain language for a non-legal audience."
```
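The staged chain above can be sketched as plain Python, with each stage's output fed into the next stage's context block. `call_model` here is a stub standing in for a real Gemini API call:

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real Gemini API call; returns a canned string.
    return f"<response to: {prompt[:30]}...>"

def run_contract_pipeline(contract_text: str) -> str:
    """Chain Extract -> Analyze -> Summarize, context-first at each stage."""
    extracted = call_model(
        f"<context>\n{contract_text}\n</context>\n"
        "Extract all dates, names, and monetary amounts from the contract "
        "above. Respond in JSON format."
    )
    flagged = call_model(
        f"<context>\n{extracted}\n</context>\n"
        "Given the extracted data above, identify any clauses where the "
        "effective date is more than 90 days from the signing date."
    )
    return call_model(
        f"<context>\n{flagged}\n</context>\n"
        "Summarize the flagged clauses in plain language for a non-legal audience."
    )
```

Each stage keeps the three-layer shape: the previous stage's output is the context, and the instruction follows it.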
### Two-Step Verification Pattern

When the model might lack information or the context might not contain what you need, split into verification then generation:

```
First, check if the document above contains information about {{ topic }}.
If it does, answer the following question based on that information:
{{ question }}
If the document does not contain relevant information, state that clearly
instead of answering from general knowledge.
```

This prevents the model from silently falling back to training data when the context is incomplete.

### Parallel Decomposition

For tasks that can be answered independently, structure sub-prompts for parallel execution and aggregation:

```jinja
{# Run these as parallel calls, then aggregate #}
{% for section in document_sections %}
Prompt {{ loop.index }}:
"Summarize the following section in 2-3 sentences:
{{ section }}"
{% endfor %}

Aggregation prompt:
"Given the section summaries above, write a unified executive summary
in one paragraph."
```
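A sketch of the fan-out/aggregate flow using a thread pool, again with a stubbed `call_model` in place of real API calls:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    # Placeholder for a real Gemini API call.
    return f"summary({prompt[:20]})"

def summarize_sections(sections: list[str]) -> str:
    """Summarize each section in parallel, then aggregate the summaries."""
    prompts = [f"Summarize the following section in 2-3 sentences:\n{s}"
               for s in sections]
    with ThreadPoolExecutor() as pool:
        summaries = list(pool.map(call_model, prompts))
    joined = "\n".join(summaries)
    return call_model(
        f"<context>\n{joined}\n</context>\n"
        "Given the section summaries above, write a unified executive summary "
        "in one paragraph."
    )
```

Because the per-section summaries are independent, the fan-out step is limited only by API rate limits, not by sequential latency.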
## Agentic Prompts

For Gemini 3 in agentic workflows with tool access:

```jinja
Agent Instructions:
- When encountering ambiguity, ask for clarification rather than assuming.
- Before taking state-changing actions, explain what will change and why.
- When multiple approaches exist, evaluate trade-offs before choosing.
- For routine tool execution, proceed without narration.
- For planning and complex decisions, explain your reasoning.
```

Key considerations:

- Use `thinking_level: high` for planning and complex decisions, `low` for routine tool execution.
- Specify when to assume vs. request clarification -- without guidance, the model tends to assume.
- Distinguish exploratory actions (safe to take) from state-changing actions (explain first).
- Let the model's native thinking handle task decomposition; avoid over-prescribing steps.

## Common Patterns

### Classification Task

```jinja
Classify the following {{ item_type }} into one of these categories: {{ categories | join(", ") }}.

{% for example in examples %}
{{ item_type }}: {{ example.input }}
Category: {{ example.category }}

{% endfor %}

{{ item_type }}: {{ input_item }}
Category:
```

### Extraction Task

```jinja
Extract {{ fields | join(", ") }} from the following text.

Text: {{ input_text }}

Respond in JSON format:
{
{% for field in fields %}
"{{ field }}": "..."{% if not loop.last %},{% endif %}
{% endfor %}
}

JSON:
```
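The JSON skeleton in this template can also be built in code, which guarantees valid comma placement without template branching. An illustrative Python sketch (`extraction_prompt` is not an SDK function):

```python
import json

def extraction_prompt(fields: list[str], text: str) -> str:
    """Build an extraction prompt with a machine-generated JSON skeleton."""
    skeleton = json.dumps({field: "..." for field in fields}, indent=2)
    return (
        f"Extract {', '.join(fields)} from the following text.\n\n"
        f"Text: {text}\n\n"
        f"Respond in JSON format:\n{skeleton}\n\nJSON:"
    )

print(extraction_prompt(["name", "date", "amount"], "Alice paid $40 on 2025-02-01."))
```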
### Reasoning Task

Set `thinking_level: high` and keep the prompt simple:

```jinja
{{ question }}

Provide your analysis and final answer.
```

If you need visible reasoning in the output:

```jinja
{{ question }}

Show your reasoning step by step, then provide your final answer.
```

### Tool-Augmented Task

```jinja
You have access to the following tools:
{% for tool in tools %}
- {{ tool.name }}: {{ tool.description }}
{% endfor %}

{{ task_description }}

Respond in JSON format:
{
"answer": "...",
"sources": ["..."],
"confidence": 0.0-1.0
}
```

## Iteration Techniques

When a prompt is not producing the desired output, try these approaches in order:

1. **Rephrase differently**: Use different wording for the same instruction. Gemini 3 can respond differently to semantically equivalent phrasings.

2. **Reorder content**: Move the most important instruction to the end of the prompt (end-loaded constraints are weighted more heavily).

3. **Switch to an analogous task**: If "summarize this document" gives poor results, try "extract the 5 most important points from this document" -- a related but differently framed task.

4. **Add or remove examples**: If the model is over-fitting to examples, reduce to 2. If it is under-performing, add examples that cover edge cases.

5. **Adjust constraint specificity**: Replace vague constraints with quantitative ones, or loosen overly tight constraints that prevent good output.

6. **Decompose**: If iteration is not converging, the prompt may be too complex. Split into sub-prompts (see Prompt Decomposition section).

## Migration from Gemini 2.5

Prompt-level changes when migrating from Gemini 2.5 to Gemini 3:

- [ ] Remove manual CoT instructions ("Let's think step by step", "Think carefully before answering") -- set the `thinking_level` parameter instead
- [ ] Remove `temperature` overrides below 1.0 -- Gemini 3 requires temperature=1.0
- [ ] Simplify verbose prompts -- Gemini 3 handles concise instructions better than over-specified ones
- [ ] Move critical constraints to the end of the prompt (three-layer pattern)
- [ ] Replace broad negatives ("do not infer") with specific alternatives
- [ ] Test persona instructions for conflicts with other constraints
- [ ] Remove image segmentation instructions (not supported in Gemini 3)
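The first two checklist items are mechanically detectable. A hedged sketch of a simple migration linter -- the trigger list is illustrative, not exhaustive:

```python
import re

# Common Gemini 2.5-era CoT triggers; extend as needed.
COT_TRIGGERS = [
    r"think step by step",
    r"think carefully",
]

def lint_for_gemini3(prompt: str, temperature: float = 1.0) -> list[str]:
    """Flag prompt-level leftovers from Gemini 2.5 migrations."""
    issues = []
    for pattern in COT_TRIGGERS:
        if re.search(pattern, prompt, re.IGNORECASE):
            issues.append(
                f"manual CoT trigger {pattern!r} (set thinking_level instead)"
            )
    if temperature < 1.0:
        issues.append("temperature below 1.0 (Gemini 3 requires temperature=1.0)")
    return issues

print(lint_for_gemini3("Let's think step by step about this.", temperature=0.2))
```

Run it over a prompt library before migrating to get a quick inventory of how much rewriting each prompt needs.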
## Anti-Patterns

- **Manual chain-of-thought**: "Let's think step by step" degrades output when `thinking_level` is active. Remove it.
- **Temperature below 1.0**: Causes looping and degraded reasoning. Always use 1.0.
- **Broad negatives**: "Do not infer" or "Never assume" makes the model refuse reasonable deductions. Use specific alternatives.
- **Persona-constraint conflicts**: A "friendly, verbose" persona with a "respond in 2 words" constraint. The model will prioritize persona.
- **Context after instructions**: Placing source material after the question weakens grounding. Context goes first.
- **Mixed delimiter styles**: Using both XML tags and triple backticks for structural sections in the same prompt. Pick one style.
- **Over-specified prompts**: Long meta-instructions about how to approach the task. Keep prompts focused on what to do, not how to think.
- **Anti-pattern examples**: Showing the model what NOT to do. It reproduces patterns it sees, including bad ones.
- **Missing output format**: Not specifying expected response structure (JSON, list, table). Always define the format.

## Quality Checklist

- [ ] Instructions are concise and direct (no verbose meta-instructions)
- [ ] Three-layer structure: context, then instructions, then constraints at the end
- [ ] Context is placed before questions/instructions
- [ ] Response format is explicitly defined
- [ ] Few-shot examples are included where appropriate (2-5 examples)
- [ ] Examples show only correct patterns, not anti-patterns
- [ ] No manual CoT instructions ("think step by step") -- use `thinking_level` parameter
- [ ] No temperature overrides below 1.0
- [ ] Negative constraints are specific, not broad ("use provided context" instead of "do not infer")
- [ ] Persona instructions do not conflict with other constraints
- [ ] Grounding instructions are included when context should override training data
- [ ] Complex prompts are decomposed into chainable sub-prompts where needed

## Reference

- Official Gemini 3 Documentation: https://ai.google.dev/gemini-api/docs/gemini-3
- Gemini Prompting Guide: https://ai.google.dev/gemini-api/docs/prompting-strategies