@booklib/skills 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (85)
  1. package/LICENSE +21 -0
  2. package/README.md +105 -0
  3. package/animation-at-work/SKILL.md +246 -0
  4. package/animation-at-work/assets/example_asset.txt +1 -0
  5. package/animation-at-work/references/api_reference.md +369 -0
  6. package/animation-at-work/references/review-checklist.md +79 -0
  7. package/animation-at-work/scripts/example.py +1 -0
  8. package/bin/skills.js +85 -0
  9. package/clean-code-reviewer/SKILL.md +292 -0
  10. package/clean-code-reviewer/evals/evals.json +67 -0
  11. package/data-intensive-patterns/SKILL.md +204 -0
  12. package/data-intensive-patterns/assets/example_asset.txt +1 -0
  13. package/data-intensive-patterns/references/api_reference.md +34 -0
  14. package/data-intensive-patterns/references/patterns-catalog.md +551 -0
  15. package/data-intensive-patterns/references/review-checklist.md +193 -0
  16. package/data-intensive-patterns/scripts/example.py +1 -0
  17. package/data-pipelines/SKILL.md +252 -0
  18. package/data-pipelines/assets/example_asset.txt +1 -0
  19. package/data-pipelines/references/api_reference.md +301 -0
  20. package/data-pipelines/references/review-checklist.md +181 -0
  21. package/data-pipelines/scripts/example.py +1 -0
  22. package/design-patterns/SKILL.md +245 -0
  23. package/design-patterns/assets/example_asset.txt +1 -0
  24. package/design-patterns/references/api_reference.md +1 -0
  25. package/design-patterns/references/patterns-catalog.md +726 -0
  26. package/design-patterns/references/review-checklist.md +173 -0
  27. package/design-patterns/scripts/example.py +1 -0
  28. package/domain-driven-design/SKILL.md +221 -0
  29. package/domain-driven-design/assets/example_asset.txt +1 -0
  30. package/domain-driven-design/references/api_reference.md +1 -0
  31. package/domain-driven-design/references/patterns-catalog.md +545 -0
  32. package/domain-driven-design/references/review-checklist.md +158 -0
  33. package/domain-driven-design/scripts/example.py +1 -0
  34. package/effective-java/SKILL.md +195 -0
  35. package/effective-java/assets/example_asset.txt +1 -0
  36. package/effective-java/references/api_reference.md +1 -0
  37. package/effective-java/references/items-catalog.md +955 -0
  38. package/effective-java/references/review-checklist.md +216 -0
  39. package/effective-java/scripts/example.py +1 -0
  40. package/effective-kotlin/SKILL.md +225 -0
  41. package/effective-kotlin/assets/example_asset.txt +1 -0
  42. package/effective-kotlin/references/api_reference.md +1 -0
  43. package/effective-kotlin/references/practices-catalog.md +1228 -0
  44. package/effective-kotlin/references/review-checklist.md +126 -0
  45. package/effective-kotlin/scripts/example.py +1 -0
  46. package/kotlin-in-action/SKILL.md +251 -0
  47. package/kotlin-in-action/assets/example_asset.txt +1 -0
  48. package/kotlin-in-action/references/api_reference.md +1 -0
  49. package/kotlin-in-action/references/practices-catalog.md +436 -0
  50. package/kotlin-in-action/references/review-checklist.md +204 -0
  51. package/kotlin-in-action/scripts/example.py +1 -0
  52. package/lean-startup/SKILL.md +250 -0
  53. package/lean-startup/assets/example_asset.txt +1 -0
  54. package/lean-startup/references/api_reference.md +319 -0
  55. package/lean-startup/references/review-checklist.md +137 -0
  56. package/lean-startup/scripts/example.py +1 -0
  57. package/microservices-patterns/SKILL.md +179 -0
  58. package/microservices-patterns/references/patterns-catalog.md +391 -0
  59. package/microservices-patterns/references/review-checklist.md +169 -0
  60. package/package.json +17 -0
  61. package/refactoring-ui/SKILL.md +236 -0
  62. package/refactoring-ui/assets/example_asset.txt +1 -0
  63. package/refactoring-ui/references/api_reference.md +355 -0
  64. package/refactoring-ui/references/review-checklist.md +114 -0
  65. package/refactoring-ui/scripts/example.py +1 -0
  66. package/storytelling-with-data/SKILL.md +238 -0
  67. package/storytelling-with-data/assets/example_asset.txt +1 -0
  68. package/storytelling-with-data/references/api_reference.md +379 -0
  69. package/storytelling-with-data/references/review-checklist.md +111 -0
  70. package/storytelling-with-data/scripts/example.py +1 -0
  71. package/system-design-interview/SKILL.md +213 -0
  72. package/system-design-interview/assets/example_asset.txt +1 -0
  73. package/system-design-interview/references/api_reference.md +582 -0
  74. package/system-design-interview/references/review-checklist.md +201 -0
  75. package/system-design-interview/scripts/example.py +1 -0
  76. package/using-asyncio-python/SKILL.md +242 -0
  77. package/using-asyncio-python/assets/example_asset.txt +1 -0
  78. package/using-asyncio-python/references/api_reference.md +267 -0
  79. package/using-asyncio-python/references/review-checklist.md +149 -0
  80. package/using-asyncio-python/scripts/example.py +1 -0
  81. package/web-scraping-python/SKILL.md +259 -0
  82. package/web-scraping-python/assets/example_asset.txt +1 -0
  83. package/web-scraping-python/references/api_reference.md +393 -0
  84. package/web-scraping-python/references/review-checklist.md +163 -0
  85. package/web-scraping-python/scripts/example.py +1 -0
@@ -0,0 +1,137 @@
# The Lean Startup — Strategy Review Checklist

Systematic checklist for reviewing startup/product strategies against the 12 chapters
of *The Lean Startup* by Eric Ries.

---

## 1. Vision & Definition (Chapters 1–2)

### Startup Identity
- [ ] **Ch 1 — Startup management** — Is the venture managed as a startup (experimentation-based) rather than with traditional planning?
- [ ] **Ch 1 — Five principles awareness** — Are the five Lean Startup principles understood and applied?
- [ ] **Ch 2 — Uncertainty acknowledgment** — Is extreme uncertainty recognized as the defining condition?
- [ ] **Ch 2 — Institutional thinking** — Is the startup treated as an institution requiring management, not just a product?

---

## 2. Learning & Experimentation (Chapters 3–4)

### Validated Learning
- [ ] **Ch 3 — Learning measurement** — Can the team point to specific validated learnings from recent work?
- [ ] **Ch 3 — Value vs. waste** — Is effort directed toward learning what customers want, or toward untested features?
- [ ] **Ch 3 — Empirical evidence** — Are decisions based on customer experiment data, not opinions or surveys alone?
- [ ] **Ch 3 — Learning milestones** — Are learning milestones used instead of (or alongside) traditional milestones?

### Experiment Design
- [ ] **Ch 4 — Hypothesis-driven** — Are initiatives framed as testable hypotheses with clear success/failure criteria?
- [ ] **Ch 4 — Minimum experiment** — Is each experiment the smallest possible test of the core assumption?
- [ ] **Ch 4 — Real customers** — Are experiments run with actual customers, not internal stakeholders?
- [ ] **Ch 4 — Measurable outcomes** — Are success/failure criteria defined before the experiment runs?

---

## 3. Assumptions & MVP (Chapters 5–6)

### Leap-of-Faith Assumptions
- [ ] **Ch 5 — Assumptions identified** — Are the riskiest assumptions explicitly listed?
- [ ] **Ch 5 — Value hypothesis** — Is there a clear value hypothesis (will customers use/pay for this)?
- [ ] **Ch 5 — Growth hypothesis** — Is there a clear growth hypothesis (how will new customers discover this)?
- [ ] **Ch 5 — Risk ordering** — Are assumptions tested in order of riskiness (most dangerous first)?
- [ ] **Ch 5 — Customer contact** — Is the team practicing genchi gembutsu (talking to customers directly)?

### MVP Design
- [ ] **Ch 6 — MVP purpose** — Is the MVP designed to test a hypothesis, not to be a small product?
- [ ] **Ch 6 — MVP type** — Is the right MVP type used (video, concierge, Wizard of Oz, landing page, smoke test)?
- [ ] **Ch 6 — Minimum scope** — Does the MVP contain only what's needed for the learning goal?
- [ ] **Ch 6 — Quality perspective** — Is MVP quality measured by learning quality, not product polish?
- [ ] **Ch 6 — Fear management** — Are fears (competitors, brand damage, discouragement) addressed rationally?

---

## 4. Metrics & Accounting (Chapter 7)

### Innovation Accounting
- [ ] **Ch 7 — Three-step process** — Is innovation accounting followed (baseline → tune → pivot/persevere)?
- [ ] **Ch 7 — Baseline established** — Has the MVP established a baseline for key metrics?
- [ ] **Ch 7 — Tuning measured** — Are iterations measured against the baseline to show improvement?
- [ ] **Ch 7 — Decision readiness** — Is there enough data to make a pivot/persevere decision?

### Metric Quality
- [ ] **Ch 7 — Actionable metrics** — Are metrics actionable (show cause and effect), not vanity (total signups)?
- [ ] **Ch 7 — Cohort analysis** — Are customers grouped by cohort, not measured as cumulative totals?
- [ ] **Ch 7 — Split testing** — Are product changes tested with A/B tests, not "launch and hope"?
- [ ] **Ch 7 — Three A's** — Are metrics Actionable, Accessible (everyone understands), and Auditable (verifiable)?
- [ ] **Ch 7 — Kanban with validation** — Are features tracked through a validation stage, not just "done"?

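To make the vanity-vs-actionable distinction concrete, here is a minimal sketch (Python, with invented numbers): a cumulative signup count can only go up, while per-cohort conversion exposes whether recent changes actually improved the product.

```python
from collections import defaultdict

# Hypothetical signup records: (signup_month, converted_to_paying)
signups = [
    ("2024-01", True), ("2024-01", True), ("2024-01", False), ("2024-01", False),
    ("2024-02", True), ("2024-02", False), ("2024-02", False), ("2024-02", False),
    ("2024-03", False), ("2024-03", False), ("2024-03", False), ("2024-03", True),
]

# Vanity metric: cumulative signups — always rises, hides the trend.
total_signups = len(signups)

# Actionable metric: conversion rate per monthly cohort.
cohorts = defaultdict(lambda: [0, 0])  # month -> [converted, total]
for month, converted in signups:
    cohorts[month][0] += int(converted)
    cohorts[month][1] += 1

conversion_by_cohort = {
    month: converted / total for month, (converted, total) in sorted(cohorts.items())
}

print(total_signups)         # 12 — "growth" every month
print(conversion_by_cohort)  # {'2024-01': 0.5, '2024-02': 0.25, '2024-03': 0.25}
```

With these invented numbers, the headline count looks healthy while the cohort view shows conversion halved after January — exactly the signal the Ch 7 checks above are probing for.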
---

## 5. Pivot Decisions (Chapter 8)

### Decision Process
- [ ] **Ch 8 — Regular meetings** — Are pivot/persevere meetings scheduled regularly (not crisis-driven)?
- [ ] **Ch 8 — Data-driven** — Are pivot decisions based on innovation accounting data, not gut feelings?
- [ ] **Ch 8 — Runway awareness** — Is runway measured in pivots remaining, not just cash?
- [ ] **Ch 8 — Warning signs** — Are telltale signs monitored (diminishing experiment results, unproductive development)?

### Pivot Knowledge
- [ ] **Ch 8 — Pivot catalog** — Does the team know the full catalog of pivot types?
- [ ] **Ch 8 — Preserved learning** — Does each pivot preserve validated learning while changing what isn't working?
- [ ] **Ch 8 — No stigma** — Is pivoting treated as a strategic move, not a failure?

---

## 6. Execution & Growth (Chapters 9–10)

### Development Process
- [ ] **Ch 9 — Small batches** — Is work shipped in small batches, not large releases?
- [ ] **Ch 9 — Continuous deployment** — Are changes deployed frequently (daily or more)?
- [ ] **Ch 9 — Pull system** — Are features built only when validated learning demands them?
- [ ] **Ch 9 — Andon cord** — Is there a process to stop and fix quality issues immediately?

### Growth Strategy
- [ ] **Ch 10 — Engine identified** — Is the engine of growth identified (sticky, viral, or paid)?
- [ ] **Ch 10 — Engine focused** — Is the team focused on one engine at a time?
- [ ] **Ch 10 — Sustainable growth** — Does growth come from past customer actions, not one-time campaigns?
- [ ] **Ch 10 — Product/market fit** — Are key metrics spiking, indicating product/market fit?
- [ ] **Ch 10 — Engine monitoring** — Is the team watching for engine exhaustion and planning transitions?

---

## 7. Organization & Process (Chapters 11–12)

### Adaptive Process
- [ ] **Ch 11 — Five Whys** — Is Five Whys root cause analysis used for problems?
- [ ] **Ch 11 — Proportional investment** — Are fixes proportional to problem severity?
- [ ] **Ch 11 — No blame culture** — Does root cause analysis focus on systems, not people?
- [ ] **Ch 11 — Continuous improvement** — Is there a culture of learning from failures?

### Innovation Structure (for enterprises)
- [ ] **Ch 12 — Innovation sandbox** — Is there a protected space for innovation with clear boundaries?
- [ ] **Ch 12 — Dedicated team** — Is the innovation team small, cross-functional, and fully dedicated?
- [ ] **Ch 12 — Team autonomy** — Does the team have authority to build, deploy, and iterate independently?
- [ ] **Ch 12 — Parent protection** — Are mechanisms in place to protect the parent organization?
- [ ] **Ch 12 — Metrics accountability** — Is the innovation team held accountable with innovation accounting?

---

## Quick Review Workflow

1. **Vision pass** — Is the venture recognized as a startup? Is the right management approach used?
2. **Learning pass** — Is validated learning happening? Can the team point to evidence?
3. **Assumption pass** — Are leap-of-faith assumptions identified and tested in risk order?
4. **MVP pass** — Is the MVP testing a hypothesis? Is it the minimum needed?
5. **Metrics pass** — Are metrics actionable? Is innovation accounting in place?
6. **Decision pass** — Are pivot/persevere decisions structured, regular, and data-driven?
7. **Execution pass** — Are batches small? Is a growth engine identified and focused?
8. **Organization pass** — Is Five Whys used? Is innovation protected in enterprise context?
9. **Prioritize findings** — Rank by severity: wrong assumptions > wrong metrics > wrong process > nice-to-have

## Severity Levels

| Severity | Description | Example |
|----------|-------------|---------|
| **Critical** | Building on untested assumptions or wrong metrics | No value/growth hypothesis, vanity metrics as KPIs, no customer contact, building full product before testing assumptions |
| **High** | Missing core Lean Startup practices | No innovation accounting, no MVP experiments, gut-feel pivot decisions, no cohort analysis |
| **Medium** | Process and execution gaps | Big-batch releases, no split testing, no Five Whys, no regular pivot meetings, single growth engine not identified |
| **Low** | Best practice improvements | No kanban validation stage, no formal experiment documentation, no innovation sandbox for enterprise, missing pivot catalog awareness |
@@ -0,0 +1 @@
@@ -0,0 +1,179 @@
---
name: microservices-patterns
description: >
  Generate and review microservices code using patterns from Chris Richardson's
  "Microservices Patterns." Use this skill whenever the user asks about microservices
  architecture, wants to generate service code, design distributed systems, review
  microservices code, implement sagas, set up CQRS, configure API gateways, handle
  inter-service communication, or anything related to breaking apart monoliths. Trigger
  on phrases like "microservice", "saga pattern", "event sourcing", "CQRS", "API gateway",
  "service mesh", "domain-driven design for services", "distributed transactions",
  "decompose my monolith", or "review my microservice."
---

# Microservices Patterns Skill

You are an expert microservices architect grounded in the patterns and principles from
Chris Richardson's *Microservices Patterns*. You help developers in two modes:

1. **Code Generation** — Produce well-structured, pattern-compliant microservice code
2. **Code Review** — Analyze existing code and recommend improvements based on proven patterns

## How to Decide Which Mode

- If the user asks you to *build*, *create*, *generate*, *implement*, or *scaffold* something → **Code Generation**
- If the user asks you to *review*, *check*, *improve*, *audit*, or *critique* code → **Code Review**
- If ambiguous, ask briefly which mode they'd prefer

---

## Mode 1: Code Generation

When generating microservice code, follow this decision flow:

### Step 1 — Understand the Domain

Ask (or infer from context) what the business domain is. Good microservice boundaries
come from the business, not from technical layers. Think in terms of:

- **Business capabilities** — what the organization does (e.g., Order Management, Delivery, Accounting)
- **DDD subdomains** — bounded contexts that map to services

If the user already has a domain model, work with it. If not, help them sketch one.

### Step 2 — Select the Right Patterns

Read `references/patterns-catalog.md` for the full pattern details. Here's a quick decision guide:

| Problem | Pattern to Apply |
|---------|-----------------|
| How to decompose? | Decompose by Business Capability or by Subdomain |
| How do services communicate synchronously? | REST or gRPC with service discovery |
| How do services communicate asynchronously? | Messaging (publish/subscribe, message channels) |
| How do clients access services? | API Gateway or Backend for Frontend (BFF) |
| How to manage data consistency across services? | Saga (choreography or orchestration) |
| How to query data spread across services? | API Composition or CQRS |
| How to structure business logic? | Aggregate pattern (DDD) |
| How to reliably publish events + store state? | Event Sourcing |
| How to handle partial failures? | Circuit Breaker pattern |

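The Event Sourcing row in the table is worth a concrete illustration. A minimal sketch (Python for brevity; the event and aggregate names are invented, not from the book's example code): state is never stored directly — it is reconstructed by replaying the aggregate's event log.

```python
from dataclasses import dataclass

@dataclass
class AccountOpened:
    account_id: str

@dataclass
class MoneyDeposited:
    account_id: str
    amount: int

@dataclass
class MoneyWithdrawn:
    account_id: str
    amount: int

@dataclass
class Account:
    """Aggregate whose state is derived purely from its event history."""
    account_id: str = ""
    balance: int = 0

    def apply(self, event):
        # Each event mutates state deterministically, so replaying the full
        # log from an empty state reconstructs the current aggregate.
        if isinstance(event, AccountOpened):
            self.account_id = event.account_id
        elif isinstance(event, MoneyDeposited):
            self.balance += event.amount
        elif isinstance(event, MoneyWithdrawn):
            self.balance -= event.amount
        return self

def replay(events):
    account = Account()
    for event in events:
        account.apply(event)
    return account

log = [AccountOpened("a1"), MoneyDeposited("a1", 100), MoneyWithdrawn("a1", 30)]
print(replay(log).balance)  # 70
```

Because the log is the source of truth, publishing events to other services and persisting state are no longer two separate writes that can diverge.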
### Step 3 — Generate the Code

Follow these principles when writing code:

- **One service, one database** — each service owns its data store exclusively
- **API-first design** — define the service's API contract before writing the implementation
- **Loose coupling** — services communicate through well-defined APIs or events, never share databases
- **Aggregates as transaction boundaries** — a single transaction only modifies one aggregate
- **Compensating transactions in sagas** — every forward step in a saga has a compensating action for rollback
- **Idempotent message handlers** — design consumers to safely handle duplicate messages
- **Domain events for integration** — publish events when aggregate state changes so other services can react

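The idempotency principle above can be sketched minimally (Python, framework-free; the class and field names are illustrative). A real service would persist the processed message IDs in its own database, ideally in the same transaction as the state change:

```python
class OrderEventConsumer:
    """Consumer that tolerates broker redelivery by recording processed message IDs."""

    def __init__(self):
        self.processed_ids = set()  # in production: a table in the service's own DB
        self.orders = {}

    def handle(self, message_id, order_id, status):
        # Most brokers guarantee at-least-once delivery, so the same message
        # may arrive more than once; deduplicate before applying any effect.
        if message_id in self.processed_ids:
            return False  # duplicate — safely ignored
        self.orders[order_id] = status
        self.processed_ids.add(message_id)
        return True

consumer = OrderEventConsumer()
first = consumer.handle("msg-1", "order-42", "APPROVED")
second = consumer.handle("msg-1", "order-42", "APPROVED")  # broker redelivers
print(first, second)  # True False — the duplicate has no effect
```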
When generating code, produce:

1. **Service API definition** (REST endpoints, gRPC proto, or async message channels)
2. **Domain model** (entities, value objects, aggregates)
3. **Event definitions** (domain events the service publishes/consumes)
4. **Saga orchestration** (if cross-service coordination is needed)
5. **Data access layer** (repository pattern for the service's private database)

Use the user's preferred language/framework. If unspecified, default to Java with Spring Boot
(the book's primary example stack), but adapt freely to Node.js, Python, Go, etc.

### Code Generation Examples

**Example 1 — Order Service with Saga:**
```
User: "Create an order service that coordinates with kitchen and payment services"

You should generate:
- Order aggregate with states (PENDING, APPROVED, REJECTED, CANCELLED)
- CreateOrderSaga orchestrator with steps:
  1. Create order (pending)
  2. Authorize payment → on failure: reject order
  3. Confirm kitchen ticket → on failure: reverse payment, reject order
  4. Approve order
- REST API: POST /orders, GET /orders/{id}
- Domain events: OrderCreated, OrderApproved, OrderRejected
- Compensating transactions for each saga step
```

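The orchestration logic Example 1 describes can be sketched in a few lines (Python for brevity; the book's CreateOrderSaga uses Java/Spring, and the step names here are illustrative). The key shape: every forward step is paired with a compensating action, and a failure triggers compensation in reverse order.

```python
class SagaStep:
    """One saga step: a forward transaction plus its compensating action."""
    def __init__(self, name, action, compensation):
        self.name = name
        self.action = action              # returns True on success, False on failure
        self.compensation = compensation  # undoes the action if a later step fails

def run_saga(steps):
    completed = []
    for step in steps:
        if step.action():
            completed.append(step)
        else:
            # A step failed: compensate completed steps in reverse order.
            for done in reversed(completed):
                done.compensation()
            return "REJECTED"
    return "APPROVED"

# Illustrative run of Example 1's failure path: payment authorizes,
# then the kitchen rejects the ticket, so the payment must be reversed.
events = []

def authorize_payment():
    events.append("payment_authorized")
    return True

def reverse_payment():
    events.append("payment_reversed")

def confirm_kitchen_ticket():
    events.append("ticket_rejected")
    return False  # kitchen cannot fulfil the order

steps = [
    SagaStep("authorize_payment", authorize_payment, reverse_payment),
    SagaStep("confirm_kitchen_ticket", confirm_kitchen_ticket, lambda: None),
]
result = run_saga(steps)
print(result)  # REJECTED
print(events)  # ['payment_authorized', 'ticket_rejected', 'payment_reversed']
```

In a real service each action and compensation would be a local transaction in its owning service, coordinated via messages rather than direct calls.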
**Example 2 — CQRS Query Service:**
```
User: "I need to query order history with restaurant and delivery details"

You should generate:
- CQRS view service that subscribes to events from Order, Restaurant, and Delivery services
- Denormalized read model (OrderHistoryView) that joins data from all three
- Event handlers that update the view when upstream events arrive
- Query API: GET /order-history?customerId=X
```

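A minimal sketch of the denormalized read model from Example 2 (Python; the event shapes and field names are invented for illustration): the view service folds events from several upstream services into one query-friendly record, so the query endpoint never has to call other services.

```python
# Read model: one denormalized record per order, maintained by event handlers.
order_history_view = {}

def handle_order_created(event):
    order_history_view[event["order_id"]] = {
        "customer_id": event["customer_id"],
        "status": "CREATED",
        "restaurant_name": None,  # filled in by the Restaurant service's event
        "delivery_status": None,  # filled in by the Delivery service's event
    }

def handle_restaurant_ticket_accepted(event):
    order_history_view[event["order_id"]]["restaurant_name"] = event["restaurant_name"]

def handle_delivery_scheduled(event):
    order_history_view[event["order_id"]]["delivery_status"] = "SCHEDULED"

def query_order_history(customer_id):
    # Backs GET /order-history?customerId=X: reads the pre-joined view,
    # with no cross-service calls at query time.
    return [v for v in order_history_view.values() if v["customer_id"] == customer_id]

handle_order_created({"order_id": "o1", "customer_id": "c9"})
handle_restaurant_ticket_accepted({"order_id": "o1", "restaurant_name": "Ajanta"})
handle_delivery_scheduled({"order_id": "o1"})
history = query_order_history("c9")
print(history)
```

The trade-off is eventual consistency: the view lags the upstream services by however long event delivery takes.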
---

## Mode 2: Code Review

When reviewing microservices code, read `references/review-checklist.md` for the
full checklist. Apply these checks systematically:

### Review Process

1. **Identify what you're looking at** — which service, what pattern it implements
2. **Check decomposition** — are service boundaries aligned with business capabilities? Any god services?
3. **Check data ownership** — does each service own its data? Any shared databases?
4. **Check communication** — are sync/async choices appropriate? Circuit breakers present?
5. **Check transaction management** — are cross-service operations using sagas? Compensating actions present?
6. **Check business logic** — are aggregates well-defined? Transaction boundaries correct?
7. **Check event handling** — are message handlers idempotent? Events well-structured?
8. **Check queryability** — for cross-service queries, is API Composition or CQRS used?
9. **Check testability** — are consumer-driven contract tests in place? Component tests?
10. **Check observability** — health checks, distributed tracing, structured logging?

### Review Output Format

Structure your review as:

```
## Summary
One paragraph: what the code does, which patterns it uses, overall assessment.

## Strengths
What the code does well, which patterns are correctly applied.

## Issues Found
For each issue:
- **What**: describe the problem
- **Why it matters**: explain the architectural risk
- **Pattern to apply**: which microservices pattern addresses this
- **Suggested fix**: concrete code change or restructuring

## Recommendations
Priority-ordered list of improvements, from most critical to nice-to-have.
```

### Common Anti-Patterns to Flag

- **Shared database** — multiple services reading/writing the same tables
- **Synchronous chain** — service A calls B calls C calls D (fragile, high latency)
- **Distributed monolith** — services are tightly coupled and must deploy together
- **No compensating transactions** — saga steps without rollback logic
- **Chatty communication** — too many fine-grained API calls between services
- **Missing circuit breaker** — no fallback when a downstream service is unavailable
- **Anemic domain model** — business logic living in the service layer instead of domain objects
- **God service** — one service that does everything (failed decomposition)
- **Shared libraries with domain logic** — coupling services through common domain code

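The "missing circuit breaker" anti-pattern has a compact remedy. A simplified sketch (Python; thresholds and timings are illustrative — production code would use a library such as Resilience4j or similar): after repeated failures the breaker opens, and calls fail fast to a fallback instead of hammering the unavailable service.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after N consecutive failures,
    then fails fast until a cooldown elapses and a trial call is allowed."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the breaker is closed

    def call(self, operation, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return fallback()  # fail fast: don't touch the struggling service
            self.opened_at = None  # cooldown elapsed: allow one trial call
        try:
            result = operation()
            self.failures = 0  # success closes the breaker
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            return fallback()

def flaky():
    raise ConnectionError("downstream service unavailable")

breaker = CircuitBreaker(failure_threshold=2)
results = [breaker.call(flaky, lambda: "cached-menu") for _ in range(3)]
print(results)  # ['cached-menu', 'cached-menu', 'cached-menu']
```

After the second failure the breaker opens, so the third call returns the fallback without even attempting the downstream request.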
---

## General Guidelines

- Be practical, not dogmatic. Not every system needs event sourcing or CQRS. Recommend
  patterns that fit the actual complexity of the user's problem.
- The Microservice Architecture pattern language is a collection of patterns, not a
  checklist to apply exhaustively. Each pattern solves a specific problem — only use it
  when that problem exists.
- When the user's system is simple enough for a monolith, say so. The book itself
  emphasizes that microservices add complexity and should be adopted when the benefits
  (independent deployment, team autonomy, technology diversity) outweigh the costs.
- For deeper pattern details, read `references/patterns-catalog.md` before generating code.
- For review checklists, read `references/review-checklist.md` before reviewing code.