@forwardimpact/schema 0.3.0 → 0.6.0
This diff shows the changes between publicly released versions of this package, as published to one of the supported registries. It is provided for informational purposes only and reflects the package contents as they appear in the registry.
- package/bin/fit-schema.js +2 -2
- package/examples/capabilities/business.yaml +1 -1
- package/examples/capabilities/delivery.yaml +9 -7
- package/examples/capabilities/people.yaml +1 -1
- package/examples/capabilities/reliability.yaml +32 -11
- package/examples/capabilities/scale.yaml +1 -1
- package/examples/framework.yaml +1 -1
- package/examples/questions/behaviours/outcome_ownership.yaml +226 -49
- package/examples/questions/behaviours/polymathic_knowledge.yaml +273 -45
- package/examples/questions/behaviours/precise_communication.yaml +246 -52
- package/examples/questions/behaviours/relentless_curiosity.yaml +246 -48
- package/examples/questions/behaviours/systems_thinking.yaml +236 -50
- package/examples/questions/capabilities/business.yaml +107 -0
- package/examples/questions/capabilities/delivery.yaml +104 -0
- package/examples/questions/capabilities/people.yaml +104 -0
- package/examples/questions/capabilities/reliability.yaml +103 -0
- package/examples/questions/capabilities/scale.yaml +103 -0
- package/examples/questions/skills/architecture_design.yaml +102 -51
- package/examples/questions/skills/cloud_platforms.yaml +90 -44
- package/examples/questions/skills/code_quality.yaml +86 -45
- package/examples/questions/skills/data_modeling.yaml +93 -43
- package/examples/questions/skills/devops.yaml +91 -44
- package/examples/questions/skills/full_stack_development.yaml +93 -45
- package/examples/questions/skills/sre_practices.yaml +92 -41
- package/examples/questions/skills/stakeholder_management.yaml +97 -46
- package/examples/questions/skills/team_collaboration.yaml +87 -40
- package/examples/questions/skills/technical_writing.yaml +89 -40
- package/examples/stages.yaml +6 -0
- package/package.json +9 -9
- package/schema/json/behaviour-questions.schema.json +53 -26
- package/schema/json/capability-questions.schema.json +95 -0
- package/schema/json/capability.schema.json +3 -3
- package/schema/json/skill-questions.schema.json +34 -19
- package/schema/json/stages.schema.json +5 -1
- package/schema/rdf/behaviour-questions.ttl +39 -7
- package/schema/rdf/capability.ttl +5 -5
- package/schema/rdf/defs.ttl +3 -3
- package/schema/rdf/skill-questions.ttl +28 -1
- package/schema/rdf/stages.ttl +27 -3
- package/{lib → src}/levels.js +37 -80
- package/{lib → src}/loader.js +9 -5
- package/{lib → src}/modifiers.js +3 -3
- package/{lib → src}/validation.js +74 -37
- package/{lib → src}/index-generator.js +0 -0
- package/{lib → src}/index.js +0 -0
- package/{lib → src}/schema-validation.js +0 -0
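The reworked behaviour question files (and the new `capability-questions.schema.json`) share a per-stage question shape: each stage holds a list of questions with `id`, `text`, `context`, `simulationPrompts`, `lookingFor`, and `expectedDurationMinutes`. As a rough illustration of that shape, here is a minimal structural check; the field and stage names are taken from the diff below, but the helper itself is hypothetical and not part of the package (the package's own validation lives in `src/validation.js`):

```python
# Illustrative sketch only: mirrors the per-stage question layout seen in the
# 0.6.0 question files. Not the package's actual validator.
REQUIRED_FIELDS = {"id", "text", "context", "simulationPrompts",
                   "lookingFor", "expectedDurationMinutes"}
STAGES = {"emerging", "developing", "practicing", "role_modeling", "exemplifying"}


def check_question(question: dict) -> list[str]:
    """Return a list of problems found in a single question entry."""
    problems = [f"missing field: {name}"
                for name in sorted(REQUIRED_FIELDS - question.keys())]
    if not isinstance(question.get("expectedDurationMinutes"), int):
        problems.append("expectedDurationMinutes must be an integer")
    return problems


def check_file(doc: dict) -> list[str]:
    """Check a parsed questions document (both question sets, all stages)."""
    problems = []
    for section in ("professionalQuestions", "managementQuestions"):
        for stage, questions in doc.get(section, {}).items():
            if stage not in STAGES:
                problems.append(f"{section}: unknown stage '{stage}'")
            for question in questions:
                problems.extend(f"{section}.{stage}: {p}"
                                for p in check_question(question))
    return problems
```

A well-formed entry (all required fields, integer duration) yields an empty problem list; a malformed one reports each missing field and any unknown stage key.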
package/examples/questions/behaviours/relentless_curiosity.yaml

@@ -1,50 +1,248 @@ (old lines 3-50 removed; their content is not shown in this diff. The new file content follows.)

```yaml
# yaml-language-server: $schema=https://www.forwardimpact.team/schema/json/behaviour-questions.schema.json

professionalQuestions:
  emerging:
    - id: cur_pro_emerg_1
      text:
        You've been assigned to fix a bug in a service you've never worked on.
        The fix is straightforward but you don't understand why the code is
        structured the way it is.
      context:
        The service handles payment reconciliation. The bug is a simple
        off-by-one error in a date filter. The codebase uses patterns you
        haven't seen before — event sourcing with a custom projection layer.
        Your team lead said "just fix the bug and move on."
      simulationPrompts:
        - Would you just fix the bug or try to understand the broader system?
        - What questions would you ask about the codebase?
        - How would you decide how deep to go given you were told to move on?
        - What would you do if you found something else that looked wrong while
          investigating?
      lookingFor:
        - Shows curiosity beyond the immediate task
        - Asks meaningful questions about why, not just what
        - Willing to explore unfamiliar territory
        - Balances curiosity with practical delivery
      expectedDurationMinutes: 20

  developing:
    - id: cur_pro_dev_1
      text:
        Your team's CI pipeline takes 45 minutes. Everyone accepts it as normal.
        You suspect it could be much faster but nobody has investigated.
      context:
        The pipeline runs unit tests, integration tests, and two rounds of
        linting. It was set up 2 years ago and has been added to but never
        optimized. When you mentioned it might be slow, a senior engineer said
        "it's fine, we're used to it." You have no pipeline expertise.
      simulationPrompts:
        - How would you investigate this despite not being a CI expert?
        - What would your first experiment be?
        - How would you approach learning about pipeline optimization?
        - How do you handle the pushback from the senior engineer who thinks
          it's fine?
      lookingFor:
        - Investigates root causes independently
        - Experiments with unfamiliar technology without fear of failure
        - Seeks to understand how things work rather than accepting the status
          quo
        - Shows self-directed exploration patterns
      expectedDurationMinutes: 20

  practicing:
    - id: cur_pro_pract_1
      text:
        A new AI coding tool claims to reduce development time by 50%. Your
        company is considering adopting it but wants someone to evaluate it
        thoroughly.
      context:
        The tool is a code generation agent that can handle entire user stories.
        Marketing materials are impressive but vague on limitations. Your
        company has 60 engineers. You've been given 2 weeks to evaluate it and
        make a recommendation. No one on your team has used AI agents beyond
        basic copilot tools.
      simulationPrompts:
        - How would you design your evaluation? What would you test?
        - How do you separate marketing claims from reality?
        - What failure modes would you specifically look for?
        - How would you structure your findings to help the organization decide?
      lookingFor:
        - Systematic approach to investigating new technology
        - Treats the evaluation as discovery, not confirmation
        - Protects time for thorough exploration
        - Discovers requirements through immersion in the problem space
      expectedDurationMinutes: 20

  role_modeling:
    - id: cur_pro_role_1
      text:
        Your team consistently delivers features on time but rarely questions
        whether they're building the right things. Product requirements are
        accepted at face value.
      context:
        Over the last 6 months, 2 features were shipped and then unused because
        the underlying assumptions were wrong. The team is highly skilled at
        execution but doesn't push back on requirements or explore the problem
        space. When you've asked "why are we building this?" in planning, you
        get polite but empty answers.
      simulationPrompts:
        - How would you shift the team culture toward questioning and discovery?
        - How do you model curiosity without slowing delivery?
        - What specific practices would you introduce?
        - How do you create safety for engineers to challenge assumptions?
      lookingFor:
        - Drives curiosity through challenging questions
        - Creates environments where exploration is encouraged
        - Models problem discovery orientation
        - Seeks out ambiguity rather than avoiding it
      expectedDurationMinutes: 20

  exemplifying:
    - id: cur_pro_exemp_1
      text:
        Your organization wants to become "AI-native" but most teams treat AI
        tools as autocomplete. There's no culture of experimentation with new AI
        capabilities.
      context:
        The company has 200+ engineers. AI tool licenses are available to
        everyone but usage data shows 80% of usage is basic code completion.
        Only 3 teams have experimented with AI agents. The CTO has asked you to
        lead a transformation toward deeper AI integration. Budget is available
        but cultural resistance is strong.
      simulationPrompts:
        - How do you create a culture of experimentation at organizational
          scale?
        - How do you sponsor initiatives that might fail?
        - How do you influence engineers who are skeptical or intimidated by AI?
        - How would you share learnings across 200+ engineers effectively?
      lookingFor:
        - Shapes organizational culture around curiosity and learning
        - Sponsors experimental initiatives across the organization
        - Recognized as a thought leader in problem discovery
        - Influences practices around innovation and exploration
      expectedDurationMinutes: 20
      followUps:
        - How would you measure whether the organization is becoming more
          curious?

managementQuestions:
  emerging:
    - id: cur_mgmt_emerg_1
      text:
        A team member asks if they can spend a day investigating a new database
        technology. Your sprint is fully committed and you're behind by 2 story
        points.
      context:
        The team member is your most productive engineer. The database
        technology could solve a scaling problem you'll face in 3 months. You've
        never managed competing priorities between exploration and delivery
        before. Your manager tracks sprint velocity closely.
      simulationPrompts:
        - How do you decide whether to approve the exploration time?
        - How would you frame this to your manager?
        - What boundaries would you set on the exploration?
        - How do you make this a learning opportunity regardless of the outcome?
      lookingFor:
        - Supports team learning and exploration
        - Shows awareness of balancing curiosity with delivery
        - Creates space for investigation even under pressure
        - Treats learning as valuable, not wasteful
      expectedDurationMinutes: 20

  developing:
    - id: cur_mgmt_dev_1
      text:
        You notice your team only learns reactively — they pick up new skills
        when forced by a project requirement but never explore proactively.
      context:
        The team of 7 engineers is competent and delivers well. But when a
        project required Kubernetes knowledge, nobody had it and the team
        scrambled. This has happened 3 times in the past year. Engineers say
        they'd love to learn but "there's never time."
      simulationPrompts:
        - How would you create structured opportunities for exploration?
        - How do you make learning feel safe, not like extra work?
        - What would you do if an exploration produces no tangible outcome?
        - How do you balance your delivery commitments with learning time?
      lookingFor:
        - Creates time and space for exploration
        - Balances delivery with learning proactively
        - Encourages experimentation without fear of failure
        - Makes curiosity part of the team's rhythm, not an exception
      expectedDurationMinutes: 20

  practicing:
    - id: cur_mgmt_pract_1
      text:
        Two of your teams have different approaches to adopting new technology.
        One team experiments constantly but ships slowly. The other never
        experiments but delivers predictably.
      context:
        You manage both teams. The experimental team has found valuable
        innovations but missed deadlines 3 times. The predictable team delivers
        on time but their tech stack is becoming outdated. Leadership is
        starting to question the experimental team's reliability.
      simulationPrompts:
        - How do you calibrate the right level of exploration for each team?
        - What would you take from each team's culture to improve the other?
        - How do you protect the experimental team from leadership pressure?
        - How do you encourage the predictable team to question their
          assumptions?
      lookingFor:
        - Builds team culture that values questioning assumptions
        - Calibrates exploration against delivery needs
        - Protects space for curiosity while maintaining accountability
        - Creates sustainable patterns for experimentation
      expectedDurationMinutes: 20

  role_modeling:
    - id: cur_mgmt_role_1
      text:
        Your engineering function is solving the same problems repeatedly across
        teams because there's no culture of sharing discoveries or asking "has
        anyone solved this before?"
      context:
        You lead 3 teams with 25 engineers total. In the last quarter, two teams
        independently built similar caching solutions. Another team
        re-investigated a technology that was evaluated and rejected 6 months
        ago. Engineers are curious individually but there's no collective
        curiosity.
      simulationPrompts:
        - How do you create collective curiosity, not just individual curiosity?
        - What structures would you put in place for sharing discoveries?
        - How do you model curiosity visibly as a leader?
        - How do you make "has anyone solved this?" the first question, not the
          last?
      lookingFor:
        - Models curiosity visibly in leadership
        - Creates environments where discoveries are shared
        - Builds collective inquiry practices
        - Drives curiosity as a cultural value across teams
      expectedDurationMinutes: 20

  exemplifying:
    - id: cur_mgmt_exemp_1
      text:
        The organization wants to launch an internal innovation program but
        previous attempts failed because they felt disconnected from real work.
      context:
        Two previous "hack week" programs were cancelled after low
        participation. Engineers felt the initiatives were performative — good
        ideas went nowhere after the event. Leadership still believes in the
        concept but wants a different approach. You've been asked to design
        something that creates genuine, sustainable curiosity.
      simulationPrompts:
        - Why did previous programs fail and how would yours be different?
        - How do you build organizational systems that reward genuine curiosity?
        - How do you ensure discoveries from the program feed back into real
          work?
        - How do you coach other managers to foster curiosity in their teams?
      lookingFor:
        - Builds organizational systems that reward curiosity
        - Creates sustainable exploration programs, not performative ones
        - Coaches other managers on fostering curiosity
        - Balances predictable delivery with organizational learning
      expectedDurationMinutes: 20
      followUps:
        - How do you handle the inevitable pushback from delivery-focused
          managers?
```
package/examples/questions/behaviours/systems_thinking.yaml

@@ -1,52 +1,238 @@ (old lines 3-52 removed; their content is not shown in this diff. The new file content follows.)

```yaml
# yaml-language-server: $schema=https://www.forwardimpact.team/schema/json/behaviour-questions.schema.json

professionalQuestions:
  emerging:
    - id: sys_pro_emerg_1
      text:
        You are asked to add a caching layer to a frequently called API
        endpoint. The change seems straightforward but the endpoint is used by
        three other services.
      context:
        The endpoint currently handles 500 requests per second. Two of the
        consuming services expect real-time data, while the third can tolerate
        staleness. Your team owns the endpoint but not the consuming services.
      simulationPrompts:
        - What would you check before implementing the cache?
        - How would you find out who depends on this endpoint?
        - What could go wrong with adding caching here?
        - How would you communicate the change to the other teams?
      lookingFor:
        - Considers immediate dependencies before making changes
        - Recognizes that systems have interconnected parts
        - Asks about downstream impact rather than assuming it's safe
        - Shows basic cause-and-effect thinking
      expectedDurationMinutes: 20

  developing:
    - id: sys_pro_dev_1
      text:
        After deploying a database migration, response times on an unrelated
        service have increased by 300%. There is no obvious connection between
        the two.
      context:
        The migration added an index to a high-traffic table. The affected
        service shares the same database cluster but uses different tables.
        Monitoring shows increased lock contention during peak hours.
      simulationPrompts:
        - How would you investigate the connection between these two events?
        - What tools would you use to trace the impact?
        - How would you map the dependencies that led to this?
        - What would you do to resolve it while the investigation continues?
      lookingFor:
        - Identifies upstream and downstream impacts methodically
        - Uses observability tools to trace cross-service effects
        - Maps dependencies before proposing a fix
        - Understands feedback loops in shared infrastructure
      expectedDurationMinutes: 20

  practicing:
    - id: sys_pro_pract_1
      text:
        Your team is proposing to replace a synchronous API with an event-driven
        architecture. The change would affect 6 consuming services across 3
        teams.
      context:
        The current API handles order processing. Moving to events would improve
        throughput but change data consistency guarantees. Two of the consuming
        services have SLAs that depend on synchronous confirmation. Business
        stakeholders want the throughput improvement for peak season.
      simulationPrompts:
        - How would you map the full system impact of this architectural change?
        - How do you handle the teams whose SLAs depend on synchronous
          behaviour?
        - What would your migration approach look like?
        - How would you help business stakeholders understand the trade-offs?
      lookingFor:
        - Maps complex interactions across technical and business domains
        - Anticipates cascading effects of architectural changes
        - Designs migration that degrades gracefully during transition
        - Understands how technology changes impact business operations
      expectedDurationMinutes: 20

  role_modeling:
    - id: sys_pro_role_1
      text:
        A major outage was caused by a cascading failure across 4 services. The
        post-mortem reveals that no single team understood the full dependency
        chain.
      context:
        The cascade started with a memory leak in Service A, which caused
        timeouts in Service B, which triggered retries that overwhelmed Service
        C, which failed over incorrectly to Service D. Each team had local
        monitoring but no one had end-to-end visibility. This is the second
        cascading failure this quarter.
      simulationPrompts:
        - How would you lead the cross-team investigation?
        - What systemic changes would you propose to prevent cascading failures?
        - How would you create shared visibility across these teams?
        - How do you make the case for investing in system-wide resilience?
      lookingFor:
        - Shapes systems design practices across the function
        - Bridges technical systems thinking with business process impact
        - Creates clarity from complexity for multiple stakeholder groups
        - Influences cross-team architecture decisions
      expectedDurationMinutes: 20

  exemplifying:
    - id: sys_pro_exemp_1
      text:
        The organisation is scaling from 20 to 80 microservices. Complexity is
        growing faster than the team's ability to reason about the system.
      context:
        Incident frequency has tripled in 6 months. Teams operate in silos with
        no shared architectural principles. Executive leadership is concerned
        about reliability but doesn't want to slow feature delivery. You've been
        asked to define the systems architecture strategy.
      simulationPrompts:
        - How would you define organizational systems architecture principles?
        - How do you balance team autonomy with system-wide coherence?
        - How would you advise executive leadership on the systemic risks?
        - What governance structures would you put in place?
      lookingFor:
        - Defines organizational systems architecture principles
        - Advises executive leadership on systemic risks and opportunities
        - Creates frameworks that scale with organizational growth
        - Takes an approach recognized as industry-leading
      expectedDurationMinutes: 20
      followUps:
        - How would you measure whether systems thinking is improving?

managementQuestions:
  emerging:
    - id: sys_mgmt_emerg_1
      text:
        A team member made a change to your service that broke a downstream
        consumer. They didn't realize the dependency existed.
      context:
        The downstream team is upset and your team member feels terrible. The
        dependency wasn't documented and there's no integration test covering
        it. Your team has 6 members, most of whom are unfamiliar with the
        broader system context.
      simulationPrompts:
        - How do you help the team member understand what happened?
        - What do you say to the downstream team?
        - How do you help your team understand the broader system context?
        - What would you put in place to prevent similar blind spots?
      lookingFor:
        - Helps team members see how their work fits the broader system
        - Creates context awareness without blame
        - Shows basic understanding of system dependencies
        - Takes steps to improve visibility
      expectedDurationMinutes: 20

  developing:
    - id: sys_mgmt_dev_1
      text:
        Your team is about to ship a major refactor. Another team's manager
        warns you it might affect their service, but your team can't see how.
      context:
        The refactor changes internal data structures but the API contract
        should remain the same. The other team's concern is based on past
        experience where "internal changes" leaked through. You need to decide
        whether to delay the release to investigate.
      simulationPrompts:
        - How do you assess the risk the other manager is raising?
        - How do you help your team think about second-order effects?
        - What would you do if your team disagrees with the delay?
        - How do you build a collaborative relationship with the other team?
      lookingFor:
        - Teaches team to think about downstream impacts
        - Takes cross-team concerns seriously even without proof
        - Facilitates systems thinking in planning decisions
        - Builds collaborative relationships across team boundaries
      expectedDurationMinutes: 20

  practicing:
    - id: sys_mgmt_pract_1
      text:
        Your team needs to plan a quarter of work but keeps getting interrupted
        by production issues caused by other teams' changes affecting your
        service.
      context:
        Your service is a critical dependency for 5 other teams. In the last
        quarter, 40% of your team's time was spent on reactive work caused by
        upstream changes. Your team is frustrated and wants to "build a wall"
        with strict API contracts.
      simulationPrompts:
        - How do you structure your team's work to account for cross-system
          dependencies?
        - How would you work with the other teams rather than building walls?
        - What systemic improvements would you propose?
        - How do you protect your team while maintaining collaborative system
          stewardship?
      lookingFor:
        - Embeds systems thinking into team planning processes
        - Coordinates cross-team to address systemic issues
        - Balances team protection with broader system health
        - Proposes structural improvements, not just coping mechanisms
      expectedDurationMinutes: 20

  role_modeling:
    - id: sys_mgmt_role_1
      text:
        You want to invest in chaos engineering but your leadership sees it as
        unnecessary risk. Meanwhile, cascading failures are becoming more
        frequent.
      context:
        Your function has had 5 cascading failures in 6 months. Each post-mortem
        identifies systemic issues but fixes are always local. Leadership wants
        predictable delivery and sees intentional failure injection as
        dangerous. You manage 3 teams that own core platform services.
      simulationPrompts:
        - How do you make the case for chaos engineering to sceptical
          leadership?
        - How would you implement it safely to build confidence?
        - How do you develop systems thinking capabilities across your teams?
        - What metrics would you use to demonstrate value?
      lookingFor:
        - Models systems thinking in leadership decisions
        - Develops team capabilities for understanding complex systems
        - Makes reasoning visible and transparent to stakeholders
        - Proposes incremental approaches that build trust
      expectedDurationMinutes: 20

  exemplifying:
    - id: sys_mgmt_exemp_1
      text:
        You are leading an organizational initiative to improve system
        reliability, but teams have no shared language or practices for systems
        thinking.
      context:
        You are responsible for a function of 50+ engineers across 8 teams.
        Incident reviews reveal that most outages are caused by teams not
        understanding cross-system impacts. There is no architectural governance
        and teams have conflicting approaches to resilience.
      simulationPrompts:
        - How do you create organizational structures that promote systems
          thinking?
        - How do you balance local team optimization with broader system health?
        - How do you develop a shared language for discussing system complexity?
        - How do you handle complexity that spans multiple team boundaries?
      lookingFor:
        - Creates organizational structures for systems thinking
        - Balances local and global optimization at scale
        - Builds shared understanding of system complexity
        - Takes a strategic, long-term approach to systemic improvement
      expectedDurationMinutes: 20
      followUps:
        - How would you know if systems thinking maturity is improving?
```