agentid-cli 0.1.0

# CONGRESSIONAL AI FRAMING ANALYSIS
## CommDAAF Coding Instructions v1.0

You are a content analyst coding congressional hearing transcripts for how artificial intelligence is framed in U.S. policy discourse. Apply the coding scheme below systematically and consistently.

---

## TASK

For each hearing excerpt provided, identify:

1. **PRIMARY_FRAME**: The single dominant AI frame (required)
2. **SECONDARY_FRAME**: One additional frame if clearly present (optional, use "NONE" if not applicable)
3. **VALENCE**: Overall sentiment toward AI expressed
4. **URGENCY**: Level of temporal pressure/immediacy expressed
5. **POLICY_STANCE**: Position on AI regulation
6. **CONFIDENCE**: Your confidence in this coding (0.0-1.0)

---

## FRAME DEFINITIONS

### 1. INNOVATION
**Definition**: AI framed as economic opportunity, technological progress, competitiveness, job creation, or scientific advancement.

**Theoretical basis**: Technology optimism, economic growth framing (Entman 1993; Nisbet 2009)

**Indicators**:
- References to economic growth, jobs, competitiveness
- Language about "leading," "winning," "advancing"
- Emphasis on benefits, opportunities, potential
- Discussion of U.S. technological leadership
- Mentions of startups, innovation ecosystems, R&D

**Examples (YES)**:
- "AI will create millions of new jobs and transform our economy"
- "We must ensure America leads in this critical technology"
- "The potential for AI to solve healthcare challenges is enormous"

**Counter-examples (NO)**:
- "AI threatens American jobs" → RISK_ECONOMIC
- "We need rules to ensure AI develops safely" → GOVERNANCE

---

### 2. RISK_SAFETY
**Definition**: AI framed as presenting safety risks, existential threats, or potential for catastrophic harm.

**Theoretical basis**: Risk society theory (Beck 1992); technological catastrophism

**Indicators**:
- References to existential risk, catastrophic outcomes
- Language about AI "going wrong," losing control
- Discussion of autonomous weapons, superintelligence
- Mentions of alignment problems, unintended consequences
- Apocalyptic or catastrophic framing

**Examples (YES)**:
- "We cannot rule out the possibility that AI could pose existential risks"
- "Autonomous weapons systems could destabilize global security"
- "If we lose control of these systems, the consequences could be catastrophic"

**Counter-examples (NO)**:
- "AI chatbots are harming children" → RISK_HARM
- "AI will take our jobs" → RISK_ECONOMIC

---

### 3. RISK_HARM
**Definition**: AI framed as causing concrete, immediate harms to individuals or groups (not existential).

**Theoretical basis**: Technology harm framing; consumer protection discourse

**Indicators**:
- References to specific harms: addiction, mental health, exploitation
- Discussion of vulnerable populations (children, elderly)
- Mentions of AI-enabled fraud, manipulation, grooming
- Stories of individuals harmed by AI systems
- Focus on corporate accountability for harms

**Examples (YES)**:
- "AI chatbots are grooming children and leading them to suicide"
- "These algorithms are designed to addict users for profit"
- "Deepfakes are being used to harass and exploit women"

**Counter-examples (NO)**:
- "AI could destroy humanity" → RISK_SAFETY
- "AI will eliminate millions of jobs" → RISK_ECONOMIC

---

### 4. RISK_ECONOMIC
**Definition**: AI framed as threatening jobs, economic disruption, or widening inequality.

**Theoretical basis**: Technological unemployment discourse; labor economics

**Indicators**:
- References to job loss, automation, displacement
- Discussion of economic inequality, concentration of wealth
- Mentions of workers being replaced by AI
- Concerns about economic disruption, transition costs

**Examples (YES)**:
- "Millions of American workers could lose their jobs to AI"
- "AI is concentrating wealth in the hands of a few tech giants"
- "We need to prepare workers for the coming disruption"

**Counter-examples (NO)**:
- "AI will create new jobs" → INNOVATION
- "AI is harming children" → RISK_HARM

---

### 5. GOVERNANCE
**Definition**: AI framed primarily in terms of regulatory approaches, oversight mechanisms, or governance structures.

**Theoretical basis**: Regulatory theory; administrative law discourse

**Indicators**:
- References to specific regulations, laws, oversight bodies
- Discussion of regulatory frameworks (sector-specific vs. horizontal)
- Mentions of compliance, auditing, transparency requirements
- Debate about federal vs. state authority
- Focus on HOW to regulate rather than WHETHER to regulate

**Examples (YES)**:
- "We need a federal framework that preempts the patchwork of state laws"
- "Sector-specific regulators are best positioned to oversee AI"
- "Transparency and auditing requirements would ensure accountability"

**Counter-examples (NO)**:
- "AI is wonderful and shouldn't be regulated" → INNOVATION + anti-regulation stance
- "AI harms require urgent action" → RISK_HARM (governance is secondary)

---

### 6. SOVEREIGNTY
**Definition**: AI framed in terms of national security, geopolitical competition, or foreign threats.

**Theoretical basis**: Securitization theory (Buzan et al. 1998)

**Indicators**:
- References to China, adversaries, geopolitical competition
- Discussion of national security implications
- Mentions of technology transfer, export controls
- Language about "winning" vs. rivals, tech cold war
- Defense and intelligence applications

**Examples (YES)**:
- "China is racing ahead while we debate regulations"
- "We cannot allow adversaries to dominate this critical technology"
- "AI is essential to our national defense"

**Counter-examples (NO)**:
- "We must lead in AI for economic growth" → INNOVATION (unless explicitly about rivals)

---

### 7. RIGHTS
**Definition**: AI framed in terms of civil liberties, discrimination, privacy, or constitutional concerns.

**Theoretical basis**: Digital rights discourse; civil liberties framing

**Indicators**:
- References to privacy, surveillance, data protection
- Discussion of algorithmic discrimination, bias
- Mentions of due process, First Amendment, civil rights
- Concerns about facial recognition, predictive policing
- Focus on individual autonomy and dignity

**Examples (YES)**:
- "AI surveillance threatens our Fourth Amendment rights"
- "Algorithmic systems are perpetuating racial discrimination"
- "People have a right to know when they're interacting with AI"

**Counter-examples (NO)**:
- "AI is harming children" → RISK_HARM (unless specifically about children's privacy rights)

---

### 8. TECHNICAL
**Definition**: AI discussed primarily in technical or scientific terms, focusing on how systems work.

**Theoretical basis**: Expert discourse; technocratic framing

**Indicators**:
- Explanations of AI mechanisms, architectures
- Discussion of model types, training methods
- Technical terminology (LLMs, neural networks, etc.)
- Focus on capabilities and limitations of systems
- Scientific or engineering perspective

**Examples (YES)**:
- "Large language models work by predicting the next token..."
- "The challenge with generative AI is the stochastic nature of outputs"
- "Federated learning allows training without centralizing data"

**Counter-examples (NO)**:
- Technical explanation used to support innovation claims → INNOVATION
- Technical explanation used to explain risks → RISK_*

---

## DECISION HIERARCHY

When multiple frames are present, use this hierarchy to select PRIMARY_FRAME:

1. **Most emphasized**: Which frame receives the most attention/words?
2. **Opening frame**: Which frame appears in the opening statement?
3. **Recommended action**: Which frame drives the policy recommendation?

If still unclear after applying hierarchy, select the frame that appears FIRST.

For SECONDARY_FRAME, select only if a second frame is **clearly and substantially** present (at least 20% of content). Otherwise, code "NONE".
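The selection rules above can be sketched in code. This is a minimal illustration covering criterion 1 (emphasis, operationalized here as each frame's share of the excerpt's words) plus the appears-FIRST fallback; criteria 2 and 3 require qualitative judgment and are omitted. The function and argument names (`select_frames`, `frame_weights`, `first_seen`) are hypothetical, not part of any pipeline.

```python
# Sketch of PRIMARY/SECONDARY frame selection (criterion 1 + order fallback).
# frame_weights: {frame: share of excerpt content, 0.0-1.0}
# first_seen: {frame: index of the frame's first appearance in the excerpt}

def select_frames(frame_weights, first_seen, secondary_threshold=0.20):
    if not frame_weights:
        # No substantive frame detected; fall back to the procedural code.
        return "PROCEDURAL", "NONE"
    # Most emphasized frame wins; ties broken by earliest appearance.
    primary = min(frame_weights,
                  key=lambda f: (-frame_weights[f], first_seen[f]))
    # Secondary only if clearly and substantially present (>= 20%).
    candidates = {f: w for f, w in frame_weights.items()
                  if f != primary and w >= secondary_threshold}
    if not candidates:
        return primary, "NONE"
    secondary = min(candidates,
                    key=lambda f: (-candidates[f], first_seen[f]))
    return primary, secondary
```

For instance, an excerpt that is 60% economic-opportunity language and 30% China-competition language would code as INNOVATION primary, SOVEREIGNTY secondary.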
---

## VALENCE DEFINITIONS

| Value | Definition | Indicators |
|-------|------------|------------|
| **POSITIVE** | AI viewed favorably overall | Benefits emphasized, optimistic language, opportunities highlighted |
| **NEGATIVE** | AI viewed unfavorably overall | Harms emphasized, pessimistic language, threats highlighted |
| **MIXED** | Both positive and negative views expressed substantially | Balance of benefits and risks, nuanced assessment |
| **NEUTRAL** | No clear evaluative stance | Descriptive, factual, procedural language |

---

## URGENCY DEFINITIONS

| Value | Definition | Indicators |
|-------|------------|------------|
| **HIGH** | Immediate action demanded | "Now," "urgent," "crisis," "cannot wait," deadlines |
| **MEDIUM** | Action needed but not emergency | "Should," "need to," "important to address" |
| **LOW** | No particular time pressure | "Consider," "study," "over time," deliberative |

---

## POLICY_STANCE DEFINITIONS

| Value | Definition |
|-------|------------|
| **PRO_REGULATION** | Supports new AI regulations, oversight, or restrictions |
| **ANTI_REGULATION** | Opposes new AI regulations, favors industry self-governance |
| **SECTOR_SPECIFIC** | Supports regulation through existing sector regulators (FDA, FTC, etc.) |
| **FEDERAL_PREEMPTION** | Supports federal law preempting state AI regulations |
| **STATE_AUTHORITY** | Supports state-level AI regulation |
| **NEUTRAL** | No clear regulatory stance, or purely procedural |

---

## OUTPUT FORMAT

Return a JSON array with one object per hearing excerpt:

```json
[
  {
    "id": "CHRG-119hhrg61690",
    "primary_frame": "INNOVATION",
    "secondary_frame": "SOVEREIGNTY",
    "valence": "POSITIVE",
    "urgency": "MEDIUM",
    "policy_stance": "ANTI_REGULATION",
    "confidence": 0.85,
    "rationale": "Opening emphasizes U.S. leadership and economic opportunity; China mentioned as competitor; opposes California-style regulation"
  }
]
```

**Required fields**: id, primary_frame, valence, urgency, policy_stance, confidence
**Optional fields**: secondary_frame (use "NONE" if not applicable), rationale
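Downstream tooling might sanity-check records against this schema before analysis. A minimal sketch: the field names and allowed values come from this document, while the `validate` helper itself is illustrative rather than part of any pipeline.

```python
# Illustrative validator for the coding output schema defined above.
FRAMES = {"INNOVATION", "RISK_SAFETY", "RISK_HARM", "RISK_ECONOMIC",
          "GOVERNANCE", "SOVEREIGNTY", "RIGHTS", "TECHNICAL", "PROCEDURAL"}
VALENCES = {"POSITIVE", "NEGATIVE", "MIXED", "NEUTRAL"}
URGENCIES = {"HIGH", "MEDIUM", "LOW"}
STANCES = {"PRO_REGULATION", "ANTI_REGULATION", "SECTOR_SPECIFIC",
           "FEDERAL_PREEMPTION", "STATE_AUTHORITY", "NEUTRAL"}

def validate(record):
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for field in ("id", "primary_frame", "valence", "urgency",
                  "policy_stance", "confidence"):
        if field not in record:
            problems.append(f"missing required field: {field}")
    if record.get("primary_frame") not in FRAMES:
        problems.append("primary_frame not in taxonomy")
    if record.get("secondary_frame", "NONE") not in FRAMES | {"NONE"}:
        problems.append("secondary_frame not in taxonomy")
    if record.get("valence") not in VALENCES:
        problems.append("invalid valence")
    if record.get("urgency") not in URGENCIES:
        problems.append("invalid urgency")
    if record.get("policy_stance") not in STANCES:
        problems.append("invalid policy_stance")
    conf = record.get("confidence")
    if not isinstance(conf, (int, float)) or not 0.0 <= conf <= 1.0:
        problems.append("confidence must be a number in [0.0, 1.0]")
    return problems
```

Running `validate` on the example object above returns an empty list; a record with `"valence": "HAPPY"` or `"confidence": 1.5` would be flagged.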
273
+
274
+ ---
275
+
276
+ ## CODING RULES
277
+
278
+ 1. **Code what is expressed, not what you infer**: Base coding on explicit statements, not implied positions.
279
+
280
+ 2. **Code the overall excerpt**: Consider the full excerpt, not just memorable quotes.
281
+
282
+ 3. **Multiple speakers**: If excerpt contains multiple speakers with different frames, code the DOMINANT frame (most words/emphasis).
283
+
284
+ 4. **Procedural content**: Excerpts that are purely procedural (roll call, motions) should be coded as:
285
+ - primary_frame: "PROCEDURAL"
286
+ - valence: "NEUTRAL"
287
+ - urgency: "LOW"
288
+ - policy_stance: "NEUTRAL"
289
+
290
+ 5. **Insufficient content**: If excerpt is too short or unclear to code reliably, use:
291
+ - confidence: 0.3 or lower
292
+ - rationale: explain the limitation
293
+
294
+ 6. **Frame combinations**: Common frame combinations:
295
+ - INNOVATION + SOVEREIGNTY: "Beat China" framing
296
+ - RISK_HARM + GOVERNANCE: "Regulate to protect" framing
297
+ - RIGHTS + GOVERNANCE: "Protect civil liberties through law" framing
298
+
299
+ ---
300
+
301
+ ## EXAMPLES
302
+
303
+ ### Example 1: Innovation + Sovereignty
304
+ **Text**: "The United States must lead in artificial intelligence. We are in a global competition with China, and we cannot afford to fall behind. AI will create millions of jobs and transform every sector of our economy. We should not burden our innovators with regulations that will only help our adversaries."
305
+
306
+ **Coding**:
307
+ ```json
308
+ {
309
+ "id": "example_1",
310
+ "primary_frame": "INNOVATION",
311
+ "secondary_frame": "SOVEREIGNTY",
312
+ "valence": "POSITIVE",
313
+ "urgency": "HIGH",
314
+ "policy_stance": "ANTI_REGULATION",
315
+ "confidence": 0.95,
316
+ "rationale": "Emphasizes economic opportunity and job creation (INNOVATION) while framing as competition with China (SOVEREIGNTY). Clearly opposes regulation."
317
+ }
318
+ ```
319
+
320
+ ### Example 2: Risk_Harm + Governance
321
+ **Text**: "AI chatbots are harming our children. We have heard testimony from parents whose children were groomed, manipulated, and led to self-harm by these products. The companies know what is happening and they do nothing. It is time for Congress to act. We need mandatory safety standards and real accountability."
322
+
323
+ **Coding**:
324
+ ```json
325
+ {
326
+ "id": "example_2",
327
+ "primary_frame": "RISK_HARM",
328
+ "secondary_frame": "GOVERNANCE",
329
+ "valence": "NEGATIVE",
330
+ "urgency": "HIGH",
331
+ "policy_stance": "PRO_REGULATION",
332
+ "confidence": 0.95,
333
+ "rationale": "Focuses on specific harms to children (RISK_HARM) and calls for regulatory action (GOVERNANCE). Strong negative valence toward AI/companies."
334
+ }
335
+ ```
336
+
337
+ ### Example 3: Technical
338
+ **Text**: "Large language models like GPT-4 and Claude work by training on vast amounts of text data to predict the next token in a sequence. These models have billions of parameters and require significant computational resources. The key technical challenge is ensuring these systems remain aligned with human values as they become more capable."
339
+
340
+ **Coding**:
341
+ ```json
342
+ {
343
+ "id": "example_3",
344
+ "primary_frame": "TECHNICAL",
345
+ "secondary_frame": "RISK_SAFETY",
346
+ "valence": "NEUTRAL",
347
+ "urgency": "LOW",
348
+ "policy_stance": "NEUTRAL",
349
+ "confidence": 0.85,
350
+ "rationale": "Primarily technical explanation of how LLMs work. Brief mention of alignment suggests RISK_SAFETY as secondary. No policy stance."
351
+ }
352
+ ```
353
+
354
+ ---
355
+
356
+ ## POSTS TO CODE
357
+
358
+ [Content will be inserted here]
[
  {
    "id": "CHRG-119hhrg60318",
    "primary_frame": "SOVEREIGNTY",
    "secondary_frame": "INNOVATION",
    "valence": "POSITIVE",
    "urgency": "HIGH",
    "policy_stance": "NEUTRAL",
    "confidence": 0.90,
    "rationale": "Title is 'Global AI Arms Race'; witness explicitly frames as 'defining competition of 21st century' and states 'whoever leads this agentic AI race is going to shape the rules of the future international order.' Strong geopolitical framing (SOVEREIGNTY) with secondary emphasis on economic growth (INNOVATION). No explicit regulatory stance in excerpt."
  },
  {
    "id": "CHRG-119shrg61468",
    "primary_frame": "RISK_HARM",
    "secondary_frame": "NONE",
    "valence": "NEGATIVE",
    "urgency": "MEDIUM",
    "policy_stance": "PRO_REGULATION",
    "confidence": 0.85,
    "rationale": "Focus on 'growing threat of scams, fraud, and financial exploitation' targeting seniors. Frames AI-enabled fraud as concrete harm to vulnerable population. Implied regulatory stance (fighting fraud requires action). No secondary frame clearly present; purely harm-focused."
  },
  {
    "id": "CHRG-118shrg54677",
    "primary_frame": "GOVERNANCE",
    "secondary_frame": "INNOVATION",
    "valence": "MIXED",
    "urgency": "MEDIUM",
    "policy_stance": "PRO_REGULATION",
    "confidence": 0.85,
    "rationale": "Klobuchar frames as oversight hearing on AI use by federal agencies (Library of Congress, GPO, Smithsonian). States 'our laws need to be as sophisticated as the potential threats.' Acknowledges innovation potential but emphasizes governance structures. Explicit pro-regulation stance."
  },
  {
    "id": "CHRG-118shrg55804",
    "primary_frame": "SOVEREIGNTY",
    "secondary_frame": "INNOVATION",
    "valence": "MIXED",
    "urgency": "HIGH",
    "policy_stance": "NEUTRAL",
    "confidence": 0.90,
    "rationale": "Foreign Relations Committee hearing on 'AI Revolution' in diplomacy/foreign policy. Cardin: 'AI revolution will change economics, societies, entire world... also change diplomacy.' Risch: 'era of strategic competition,' 'U.S. and our allies must lead,' 'national security' implications. Clear sovereignty/geopolitical framing with innovation secondary. High urgency due to 'ongoing wars' context."
  },
  {
    "id": "CHRG-118shrg57016",
    "primary_frame": "TECHNICAL",
    "secondary_frame": "SOVEREIGNTY",
    "valence": "POSITIVE",
    "urgency": "MEDIUM",
    "policy_stance": "NEUTRAL",
    "confidence": 0.40,
    "rationale": "FALSE POSITIVE: This hearing is primarily about FUSION ENERGY, not AI. Mentions 'critical and emerging technologies' and 'global race' but content is fusion technology (ITER, nuclear fusion). Should likely be excluded from AI framing analysis. Low confidence due to off-topic content."
  },
  {
    "id": "CHRG-119hhrg59601",
    "primary_frame": "SOVEREIGNTY",
    "secondary_frame": "INNOVATION",
    "valence": "MIXED",
    "urgency": "HIGH",
    "policy_stance": "NEUTRAL",
    "confidence": 0.90,
    "rationale": "Title is 'Examining Policies to Counter China.' Explicitly discusses 'economic and geopolitical rivalry with China,' mentions 5G race, TikTok, DeepSeek. Chair Hill frames as strategic competition where 'Congress must see the forest.' Clear sovereignty/national competition framing. Nuanced valence - not alarmist but serious."
  },
  {
    "id": "CHRG-119hhrg60684",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Hearing is about veteran Transition Assistance Program (TAP), not AI. Content discusses military-to-civilian transition support. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-119hhrg61123",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Generic 'Legislative Hearing' about VA services and legislative proposals. No AI-specific content in excerpt. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-118hhrg55799",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Hearing on Workforce Innovation and Opportunity Act (WIOA) reauthorization. About job training and workforce development generally, not AI specifically. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-119shrg61235",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Nomination hearing for DOI and DOE officials. Procedural confirmation hearing with no AI-specific content in excerpt. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-119shrg61334",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Hearing about Boeing manufacturing safety issues, not AI. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-119hhrg60837",
    "primary_frame": "RISK_HARM",
    "secondary_frame": "SOVEREIGNTY",
    "valence": "NEGATIVE",
    "urgency": "HIGH",
    "policy_stance": "PRO_REGULATION",
    "confidence": 0.70,
    "rationale": "Hearing on online terrorist recruitment and radicalization. References 'new and emerging threats' including digital platforms. Frames technology as enabling harm (terrorism). Secondary sovereignty frame via CCP and Iran threats. Limited direct AI content but relevant to tech platform risks."
  },
  {
    "id": "CHRG-118shrg57242",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Workforce Innovation and Opportunity Act reauthorization hearing. About job training generally, not AI specifically. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-118shrg56072",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Smithsonian oversight hearing. Institutional oversight, not AI-focused. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-118hhrg53380",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Federal telework policy oversight. Post-pandemic workplace policy, not AI. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-119shrg61428",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Pipeline safety reauthorization hearing. Energy infrastructure, not AI. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-118shrg58574",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Hearing on disability policy. Social services and accessibility, not AI-focused. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-118hhrg55921",
    "primary_frame": "SOVEREIGNTY",
    "secondary_frame": "GOVERNANCE",
    "valence": "MIXED",
    "urgency": "HIGH",
    "policy_stance": "NEUTRAL",
    "confidence": 0.75,
    "rationale": "AUKUS partnership and international security hearing. Under Secretary Jenkins discusses 'leading from a position of innovation during this inflection point.' Focus on arms control, defense partnerships, and emerging technologies in security context. Sovereignty frame dominant; governance secondary via arms control frameworks."
  },
  {
    "id": "CHRG-118shrg57143",
    "primary_frame": "SOVEREIGNTY",
    "secondary_frame": "RIGHTS",
    "valence": "NEGATIVE",
    "urgency": "HIGH",
    "policy_stance": "PRO_REGULATION",
    "confidence": 0.80,
    "rationale": "Cybersecurity policy hearing on 'rising authoritarianism and global competition.' Focus on threats to cyberspace and internet freedom. Sovereignty frame (competition with authoritarians) combined with rights frame (internet freedom). Negative valence toward authoritarian tech use."
  },
  {
    "id": "CHRG-118hhrg55065",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Educational freedom and school choice hearing. Education policy, not AI. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-118shrg54521",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.20,
    "rationale": "INSUFFICIENT DATA: Title is blank/unknown. Cannot reliably code without reviewing full content. Low confidence."
  },
  {
    "id": "CHRG-119shrg62231",
    "primary_frame": "GOVERNANCE",
    "secondary_frame": "INNOVATION",
    "valence": "POSITIVE",
    "urgency": "MEDIUM",
    "policy_stance": "NEUTRAL",
    "confidence": 0.80,
    "rationale": "NTIA Assistant Secretary nomination hearing. Discusses telecommunications and information policy leadership. Focus on governance (regulatory leadership at NTIA) with positive valence toward nominee's expertise. May involve AI policy but excerpt focuses on telecom generally."
  },
  {
    "id": "CHRG-119hhrg58472",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "INSUFFICIENT DATA: Generic 'Legislative Hearing' title. Without full content, cannot determine AI relevance. Low confidence."
  },
  {
    "id": "CHRG-118hhrg57440",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: UAP/UFO hearing ('Unidentified Anomalous Phenomena'). About UAPs, not AI. Should be excluded from AI framing analysis."
  },
  {
    "id": "CHRG-119hhrg58804",
    "primary_frame": "PROCEDURAL",
    "secondary_frame": "NONE",
    "valence": "NEUTRAL",
    "urgency": "LOW",
    "policy_stance": "NEUTRAL",
    "confidence": 0.25,
    "rationale": "FALSE POSITIVE: Animal testing oversight hearing. About animal welfare in research, not AI. Should be excluded from AI framing analysis."
  }
]
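Many of the records above flag false positives or insufficient data through low confidence scores and the PROCEDURAL catch-all. Before aggregating frames, a downstream script would typically drop such records. A minimal sketch, where the 0.5 cutoff and the function names are assumptions rather than anything specified by the codebook:

```python
# Drop records flagged by low confidence or the PROCEDURAL catch-all
# before aggregating primary frames. The 0.5 cutoff is illustrative.

def usable_records(records, min_confidence=0.5):
    return [r for r in records
            if r["confidence"] >= min_confidence
            and r["primary_frame"] != "PROCEDURAL"]

def frame_counts(records):
    # Tally primary frames across the retained records.
    counts = {}
    for r in records:
        counts[r["primary_frame"]] = counts.get(r["primary_frame"], 0) + 1
    return counts
```

Applied to the array above, this would retain the substantive SOVEREIGNTY/RISK_HARM/GOVERNANCE codings and discard the off-topic hearings coded at confidence 0.25 or below.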