canvaslms 5.3__py3-none-any.whl → 5.5__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
canvaslms/cli/quizzes.nw CHANGED
@@ -1,46 +1,148 @@
  \chapter{The \texttt{quizzes} command}
  \label{quizzes-command}
  \chapterprecis{%
- This chapter was originally authored by GitHub Copilot and minimally reviewed
+ This chapter was originally authored by GitHub Copilot and minimally reviewed
  and revised by Daniel Bosk.
  Then later expanded on by Dan-Claude and, finally,
  revised by Daniel Bosk.%
  }
 
- This chapter provides the subcommand [[quizzes]], which provides access to
- Canvas quiz and survey data.
+ This chapter provides the subcommand [[quizzes]], which gives comprehensive
+ access to Canvas quiz and survey functionality. The command supports both
+ Classic Quizzes (the original Canvas quiz system) and New Quizzes (Quizzes.Next).
 
- The [[quizzes]] command has two subcommands:
- \begin{itemize}
- \item [[list]] lists all quizzes (including Classic Quizzes, New Quizzes, and
- surveys) in a course.
- \item [[analyse]] summarizes quiz or survey evaluation data.
- \end{itemize}
+ The [[quizzes]] command has the following subcommands:
+ \begin{description}
+ \item[[[list]]] Lists all quizzes in a course (Classic, New Quizzes, and surveys).
+ \item[[[view]]] Displays full quiz content, including questions and answers.
+ \item[[[analyse]]] Summarizes quiz/survey evaluation data with statistics and AI.
+ \item[[[create]]] Creates a new quiz from JSON (settings and optionally questions).
+ \item[[[edit]]] Modifies quiz settings and instructions.
+ \item[[[delete]]] Removes a quiz.
+ \item[[[export]]] Exports a complete quiz to JSON for backup or migration.
+ \item[[[items]]] Manages quiz questions (list, add, edit, delete, export).
+ \item[[[banks]]] Manages quiz item banks.
+ \end{description}
 
- The [[analyse]] subcommand supports two modes of operation:
- \begin{enumerate}
- \item Fetch quiz/survey data directly from Canvas by specifying the quiz.
- This works reliably for Classic Quizzes and New Quizzes (Quizzes.Next).
- The implementation uses the documented New Quiz Reports API.
- \item Read and analyze a CSV file downloaded from Canvas.
- This is the most reliable method for both Classic and New Quizzes.
- \end{enumerate}
 
- For analysis, the command provides:
- \begin{itemize}
- \item statistical summaries for quantitative (multiple choice, rating) data,
- \item Individual responses for qualitative (free text) data,
- \item AI-generated summaries of qualitative data using the [[llm]] package.
- % XXX and quantitative?
- \end{itemize}
+ \section{Creating and Managing Quizzes}
+ 
+ The [[quizzes]] command provides a complete workflow for quiz backup, migration,
+ and duplication between courses.
+ 
+ \subsection{The export/create workflow}
+ 
+ The recommended workflow for duplicating or migrating quizzes is:
+ \begin{minted}{bash}
+ # 1. Export an existing quiz (with -I for importable format)
+ canvaslms quizzes export -c "Source Course" -a "Midterm Exam" -I > midterm.json
 
+ # 2. Create the quiz in another course
+ canvaslms quizzes create -c "Target Course" -f midterm.json
 
- \section{Usage Examples}
+ # Optionally change the title
+ canvaslms quizzes create -c "Target Course" -f midterm.json --title "New Title"
+ \end{minted}
+ 
+ This workflow exports both quiz settings (title, time limit, instructions, etc.)
+ and all questions in a single JSON file.
+ 
+ \subsection{Creating quizzes from scratch}
+ 
+ To create a new quiz from scratch:
+ \begin{minted}{bash}
+ # See the full JSON format with examples
+ canvaslms quizzes create --example > template.json
+ 
+ # Edit the template and create the quiz
+ canvaslms quizzes create -c "My Course" -f template.json
+ \end{minted}
+ 
+ The [[--example]] flag outputs complete examples for both New Quizzes and
+ Classic Quizzes, including all supported settings and question types.
+ 
+ \subsection{Advanced New Quiz settings}
+ \label{sec:advanced-quiz-settings}
+ 
+ New Quizzes support additional settings for controlling multiple attempts and
+ result visibility. These are specified in the [[quiz_settings]] object within
+ [[settings]]:
+ \begin{verbatim}
+ {
+   "quiz_type": "new",
+   "settings": {
+     "title": "Practice Quiz",
+     "quiz_settings": {
+       "multiple_attempts": { ... },
+       "result_view_settings": { ... }
+     }
+   }
+ }
+ \end{verbatim}
+ 
+ \paragraph{Multiple attempts with waiting periods.}
+ To allow students multiple attempts with a cooling period between attempts:
+ \begin{verbatim}
+ "multiple_attempts": {
+   "multiple_attempts_enabled": true,
+   "attempt_limit": null,
+   "score_to_keep": "latest",
+   "cooling_period": true,
+   "cooling_period_seconds": 3600
+ }
+ \end{verbatim}
+ Here, an [[attempt_limit]] of [[null]] means unlimited attempts. The
+ [[score_to_keep]] can be [[highest]] (default) or [[latest]]. Setting
+ [[cooling_period]] to [[true]] requires [[cooling_period_seconds]] to specify
+ the wait time (3600 seconds = 1 hour). Using [[latest]] with a cooling period
+ lets students ``build on their last result''---they see which questions they
+ got wrong and can retry without losing progress.
+ 
+ \paragraph{Controlling what students see after submission.}
+ To show students their score but hide the correct answers:
+ \begin{verbatim}
+ "result_view_settings": {
+   "display_items": true,
+   "display_item_response": true,
+   "display_item_correct_answer": false,
+   "display_item_feedback": false,
+   "display_points_awarded": true,
+   "display_points_possible": true
+ }
+ \end{verbatim}
+ With these settings, students see their responses and points but cannot see
+ which answers were correct. This is useful for practice quizzes where you want
+ students to keep trying without revealing the answers.
+ 
+ You can also schedule when correct answers become visible using
+ [[display_correct_answer_at]] and [[hide_correct_answer_at]] with ISO 8601
+ timestamps.
+ 
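Put together, the two structures above slot into one create file. The following standalone Python sketch (not part of the package; title and values are illustrative) assembles such a JSON document, the kind of file you would then pass to [[quizzes create]] with [[-f]]:

```python
import json

# Sketch: assemble a New Quiz JSON using the advanced quiz_settings
# structure described above. All concrete values are illustrative.
quiz = {
    "quiz_type": "new",
    "settings": {
        "title": "Practice Quiz",
        "quiz_settings": {
            "multiple_attempts": {
                "multiple_attempts_enabled": True,
                "score_to_keep": "latest",
                "cooling_period": True,
                "cooling_period_seconds": 3600,  # one hour between attempts
            },
            "result_view_settings": {
                "display_points_awarded": True,
                "display_item_correct_answer": False,  # hide correct answers
            },
        },
    },
}

# This output is what you would save and pass to `quizzes create -f ...`.
print(json.dumps(quiz, indent=2))
```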
+ \subsection{Adding questions separately}
+ 
+ You can also create an empty quiz and add questions separately:
+ \begin{minted}{bash}
+ # Create quiz with settings only
+ canvaslms quizzes create -c "My Course" --title "New Quiz" --type new
+ 
+ # Add questions from a JSON file
+ canvaslms quizzes items add -c "My Course" -a "New Quiz" -f questions.json
+ 
+ # See question format examples
+ canvaslms quizzes items add --example
+ \end{minted}
+ 
+ 
+ \section{Analyzing Quiz Results}
+ 
+ The [[quizzes analyse]] command provides statistical analysis and AI-generated
+ summaries of quiz and survey responses. It supports two modes: fetching data
+ directly from Canvas or analyzing a downloaded CSV file.
 
 
  \subsection{Analyzing a CSV file}
 
- Download the Student Analysis Report CSV manually from Canvas and analyze it
- using this command:
+ The most reliable method is to download the Student Analysis Report CSV from
+ Canvas and analyze it locally:
  \begin{minted}{bash}
  canvaslms quizzes analyse --csv survey_results.csv
  \end{minted}
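To illustrate the kind of quantitative summary [[analyse]] produces, here is a hedged sketch of summarizing one rating column of such a CSV with Python's [[csv]] and [[statistics]] modules (both imported by [[quizzes.py]]); the column name and data are made up, since real Student Analysis reports have one column per question:

```python
import csv
import io
import statistics

# Illustrative stand-in for a downloaded Student Analysis report.
csv_text = """name,How useful was the course? (1-5)
Alice,4
Bob,5
Carol,3
"""

# Parse the CSV and summarize the quantitative column.
rows = list(csv.DictReader(io.StringIO(csv_text)))
scores = [int(r["How useful was the course? (1-5)"]) for r in rows]
print(f"n={len(scores)} mean={statistics.mean(scores):.2f} "
      f"stdev={statistics.stdev(scores):.2f}")
# → n=3 mean=4.00 stdev=1.00
```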
@@ -216,6 +318,7 @@ subcommands:
  \item[ [[add_create_command()]] ] Registers the [[quizzes create]] subcommand
  \item[ [[add_edit_command()]] ] Registers the [[quizzes edit]] subcommand
  \item[ [[add_delete_command()]] ] Registers the [[quizzes delete]] subcommand
+ \item[ [[add_export_command()]] ] Registers the [[quizzes export]] subcommand
  \item[ [[add_items_command()]] ] Registers the [[quizzes items]] subcommand group
  \item[ [[add_banks_command()]] ] Registers the [[quizzes banks]] subcommand group
  \end{description}
@@ -223,12 +326,14 @@ subcommands:
  <<[[quizzes.py]]>>=
  import argparse
  import csv
+ import difflib
  import json
  import os
  import re
  import statistics
  import sys
  import time
+ import yaml
  from collections import defaultdict, Counter
  from typing import Dict, List, Any
 
@@ -269,6 +374,10 @@ def add_delete_command(subp):
    """Adds the quizzes delete subcommand to argparse parser subp"""
    <<add quizzes delete command to subp>>
 
+ def add_export_command(subp):
+   """Adds the quizzes export subcommand to argparse parser subp"""
+   <<add quizzes export command to subp>>
+ 
  def add_items_command(subp):
    """Adds the quizzes items subcommand group to argparse parser subp"""
    <<add quizzes items command to subp>>
@@ -288,7 +397,8 @@ The subcommands are organized by workflow:
  content including questions (Chapter~\ref{quiz-view}).
  \item[Analysis] [[analyse]] provides statistical summaries and AI-generated
  insights for quiz/survey responses.
- \item[Management] [[create]], [[edit]], and [[delete]] handle quiz lifecycle.
+ \item[Management] [[create]], [[edit]], [[delete]], and [[export]] handle quiz
+ lifecycle, including backup and migration.
  \item[Items] [[items]] and [[banks]] manage individual questions and item banks.
  \end{description}
 
@@ -308,6 +418,7 @@ add_view_command(quizzes_subp)
  add_create_command(quizzes_subp)
  add_edit_command(quizzes_subp)
  add_delete_command(quizzes_subp)
+ add_export_command(quizzes_subp)
  add_items_command(quizzes_subp)
  add_banks_command(quizzes_subp)
  @
@@ -2287,55 +2398,108 @@ new quizzes.
  \label{sec:quizzes-create}
 
  The [[quizzes create]] command creates a new quiz in a course.
- Users can specify quiz settings via a JSON file or interactively.
+ Users can specify quiz settings and optionally questions via a JSON file.
+ The JSON format supports the complete export/create workflow, allowing users
+ to export a quiz with [[quizzes export]] and create a copy with this command.
 
  <<add quizzes create command to subp>>=
  create_parser = subp.add_parser("create",
    help="Create a new quiz",
-   description="""Create a new quiz in a course. Quiz settings can be
- provided via a JSON file or entered interactively.""")
+   description="""Create a new quiz in a course from a JSON file.
+ 
+ Use --example to see the full JSON format with all supported attributes.
+ The JSON can include both quiz settings and questions, enabling a complete
+ export/create workflow:
+ 
+   canvaslms quizzes export -c "Source Course" -a "Quiz" -I > quiz.json
+   canvaslms quizzes create -c "Target Course" -f quiz.json
+ 
+ JSON STRUCTURE:
+   {
+     "quiz_type": "new" or "classic",
+     "settings": { ... quiz settings ... },
+     "items": [ ... ] (New Quizzes) or "questions": [ ... ] (Classic)
+   }
+ 
+ SETTINGS FOR NEW QUIZZES (time_limit in seconds):
+   title, instructions, time_limit, allowed_attempts, shuffle_questions,
+   shuffle_answers, points_possible, due_at, unlock_at, lock_at
+ 
+ ADVANCED SETTINGS FOR NEW QUIZZES (in settings.quiz_settings):
+   multiple_attempts: attempt_limit, score_to_keep, cooling_period_seconds
+   result_view_settings: display_item_correct_answer, display_item_feedback, etc.
+ 
+ SETTINGS FOR CLASSIC QUIZZES (time_limit in minutes):
+   title, description, quiz_type (assignment/practice_quiz/graded_survey/survey),
+   time_limit, allowed_attempts, shuffle_questions, shuffle_answers,
+   points_possible, published, due_at, unlock_at, lock_at,
+   show_correct_answers, one_question_at_a_time, cant_go_back, access_code
+ 
+ For question format details, see: canvaslms quizzes items add --example""")
 
  create_parser.set_defaults(func=create_command)
 
  try:
-   courses.add_course_option(create_parser, required=True)
+   courses.add_course_option(create_parser, required=False)
  except argparse.ArgumentError:
    pass
 
  create_parser.add_argument("-f", "--file",
-   help="JSON file containing quiz settings",
+   help="JSON file containing quiz settings and optionally questions",
    type=str)
 
  create_parser.add_argument("--type",
    choices=["new", "classic"],
-   default="new",
+   default=None,
    help="Quiz type: 'new' (New Quizzes) or 'classic' (Classic Quizzes). "
-     "Default: new")
+     "Auto-detected from JSON if not specified. Default: new")
 
  create_parser.add_argument("--title", "-t",
-   help="Quiz title (can also be specified in JSON file)")
+   help="Quiz title (overrides title in JSON file)")
+ 
+ create_parser.add_argument("--example", "-E",
+   action="store_true",
+   help="Print example JSON for creating quizzes and exit")
  @
 
 
 
  \subsection{JSON format for quiz creation}
 
- The JSON file format follows the Canvas API structure.
- Here is an example for New Quizzes:
+ The JSON file format supports two structures: a simple settings-only format
+ and a full format that includes questions.
+ 
+ \paragraph{Simple settings format.}
+ For creating a quiz without questions (add questions later with
+ [[quizzes items add]]):
  \begin{verbatim}
  {
    "title": "Midterm Exam",
    "instructions": "<p>Answer all questions.</p>",
    "time_limit": 3600,
-   "allowed_attempts": 2,
-   "shuffle_questions": true,
-   "shuffle_answers": true,
-   "points_possible": 100,
-   "due_at": "2025-03-15T23:59:00Z"
+   "allowed_attempts": 2
+ }
+ \end{verbatim}
+ 
+ \paragraph{Full format with questions.}
+ For creating a complete quiz including questions (the format produced by
+ [[quizzes export]]):
+ \begin{verbatim}
+ {
+   "quiz_type": "new",
+   "settings": {
+     "title": "Midterm Exam",
+     "instructions": "<p>Answer all questions.</p>",
+     "time_limit": 3600
+   },
+   "items": [ ... question items ... ]
  }
  \end{verbatim}
 
- For Classic Quizzes, the format is similar but uses different field names
- (e.g., [[quiz_type]] instead of [[grading_type]]).
+ The command auto-detects the format: if a [[settings]] key is present, it uses
+ the full format; otherwise it treats the entire JSON as settings.
+ 
+ For Classic Quizzes, the format uses [[description]] instead of [[instructions]],
+ [[time_limit]] in minutes (not seconds), and [[questions]] instead of [[items]].
 
 
  \subsection{Processing the create command}
@@ -2344,26 +2508,67 @@ The [[create_command]] function processes the create request, reading settings
  from a JSON file if provided, then calling the appropriate API based on the
  selected quiz type.
 
+ When [[--example]] is provided, we print example JSON for both quiz types and
+ exit immediately without requiring course or file arguments:
+ \begin{minted}{bash}
+ canvaslms quizzes create --example > quiz.json
+ # Edit quiz.json to customize
+ canvaslms quizzes create -c "My Course" -f quiz.json
+ \end{minted}
+ 
+ The command supports two JSON formats:
+ \begin{enumerate}
+ \item \textbf{Full format} with a [[settings]] key: used by [[quizzes export]].
+ \item \textbf{Simple format} without [[settings]]: treats the entire JSON as settings.
+ \end{enumerate}
+ 
+ If the JSON contains [[items]] (New Quizzes) or [[questions]] (Classic Quizzes),
+ those are added after the quiz is created.
+ 
  <<functions>>=
  def create_command(config, canvas, args):
    """Creates a new quiz in a course"""
+   # Handle --example flag first (doesn't require course/file)
+   if getattr(args, 'example', False):
+     print_full_quiz_example_json()
+     return
+ 
+   # Validate required arguments when not using --example
+   if not getattr(args, 'course', None):
+     canvaslms.cli.err(1, "Please specify -c/--course or use --example")
+   if not getattr(args, 'file', None) and not getattr(args, 'title', None):
+     canvaslms.cli.err(1, "Please specify -f/--file or --title or use --example")
+ 
    # Get the course
    course_list = courses.process_course_option(canvas, args)
    if len(course_list) != 1:
      canvaslms.cli.err(1, "Please specify exactly one course for quiz creation")
    course = course_list[0]
 
-   # Read quiz settings from file or use defaults
-   quiz_params = {}
+   # Read quiz data from file or use defaults
+   quiz_data = {}
    if args.file:
      try:
        with open(args.file, 'r', encoding='utf-8') as f:
-         quiz_params = json.load(f)
+         quiz_data = json.load(f)
      except FileNotFoundError:
        canvaslms.cli.err(1, f"File not found: {args.file}")
      except json.JSONDecodeError as e:
        canvaslms.cli.err(1, f"Invalid JSON in {args.file}: {e}")
 
+   # Determine quiz type from args or JSON
+   quiz_type = args.type
+   if quiz_type is None:
+     quiz_type = quiz_data.get('quiz_type', 'new')
+ 
+   # Extract settings: support both full format (with 'settings' key) and simple format
+   if 'settings' in quiz_data:
+     quiz_params = quiz_data['settings'].copy()
+   else:
+     # Simple format: entire JSON is settings (excluding items/questions)
+     quiz_params = {k: v for k, v in quiz_data.items()
+                    if k not in ('quiz_type', 'items', 'questions')}
+ 
    # Command-line title overrides file
    if args.title:
      quiz_params['title'] = args.title
@@ -2372,15 +2577,35 @@ def create_command(config, canvas, args):
    canvaslms.cli.err(1, "Quiz title is required (use --title or include in JSON)")
 
    # Create the quiz
-   if args.type == "new":
-     quiz = create_new_quiz(course, canvas._Canvas__requester, quiz_params)
+   requester = canvas._Canvas__requester
+   if quiz_type == "new":
+     quiz = create_new_quiz(course, requester, quiz_params)
    else:
      quiz = create_classic_quiz(course, quiz_params)
 
-   if quiz:
-     print(f"Created quiz: {quiz_params.get('title')} (ID: {quiz.get('id', 'unknown')})")
-   else:
+   if not quiz:
      canvaslms.cli.err(1, "Failed to create quiz")
+ 
+   quiz_id = quiz.get('id', 'unknown')
+   print(f"Created quiz: {quiz_params.get('title')} (ID: {quiz_id})")
+ 
+   # Add questions if present in JSON
+   items = quiz_data.get('items', [])
+   questions = quiz_data.get('questions', [])
+ 
+   if quiz_type == "new" and items:
+     print(f"Adding {len(items)} question(s)...")
+     success, failed = add_new_quiz_items(course, quiz_id, requester, items)
+     print(f"Added {success} question(s), {failed} failed")
+   elif quiz_type == "classic" and questions:
+     # For classic quizzes, we need to get the quiz object to add questions
+     try:
+       quiz_obj = course.get_quiz(quiz_id)
+       print(f"Adding {len(questions)} question(s)...")
+       success, failed = add_classic_questions(quiz_obj, questions)
+       print(f"Added {success} question(s), {failed} failed")
+     except Exception as e:
+       canvaslms.cli.warn(f"Failed to add questions: {e}")
  @
 
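The full-versus-simple format decision in [[create_command]] can be isolated into a small standalone sketch (the helper name [[extract_settings]] is ours, not part of the package), which makes the auto-detection easy to test:

```python
# Sketch of the format handling in create_command: a 'settings' key selects
# the full format; otherwise the entire JSON is treated as settings, minus
# the metadata keys.
def extract_settings(quiz_data):
    if 'settings' in quiz_data:
        return quiz_data['settings'].copy()
    return {k: v for k, v in quiz_data.items()
            if k not in ('quiz_type', 'items', 'questions')}

full = {"quiz_type": "new", "settings": {"title": "A"}, "items": [1]}
simple = {"title": "B", "time_limit": 3600}
print(extract_settings(full))    # → {'title': 'A'}
print(extract_settings(simple))  # → {'title': 'B', 'time_limit': 3600}
```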
 
@@ -2389,6 +2614,13 @@ def create_command(config, canvas, args):
  The New Quizzes API uses a different endpoint than Classic Quizzes.
  We make a direct POST request to [[/api/quiz/v1/courses/:id/quizzes]].
 
+ The API expects nested parameters for [[quiz_settings]], which contains
+ [[multiple_attempts]] and [[result_view_settings]] sub-structures. We need
+ to flatten these into the format the API expects:
+ \begin{verbatim}
+ quiz[quiz_settings][multiple_attempts][cooling_period_seconds]=3600
+ \end{verbatim}
+ 
  <<functions>>=
  def create_new_quiz(course, requester, quiz_params):
    """Creates a New Quiz via the New Quizzes API
@@ -2396,17 +2628,15 @@ def create_new_quiz(course, requester, quiz_params):
    Args:
      course: Course object
      requester: Canvas API requester for direct HTTP calls
-     quiz_params: Dictionary of quiz parameters
+     quiz_params: Dictionary of quiz parameters, may include nested quiz_settings
 
    Returns:
      Dictionary with created quiz data, or None on failure
    """
    endpoint = f"courses/{course.id}/quizzes"
 
-   # Build the request parameters
-   params = {}
-   for key, value in quiz_params.items():
-     params[f'quiz[{key}]'] = value
+   # Build the request parameters, handling nested quiz_settings
+   params = build_new_quiz_api_params(quiz_params)
 
    try:
      response = requester.request(
@@ -2421,6 +2651,53 @@ def create_new_quiz(course, requester, quiz_params):
      return None
  @
 
+ The [[build_new_quiz_api_params]] function converts our nested dictionary
+ structure into the flat parameter format required by the Canvas API,
+ walking [[quiz_settings]] and its sub-structures.
+ 
+ <<functions>>=
+ def build_new_quiz_api_params(quiz_params):
+   """Converts quiz parameters to Canvas API format
+ 
+   Handles nested structures like quiz_settings.multiple_attempts by
+   flattening them into the format:
+     quiz[quiz_settings][multiple_attempts][key]=value
+ 
+   Args:
+     quiz_params: Dictionary with quiz parameters, may include nested dicts
+ 
+   Returns:
+     Dictionary suitable for passing to requester.request()
+   """
+   params = {}
+ 
+   for key, value in quiz_params.items():
+     if value is None:
+       continue
+ 
+     if key == 'quiz_settings' and isinstance(value, dict):
+       # Handle nested quiz_settings structure
+       for settings_key, settings_value in value.items():
+         if settings_value is None:
+           continue
+ 
+         if isinstance(settings_value, dict):
+           # Handle doubly-nested structures like multiple_attempts,
+           # result_view_settings
+           for nested_key, nested_value in settings_value.items():
+             if nested_value is not None:
+               param_key = f'quiz[quiz_settings][{settings_key}][{nested_key}]'
+               params[param_key] = nested_value
+         else:
+           # Direct quiz_settings value (e.g., shuffle_answers)
+           param_key = f'quiz[quiz_settings][{settings_key}]'
+           params[param_key] = settings_value
+     else:
+       # Top-level quiz parameter
+       params[f'quiz[{key}]'] = value
+ 
+   return params
+ @
+ 
2701
 
2425
2702
  \subsection{Creating a Classic Quiz}
2426
2703
 
@@ -2605,23 +2882,300 @@ QUIZ_SCHEMA = {
  @
 
 
+ \subsection{New Quiz settings schema}
+ \label{sec:new-quiz-settings-schema}
+ 
+ New Quizzes use a more sophisticated settings structure than Classic Quizzes.
+ The [[quiz_settings]] object contains nested structures for multiple attempts
+ and result visibility. These settings are particularly important for formative
+ assessments where students should be able to:
+ \begin{itemize}
+ \item Retry the quiz multiple times (with optional waiting periods)
+ \item See their score but not the correct answers
+ \item Build on their previous attempt rather than starting fresh
+ \end{itemize}
+ 
+ \paragraph{Multiple attempts settings.}
+ The [[multiple_attempts]] structure controls how many times students can take
+ the quiz and what happens between attempts:
+ 
+ <<constants>>=
+ NEW_QUIZ_MULTIPLE_ATTEMPTS_SCHEMA = {
+   'multiple_attempts_enabled': {
+     'default': False,
+     'description': 'Whether multiple attempts are allowed'
+   },
+   'attempt_limit': {
+     'default': True,
+     'description': 'Whether there is a maximum number of attempts (False = unlimited)'
+   },
+   'max_attempts': {
+     'default': 1,
+     'description': 'Maximum number of attempts (only used if attempt_limit is True)'
+   },
+   'score_to_keep': {
+     'default': 'highest',
+     'description': 'Which score to keep: average, first, highest, or latest'
+   },
+   'cooling_period': {
+     'default': False,
+     'description': 'Whether to require a waiting period between attempts'
+   },
+   'cooling_period_seconds': {
+     'default': None,
+     'description': 'Required waiting time between attempts in seconds (e.g., 3600 = 1 hour)'
+   },
+ }
+ @
+ 
+ \paragraph{Result view settings.}
+ The [[result_view_settings]] structure controls what students see after
+ submitting the quiz. This is crucial for formative assessments where you want
+ students to know their score but not memorize correct answers:
+ 
+ <<constants>>=
+ NEW_QUIZ_RESULT_VIEW_SCHEMA = {
+   'result_view_restricted': {
+     'default': False,
+     'description': 'Whether to restrict what students see in results'
+   },
+   'display_points_awarded': {
+     'default': True,
+     'description': 'Show points earned (requires result_view_restricted=True)'
+   },
+   'display_points_possible': {
+     'default': True,
+     'description': 'Show total points possible (requires result_view_restricted=True)'
+   },
+   'display_items': {
+     'default': True,
+     'description': 'Show questions in results (requires result_view_restricted=True)'
+   },
+   'display_item_response': {
+     'default': True,
+     'description': 'Show student responses (requires display_items=True)'
+   },
+   'display_item_response_qualifier': {
+     'default': 'always',
+     'description': 'When to show responses: always, once_per_attempt, after_last_attempt, once_after_last_attempt'
+   },
+   'display_item_response_correctness': {
+     'default': True,
+     'description': 'Show whether answers are correct/incorrect (requires display_item_response=True)'
+   },
+   'display_item_response_correctness_qualifier': {
+     'default': 'always',
+     'description': 'When to show correctness: always, after_last_attempt'
+   },
+   'display_item_correct_answer': {
+     'default': True,
+     'description': 'Show the correct answer (requires display_item_response_correctness=True)'
+   },
+   'display_item_feedback': {
+     'default': True,
+     'description': 'Show item feedback (requires display_items=True)'
+   },
+ }
+ @
+ 
+ 
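One natural use of such a schema is to fill in defaults for keys the user did not set. The following sketch shows that pattern on a trimmed-down schema; the helper [[apply_defaults]] is our illustration, not a function in the package:

```python
# Trimmed-down copy of the schema shape used above, for illustration.
SCHEMA = {
    'multiple_attempts_enabled': {'default': False, 'description': '...'},
    'score_to_keep': {'default': 'highest', 'description': '...'},
}

def apply_defaults(user_settings, schema):
    """Start from the schema defaults and overlay user-supplied values."""
    merged = {key: spec['default'] for key, spec in schema.items()}
    merged.update(user_settings)
    return merged

print(apply_defaults({'score_to_keep': 'latest'}, SCHEMA))
# → {'multiple_attempts_enabled': False, 'score_to_keep': 'latest'}
```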
+ \subsection{File formats for quiz editing}
+ \label{sec:quiz-file-formats}
+ 
+ The [[quizzes edit]] command supports three file formats, auto-detected from
+ the file extension and verified against the content:
+ \begin{description}
+ \item[JSON format] ([[.json]]) Full quiz structure with [[settings]] and
+ optional [[items]]. This is the same format used by [[quizzes export]] and
+ [[quizzes create]], enabling a round-trip workflow.
+ \item[YAML format] ([[.yaml]], [[.yml]]) Same structure as JSON but in YAML
+ syntax. More readable for hand-editing.
+ \item[Front matter format] ([[.md]]) YAML front matter with a Markdown body for
+ instructions. This is the default interactive editing format, focused on
+ editing settings and instructions without touching questions.
+ \end{description}
+ 
+ The format is detected by file extension and verified by checking the first
+ characters of the file:
+ \begin{itemize}
+ \item JSON files start with [[{]]
+ \item YAML full format starts with [[quiz_type:]] or [[settings:]]
+ \item Front matter format starts with [[---]]
+ \end{itemize}
+ 
+ <<functions>>=
3007
+ def detect_quiz_file_format(filepath):
3008
+ """Detect quiz file format from extension and content
3009
+
3010
+ Args:
3011
+ filepath: Path to the file
3012
+
3013
+ Returns:
3014
+ 'json', 'yaml', or 'frontmatter'
3015
+
3016
+ Raises:
3017
+ FileNotFoundError: If file doesn't exist
3018
+ ValueError: If format cannot be determined or extension mismatches content
3019
+ """
3020
+ # Check extension
3021
+ ext = os.path.splitext(filepath)[1].lower()
3022
+
3023
+ # Read file content
3024
+ with open(filepath, 'r', encoding='utf-8') as f:
3025
+ file_content = f.read()
3026
+
3027
+ content_stripped = file_content.lstrip()
3028
+
3029
+ # Determine expected format from extension
3030
+ if ext == '.json':
3031
+ expected = 'json'
3032
+ elif ext in ('.yaml', '.yml'):
3033
+ expected = 'yaml'
3034
+ elif ext == '.md':
3035
+ expected = 'frontmatter'
3036
+ else:
3037
+ expected = None # Auto-detect
3038
+
3039
+ # Detect actual format from content
3040
+ if content_stripped.startswith('{'):
3041
+ actual = 'json'
3042
+ elif content_stripped.startswith('---'):
3043
+ actual = 'frontmatter'
3044
+ elif (content_stripped.startswith('quiz_type:') or
3045
+ content_stripped.startswith('settings:')):
3046
+ actual = 'yaml'
3047
+ else:
3048
+ # For YAML, try parsing to see if it's valid
3049
+ try:
3050
+ data = yaml.safe_load(content_stripped)
3051
+ if isinstance(data, dict) and ('settings' in data or 'title' in data):
3052
+ actual = 'yaml'
3053
+ else:
3054
+ raise ValueError(f"Cannot determine format of {filepath}")
3055
+ except yaml.YAMLError:
3056
+ raise ValueError(f"Cannot determine format of {filepath}")
3057
+
3058
+ # Verify match if extension specified a format
3059
+ if expected and expected != actual:
3060
+ raise ValueError(
3061
+ f"File extension suggests {expected} but content looks like {actual}"
3062
+ )
3063
+
3064
+ return actual, file_content
3065
+ @
3066
+
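The content-based part of these detection rules is small enough to exercise in isolation. A simplified sketch (prefix checks only; the real function additionally checks the extension and falls back to YAML parsing):

```python
# Simplified sketch of the content-sniffing rules described above.
def sniff_format(filename, text):
    stripped = text.lstrip()
    if stripped.startswith('{'):
        return 'json'
    if stripped.startswith('---'):
        return 'frontmatter'
    if stripped.startswith(('quiz_type:', 'settings:')):
        return 'yaml'
    raise ValueError(f"Cannot determine format of {filename}")

print(sniff_format('quiz.json', '{"title": "Quiz"}'))      # → json
print(sniff_format('quiz.md', '---\ntitle: Quiz\n---\n'))  # → frontmatter
print(sniff_format('quiz.yaml', 'quiz_type: new\n'))       # → yaml
```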
+ The [[read_quiz_from_file]] function reads quiz data from any supported format
+ and returns a unified structure that can be used for updating the quiz.
+ 
+ <<functions>>=
+ def read_quiz_from_file(filepath):
+   """Read quiz data from JSON, YAML, or front matter file
+ 
+   Args:
+     filepath: Path to the quiz file
+ 
+   Returns:
+     Dictionary with:
+       'format': 'json'|'yaml'|'frontmatter'
+       'settings': dict of quiz settings
+       'instructions': str or None (body for frontmatter format)
+       'items': list or None (questions, for json/yaml)
+       'quiz_type': 'new'|'classic' or None
+ 
+   Raises:
+     FileNotFoundError: If file doesn't exist
+     ValueError: If file format is invalid
+   """
+   format_type, file_content = detect_quiz_file_format(filepath)
+ 
+   if format_type == 'json':
+     try:
+       data = json.loads(file_content)
+     except json.JSONDecodeError as e:
+       raise ValueError(f"Invalid JSON: {e}")
+     return _parse_quiz_full_format(data, 'json')
+ 
+   elif format_type == 'yaml':
+     try:
+       data = yaml.safe_load(file_content)
+     except yaml.YAMLError as e:
+       raise ValueError(f"Invalid YAML: {e}")
+     return _parse_quiz_full_format(data, 'yaml')
+ 
+   else:  # frontmatter
+     attributes, body = content.parse_yaml_front_matter(file_content)
+     return {
+       'format': 'frontmatter',
+       'settings': attributes,
+       'instructions': body.strip() if body else None,
+       'items': None,
+       'quiz_type': None
+     }
+ 
+ 
3116
+ def _parse_quiz_full_format(data, format_type):
3117
+ """Parse full format (JSON or YAML) quiz data
3118
+
3119
+ Handles both the full format with 'settings' key and the simple format
3120
+ where the entire dict is settings.
3121
+ """
3122
+ if not isinstance(data, dict):
3123
+ raise ValueError(f"Expected a dictionary, got {type(data).__name__}")
3124
+
3125
+ if 'settings' in data:
3126
+ settings = data['settings'].copy()
3127
+ else:
3128
+ # Simple format: entire dict is settings (excluding metadata keys)
3129
+ settings = {k: v for k, v in data.items()
3130
+ if k not in ('quiz_type', 'items', 'questions')}
3131
+
3132
+ # Extract instructions from settings if present
3133
+ instructions = settings.get('instructions')
3134
+
3135
+ return {
3136
+ 'format': format_type,
3137
+ 'settings': settings,
3138
+ 'instructions': instructions,
3139
+ 'items': data.get('items') or data.get('questions'),
3140
+ 'quiz_type': data.get('quiz_type')
3141
+ }
3142
+ @
3143
+
3144
+
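As a standalone illustration of the normalisation that [[_parse_quiz_full_format]] performs, the sketch below (hypothetical name, no error handling) shows how the full format with a [[settings]] key and the simple flat format collapse to the same unified structure:

```python
def normalize_quiz_dict(data, format_type):
    # Full format: explicit 'settings' key; simple format: the whole
    # dict is settings, minus the metadata keys.
    if 'settings' in data:
        settings = dict(data['settings'])
    else:
        settings = {k: v for k, v in data.items()
                    if k not in ('quiz_type', 'items', 'questions')}
    return {
        'format': format_type,
        'settings': settings,
        'instructions': settings.get('instructions'),
        'items': data.get('items') or data.get('questions'),
        'quiz_type': data.get('quiz_type'),
    }

full = normalize_quiz_dict(
    {'quiz_type': 'new', 'settings': {'title': 'Midterm'}}, 'json')
simple = normalize_quiz_dict(
    {'title': 'Midterm', 'instructions': 'Read carefully.'}, 'yaml')
print(full['settings'], simple['settings'])
```

Both calls yield a dictionary with the same keys, which is what lets the rest of the edit workflow treat all input formats uniformly.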
2608
3145
  \subsection{Command-line interface}
2609
3146
 
2610
3147
  The edit command takes course and quiz selection options. The [[-f]] option
2611
- is now optional---without it, interactive mode is used. The [[--html]] option
2612
- preserves raw HTML instead of converting to Markdown.
3148
+ supports multiple file formats (JSON, YAML, Markdown with front matter), which
3149
+ are auto-detected from the file extension. The [[--full-json]] option enables
3150
+ interactive editing in full JSON format (same as [[quizzes export]]).
2613
3151
 
2614
3152
  <<add quizzes edit command to subp>>=
2615
3153
  edit_parser = subp.add_parser("edit",
2616
3154
  help="Edit quiz settings and instructions",
2617
3155
  description="""Edit an existing quiz's settings and instructions.
2618
3156
 
2619
- Without -f: Opens your editor with YAML front matter (settings) and
2620
- Markdown body (instructions). After editing, shows a preview and asks
2621
- whether to accept, edit further, or discard the changes.
3157
+ INTERACTIVE MODE (default):
3158
+ Opens your editor with YAML front matter (settings) and Markdown body
3159
+ (instructions). After editing, shows a preview and asks whether to
3160
+ accept, edit further, or discard the changes.
3161
+
3162
+ Use --full-json to edit as full JSON (same format as 'quizzes export -I').
3163
+ This allows editing all quiz_settings including multiple_attempts and
3164
+ result_view_settings.
3165
+
3166
+ FILE MODE (-f):
3167
+ Reads content from a file. Format is auto-detected from extension:
3168
+ .json - Full JSON format (settings + optional items)
3169
+ .yaml/.yml - Full YAML format (same structure as JSON)
3170
+ .md - YAML front matter + Markdown body
2622
3171
 
2623
- With -f: Reads content from a file for scripted workflows. The file
2624
- should have YAML front matter followed by Markdown (or HTML with --html).
3172
+ The JSON/YAML format is the same as 'quizzes export' output, enabling
3173
+ a round-trip workflow: export, modify, edit.
3174
+
3175
+ ITEM HANDLING:
3176
+ By default, items/questions in the file are ignored to protect student
3177
+ submissions. Use --replace-items to replace all questions (with confirmation
3178
+ if submissions exist).
2625
3179
 
2626
3180
  The quiz type (New or Classic) is auto-detected.""")
2627
3181
 
@@ -2635,13 +3189,24 @@ except argparse.ArgumentError:
2635
3189
  add_quiz_option(edit_parser, required=True)
2636
3190
 
2637
3191
  edit_parser.add_argument("-f", "--file",
2638
- help="Read content from a Markdown/HTML file with YAML front matter",
3192
+ help="Read content from file (format auto-detected: .json, .yaml, .yml, .md)",
2639
3193
  type=str,
2640
3194
  required=False)
2641
3195
 
2642
3196
  edit_parser.add_argument("--html",
2643
3197
  action="store_true",
2644
3198
  help="Edit raw HTML instead of converting to Markdown")
3199
+
3200
+ edit_parser.add_argument("--full-json",
3201
+ action="store_true",
3202
+ help="Interactive mode: edit as full JSON instead of YAML+Markdown. "
3203
+ "Allows editing all quiz_settings including multiple_attempts.")
3204
+
3205
+ edit_parser.add_argument("--replace-items",
3206
+ action="store_true",
3207
+ help="Replace existing questions with items from file. "
3208
+ "Default: ignore items to preserve student attempts. "
3209
+ "Will prompt for confirmation if quiz has submissions.")
2645
3210
  @
2646
3211
 
2647
3212
 
@@ -2663,31 +3228,70 @@ def edit_command(config, canvas, args):
2663
3228
  <<edit quiz interactively>>
2664
3229
  @
2665
3230
 
2666
- In file mode, we read the YAML front matter and body from the file, then
2667
- apply the changes to the quiz. If the file contains an [[id]] field, we use
2668
- it to identify the quiz directly; otherwise we use the filter match.
3231
+ In file mode, we use [[read_quiz_from_file]] to detect the format and parse
3232
+ the content appropriately. The function returns a unified structure that
3233
+ works for all three formats (JSON, YAML, or Markdown with front matter).
3234
+
3235
+ For JSON and YAML files, the structure may contain [[items]] (questions).
3236
+ By default we ignore items to protect student submissions---use
3237
+ [[--replace-items]] to replace them (after confirmation if submissions exist).
2669
3238
 
2670
3239
  <<edit quiz from file>>=
2671
3240
  try:
2672
- attributes, body = content.read_content_from_file(args.file)
3241
+ quiz_data = read_quiz_from_file(args.file)
2673
3242
  except FileNotFoundError:
2674
3243
  canvaslms.cli.err(1, f"File not found: {args.file}")
2675
3244
  except ValueError as e:
2676
3245
  canvaslms.cli.err(1, f"Invalid file format: {e}")
2677
3246
 
2678
- # If id is specified, find that specific quiz
2679
- if attributes.get('id'):
3247
+ <<identify target quiz from file data>>
3248
+ <<handle item replacement if requested>>
3249
+ <<apply settings from file>>
3250
+ @
3251
+
3252
+ We extract [[id]] from either the top level or the settings:
3253
+
3254
+ <<identify target quiz from file data>>=
3255
+ settings = quiz_data['settings']
3256
+ quiz_id = settings.get('id')
3257
+
3258
+ if quiz_id:
2680
3259
  target_quiz = None
2681
3260
  for quiz in quiz_list:
2682
- if str(quiz.id) == str(attributes['id']):
3261
+ if str(quiz.id) == str(quiz_id):
2683
3262
  target_quiz = quiz
2684
3263
  break
2685
3264
  if not target_quiz:
2686
- canvaslms.cli.err(1, f"Quiz with ID {attributes['id']} not found")
3265
+ canvaslms.cli.err(1, f"Quiz with ID {quiz_id} not found")
2687
3266
  quiz_list = [target_quiz]
3267
+ @
3268
+
3269
+ If [[--replace-items]] is specified and the file contains items, we replace
3270
+ the quiz questions. But first we check for student submissions and ask for
3271
+ confirmation if any exist.
3272
+
3273
+ <<handle item replacement if requested>>=
3274
+ items = quiz_data.get('items')
3275
+ if args.replace_items and items:
3276
+ for quiz in quiz_list:
3277
+ <<check submissions and replace items>>
3278
+ elif items:
3279
+ print(f"Note: Ignoring {len(items)} items in file (use --replace-items to update)",
3280
+ file=sys.stderr)
3281
+ @
3282
+
3283
+ The settings are applied using [[apply_quiz_edit]]. For frontmatter format,
3284
+ the body (instructions) is separate; for JSON/YAML it's in settings.
3285
+
3286
+ <<apply settings from file>>=
3287
+ body = quiz_data.get('instructions') or ''
2688
3288
 
2689
3289
  for quiz in quiz_list:
2690
- success = apply_quiz_edit(quiz, attributes, body, requester, args.html)
3290
+ # For JSON/YAML, instructions may be in settings; extract it for body
3291
+ if quiz_data['format'] in ('json', 'yaml'):
3292
+ body = settings.get('instructions', '') or ''
3293
+
3294
+ success = apply_quiz_edit(quiz, settings, body, requester, args.html)
2691
3295
  if success:
2692
3296
  print(f"Updated quiz: {quiz.title}")
2693
3297
  else:
@@ -2695,7 +3299,15 @@ for quiz in quiz_list:
2695
3299
  @
2696
3300
 
2697
3301
  In interactive mode, we process each quiz one at a time, opening the editor
2698
- and showing a preview before applying changes.
3302
+ and showing a preview before applying changes. The [[--full-json]] flag
3303
+ selects between two editing modes:
3304
+ \begin{description}
3305
+ \item[YAML+Markdown] (default) Simple front matter with Markdown body for
3306
+ instructions. Good for quick edits to title, due dates, and instructions.
3307
+ \item[Full JSON] (with [[--full-json]]) Complete JSON export format, including
3308
+ [[quiz_settings]] with [[multiple_attempts]] and [[result_view_settings]].
3309
+ Optionally includes items if [[--replace-items]] is also specified.
3310
+ \end{description}
2699
3311
 
2700
3312
  <<edit quiz interactively>>=
2701
3313
  # Confirm if multiple quizzes match
@@ -2713,7 +3325,13 @@ updated_count = 0
2713
3325
  skipped_count = 0
2714
3326
 
2715
3327
  for quiz in quiz_list:
2716
- result = edit_quiz_interactive(quiz, requester, args.html)
3328
+ if args.full_json:
3329
+ result = edit_quiz_interactive_json(
3330
+ quiz, requester, args.html, args.replace_items
3331
+ )
3332
+ else:
3333
+ result = edit_quiz_interactive(quiz, requester, args.html)
3334
+
2717
3335
  if result == 'updated':
2718
3336
  updated_count += 1
2719
3337
  elif result == 'skipped':
@@ -2743,7 +3361,7 @@ def edit_quiz_interactive(quiz, requester, html_mode=False):
2743
3361
  'updated', 'skipped', or 'error'
2744
3362
  """
2745
3363
  # Extract current quiz attributes including instructions
2746
- current_attrs = extract_quiz_attributes(quiz)
3364
+ current_attrs = extract_quiz_attributes(quiz, requester)
2747
3365
 
2748
3366
  # Get content from editor - instructions becomes the body
2749
3367
  result = content.get_content_from_editor(
@@ -2774,11 +3392,421 @@ def edit_quiz_interactive(quiz, requester, html_mode=False):
2774
3392
  print("Discarded changes.", file=sys.stderr)
2775
3393
  return 'skipped'
2776
3394
 
2777
- final_attrs, final_body = result
3395
+ final_attrs, final_body = result
3396
+
3397
+ # Apply the changes
3398
+ success = apply_quiz_edit(quiz, final_attrs, final_body, requester, html_mode)
3399
+ return 'updated' if success else 'error'
3400
+ @
3401
+
3402
+
3403
+ \subsection{Interactive JSON editing workflow}
3404
+
3405
+ The [[--full-json]] mode provides access to all quiz settings, including the
3406
+ nested [[quiz_settings]] structure that controls [[multiple_attempts]] and
3407
+ [[result_view_settings]]. This uses the same JSON format as [[quizzes export]],
3408
+ enabling a round-trip workflow.
3409
+
3410
+ The workflow is:
3411
+ \begin{enumerate}
3412
+ \item Export the quiz to JSON (same format as [[quizzes export -I]])
3413
+ \item Open the JSON in the user's editor
3414
+ \item Show a diff of the changes and ask for confirmation
3415
+ \item Apply the changes (settings only, unless [[--replace-items]])
3416
+ \end{enumerate}
3417
+
3418
+ <<functions>>=
3419
+ def edit_quiz_interactive_json(quiz, requester, html_mode=False,
3420
+ replace_items=False):
3421
+ """Edit a quiz interactively using full JSON format
3422
+
3423
+ Args:
3424
+ quiz: Quiz object to edit
3425
+ requester: Canvas API requester
3426
+ html_mode: If True, don't convert instructions (not used in JSON mode)
3427
+ replace_items: If True, also update items from the JSON
3428
+
3429
+ Returns:
3430
+ 'updated', 'skipped', or 'error'
3431
+ """
3432
+ import tempfile
3433
+
3434
+ # Export current quiz state to JSON
3435
+ if is_new_quiz(quiz):
3436
+ original_data = export_full_new_quiz(quiz, requester, include_banks=True,
3437
+ importable=not replace_items)
3438
+ else:
3439
+ original_data = export_full_classic_quiz(quiz, importable=not replace_items)
3440
+
3441
+ # If not replacing items, remove items from the export to simplify editing
3442
+ if not replace_items:
3443
+ original_data.pop('items', None)
3444
+ original_data.pop('questions', None)
3445
+ original_json = json.dumps(original_data, indent=2, ensure_ascii=False)
3446
+
3447
+ # Create temp file with .json extension for editor syntax highlighting
3448
+ with tempfile.NamedTemporaryFile(
3449
+ mode='w', suffix='.json', delete=False, encoding='utf-8'
3450
+ ) as f:
3451
+ f.write(original_json)
3452
+ temp_path = f.name
3453
+
3454
+ try:
3455
+ while True:
3456
+ # Open editor
3457
+ edited_json = open_in_editor(temp_path)
3458
+ if edited_json is None:
3459
+ print("Editor cancelled.", file=sys.stderr)
3460
+ return 'skipped'
3461
+
3462
+ # Parse the edited JSON
3463
+ try:
3464
+ edited_data = json.loads(edited_json)
3465
+ except json.JSONDecodeError as e:
3466
+ print(f"Invalid JSON: {e}", file=sys.stderr)
3467
+ response = input("Edit again? [Y/n] ").strip().lower()
3468
+ if response == 'n':
3469
+ return 'skipped'
3470
+ continue
3471
+
3472
+ # Show diff and confirm
3473
+ result = show_json_diff_and_confirm(
3474
+ original_json, edited_json, quiz.title
3475
+ )
3476
+
3477
+ if result == 'accept':
3478
+ break
3479
+ elif result == 'edit':
3480
+ # Update temp file with edited content for next iteration
3481
+ with open(temp_path, 'w', encoding='utf-8') as f:
3482
+ f.write(edited_json)
3483
+ continue
3484
+ else: # discard
3485
+ print("Discarded changes.", file=sys.stderr)
3486
+ return 'skipped'
3487
+ finally:
3488
+ # Clean up temp file
3489
+ try:
3490
+ os.unlink(temp_path)
3491
+ except OSError:
3492
+ pass
3493
+
3494
+ # Apply the changes
3495
+ success = apply_quiz_from_dict(
3496
+ quiz, edited_data, requester, replace_items=replace_items
3497
+ )
3498
+ return 'updated' if success else 'error'
3499
+ @
3500
+
3501
+ \paragraph{Opening the editor.}
3502
+ We use the [[EDITOR]] environment variable, falling back to [[VISUAL]] and finally [[vi]].
3503
+
3504
+ <<functions>>=
3505
+ def open_in_editor(filepath):
3506
+ """Open a file in the user's preferred editor
3507
+
3508
+ Args:
3509
+ filepath: Path to the file to edit
3510
+
3511
+ Returns:
3512
+ The edited file content, or None if editor failed/was cancelled
3513
+ """
3514
+ import subprocess
3515
+
3516
+ editor = os.environ.get('EDITOR', os.environ.get('VISUAL', 'vi'))
3517
+
3518
+ try:
3519
+ subprocess.run([editor, filepath], check=True)
3520
+ except subprocess.CalledProcessError:
3521
+ return None
3522
+ except FileNotFoundError:
3523
+ print(f"Editor '{editor}' not found. Set EDITOR environment variable.",
3524
+ file=sys.stderr)
3525
+ return None
3526
+
3527
+ try:
3528
+ with open(filepath, 'r', encoding='utf-8') as f:
3529
+ return f.read()
3530
+ except IOError:
3531
+ return None
3532
+ @
3533
+
3534
+ \paragraph{Showing the diff and confirming.}
3535
+ We show a unified diff of the changes and ask the user to accept, edit again,
3536
+ or discard.
3537
+
3538
+ <<functions>>=
3539
+ def show_json_diff_and_confirm(original, edited, title):
3540
+ """Show a diff between original and edited JSON, ask for confirmation
3541
+
3542
+ Args:
3543
+ original: Original JSON string
3544
+ edited: Edited JSON string
3545
+ title: Quiz title for display
3546
+
3547
+ Returns:
3548
+ 'accept', 'edit', or 'discard'
3549
+ """
3550
+ if original.strip() == edited.strip():
3551
+ print("No changes detected.")
3552
+ return 'discard'
3553
+
3554
+ # Generate unified diff
3555
+ original_lines = original.splitlines(keepends=True)
3556
+ edited_lines = edited.splitlines(keepends=True)
3557
+
3558
+ diff = list(difflib.unified_diff(
3559
+ original_lines, edited_lines,
3560
+ fromfile='original', tofile='edited',
3561
+ lineterm=''
3562
+ ))
3563
+
3564
+ if not diff:
3565
+ print("No changes detected.")
3566
+ return 'discard'
3567
+
3568
+ # Display diff with colors if terminal supports it
3569
+ print(f"\n--- Changes to: {title} ---")
3570
+ for line in diff:
3571
+ line = line.rstrip('\n')
3572
+ if line.startswith('+') and not line.startswith('+++'):
3573
+ print(f"\033[32m{line}\033[0m") # Green for additions
3574
+ elif line.startswith('-') and not line.startswith('---'):
3575
+ print(f"\033[31m{line}\033[0m") # Red for deletions
3576
+ elif line.startswith('@@'):
3577
+ print(f"\033[36m{line}\033[0m") # Cyan for line numbers
3578
+ else:
3579
+ print(line)
3580
+ print()
3581
+
3582
+ # Prompt for action
3583
+ while True:
3584
+ response = input("[A]ccept, [E]dit, [D]iscard? ").strip().lower()
3585
+ if response in ('a', 'accept'):
3586
+ return 'accept'
3587
+ elif response in ('e', 'edit'):
3588
+ return 'edit'
3589
+ elif response in ('d', 'discard'):
3590
+ return 'discard'
3591
+ print("Please enter A, E, or D.")
3592
+ @
3593
+
3594
+ \paragraph{Applying changes from JSON.}
3595
+ We extract settings from the edited JSON and apply them. If
3596
+ [[replace_items]] is True and items are present, we also update questions.
3597
+
3598
+ <<functions>>=
3599
+ def apply_quiz_from_dict(quiz, data, requester, replace_items=False):
3600
+ """Apply quiz changes from a dictionary (parsed JSON/YAML)
3601
+
3602
+ Args:
3603
+ quiz: Quiz object to update
3604
+ data: Dictionary with settings and optional items
3605
+ requester: Canvas API requester
3606
+ replace_items: If True, replace quiz items
3607
+
3608
+ Returns:
3609
+ True on success, False on failure
3610
+ """
3611
+ # Extract settings - handle both 'settings' key and flat structure
3612
+ if 'settings' in data:
3613
+ settings = data['settings'].copy()
3614
+ else:
3615
+ # Flat structure: everything except 'items' and 'quiz_type' is settings
3616
+ settings = {k: v for k, v in data.items()
3617
+ if k not in ('items', 'questions', 'quiz_type')}
3618
+
3619
+ # Extract instructions/body
3620
+ body = settings.get('instructions', '') or ''
3621
+
3622
+ # Apply settings
3623
+ success = apply_quiz_edit(quiz, settings, body, requester, html_mode=True)
3624
+
3625
+ if not success:
3626
+ return False
3627
+
3628
+ # Handle items if requested
3629
+ items = data.get('items') or data.get('questions')
3630
+ if replace_items and items:
3631
+ item_success = replace_quiz_items(quiz, items, requester)
3632
+ if not item_success:
3633
+ canvaslms.cli.warn("Settings updated but failed to replace items")
3634
+ return False
3635
+
3636
+ return True
3637
+ @
3638
+
3639
+
3640
+ \subsection{Submission checking and item replacement}
3641
+
3642
+ Before replacing quiz items, we should check if students have already started
3643
+ the quiz. Replacing items after students have submitted could invalidate their
3644
+ work, so we warn and ask for confirmation.
3645
+
3646
+ <<functions>>=
3647
+ def get_quiz_submission_count(quiz, requester):
3648
+ """Get the number of student submissions for a quiz
3649
+
3650
+ Args:
3651
+ quiz: Quiz object
3652
+ requester: Canvas API requester (currently unused; kept for a uniform signature)
3653
+
3654
+ Returns:
3655
+ Number of submissions, or -1 if unable to determine
3656
+ """
3657
+ try:
3658
+ if is_new_quiz(quiz):
3659
+ # New Quizzes are assignments - use standard Canvas API.
3660
+ # The quiz.id is actually the assignment_id.
3661
+ # Note: canvasapi returns NewQuiz.id as string (bug/inconsistency),
3662
+ # but get_assignment() requires int.
3663
+ assignment = quiz.course.get_assignment(int(quiz.id))
3664
+ submissions = list(assignment.get_submissions())
3665
+ # Count submissions that have been submitted (not just placeholder records)
3666
+ return sum(1 for s in submissions if s.workflow_state == 'submitted'
3667
+ or s.workflow_state == 'graded'
3668
+ or getattr(s, 'submitted_at', None) is not None)
3669
+ else:
3670
+ # Classic Quiz: use canvasapi
3671
+ submissions = list(quiz.get_submissions())
3672
+ # Count only actual submissions (not just generated records)
3673
+ return sum(1 for s in submissions if s.workflow_state != 'settings_only')
3674
+ except Exception as e:
3675
+ canvaslms.cli.warn(f"Could not check submissions: {e}")
3676
+ return -1
3677
+ @
3678
+
3679
+ The [[replace_quiz_items]] function handles the complete process of replacing
3680
+ quiz items. It first checks for submissions, then deletes existing items,
3681
+ and finally creates the new items.
3682
+
3683
+ <<functions>>=
3684
+ def replace_quiz_items(quiz, items, requester):
3685
+ """Replace all items in a quiz with new ones
3686
+
3687
+ Args:
3688
+ quiz: Quiz object
3689
+ items: List of item dictionaries (from export format)
3690
+ requester: Canvas API requester
3691
+
3692
+ Returns:
3693
+ True on success, False on failure
3694
+ """
3695
+ # Check for submissions first
3696
+ submission_count = get_quiz_submission_count(quiz, requester)
3697
+
3698
+ if submission_count > 0:
3699
+ print(f"\nWarning: This quiz has {submission_count} student submission(s).",
3700
+ file=sys.stderr)
3701
+ print("Replacing items will invalidate existing responses!", file=sys.stderr)
3702
+ response = input("Continue anyway? [y/N] ").strip().lower()
3703
+ if response != 'y':
3704
+ print("Item replacement cancelled.")
3705
+ return False
3706
+ elif submission_count < 0:
3707
+ print("\nWarning: Could not determine submission count.", file=sys.stderr)
3708
+ response = input("Continue with item replacement? [y/N] ").strip().lower()
3709
+ if response != 'y':
3710
+ print("Item replacement cancelled.")
3711
+ return False
3712
+
3713
+ # Delete existing items
3714
+ if is_new_quiz(quiz):
3715
+ delete_success = delete_all_new_quiz_items(quiz, requester)
3716
+ else:
3717
+ delete_success = delete_all_classic_quiz_questions(quiz)
3718
+
3719
+ if not delete_success:
3720
+ canvaslms.cli.warn("Failed to delete existing items")
3721
+ return False
3722
+
3723
+ # Create new items
3724
+ if is_new_quiz(quiz):
3725
+ create_success = add_new_quiz_items(quiz.course, quiz.id, requester, items)
3726
+ else:
3727
+ create_success = add_classic_questions(quiz, items)
3728
+
3729
+ return create_success
3730
+ @
3731
+
3732
+ \paragraph{Deleting New Quiz items.}
3733
+ We first fetch all existing items, then delete each one.
3734
+
3735
+ <<functions>>=
3736
+ def delete_all_new_quiz_items(quiz, requester):
3737
+ """Delete all items from a New Quiz
3738
+
3739
+ Args:
3740
+ quiz: Quiz object
3741
+ requester: Canvas API requester
3742
+
3743
+ Returns:
3744
+ True on success, False on failure
3745
+ """
3746
+ try:
3747
+ # Fetch existing items
3748
+ endpoint = f"courses/{quiz.course.id}/quizzes/{quiz.id}/items"
3749
+ response = requester.request(
3750
+ method='GET',
3751
+ endpoint=endpoint,
3752
+ _url="new_quizzes"
3753
+ )
3754
+ items = response.json()
3755
+
3756
+ if not items:
3757
+ return True # Nothing to delete
3758
+
3759
+ # Delete each item
3760
+ for item in items:
3761
+ item_id = item.get('id')
3762
+ if item_id:
3763
+ delete_endpoint = f"courses/{quiz.course.id}/quizzes/{quiz.id}/items/{item_id}"
3764
+ requester.request(
3765
+ method='DELETE',
3766
+ endpoint=delete_endpoint,
3767
+ _url="new_quizzes"
3768
+ )
3769
+
3770
+ return True
3771
+ except Exception as e:
3772
+ canvaslms.cli.warn(f"Failed to delete New Quiz items: {e}")
3773
+ return False
3774
+ @
3775
+
3776
+ \paragraph{Deleting Classic Quiz questions.}
3777
+ For Classic Quizzes, we use the canvasapi library.
3778
+
3779
+ <<functions>>=
3780
+ def delete_all_classic_quiz_questions(quiz):
3781
+ """Delete all questions from a Classic Quiz
3782
+
3783
+ Args:
3784
+ quiz: Classic Quiz object
3785
+
3786
+ Returns:
3787
+ True on success, False on failure
3788
+ """
3789
+ try:
3790
+ questions = list(quiz.get_questions())
3791
+
3792
+ for question in questions:
3793
+ question.delete()
3794
+
3795
+ return True
3796
+ except Exception as e:
3797
+ canvaslms.cli.warn(f"Failed to delete Classic Quiz questions: {e}")
3798
+ return False
3799
+ @
3800
+
3801
+ The chunk [[<<check submissions and replace items>>]] combines submission
3802
+ checking with item replacement for the file-based workflow:
2778
3803
 
2779
- # Apply the changes
2780
- success = apply_quiz_edit(quiz, final_attrs, final_body, requester, html_mode)
2781
- return 'updated' if success else 'error'
3804
+ <<check submissions and replace items>>=
3805
+ item_success = replace_quiz_items(quiz, items, requester)
3806
+ if item_success:
3807
+ print(f"Replaced items for quiz: {quiz.title}")
3808
+ else:
3809
+ canvaslms.cli.warn(f"Failed to replace items for quiz: {quiz.title}")
2782
3810
  @
2783
3811
 
2784
3812
 
@@ -2789,15 +3817,20 @@ the quiz object. We use [[QUIZ_SCHEMA]] to determine which attributes to
2789
3817
  extract, and also include [[instructions]] separately (since it becomes the
2790
3818
  body content, not a YAML attribute).
2791
3819
 
3820
+ For New Quizzes, we also extract [[quiz_settings]] which contains the
3821
+ important [[multiple_attempts]] and [[result_view_settings]] structures.
3822
+
2792
3823
  <<functions>>=
2793
- def extract_quiz_attributes(quiz):
3824
+ def extract_quiz_attributes(quiz, requester=None):
2794
3825
  """Extract editable attributes from a quiz object
2795
3826
 
2796
3827
  Args:
2797
3828
  quiz: Quiz object (New Quiz or Classic Quiz)
3829
+ requester: Canvas API requester (needed for New Quiz settings)
2798
3830
 
2799
3831
  Returns:
2800
3832
  Dictionary of attributes matching QUIZ_SCHEMA, plus 'instructions'
3833
+ and 'quiz_settings' (for New Quizzes)
2801
3834
  """
2802
3835
  attrs = {}
2803
3836
 
@@ -2818,12 +3851,47 @@ def extract_quiz_attributes(quiz):
2818
3851
  # Add instructions (not in schema, but needed for content_attr)
2819
3852
  if is_new_quiz(quiz):
2820
3853
  attrs['instructions'] = getattr(quiz, 'instructions', '') or ''
3854
+ # Fetch quiz_settings for New Quizzes
3855
+ if requester:
3856
+ quiz_settings = fetch_new_quiz_settings(quiz, requester)
3857
+ if quiz_settings:
3858
+ attrs['quiz_settings'] = quiz_settings
2821
3859
  else:
2822
3860
  attrs['instructions'] = getattr(quiz, 'description', '') or ''
2823
3861
 
2824
3862
  return attrs
2825
3863
  @
2826
3864
 
3865
+ \paragraph{Fetching New Quiz settings.}
3866
+ The New Quizzes API returns [[quiz_settings]] as part of the quiz object,
3867
+ but the [[canvasapi]] library may not expose all fields. We make a direct
3868
+ API call to get the complete settings.
3869
+
3870
+ <<functions>>=
3871
+ def fetch_new_quiz_settings(quiz, requester):
3872
+ """Fetch quiz_settings from the New Quizzes API
3873
+
3874
+ Args:
3875
+ quiz: Quiz object (must have .id and .course attributes)
3876
+ requester: Canvas API requester
3877
+
3878
+ Returns:
3879
+ Dictionary with quiz_settings, or None if unavailable
3880
+ """
3881
+ try:
3882
+ endpoint = f"courses/{quiz.course.id}/quizzes/{quiz.id}"
3883
+ response = requester.request(
3884
+ method='GET',
3885
+ endpoint=endpoint,
3886
+ _url="new_quizzes"
3887
+ )
3888
+ data = response.json()
3889
+ return data.get('quiz_settings', None)
3890
+ except Exception as e:
3891
+ canvaslms.cli.warn(f"Failed to fetch New Quiz settings: {e}")
3892
+ return None
3893
+ @
3894
+
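A response from that endpoint nests the fields of interest under [[quiz_settings]]. The payload shape below is assumed from the surrounding text, so treat the sketch as illustrative:

```python
def extract_settings(payload):
    """Pull quiz_settings (and two nested fields of interest) from a
    parsed New Quizzes API response."""
    qs = payload.get('quiz_settings') or {}
    return {
        'quiz_settings': qs,
        'multiple_attempts': qs.get('multiple_attempts'),
        'result_view_settings': qs.get('result_view_settings'),
    }

payload = {
    'id': 42, 'title': 'Midterm',
    'quiz_settings': {
        'multiple_attempts': {'multiple_attempts_enabled': True},
        'result_view_settings': {'result_view_restricted': False},
    },
}
print(extract_settings(payload)['multiple_attempts'])
```

Using `payload.get(...) or {}` rather than a plain `get` with a default also guards against the API returning an explicit `null` for [[quiz_settings]].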
2827
3895
 
2828
3896
  \subsection{Applying quiz edits}
2829
3897
 
@@ -2880,7 +3948,7 @@ def quiz_attributes_to_api_params(attributes, is_new, html_body):
2880
3948
  html_body: HTML content for instructions/description
2881
3949
 
2882
3950
  Returns:
2883
- Dictionary suitable for Canvas API
3951
+ Dictionary suitable for Canvas API (nested for New Quizzes)
2884
3952
  """
2885
3953
  params = {}
2886
3954
 
@@ -2909,11 +3977,19 @@ def quiz_attributes_to_api_params(attributes, is_new, html_body):
2909
3977
  continue
2910
3978
 
2911
3979
  # Skip hide_results for New Quizzes: result visibility is controlled
2912
- # differently in the New Quizzes interface (through the "Restrict student
2913
- # result view" setting), not via this API parameter.
3980
+ # through quiz_settings.result_view_settings, not this parameter.
2914
3981
  if key == 'hide_results' and is_new:
2915
3982
  continue
2916
3983
 
3984
+ # Pass through quiz_settings as-is for New Quizzes
3985
+ if key == 'quiz_settings' and is_new:
3986
+ params['quiz_settings'] = value
3987
+ continue
3988
+
3989
+ # Skip instructions - handled separately as body
3990
+ if key == 'instructions':
3991
+ continue
3992
+
2917
3993
  params[key] = value
2918
3994
 
2919
3995
  # Add body with appropriate field name (include even if empty to allow clearing)
@@ -2928,6 +4004,9 @@ def quiz_attributes_to_api_params(attributes, is_new, html_body):
2928
4004
 
2929
4005
  \subsection{Updating a New Quiz}
2930
4006
 
4007
+ Updating a New Quiz uses the same nested parameter structure as creation.
4008
+ We reuse the [[build_new_quiz_api_params]] function to handle the conversion.
4009
+
2931
4010
  <<functions>>=
2932
4011
  def update_new_quiz(course, assignment_id, requester, quiz_params):
2933
4012
  """Updates a New Quiz via the New Quizzes API
@@ -2936,16 +4015,15 @@ def update_new_quiz(course, assignment_id, requester, quiz_params):
2936
4015
  course: Course object
2937
4016
  assignment_id: The quiz/assignment ID
2938
4017
  requester: Canvas API requester
2939
- quiz_params: Dictionary of parameters to update
4018
+ quiz_params: Dictionary of parameters to update, may include nested quiz_settings
2940
4019
 
2941
4020
  Returns:
2942
4021
  True on success, False on failure
2943
4022
  """
2944
4023
  endpoint = f"courses/{course.id}/quizzes/{assignment_id}"
2945
4024
 
2946
- params = {}
2947
- for key, value in quiz_params.items():
2948
- params[f'quiz[{key}]'] = value
4025
+ # Build the request parameters, handling nested quiz_settings
4026
+ params = build_new_quiz_api_params(quiz_params)
2949
4027
 
2950
4028
  try:
2951
4029
  requester.request(
@@ -3104,6 +4182,213 @@ def delete_classic_quiz(quiz):
3104
4182
  @
3105
4183
 
3106
4184
 
4185
+ \section{The [[quizzes export]] subcommand}
+ \label{sec:quizzes-export}
+
+ The [[quizzes export]] command exports a complete quiz (settings and questions)
+ to JSON format. The output is designed to be directly usable with
+ [[quizzes create]], enabling a complete backup and migration workflow:
+ \begin{minted}{bash}
+ # Export a quiz from one course
+ canvaslms quizzes export -c "Source Course" -a "Midterm" -I > midterm.json
+
+ # Create the same quiz in another course
+ canvaslms quizzes create -c "Target Course" -f midterm.json
+ \end{minted}
+
+ The [[--importable]] flag produces clean JSON suitable for import, stripping
+ Canvas-specific IDs and metadata that would conflict when creating a new quiz.
+
+ <<add quizzes export command to subp>>=
+ export_parser = subp.add_parser("export",
+     help="Export a complete quiz to JSON",
+     description="""Export a quiz (settings and questions) to JSON format.
+
+ The output can be directly used with 'quizzes create' to duplicate a quiz
+ in another course or create a backup.
+
+ WORKFLOW EXAMPLE:
+   # Export quiz from source course
+   canvaslms quizzes export -c "Course A" -a "Quiz Name" -I > quiz.json
+
+   # Create identical quiz in target course
+   canvaslms quizzes create -c "Course B" -f quiz.json
+
+ OUTPUT FORMAT:
+   {
+     "quiz_type": "new" or "classic",
+     "settings": { ... quiz settings ... },
+     "items": [ ... ] (New Quizzes) or "questions": [ ... ] (Classic)
+   }
+
+ Use --importable/-I for clean JSON ready for 'quizzes create'.
+ Without -I, the output includes Canvas IDs and metadata for reference.""")
+
+ export_parser.set_defaults(func=export_command)
+
+ try:
+     courses.add_course_option(export_parser, required=True)
+ except argparse.ArgumentError:
+     pass
+
+ export_parser.add_argument("-a", "--assignment",
+     required=True,
+     help="Regex matching quiz title or Canvas ID")
+
+ export_parser.add_argument("--importable", "-I",
+     action="store_true",
+     help="Output clean JSON directly usable with 'quizzes create' command")
+
+ export_parser.add_argument("--include-banks", "-B",
+     action="store_true",
+     default=True,
+     help="Include questions from referenced item banks (default: true)")
+
+ export_parser.add_argument("--no-banks",
+     action="store_true",
+     help="Don't expand item bank references")
+ @
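As a concrete illustration (not part of the chapter's code), the following sketch shows why [[--no-banks]] is the operative switch: because [[--include-banks]] is a [[store_true]] flag with [[default=True]], it can never be turned off by itself, and bank expansion is controlled entirely by the combination computed in [[export_command]].

```python
import argparse

# Minimal sketch (stub parser, not the package's) of how the two
# bank-related flags interact in the export command.
parser = argparse.ArgumentParser()
parser.add_argument("--include-banks", "-B", action="store_true", default=True)
parser.add_argument("--no-banks", action="store_true")

args = parser.parse_args([])  # no flags given: banks are included
include_banks_default = args.include_banks and not args.no_banks

args = parser.parse_args(["--no-banks"])  # banks explicitly disabled
include_banks_nobanks = args.include_banks and not args.no_banks

print(include_banks_default, include_banks_nobanks)  # → True False
```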
+
+
+ \subsection{Processing the export command}
+
+ The export command finds the quiz, extracts its settings, and exports all
+ questions. We reuse the existing [[export_new_quiz_items]] and
+ [[export_classic_questions]] functions for question export.
+
+ <<functions>>=
+ def export_command(config, canvas, args):
+     """Exports a complete quiz (settings + questions) to JSON"""
+     # Find the quiz
+     course_list = courses.process_course_option(canvas, args)
+     quiz_list = list(filter_quizzes(course_list, args.assignment))
+
+     if not quiz_list:
+         canvaslms.cli.err(1, f"No quiz found matching: {args.assignment}")
+
+     quiz = quiz_list[0]
+     requester = canvas._Canvas__requester
+     include_banks = args.include_banks and not args.no_banks
+     importable = getattr(args, 'importable', False)
+
+     # Build the export structure
+     if is_new_quiz(quiz):
+         export = export_full_new_quiz(quiz, requester, include_banks, importable)
+     else:
+         export = export_full_classic_quiz(quiz, importable)
+
+     # Output as JSON
+     print(json.dumps(export, indent=2, ensure_ascii=False))
+ @
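A consumer of the exported JSON can rely on the envelope shape documented in the help text above: a [[quiz_type]] discriminator plus either [[items]] (New Quizzes) or [[questions]] (Classic). A hedged sketch with stub data, not the package's code:

```python
import json

# Stub envelope mimicking what export_command prints.
exported = json.loads("""
{
  "quiz_type": "classic",
  "settings": {"title": "Midterm"},
  "questions": []
}
""")

# Dispatch on the discriminator to find the question payload.
body_key = "items" if exported["quiz_type"] == "new" else "questions"
print(exported["settings"]["title"], body_key)  # → Midterm questions
```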
+
+
+ \subsection{Exporting a complete New Quiz}
+
+ For New Quizzes, we extract settings from the quiz object and combine them
+ with the items export. We also fetch and include [[quiz_settings]], which
+ contains the important [[multiple_attempts]] and [[result_view_settings]]
+ structures. The [[--importable]] flag triggers cleaning of Canvas-specific
+ metadata.
+
+ <<functions>>=
+ def export_full_new_quiz(quiz, requester, include_banks=True, importable=False):
+     """Exports a complete New Quiz with settings and items
+
+     Args:
+       quiz: Quiz object (must have .id and .course attributes)
+       requester: Canvas API requester
+       include_banks: If True, expand Bank/BankEntry items to include bank questions
+       importable: If True, clean output for direct import
+
+     Returns:
+       Dictionary with quiz_type, settings (including quiz_settings), and items
+     """
+     # Extract basic settings
+     settings = {
+         'title': getattr(quiz, 'title', ''),
+         'instructions': getattr(quiz, 'instructions', '') or '',
+         'time_limit': getattr(quiz, 'time_limit', None),
+         'points_possible': getattr(quiz, 'points_possible', None),
+         'due_at': getattr(quiz, 'due_at', None),
+         'unlock_at': getattr(quiz, 'unlock_at', None),
+         'lock_at': getattr(quiz, 'lock_at', None),
+     }
+
+     # Fetch quiz_settings from the API (contains multiple_attempts,
+     # result_view_settings, etc.)
+     quiz_settings = fetch_new_quiz_settings(quiz, requester)
+     if quiz_settings:
+         settings['quiz_settings'] = quiz_settings
+
+     # Get items
+     items_export = export_new_quiz_items(quiz, requester, include_banks=include_banks)
+     items = items_export.get('items', [])
+
+     # Clean for import if requested
+     if importable:
+         items_cleaned = clean_for_import({'items': items}, quiz_type='new_quiz')
+         items = items_cleaned.get('items', [])
+
+     return {
+         'quiz_type': 'new',
+         'settings': settings,
+         'items': items
+     }
+ @
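The [[getattr]]-with-default pattern above makes the export tolerant of missing or [[None]] attributes on the quiz object. A minimal sketch against a stub object (not a real canvasapi quiz):

```python
# Stub standing in for a canvasapi quiz object; any object with the right
# attributes behaves the same thanks to the getattr defaults.
class StubQuiz:
    title = "Demo quiz"
    instructions = None  # Canvas may return None instead of ""

quiz = StubQuiz()
settings = {
    'title': getattr(quiz, 'title', ''),
    # "or ''" normalises None to an empty string, as in the export code
    'instructions': getattr(quiz, 'instructions', '') or '',
    # attribute absent entirely: the default kicks in
    'time_limit': getattr(quiz, 'time_limit', None),
}
print(settings)
# → {'title': 'Demo quiz', 'instructions': '', 'time_limit': None}
```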
+
+
+ \subsection{Exporting a complete Classic Quiz}
+
+ For Classic Quizzes, we extract settings and questions using the canvasapi
+ library's native methods.
+
+ <<functions>>=
+ def export_full_classic_quiz(quiz, importable=False):
+     """Exports a complete Classic Quiz with settings and questions
+
+     Args:
+       quiz: Quiz object
+       importable: If True, clean output for direct import
+
+     Returns:
+       Dictionary with quiz_type, settings, and questions
+     """
+     # Extract settings
+     settings = {
+         'title': getattr(quiz, 'title', ''),
+         'description': getattr(quiz, 'description', '') or '',
+         'quiz_type': getattr(quiz, 'quiz_type', 'assignment'),
+         'time_limit': getattr(quiz, 'time_limit', None),
+         'allowed_attempts': getattr(quiz, 'allowed_attempts', 1),
+         'shuffle_questions': getattr(quiz, 'shuffle_questions', False),
+         'shuffle_answers': getattr(quiz, 'shuffle_answers', False),
+         'points_possible': getattr(quiz, 'points_possible', None),
+         'published': getattr(quiz, 'published', False),
+         'due_at': getattr(quiz, 'due_at', None),
+         'unlock_at': getattr(quiz, 'unlock_at', None),
+         'lock_at': getattr(quiz, 'lock_at', None),
+         'show_correct_answers': getattr(quiz, 'show_correct_answers', True),
+         'one_question_at_a_time': getattr(quiz, 'one_question_at_a_time', False),
+         'cant_go_back': getattr(quiz, 'cant_go_back', False),
+         'access_code': getattr(quiz, 'access_code', None),
+     }
+
+     # Get questions
+     questions_export = export_classic_questions(quiz)
+     questions = questions_export.get('questions', [])
+
+     # Clean for import if requested
+     if importable:
+         questions_cleaned = clean_for_import({'questions': questions}, quiz_type='classic')
+         questions = questions_cleaned.get('questions', [])
+
+     return {
+         'quiz_type': 'classic',
+         'settings': settings,
+         'questions': questions
+     }
+ @
+
+
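Because the settings dictionaries built above contain only JSON-serialisable values (strings, numbers, booleans, [[None]]), the export survives a round trip through [[json.dumps]] and [[json.loads]] unchanged. A quick sketch with stub data (illustration only):

```python
import json

# Stub export envelope with the kinds of values the settings extraction
# produces: None becomes null and back, False becomes false and back.
export = {
    'quiz_type': 'classic',
    'settings': {'title': 'Demo', 'time_limit': None, 'published': False},
    'questions': [],
}
roundtrip = json.loads(json.dumps(export, ensure_ascii=False))
print(roundtrip == export)  # → True
```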
 
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  \chapter{Viewing Quiz Content}
  \label{quiz-view}
@@ -4889,6 +6174,226 @@ EXAMPLE_CLASSIC_QUIZ_JSON = {
  }
  @
 
+
+ \subsection{Full quiz JSON format for export/create workflow}
+
+ While the examples above show the format for adding \emph{questions} to an
+ existing quiz, users often need to export a complete quiz (settings and
+ questions) and create a copy elsewhere. The [[quizzes export]] and
+ [[quizzes create]] commands use a unified format that combines quiz settings
+ with questions.
+
+ The format wraps settings in a [[settings]] object and includes a [[quiz_type]]
+ field so [[quizzes create]] knows which API to use:
+ \begin{description}
+ \item[[[quiz_type]]] Either [[new]] (New Quizzes) or [[classic]] (Classic
+ Quizzes)
+ \item[[[settings]]] Quiz settings like title, time limit, instructions
+ \item[[[quiz_settings]]] For New Quizzes: nested object with [[multiple_attempts]]
+ and [[result_view_settings]] for controlling attempts and what students see
+ \item[[[items]]] For New Quizzes: array of question items
+ \item[[[questions]]] For Classic Quizzes: array of questions
+ \end{description}
+
+ \paragraph{New Quiz settings structure.}
+ The [[quiz_settings]] object within [[settings]] controls advanced quiz behavior:
+ \begin{description}
+ \item[[[multiple_attempts]]] Controls multiple attempts, waiting periods between
+ attempts, and which score to keep (see [[NEW_QUIZ_MULTIPLE_ATTEMPTS_SCHEMA]])
+ \item[[[result_view_settings]]] Controls what students see after submission,
+ including whether to show correct answers (see [[NEW_QUIZ_RESULT_VIEW_SCHEMA]])
+ \item[[[shuffle_answers]]] Whether to randomize answer order
+ \item[[[shuffle_questions]]] Whether to randomize question order
+ \item[[[has_time_limit]]] Whether the quiz has a time limit
+ \item[[[session_time_limit_in_seconds]]] Time limit in seconds
+ \end{description}
+
+ <<constants>>=
+ EXAMPLE_FULL_NEW_QUIZ_JSON = {
+   "quiz_type": "new",
+   "settings": {
+     "title": "Example Practice Quiz",
+     "instructions": "<p>This is a practice quiz to test your knowledge. "
+                     "You can retry multiple times with a 1-hour waiting period "
+                     "between attempts. Your latest score will be kept.</p>"
+                     "<p>You will see your score but not the correct answers, "
+                     "so you can keep practicing until you get them all right!</p>",
+     "time_limit": 1800,
+     "points_possible": 20,
+     "due_at": None,
+     "unlock_at": None,
+     "lock_at": None,
+     "quiz_settings": {
+       # Randomization settings
+       "shuffle_answers": True,
+       "shuffle_questions": False,
+
+       # Time limit settings
+       "has_time_limit": True,
+       "session_time_limit_in_seconds": 1800,
+
+       # Question display settings
+       "one_at_a_time_type": "none",
+       "allow_backtracking": True,
+
+       # Calculator settings
+       "calculator_type": "none",
+
+       # Access restrictions
+       "filter_ip_address": False,
+       "filters": {},
+       "require_student_access_code": False,
+       "student_access_code": None,
+
+       # Multiple attempts settings
+       "multiple_attempts": {
+         "multiple_attempts_enabled": True,
+         "attempt_limit": False,
+         "max_attempts": None,
+         "score_to_keep": "latest",
+         "cooling_period": True,
+         "cooling_period_seconds": 3600
+       },
+
+       # Result view settings - what students see after submission
+       "result_view_settings": {
+         "result_view_restricted": True,
+         "display_points_awarded": True,
+         "display_points_possible": True,
+         "display_items": True,
+         "display_item_response": True,
+         "display_item_response_qualifier": "always",
+         "display_item_response_correctness": True,
+         "display_item_correct_answer": False,
+         "display_item_feedback": False,
+         "display_correct_answer_at": None,
+         "hide_correct_answer_at": None
+       }
+     }
+   },
+   "items": [
+     {
+       "position": 1,
+       "points_possible": 5,
+       "entry": {
+         "title": "Geography: Capital Cities",
+         "item_body": "<p>What is the capital of Sweden?</p>",
+         "interaction_type_slug": "choice",
+         "scoring_algorithm": "Equivalence",
+         "interaction_data": {
+           "choices": [
+             {"position": 1, "item_body": "<p>Stockholm</p>"},
+             {"position": 2, "item_body": "<p>Gothenburg</p>"},
+             {"position": 3, "item_body": "<p>Malmö</p>"},
+             {"position": 4, "item_body": "<p>Uppsala</p>"}
+           ]
+         },
+         "scoring_data": {"value": 1}
+       }
+     },
+     {
+       "position": 2,
+       "points_possible": 5,
+       "entry": {
+         "title": "Programming: Language Type",
+         "item_body": "<p>Python is an interpreted programming language.</p>",
+         "interaction_type_slug": "true-false",
+         "scoring_algorithm": "Equivalence",
+         "interaction_data": {
+           "true_choice": "True",
+           "false_choice": "False"
+         },
+         "scoring_data": {"value": True}
+       }
+     },
+     {
+       "position": 3,
+       "points_possible": 5,
+       "entry": {
+         "title": "Math: Select All Correct",
+         "item_body": "<p>Which of the following are prime numbers?</p>",
+         "interaction_type_slug": "multi-answer",
+         "scoring_algorithm": "AllOrNothing",
+         "interaction_data": {
+           "choices": [
+             {"position": 1, "item_body": "<p>2</p>"},
+             {"position": 2, "item_body": "<p>4</p>"},
+             {"position": 3, "item_body": "<p>7</p>"},
+             {"position": 4, "item_body": "<p>9</p>"},
+             {"position": 5, "item_body": "<p>11</p>"}
+           ]
+         },
+         "scoring_data": {"value": [1, 3, 5]}
+       }
+     },
+     {
+       "position": 4,
+       "points_possible": 5,
+       "entry": {
+         "title": "Programming: Output Question",
+         "item_body": "<p>What does the following Python code print?</p>"
+                      "<pre>x = 5\nif x > 3:\n    print('big')\nelse:\n    print('small')</pre>",
+         "interaction_type_slug": "choice",
+         "scoring_algorithm": "Equivalence",
+         "interaction_data": {
+           "choices": [
+             {"position": 1, "item_body": "<p>big</p>"},
+             {"position": 2, "item_body": "<p>small</p>"},
+             {"position": 3, "item_body": "<p>5</p>"},
+             {"position": 4, "item_body": "<p>Nothing is printed</p>"}
+           ]
+         },
+         "scoring_data": {"value": 1}
+       }
+     }
+   ]
+ }
+ @
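Because choice items score by position in this format, the [[scoring_data]] value must name the position of one of the offered choices. A small consistency check over a stub item (an illustration, not part of the package):

```python
# Stub item shaped like an entry of EXAMPLE_FULL_NEW_QUIZ_JSON["items"].
item = {
    "points_possible": 5,
    "entry": {
        "interaction_type_slug": "choice",
        "interaction_data": {
            "choices": [
                {"position": 1, "item_body": "<p>Stockholm</p>"},
                {"position": 2, "item_body": "<p>Gothenburg</p>"},
            ]
        },
        "scoring_data": {"value": 1},
    },
}

# The correct answer must reference an existing choice position.
positions = {c["position"] for c in item["entry"]["interaction_data"]["choices"]}
valid = item["entry"]["scoring_data"]["value"] in positions
print(valid)  # → True
```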
+
+ <<constants>>=
+ EXAMPLE_FULL_CLASSIC_QUIZ_JSON = {
+   "quiz_type": "classic",
+   "settings": {
+     "title": "Example Classic Quiz",
+     "description": "<p>Answer all questions carefully. Time limit: 60 minutes.</p>",
+     "quiz_type": "assignment",
+     "time_limit": 60,
+     "allowed_attempts": 2,
+     "shuffle_questions": True,
+     "shuffle_answers": True,
+     "points_possible": 100,
+     "published": False,
+     "due_at": None,
+     "unlock_at": None,
+     "lock_at": None
+   },
+   "questions": [
+     {
+       "question_name": "Capital Question",
+       "question_text": "<p>What is the capital of Sweden?</p>",
+       "question_type": "multiple_choice_question",
+       "points_possible": 5,
+       "answers": [
+         {"answer_text": "Stockholm", "answer_weight": 100},
+         {"answer_text": "Gothenburg", "answer_weight": 0},
+         {"answer_text": "Malmö", "answer_weight": 0}
+       ]
+     },
+     {
+       "question_name": "True/False Question",
+       "question_text": "<p>Python is an interpreted language.</p>",
+       "question_type": "true_false_question",
+       "points_possible": 5,
+       "answers": [
+         {"answer_text": "True", "answer_weight": 100},
+         {"answer_text": "False", "answer_weight": 0}
+       ]
+     }
+   ]
+ }
+ @
+
+
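In the Classic format, [[answer_weight]] 100 marks the correct answer, so a well-formed multiple-choice or true/false question carries exactly one such answer. A stub check (illustration only, not part of the chapter's code):

```python
# Stub question shaped like an entry of EXAMPLE_FULL_CLASSIC_QUIZ_JSON["questions"].
question = {
    "question_type": "true_false_question",
    "answers": [
        {"answer_text": "True", "answer_weight": 100},
        {"answer_text": "False", "answer_weight": 0},
    ],
}

# Exactly one answer should carry full weight.
correct = [a for a in question["answers"] if a["answer_weight"] == 100]
print(len(correct))  # → 1
```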
  The [[print_example_json]] function outputs both formats with explanatory
  headers, making it easy for users to copy the appropriate format for their quiz
  type.
@@ -4920,6 +6425,100 @@ def print_example_json():
  @
 
 
+ Similarly, the [[print_full_quiz_example_json]] function outputs the full quiz
+ format (settings plus questions) for use with [[quizzes create]] and
+ [[quizzes export]].
+
+ <<functions>>=
+ def print_full_quiz_example_json():
+     """Prints example JSON for full quiz creation (settings + questions)"""
+     print("=" * 70)
+     print("EXAMPLE JSON FOR CREATING NEW QUIZZES (Quizzes.Next)")
+     print("=" * 70)
+     print()
+     print("This format includes both quiz settings and questions.")
+     print("Save to a file and use with:")
+     print("  canvaslms quizzes create -c COURSE -f quiz.json")
+     print()
+     print("This is the same format produced by 'quizzes export -I'.")
+     print()
+     print("BASIC SETTINGS:")
+     print("  title - Quiz title")
+     print("  instructions - HTML instructions shown to students")
+     print("  time_limit - Time limit in SECONDS (or null)")
+     print("  points_possible - Total points")
+     print("  due_at/unlock_at/lock_at - ISO 8601 dates (or null)")
+     print()
+     print("QUIZ SETTINGS (in 'settings.quiz_settings'):")
+     print()
+     print("  Randomization:")
+     print("    shuffle_answers: true/false - Randomize answer order")
+     print("    shuffle_questions: true/false - Randomize question order")
+     print()
+     print("  Time limit:")
+     print("    has_time_limit: true/false")
+     print("    session_time_limit_in_seconds: number")
+     print()
+     print("  Question display:")
+     print("    one_at_a_time_type: 'none' or 'question'")
+     print("    allow_backtracking: true/false - Can go back to previous questions")
+     print()
+     print("  Calculator:")
+     print("    calculator_type: 'none', 'basic', or 'scientific'")
+     print()
+     print("  Access restrictions:")
+     print("    require_student_access_code: true/false")
+     print("    student_access_code: 'password' or null")
+     print("    filter_ip_address: true/false")
+     print("    filters: {} or IP filter rules")
+     print()
+     print("  Multiple attempts:")
+     print("    multiple_attempts_enabled: true/false")
+     print("    attempt_limit: true/false (true = limited, false = unlimited)")
+     print("    max_attempts: number or null")
+     print("    score_to_keep: 'highest' or 'latest'")
+     print("    cooling_period: true/false (require wait between attempts)")
+     print("    cooling_period_seconds: seconds (e.g., 3600 = 1 hour)")
+     print()
+     print("  Result view (what students see after submission):")
+     print("    result_view_restricted: true/false")
+     print("    display_items: true/false - Show questions")
+     print("    display_item_response: true/false - Show student's answers")
+     print("    display_item_response_correctness: true/false - Show right/wrong")
+     print("    display_item_correct_answer: true/false - Show correct answers")
+     print("    display_item_feedback: true/false - Show per-question feedback")
+     print("    display_points_awarded: true/false - Show points earned")
+     print("    display_points_possible: true/false - Show max points")
+     print("    display_correct_answer_at: ISO date or null - When to reveal")
+     print("    hide_correct_answer_at: ISO date or null - When to hide")
+     print()
+     print("SCORING:")
+     print("  Use position numbers (1, 2, 3...) to reference correct answers.")
+     print("  UUIDs are generated automatically during import.")
+     print()
+     print(json.dumps(EXAMPLE_FULL_NEW_QUIZ_JSON, indent=2))
+     print()
+     print()
+     print("=" * 70)
+     print("EXAMPLE JSON FOR CREATING CLASSIC QUIZZES")
+     print("=" * 70)
+     print()
+     print("Classic Quizzes use different field names and units.")
+     print()
+     print("Settings (time_limit in MINUTES for Classic Quizzes):")
+     print("  title, description (not instructions), quiz_type,")
+     print("  time_limit, allowed_attempts, shuffle_questions,")
+     print("  shuffle_answers, points_possible, published,")
+     print("  due_at, unlock_at, lock_at, show_correct_answers,")
+     print("  one_question_at_a_time, cant_go_back, access_code")
+     print()
+     print("quiz_type values: assignment, practice_quiz, graded_survey, survey")
+     print("answer_weight: 100 = correct, 0 = incorrect")
+     print()
+     print(json.dumps(EXAMPLE_FULL_CLASSIC_QUIZ_JSON, indent=2))
+ @
+
+
  \subsection{Processing the add command}
 
 
@@ -5093,21 +6692,33 @@ def ensure_uuids_in_entry(entry):
      new_choices = []
 
      for i, choice in enumerate(interaction_data['choices']):
-         old_id = choice.get('id')
-         position = choice.get('position', i + 1)
-
-         # Generate new UUID if missing
-         if not old_id:
-             new_id = str(uuid.uuid4())
+         # Handle both dict choices (choice, multi-answer) and string choices (ordering)
+         if isinstance(choice, str):
+             # Ordering questions have choices as plain UUIDs/strings
+             # The string itself is the ID - keep it or generate new if invalid
+             if choice and len(choice) > 10:  # Looks like a UUID
+                 new_id = choice
+             else:
+                 new_id = str(uuid.uuid4())
+             position_to_uuid[i + 1] = new_id
+             new_choices.append(new_id)
          else:
-             new_id = old_id
+             # Regular choice dict
+             old_id = choice.get('id')
+             position = choice.get('position', i + 1)
+
+             # Generate new UUID if missing
+             if not old_id:
+                 new_id = str(uuid.uuid4())
+             else:
+                 new_id = old_id
 
-         position_to_uuid[position] = new_id
+             position_to_uuid[position] = new_id
 
-         new_choice = dict(choice)
-         new_choice['id'] = new_id
-         new_choice['position'] = position
-         new_choices.append(new_choice)
+             new_choice = dict(choice)
+             new_choice['id'] = new_id
+             new_choice['position'] = position
+             new_choices.append(new_choice)
 
      interaction_data['choices'] = new_choices
      entry['interaction_data'] = interaction_data
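The branch logic above can be condensed into the following sketch with stub data (simplified from the chunk; the [[len(choice) > 10]] test is the same rough UUID heuristic used above): dict choices get an [[id]] generated when missing, while string choices (ordering questions) are treated as IDs themselves and kept when they look like UUIDs.

```python
import uuid

position_to_uuid = {}
new_choices = []

# One dict choice (missing id) and one string choice (ordering-style).
for i, choice in enumerate([{'position': 1, 'item_body': '<p>A</p>'},
                            '11111111-1111-1111-1111-111111111111']):
    if isinstance(choice, str):
        # Keep the string if it plausibly is a UUID, else mint a new one.
        new_id = choice if choice and len(choice) > 10 else str(uuid.uuid4())
        position_to_uuid[i + 1] = new_id
        new_choices.append(new_id)
    else:
        # Dict choice: reuse its id or generate one, and record its position.
        new_id = choice.get('id') or str(uuid.uuid4())
        position = choice.get('position', i + 1)
        position_to_uuid[position] = new_id
        new_choices.append({**choice, 'id': new_id})

print(sorted(position_to_uuid))  # → [1, 2]
```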
@@ -5648,6 +7259,17 @@ The [[clean_interaction_data]] function strips UUIDs from choices, keeping only
  the [[position]] and [[item_body]] fields. This makes the JSON human-readable
  and avoids UUID conflicts when importing to a different quiz.
 
+ Some question types use different structures:
+ \begin{description}
+ \item[[[choices]]] Multiple choice and multi-answer questions use a list of
+ dictionaries with [[id]], [[position]], and [[item_body]].
+ \item[[[answers]]] Matching questions use [[answers]] (a list of strings for the
+ right column) and [[questions]] (a list of dicts for the left column).
+ \end{description}
+
+ We handle both cases, preserving string arrays as-is while cleaning dict-based
+ choices.
+
  <<functions>>=
  def clean_interaction_data(interaction_data):
      """Removes UUIDs from interaction_data choices"""
@@ -5657,15 +7279,36 @@ def clean_interaction_data(interaction_data):
      clean = dict(interaction_data)
 
      # Handle choices array (multiple choice, multi-answer)
+     # Choices are dicts with id, position, item_body
      if 'choices' in clean:
          clean_choices = []
          for i, choice in enumerate(clean['choices']):
+             # Skip if choice is not a dict (shouldn't happen, but be safe)
+             if not isinstance(choice, dict):
+                 clean_choices.append(choice)
+                 continue
              clean_choice = {'position': choice.get('position', i + 1)}
              if 'item_body' in choice:
                  clean_choice['item_body'] = choice['item_body']
              clean_choices.append(clean_choice)
          clean['choices'] = clean_choices
 
+     # Handle questions array (matching questions)
+     # Questions are dicts with id, item_body - we keep item_body, drop id
+     if 'questions' in clean:
+         clean_questions = []
+         for i, question in enumerate(clean['questions']):
+             if not isinstance(question, dict):
+                 clean_questions.append(question)
+                 continue
+             clean_q = {}
+             if 'item_body' in question:
+                 clean_q['item_body'] = question['item_body']
+             clean_questions.append(clean_q)
+         clean['questions'] = clean_questions
+
+     # 'answers' is a list of strings (matching questions) - keep as-is
+
      return clean
  @
 
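A usage sketch of the cleaning effect on a choices array. This inlines the choices branch with stub data rather than calling the function, so it runs standalone: Canvas-assigned [[id]] fields are dropped, [[position]] is defaulted from the index when missing, and [[item_body]] is kept.

```python
# Stub interaction_data as it might come back from Canvas, ids included.
raw = {
    'choices': [
        {'id': 'abc-123', 'position': 1, 'item_body': '<p>Yes</p>'},
        {'id': 'def-456', 'item_body': '<p>No</p>'},  # position missing
    ]
}

clean_choices = []
for i, choice in enumerate(raw['choices']):
    # Keep position (defaulting to the 1-based index) and item_body only.
    clean_choice = {'position': choice.get('position', i + 1)}
    if 'item_body' in choice:
        clean_choice['item_body'] = choice['item_body']
    clean_choices.append(clean_choice)

print(clean_choices)
# → [{'position': 1, 'item_body': '<p>Yes</p>'}, {'position': 2, 'item_body': '<p>No</p>'}]
```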