@jutge.org/toolkit 4.2.29 → 4.2.33

@@ -230,6 +230,7 @@ jtk make # Build all problem elements
  jtk verify <program> # Test a solution
  jtk upload # Upload to Jutge.org
  jtk clean # Clean temporary files
+ jtk passcode # Manage problem passcode

  jtk doctor # Check system dependencies
  ```
@@ -66,33 +66,6 @@ Depending on your needs, you may want to install:
  sudo apt-get install build-essential
  ```

- ## Setting Up AI Features (Optional)
-
- If you want to use JutgeAI features to generate problems and content, you need to set up API keys:
-
- ### For Google Gemini (Free for UPC users):
-
- 1. Visit https://aistudio.google.com/ and sign in
- 2. Click "Get API key" in the sidebar
- 3. Click "Create API key"
- 4. Copy the generated key
- 5. Add to `~/.bashrc` or `~/.zshrc`:
-
- ```bash
- export GEMINI_API_KEY="your-key-here"
- ```
-
- ### For OpenAI (Paid):
-
- 1. Create an account at https://platform.openai.com/
- 2. Navigate to API Keys section
- 3. Create a new secret key
- 4. Add to `~/.bashrc` or `~/.zshrc`:
-
- ```bash
- export OPENAI_API_KEY="your-key-here"
- ```
-
  ## Troubleshooting

  **Command not found after installation:**
@@ -70,33 +70,6 @@ Then install the tools you need:
  xcode-select --install
  ```

- ## Setting Up AI Features (Optional)
-
- If you want to use JutgeAI features to generate problems and content, you need to set up API keys:
-
- ### For Google Gemini (Free for UPC users):
-
- 1. Visit https://aistudio.google.com/ and sign in
- 2. Click "Get API key" in the sidebar
- 3. Click "Create API key"
- 4. Copy the generated key
- 5. Add to `~/.bashrc` or `~/.zshrc`:
-
- ```bash
- export GEMINI_API_KEY="your-key-here"
- ```
-
- ### For OpenAI (Paid):
-
- 1. Create an account at https://platform.openai.com/
- 2. Navigate to API Keys section
- 3. Create a new secret key
- 4. Add to `~/.bashrc` or `~/.zshrc`:
-
- ```bash
- export OPENAI_API_KEY="your-key-here"
- ```
-
  ## Troubleshooting

  **Command not found after installation:**
@@ -57,33 +57,6 @@ This command will show which tools are installed on your system.
  2. Extract to a folder (e.g., `C:\w64devkit`)
  3. Run `w64devkit.exe` to open a terminal with GCC available

- ## Setting Up AI Features (Optional)
-
- If you want to use JutgeAI features to generate problems and content, you need to set up API keys:
-
- ### For Google Gemini (Free for UPC users):
-
- 1. Visit https://aistudio.google.com/ and sign in
- 2. Click "Get API key" in the sidebar
- 3. Click "Create API key"
- 4. Copy the generated key
- 5. Set the environment variable permanently:
-
- ```powershell
- [System.Environment]::SetEnvironmentVariable('GEMINI_API_KEY', 'your-key-here', 'User')
- ```
-
- ### For OpenAI (Paid):
-
- 1. Create an account at https://platform.openai.com/
- 2. Navigate to API Keys section
- 3. Create a new secret key
- 4. Set the environment variable permanently:
-
- ```powershell
- [System.Environment]::SetEnvironmentVariable('OPENAI_API_KEY', 'your-key-here', 'User')
- ```
-
  ## Troubleshooting

  **Command not found after installation:**
package/docs/jutge-ai.md CHANGED
@@ -29,54 +29,42 @@ In particular, Jutge<sup>AI</sup> features can assist in:

  - You can create new test case generators to extend the existing test suite.

- - You can generate `award.png` and `award.html` files for the problem. (Note: `award.png` requires `dall-e-3` model access.)
+ - You can generate `award.png` and `award.html` files for the problem.

  As in any other use of AI and LLMs, it is important to review and validate the generated content to ensure its correctness and quality. Treat the generated content as a first draft that needs to be refined and validated.

- In order to use the Jutge<sup>AI</sup> features of the toolkit, you need to have API keys for the models you wish to use. You should get the keys from the respective providers and set them as environment variables in your system. Because of the costs associated to the use of these models, the toolkit or Jutge.org cannot provide these keys directly.
+ # Jutge<sup>AI</sup> models

- UPC users can get free access to Gemini models through their institutional Google accounts. These work well with Jutge<sup>AI</sup> features but have usage limits. If you need more capacity, consider using an OpenAI API key.
+ The toolkit currently supports the following models:

- ## How to get Gemini API key
+ ### Google Gemini

- 1. Visit Google AI Studio:
+ Google Gemini is fast and free for UPC users but its rate limits are so low it is almost impossible to use it for practical purposes.

- Go to [aistudio.google.com](https://aistudio.google.com/) and sign in with your Google Account.
+ Available models:

- 2. Access the API Section:
+ - google/gemini-2.5-flash
+ - google/gemini-2.5-flash-lite

- Click on the **Get API key** button located in the left-hand sidebar menu.
+ ### OpenAI GPT

- 3. Generate the Key:
- - Click the **Create API key** button.
- - You will have two options: **Create API key in new project** (recommended for beginners) or **Create API key in existing project.**
- - Select your preference, and the system will generate a unique key for you.
+ OpenAI is slower but more reliable and has a higher rate limit. However, it is not free and requires a paid account.

- 4. Secure Your Key:
+ Available models:

- Copy the generated key immediately.
+ - openai/gpt-5-nano
+ - openai/gpt-5-mini
+ - openai/gpt-4.1-nano
+ - openai/gpt-4.1-mini

- 5. Set `GEMINI_API_KEY` environment variable with the obtained key to use it in the toolkit. This will make models such as `google/gemini-2.5-flash` or `google/gemini-2.5-flash-lite` available for Jutge<sup>AI</sup> features.
+ See https://platform.openai.com/docs/pricing for the pricing of the OpenAI models.

- ## How to get OpenAI API key
+ ### Recommendation

- 1. Create an OpenAI Account:
+ Try to use `gpt-4.1-nano` or `gpt-4.1-mini` for the quickest results. If you need more reliable results, use `gpt-5-nano` or `gpt-5-mini`.

- Go to the OpenAI website and sign up (or log in if you already have an account).
+ # Jutge<sup>AI</sup> costs

- 2. Access the API Dashboard:
+ In order to use the Jutge<sup>AI</sup> features of the toolkit, we have allocated a small budget to cover the costs associated to pay for the use of the models. This budget is shared by all instructors of Jutge.org. Jutge.org records the costs incurred for each instructor as estimated from input and output token counts. Rate limits are applied to avoid abuse or misuse.

- After logging in, open the **API dashboard** from your account menu.
-
- 3. Create an API Key:
- - Navigate to **API Keys**.
- - Click **Create new secret key**.
- - Copy the key immediately (it will not be shown again).
-
- 4. Secure Your Key: Copy the generated key immediately.
-
- 5. Set `OPENAI_API_KEY` environment variable with the obtained key to use it in the toolkit.This will make models such as `openai/gpt-5-mini` or `openai/gpt-5-nano` available for Jutge<sup>AI</sup> features.
-
- ## Other models
-
- We use `multi-llm-ts` package to interface with different models. If you have access to other models supported by this package, you can set the corresponding environment variables as described in the [multi-llm-ts documentation](https://github.com/nbonamy/multi-llm-ts) to use them with Jutge<sup>AI</sup> features.
+ Please contact the Jutge.org team if you need to increase your budget.
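As a quick orientation (not part of the diff itself): the `generate` commands shown later in this diff all declare a `-m, --model <model>` option that defaults to `settings.defaultModel`, which is how the models listed above are selected. A minimal usage sketch, where `<subcommand>` is a placeholder for one of the generate subcommands (their names are not visible in these hunks):

```bash
# Hypothetical invocation; <subcommand> and the problem path are placeholders.
# The -m value is one of the models recommended in docs/jutge-ai.md.
jtk generate <subcommand> -d path/to/problem -m openai/gpt-4.1-mini
```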
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "@jutge.org/toolkit",
  "description": "Toolkit to prepare problems for Jutge.org",
- "version": "4.2.29",
+ "version": "4.2.33",
  "homepage": "https://jutge.org",
  "author": {
  "name": "Jutge.org",
@@ -79,16 +79,12 @@
  "handlebars": "^4.7.8",
  "image-size": "^2.0.2",
  "inquirer-checkbox-plus-plus": "^1.1.1",
- "multi-llm-ts": "^5.0.1",
  "nanoid": "^5.1.6",
  "open": "^11.0.0",
- "openai": "^6.17.0",
- "ora": "^9.1.0",
  "pretty-bytes": "^7.1.0",
  "pretty-ms": "^9.3.0",
  "radash": "^12.1.1",
  "semver": "^7.7.3",
- "sharp": "^0.34.5",
  "terminal-link": "^5.0.0",
  "tree-node-cli": "^1.6.0",
  "yaml": "^2.8.2",
@@ -97,13 +93,12 @@
  "zod-validation-error": "^5.0.0"
  },
  "devDependencies": {
- "prettier": "^3.8.1",
- "typescript-eslint": "^8.54.0",
  "@types/archiver": "^7.0.0",
  "@types/image-size": "^0.8.0",
  "@types/node": "^25.1.0",
- "@types/ora": "^3.2.0",
- "@types/semver": "^7.7.1"
+ "@types/semver": "^7.7.1",
+ "prettier": "^3.8.1",
+ "typescript-eslint": "^8.54.0"
  },
  "peerDependencies": {
  "typescript": "^5.9.3"
package/toolkit/ask.ts CHANGED
@@ -1,6 +1,6 @@
  import { Command } from '@commander-js/extra-typings'
  import { glob } from 'fs/promises'
- import { complete } from '../lib/ai'
+ import { complete } from '../lib/aiclient'
  import { settings } from '../lib/settings'
  import tui from '../lib/tui'
  import { projectDir, readTextInDir } from '../lib/utils'
@@ -1,8 +1,6 @@
  import { Argument, Command } from '@commander-js/extra-typings'
- import { join } from 'path'
- import sharp from 'sharp'
- import { complete, generateImage } from '../lib/ai'
- import { languageKeys, languageNames, proglangKeys } from '../lib/data'
+ import { createProblemWithJutgeAI } from '../lib/create-with-jutgeai'
+ import { languageKeys, languageNames, proglangKeys, proglangNames } from '../lib/data'
  import {
  addAlternativeSolution,
  addMainFile,
@@ -12,8 +10,7 @@ import {
  import { newProblem } from '../lib/problem'
  import { settings } from '../lib/settings'
  import tui from '../lib/tui'
- import { writeText } from '../lib/utils'
- import { createProblemWithJutgeAI } from '../lib/create-with-jutgeai'
+ import { getLoggedInJutgeClient } from '../lib/login'

  export const generateCmd = new Command('generate')
  .description('Generate problem elements using JutgeAI')
@@ -33,7 +30,10 @@ generateCmd
  .option('-m, --model <model>', 'AI model to use', settings.defaultModel)

  .action(async ({ input, output, directory, model, doNotAsk }) => {
- await createProblemWithJutgeAI(model, directory, input, output, doNotAsk)
+ const jutge = await getLoggedInJutgeClient()
+ await tui.section('Generating problem with JutgeAI', async () => {
+ await createProblemWithJutgeAI(jutge, model, directory, input, output, doNotAsk)
+ })
  })

  generateCmd
@@ -47,8 +47,8 @@ The original statement will be used as the source text for translation.

  Provide one or more target language from the following list:
  ${Object.entries(languageNames)
- .map(([key, name]) => ` - ${key}: ${name}`)
- .join('\n')}
+ .map(([key, name]) => ` - ${key}: ${name}`)
+ .join('\n')}

  The added translations will be saved in the problem directory overwrite possible existing files.`,
  )
@@ -58,10 +58,11 @@ The added translations will be saved in the problem directory overwrite possible
  .option('-m, --model <model>', 'AI model to use', settings.defaultModel)

  .action(async (languages, { directory, model }) => {
+ const jutge = await getLoggedInJutgeClient()
  const problem = await newProblem(directory)
  await tui.section('Generating statement translations', async () => {
  for (const language of languages) {
- await addStatementTranslation(model, problem, language)
+ await addStatementTranslation(jutge, model, problem, language)
  }
  })
  })
@@ -77,8 +78,8 @@ The golden solution will be used as a reference for generating the alternatives.

  Provide one or more target programming languages from the following list:
  ${Object.entries(languageNames)
- .map(([key, name]) => ` - ${key}: ${name}`)
- .join('\n')}
+ .map(([key, name]) => ` - ${key}: ${name}`)
+ .join('\n')}

  The added solutions will be saved in the problem directory overwrite possible existing files.`,
  )
@@ -88,10 +89,15 @@ The added solutions will be saved in the problem directory overwrite possible ex
  .option('-m, --model <model>', 'AI model to use', settings.defaultModel)

  .action(async (proglangs, { directory, model }) => {
+ const jutge = await getLoggedInJutgeClient()
  const problem = await newProblem(directory)
- for (const proglang of proglangs) {
- await addAlternativeSolution(model, problem, proglang)
- }
+ await tui.section('Generating statement translations', async () => {
+ for (const proglang of proglangs) {
+ await tui.section(`Generating solution in ${proglangNames[proglang]}`, async () => {
+ await addAlternativeSolution(jutge, model, problem, proglang)
+ })
+ }
+ })
  })

  generateCmd
@@ -107,8 +113,8 @@ The main file for the golden solution will be used as a reference for generating

  Provide one or more target programming languages from the following list:
  ${Object.entries(languageNames)
- .map(([key, name]) => ` - ${key}: ${name}`)
- .join('\n')}
+ .map(([key, name]) => ` - ${key}: ${name}`)
+ .join('\n')}

  The added main files will be saved in the problem directory overwrite possible existing files.`,
  )
@@ -118,9 +124,10 @@ The added main files will be saved in the problem directory overwrite possible e
  .option('-m, --model <model>', 'AI model to use', settings.defaultModel)

  .action(async (proglangs, { directory, model }) => {
+ const jutge = await getLoggedInJutgeClient()
  const problem = await newProblem(directory)
  for (const proglang of proglangs) {
- await addMainFile(model, problem, proglang)
+ await addMainFile(jutge, model, problem, proglang)
  }
  })

@@ -138,14 +145,15 @@ generateCmd
  .option('-m, --model <model>', 'AI model to use', settings.defaultModel)

  .action(async ({ efficiency, hard, random, all, directory, model, output }) => {
+ const jutge = await getLoggedInJutgeClient()
  const problem = await newProblem(directory)
  await tui.section('Generating test cases generators', async () => {
- if (all || random) await generateTestCasesGenerator(model, problem, output, 'random')
- if (all || hard) await generateTestCasesGenerator(model, problem, output, 'hard')
- if (all || efficiency) await generateTestCasesGenerator(model, problem, output, 'efficiency')
+ if (all || random) await generateTestCasesGenerator(jutge, model, problem, output, 'random')
+ if (all || hard) await generateTestCasesGenerator(jutge, model, problem, output, 'hard')
+ if (all || efficiency) await generateTestCasesGenerator(jutge, model, problem, output, 'efficiency')
  })
  })
-
+ /*
  generateCmd
  .command('award.png')
  .summary('Generate award.png using JutgeAI')
@@ -216,3 +224,4 @@ The new message will be saved as award.html in the problem directory, overriding
  await writeText(output, message)
  tui.success(`Added ${output}`)
  })
+ */
package/toolkit/index.ts CHANGED
@@ -6,7 +6,6 @@ import { fromError } from 'zod-validation-error'
  import { settings } from '../lib/settings'
  import { packageJson } from '../lib/versions'
  import { aboutCmd } from './about'
- import { aiCmd } from './ai'
  import { verifyCmd } from './verify'
  import { cleanCmd } from './clean'
  import { compilersCmd } from './compilers'
@@ -18,6 +17,7 @@ import { makeCmd } from './make'
  import { quizCmd } from './quiz'
  import { upgradeCmd } from './upgrade'
  import { uploadCmd } from './upload'
+ import { cmdPasscode } from './passcode'
  import { askCmd } from './ask'
  import { convertCmd } from './convert'
  import { stageCmd } from './stage'
@@ -31,6 +31,7 @@ program.addHelpText('after', '\nMore documentation:\n https://github.com/jutge-

  program.addCommand(makeCmd)
  program.addCommand(uploadCmd)
+ program.addCommand(cmdPasscode)
  program.addCommand(cleanCmd)
  program.addCommand(cloneCmd)
  program.addCommand(generateCmd)
@@ -41,7 +42,6 @@ program.addCommand(doctorCmd)
  if (settings.developer) {
  program.addCommand(quizCmd)
  program.addCommand(compilersCmd)
- program.addCommand(aiCmd)
  }
  program.addCommand(configCmd)
  program.addCommand(upgradeCmd)
@@ -0,0 +1,41 @@
+ import { Command } from '@commander-js/extra-typings'
+ import { removePasscodeInDirectory, setPasscodeInDirectory, showPasscodeInDirectory } from '../lib/passcode'
+
+ export const cmdPasscode = new Command('passcode')
+ .summary('Show, set or remove problem passcode')
+ .description(
+ `Show, set or remove the passcode of a problem at Jutge.org.
+
+ These operations require an existing problem.yml file in the problem directory.
+ On success, problem.yml is updated with the new passcode (or empty for remove).`,
+ )
+
+
+ cmdPasscode
+ .command('show')
+ .description('Show the passcode of the problem')
+ .option('-d, --directory <directory>', 'problem directory', '.')
+
+ .action(async ({ directory }) => {
+ await showPasscodeInDirectory(directory)
+ })
+
+ cmdPasscode
+ .command('set')
+ .description('Set or update the passcode of the problem')
+
+ .option('-d, --directory <directory>', 'problem directory', '.')
+ .option('-p, --passcode <passcode>', 'passcode (if omitted, will prompt)')
+
+ .action(async ({ directory, passcode }) => {
+ await setPasscodeInDirectory(directory, passcode)
+ })
+
+ cmdPasscode
+ .command('remove')
+ .description('Remove the passcode of the problem')
+ .option('-d, --directory <directory>', 'problem directory', '.')
+
+ .action(async ({ directory }) => {
+ await removePasscodeInDirectory(directory)
+ })
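Together with the `index.ts` change above (`program.addCommand(cmdPasscode)`), this new file exposes three subcommands under `jtk passcode`. A usage sketch based only on the options declared here; the problem directory and passcode value are illustrative:

```bash
# Show the current passcode of the problem in ./my-problem
jtk passcode show -d my-problem

# Set or update the passcode; if -p is omitted, the command is declared to prompt for it
jtk passcode set -d my-problem -p s3cret

# Remove the passcode (problem.yml is updated with an empty passcode)
jtk passcode remove -d my-problem
```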
package/toolkit/ai.ts DELETED
@@ -1,56 +0,0 @@
- import { Command } from '@commander-js/extra-typings'
- import sharp from 'sharp'
- import z from 'zod'
- import { complete, generateImage, listModels } from '../lib/ai.ts'
- import { settings } from '../lib/settings.ts'
- import tui from '../lib/tui.ts'
- import { convertStringToItsType } from '../lib/utils.ts'
-
- export const aiCmd = new Command('ai')
- .description('Query AI models')
-
- .action(() => {
- aiCmd.help()
- })
-
- aiCmd
- .command('models')
- .description('Show available AI models')
-
- .action(async () => {
- const models = await listModels()
- tui.yaml(models)
- })
-
- aiCmd
- .command('complete')
- .description('Complete a prompt using an AI model')
-
- .argument('<prompt>', 'the user prompt to complete')
- .option('-s, --system-prompt <system>', 'the system prompt to use', 'You are a helpful assistant.')
- .option('-m, --model <model>', 'the AI model to use', settings.defaultModel)
-
- .action(async (prompt, { model, systemPrompt }) => {
- prompt = prompt.trim()
- systemPrompt = systemPrompt.trim()
- const answer = await complete(model, systemPrompt, prompt)
- tui.print(answer)
- })
-
- // TODO: generate with different aspect ratios
- aiCmd
- .command('image')
- .description('Generate a square image using an AI model')
-
- .argument('<prompt>', 'description of the image to generate')
- .option('-m, --model <model>', 'the graphic AI model to use', 'openai/dall-e-3')
- .option('-s, --size <size>', 'the size of the image (in pixels)', '1024')
- .option('-o, --output <path>', 'the output image path', 'image.png')
-
- .action(async (prompt, { model, size, output }) => {
- const sizeInt = z.int().min(16).max(2048).parse(convertStringToItsType(size))
- const image = await generateImage(model, prompt)
- await sharp(image).resize(sizeInt, sizeInt).toFile(output)
- tui.success(`Generated image saved to ${output}`)
- await tui.image(output, 20, 10)
- })