monday_ruby 1.0.0 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (93)
  1. checksums.yaml +4 -4
  2. data/.env +1 -1
  3. data/.rspec +0 -1
  4. data/.rubocop.yml +19 -0
  5. data/.simplecov +1 -0
  6. data/CHANGELOG.md +49 -0
  7. data/CONTRIBUTING.md +165 -0
  8. data/README.md +167 -88
  9. data/docs/.vitepress/config.mjs +255 -0
  10. data/docs/.vitepress/theme/index.js +4 -0
  11. data/docs/.vitepress/theme/style.css +43 -0
  12. data/docs/README.md +80 -0
  13. data/docs/explanation/architecture.md +507 -0
  14. data/docs/explanation/best-practices/errors.md +478 -0
  15. data/docs/explanation/best-practices/performance.md +1084 -0
  16. data/docs/explanation/best-practices/rate-limiting.md +630 -0
  17. data/docs/explanation/best-practices/testing.md +820 -0
  18. data/docs/explanation/column-values.md +857 -0
  19. data/docs/explanation/design.md +795 -0
  20. data/docs/explanation/graphql.md +356 -0
  21. data/docs/explanation/migration/v1.md +808 -0
  22. data/docs/explanation/pagination.md +447 -0
  23. data/docs/guides/advanced/batch.md +1274 -0
  24. data/docs/guides/advanced/complex-queries.md +1114 -0
  25. data/docs/guides/advanced/errors.md +818 -0
  26. data/docs/guides/advanced/pagination.md +934 -0
  27. data/docs/guides/advanced/rate-limiting.md +981 -0
  28. data/docs/guides/authentication.md +286 -0
  29. data/docs/guides/boards/create.md +386 -0
  30. data/docs/guides/boards/delete.md +405 -0
  31. data/docs/guides/boards/duplicate.md +511 -0
  32. data/docs/guides/boards/query.md +530 -0
  33. data/docs/guides/boards/update.md +453 -0
  34. data/docs/guides/columns/create.md +452 -0
  35. data/docs/guides/columns/metadata.md +492 -0
  36. data/docs/guides/columns/query.md +455 -0
  37. data/docs/guides/columns/update-multiple.md +459 -0
  38. data/docs/guides/columns/update-values.md +509 -0
  39. data/docs/guides/files/add-to-column.md +40 -0
  40. data/docs/guides/files/add-to-update.md +37 -0
  41. data/docs/guides/files/clear-column.md +33 -0
  42. data/docs/guides/first-request.md +285 -0
  43. data/docs/guides/folders/manage.md +750 -0
  44. data/docs/guides/groups/items.md +626 -0
  45. data/docs/guides/groups/manage.md +501 -0
  46. data/docs/guides/installation.md +169 -0
  47. data/docs/guides/items/create.md +493 -0
  48. data/docs/guides/items/delete.md +514 -0
  49. data/docs/guides/items/query.md +605 -0
  50. data/docs/guides/items/subitems.md +483 -0
  51. data/docs/guides/items/update.md +699 -0
  52. data/docs/guides/updates/manage.md +619 -0
  53. data/docs/guides/use-cases/dashboard.md +1421 -0
  54. data/docs/guides/use-cases/import.md +1962 -0
  55. data/docs/guides/use-cases/task-management.md +1381 -0
  56. data/docs/guides/workspaces/manage.md +502 -0
  57. data/docs/index.md +69 -0
  58. data/docs/package-lock.json +2468 -0
  59. data/docs/package.json +13 -0
  60. data/docs/reference/client.md +540 -0
  61. data/docs/reference/configuration.md +586 -0
  62. data/docs/reference/errors.md +693 -0
  63. data/docs/reference/resources/account.md +208 -0
  64. data/docs/reference/resources/activity-log.md +369 -0
  65. data/docs/reference/resources/board-view.md +359 -0
  66. data/docs/reference/resources/board.md +393 -0
  67. data/docs/reference/resources/column.md +543 -0
  68. data/docs/reference/resources/file.md +236 -0
  69. data/docs/reference/resources/folder.md +386 -0
  70. data/docs/reference/resources/group.md +507 -0
  71. data/docs/reference/resources/item.md +348 -0
  72. data/docs/reference/resources/subitem.md +267 -0
  73. data/docs/reference/resources/update.md +259 -0
  74. data/docs/reference/resources/workspace.md +213 -0
  75. data/docs/reference/response.md +560 -0
  76. data/docs/tutorial/first-integration.md +713 -0
  77. data/lib/monday/client.rb +41 -2
  78. data/lib/monday/configuration.rb +13 -0
  79. data/lib/monday/deprecation.rb +23 -0
  80. data/lib/monday/error.rb +5 -2
  81. data/lib/monday/request.rb +19 -1
  82. data/lib/monday/resources/base.rb +4 -0
  83. data/lib/monday/resources/board.rb +52 -0
  84. data/lib/monday/resources/column.rb +6 -0
  85. data/lib/monday/resources/file.rb +56 -0
  86. data/lib/monday/resources/folder.rb +55 -0
  87. data/lib/monday/resources/group.rb +66 -0
  88. data/lib/monday/resources/item.rb +62 -0
  89. data/lib/monday/util.rb +33 -1
  90. data/lib/monday/version.rb +1 -1
  91. data/lib/monday_ruby.rb +1 -0
  92. metadata +92 -11
  93. data/monday_ruby.gemspec +0 -39
@@ -0,0 +1,1274 @@
# Batch Operations

Efficiently perform bulk create, update, and delete operations on monday.com boards.

## Overview

Since monday_ruby doesn't provide native batch API endpoints, batch operations involve looping through items with proper rate limiting, error handling, and progress tracking. This guide shows production-ready patterns for bulk operations.

::: warning <span style="display: inline-flex; align-items: center; gap: 6px;"><svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><path d="M10.29 3.86L1.82 18a2 2 0 0 0 1.71 3h16.94a2 2 0 0 0 1.71-3L13.71 3.86a2 2 0 0 0-3.42 0z"></path><line x1="12" y1="9" x2="12" y2="13"></line><line x1="12" y1="17" x2="12.01" y2="17"></line></svg>Rate Limiting Required</span>
Always include delays between requests when performing batch operations to avoid hitting monday.com's API rate limits.
:::

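Every bulk helper in this guide follows the same loop shape: iterate, call the API, sort results into successes and failures, and sleep between requests. As a minimal generic sketch (the `each_with_throttle` helper is illustrative, not part of monday_ruby):

```ruby
# Generic skeleton shared by the bulk helpers below: iterate, call the
# API via the block, collect successes and failures, throttle between calls.
def each_with_throttle(records, delay: 0.3)
  results = { ok: [], failed: [] }
  records.each_with_index do |record, index|
    response = yield(record) # any monday_ruby call returning a Response
    (response.success? ? results[:ok] : results[:failed]) << record
    sleep(delay) unless index == records.length - 1
  end
  results
end
```

Each concrete helper below inlines this shape with its own logging and return structure.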
## Bulk Create Operations

### Create Multiple Items

Create multiple items efficiently with rate limiting:

```ruby
require "monday_ruby"

Monday.configure do |config|
  config.token = ENV["MONDAY_TOKEN"]
end

client = Monday::Client.new

def bulk_create_items(client, board_id, items_data, delay: 0.3)
  results = {
    created: [],
    failed: []
  }

  puts "Creating #{items_data.length} items..."

  items_data.each_with_index do |item_data, index|
    response = client.item.create(
      args: {
        board_id: board_id,
        item_name: item_data[:name],
        column_values: item_data[:columns] || {}
      }
    )

    if response.success?
      item = response.body.dig("data", "create_item")
      results[:created] << item
      puts "[#{index + 1}/#{items_data.length}] ✓ Created: #{item['name']}"
    else
      results[:failed] << { name: item_data[:name], error: response.body }
      puts "[#{index + 1}/#{items_data.length}] ✗ Failed: #{item_data[:name]}"
    end

    # Rate limiting delay
    sleep(delay) unless index == items_data.length - 1
  end

  results
end

# Usage
items = [
  { name: "Marketing Campaign Q1" },
  { name: "Product Launch Planning" },
  { name: "Customer Research" },
  { name: "Website Redesign" },
  { name: "Social Media Strategy" }
]

results = bulk_create_items(client, 1234567890, items)

puts "\n" + "=" * 50
puts "Created: #{results[:created].length}"
puts "Failed: #{results[:failed].length}"
```

**Output:**
```
Creating 5 items...
[1/5] ✓ Created: Marketing Campaign Q1
[2/5] ✓ Created: Product Launch Planning
[3/5] ✓ Created: Customer Research
[4/5] ✓ Created: Website Redesign
[5/5] ✓ Created: Social Media Strategy

==================================================
Created: 5
Failed: 0
```

### Create with Column Values

Bulk create items with column values:

```ruby
require "json"

def bulk_create_with_values(client, board_id, items_data, delay: 0.3)
  results = { created: [], failed: [] }

  items_data.each_with_index do |data, index|
    # ⚠️ Replace column IDs with your board's actual column IDs
    column_values = {
      status: { label: data[:status] || "Not Started" },
      date4: data[:due_date] ? { date: data[:due_date] } : nil,
      text: data[:description] || ""
    }.compact

    response = client.item.create(
      args: {
        board_id: board_id,
        item_name: data[:name],
        column_values: JSON.generate(column_values),
        create_labels_if_missing: true
      }
    )

    if response.success?
      item = response.body.dig("data", "create_item")
      results[:created] << item
      puts "[#{index + 1}/#{items_data.length}] ✓ #{item['name']}"
    else
      results[:failed] << data
      puts "[#{index + 1}/#{items_data.length}] ✗ #{data[:name]}"
    end

    sleep(delay) unless index == items_data.length - 1
  end

  results
end

# Usage
tasks = [
  {
    name: "Design Homepage",
    status: "Working on it",
    due_date: "2024-12-15",
    description: "Create new homepage mockups"
  },
  {
    name: "Implement API",
    status: "Not Started",
    due_date: "2024-12-20",
    description: "Build REST API endpoints"
  },
  {
    name: "Write Tests",
    status: "Not Started",
    due_date: "2024-12-22",
    description: "Unit and integration tests"
  }
]

results = bulk_create_with_values(client, 1234567890, tasks)
puts "\nCreated #{results[:created].length} items with values"
```

### Create Multiple Boards

Create multiple boards efficiently:

```ruby
def bulk_create_boards(client, boards_data, delay: 0.5)
  results = { created: [], failed: [] }

  puts "Creating #{boards_data.length} boards..."

  boards_data.each_with_index do |board_data, index|
    response = client.board.create(
      args: {
        board_name: board_data[:name],
        board_kind: board_data[:kind] || "public",
        workspace_id: board_data[:workspace_id]
      }
    )

    if response.success?
      board = response.body.dig("data", "create_board")
      results[:created] << board
      puts "[#{index + 1}/#{boards_data.length}] ✓ Created: #{board['name']}"
    else
      results[:failed] << board_data
      puts "[#{index + 1}/#{boards_data.length}] ✗ Failed: #{board_data[:name]}"
    end

    sleep(delay) unless index == boards_data.length - 1
  end

  results
end

# Usage
boards = [
  { name: "Marketing Q1 2024", workspace_id: 12345 },
  { name: "Product Roadmap", workspace_id: 12345 },
  { name: "Customer Feedback", workspace_id: 12345 }
]

results = bulk_create_boards(client, boards)
puts "\nCreated #{results[:created].length} boards"
```

### Create Multiple Columns

Add multiple columns to a board:

```ruby
def bulk_create_columns(client, board_id, columns_data, delay: 0.3)
  results = { created: [], failed: [] }

  columns_data.each_with_index do |col_data, index|
    response = client.column.create(
      args: {
        board_id: board_id,
        title: col_data[:title],
        column_type: col_data[:type]
      }
    )

    if response.success?
      column = response.body.dig("data", "create_column")
      results[:created] << column
      puts "[#{index + 1}/#{columns_data.length}] ✓ Created: #{column['title']}"
    else
      results[:failed] << col_data
      puts "[#{index + 1}/#{columns_data.length}] ✗ Failed: #{col_data[:title]}"
    end

    sleep(delay) unless index == columns_data.length - 1
  end

  results
end

# Usage
columns = [
  { title: "Priority", type: "status" },
  { title: "Assignee", type: "people" },
  { title: "Due Date", type: "date" },
  { title: "Progress", type: "numbers" },
  { title: "Notes", type: "text" }
]

results = bulk_create_columns(client, 1234567890, columns)
puts "\nCreated #{results[:created].length} columns"
```

## Bulk Update Operations

### Update Multiple Items

Update multiple items with the same values:

```ruby
require "date"

def bulk_update_items(client, board_id, item_ids, column_values, delay: 0.3)
  results = { updated: [], failed: [] }

  puts "Updating #{item_ids.length} items..."

  item_ids.each_with_index do |item_id, index|
    response = client.column.change_multiple_values(
      args: {
        board_id: board_id,
        item_id: item_id,
        column_values: JSON.generate(column_values)
      }
    )

    if response.success?
      item = response.body.dig("data", "change_multiple_column_values")
      results[:updated] << item
      puts "[#{index + 1}/#{item_ids.length}] ✓ Updated: #{item['name']}"
    else
      results[:failed] << item_id
      puts "[#{index + 1}/#{item_ids.length}] ✗ Failed: #{item_id}"
    end

    sleep(delay) unless index == item_ids.length - 1
  end

  results
end

# Usage: Mark all items as complete
item_ids = [987654321, 987654322, 987654323, 987654324]

# ⚠️ Replace column IDs with your board's actual column IDs
updates = {
  status: { label: "Done" },
  date4: { date: Date.today.to_s },
  text: "Bulk completed"
}

results = bulk_update_items(client, 1234567890, item_ids, updates)
puts "\nUpdated #{results[:updated].length} items"
```

### Update with Different Values

Update each item with unique values:

```ruby
def bulk_update_different(client, board_id, updates_data, delay: 0.3)
  results = { updated: [], failed: [] }

  updates_data.each_with_index do |update, index|
    response = client.column.change_multiple_values(
      args: {
        board_id: board_id,
        item_id: update[:item_id],
        column_values: JSON.generate(update[:values])
      }
    )

    if response.success?
      item = response.body.dig("data", "change_multiple_column_values")
      results[:updated] << item
      puts "[#{index + 1}/#{updates_data.length}] ✓ #{item['name']}"
    else
      results[:failed] << update
      puts "[#{index + 1}/#{updates_data.length}] ✗ Item #{update[:item_id]}"
    end

    sleep(delay) unless index == updates_data.length - 1
  end

  results
end

# Usage: Update items with different statuses
# ⚠️ Replace column IDs with your board's actual column IDs
updates = [
  {
    item_id: 987654321,
    values: { status: { label: "Done" }, text: "Completed" }
  },
  {
    item_id: 987654322,
    values: { status: { label: "Working on it" }, text: "In progress" }
  },
  {
    item_id: 987654323,
    values: { status: { label: "Stuck" }, text: "Blocked by dependencies" }
  }
]

results = bulk_update_different(client, 1234567890, updates)
```

### Handle Partial Failures

Gracefully handle failures during bulk updates:

```ruby
def bulk_update_with_retry(client, board_id, item_ids, column_values,
                           delay: 0.3, max_retries: 2)
  results = { updated: [], failed: [], retried: [] }
  failed_items = []

  # First pass
  item_ids.each_with_index do |item_id, index|
    response = client.column.change_multiple_values(
      args: {
        board_id: board_id,
        item_id: item_id,
        column_values: JSON.generate(column_values)
      }
    )

    if response.success?
      item = response.body.dig("data", "change_multiple_column_values")
      results[:updated] << item
      puts "[#{index + 1}/#{item_ids.length}] ✓ #{item['name']}"
    else
      failed_items << item_id
      puts "[#{index + 1}/#{item_ids.length}] ✗ Failed: #{item_id} (will retry)"
    end

    sleep(delay) unless index == item_ids.length - 1
  end

  # Retry failed items
  retry_count = 0
  while failed_items.any? && retry_count < max_retries
    retry_count += 1
    puts "\nRetry attempt #{retry_count}/#{max_retries}..."

    current_failures = failed_items.dup
    failed_items.clear

    current_failures.each_with_index do |item_id, index|
      response = client.column.change_multiple_values(
        args: {
          board_id: board_id,
          item_id: item_id,
          column_values: JSON.generate(column_values)
        }
      )

      if response.success?
        item = response.body.dig("data", "change_multiple_column_values")
        results[:updated] << item
        results[:retried] << item_id
        puts "[Retry #{index + 1}/#{current_failures.length}] ✓ #{item['name']}"
      else
        failed_items << item_id
        puts "[Retry #{index + 1}/#{current_failures.length}] ✗ #{item_id}"
      end

      sleep(delay * 2) unless index == current_failures.length - 1
    end
  end

  results[:failed] = failed_items
  results
end

# Usage
item_ids = [987654321, 987654322, 987654323, 987654324, 987654325]
values = { status: { label: "Done" } }

results = bulk_update_with_retry(client, 1234567890, item_ids, values)

puts "\n" + "=" * 50
puts "Updated: #{results[:updated].length}"
puts "Retried successfully: #{results[:retried].length}"
puts "Failed after retries: #{results[:failed].length}"
```

## Bulk Delete/Archive

### Archive Multiple Items

Archive items in bulk with confirmation:

```ruby
def bulk_archive_items(client, item_ids, delay: 0.3, confirm: true)
  if confirm
    print "Archive #{item_ids.length} items? (yes/no): "
    return { archived: [], skipped: item_ids } unless gets.chomp.downcase == "yes"
  end

  results = { archived: [], failed: [] }

  puts "Archiving #{item_ids.length} items..."

  item_ids.each_with_index do |item_id, index|
    response = client.item.archive(item_id)

    if response.success?
      archived = response.body.dig("data", "archive_item")
      results[:archived] << archived
      puts "[#{index + 1}/#{item_ids.length}] ✓ Archived: #{archived['id']}"
    else
      results[:failed] << item_id
      puts "[#{index + 1}/#{item_ids.length}] ✗ Failed: #{item_id}"
    end

    sleep(delay) unless index == item_ids.length - 1
  end

  results
end

# Usage
item_ids = [987654321, 987654322, 987654323]
results = bulk_archive_items(client, item_ids)

puts "\nArchived #{results[:archived].length} items"
```

### Delete Multiple Items

Safely delete items with double confirmation:

```ruby
def bulk_delete_items(client, item_ids, delay: 0.3)
  puts "⚠️ WARNING: You are about to DELETE #{item_ids.length} items."
  puts "This action CANNOT be undone!"
  print "\nType 'DELETE' to confirm: "

  return { deleted: [], cancelled: item_ids } unless gets.chomp == "DELETE"

  print "Are you absolutely sure? (yes/no): "
  return { deleted: [], cancelled: item_ids } unless gets.chomp.downcase == "yes"

  results = { deleted: [], failed: [] }

  puts "\nDeleting #{item_ids.length} items..."

  item_ids.each_with_index do |item_id, index|
    response = client.item.delete(item_id)

    if response.success?
      deleted = response.body.dig("data", "delete_item")
      results[:deleted] << deleted
      puts "[#{index + 1}/#{item_ids.length}] ✓ Deleted: #{deleted['id']}"
    else
      results[:failed] << item_id
      puts "[#{index + 1}/#{item_ids.length}] ✗ Failed: #{item_id}"
    end

    sleep(delay) unless index == item_ids.length - 1
  end

  results
end

# Usage - requires explicit confirmation
item_ids = [987654321, 987654322]
results = bulk_delete_items(client, item_ids)
```

### Archive Items by Status

Archive items matching specific criteria:

```ruby
def archive_by_status(client, board_id, status_value, delay: 0.3)
  # Fetch items with the target status
  # ⚠️ Replace 'status' with your actual status column ID
  response = client.item.page_by_column_values(
    board_id: board_id,
    columns: [
      { column_id: "status", column_values: [status_value] }
    ],
    limit: 500
  )

  return { archived: [], failed: [] } unless response.success?

  items_page = response.body.dig("data", "items_page_by_column_values")
  items = items_page["items"]

  puts "Found #{items.length} items with status '#{status_value}'"
  print "Archive all? (yes/no): "

  return { archived: [], skipped: items.length } unless gets.chomp.downcase == "yes"

  results = { archived: [], failed: [] }

  items.each_with_index do |item, index|
    response = client.item.archive(item["id"])

    if response.success?
      results[:archived] << item
      puts "[#{index + 1}/#{items.length}] ✓ Archived: #{item['name']}"
    else
      results[:failed] << item
      puts "[#{index + 1}/#{items.length}] ✗ Failed: #{item['name']}"
    end

    sleep(delay) unless index == items.length - 1
  end

  results
end

# Usage: Archive all completed items
results = archive_by_status(client, 1234567890, "Done")
puts "\nArchived #{results[:archived].length} completed items"
```

## Process Large Datasets

### Paginate Through All Items

Process all items on a board using cursor pagination:

```ruby
def process_all_items(client, board_id, delay: 0.3)
  all_items = []
  cursor = nil
  page = 1

  loop do
    puts "Fetching page #{page}..."

    response = client.board.items_page(
      board_ids: board_id,
      cursor: cursor,
      limit: 100
    )

    break unless response.success?

    board = response.body.dig("data", "boards", 0)
    break unless board

    items_page = board["items_page"]
    items = items_page["items"]

    break if items.empty?

    all_items.concat(items)
    puts "  Fetched #{items.length} items (total: #{all_items.length})"

    cursor = items_page["cursor"]
    break if cursor.nil?

    page += 1
    sleep(delay)
  end

  all_items
end

# Usage
all_items = process_all_items(client, 1234567890)
puts "\nTotal items fetched: #{all_items.length}"
```

### Process in Batches

Process large datasets in manageable batches:

```ruby
def process_in_batches(client, board_id, batch_size: 50, delay: 0.5)
  all_items = []
  cursor = nil
  batch_num = 1

  loop do
    response = client.board.items_page(
      board_ids: board_id,
      cursor: cursor,
      limit: batch_size
    )

    break unless response.success?

    board = response.body.dig("data", "boards", 0)
    break unless board

    items_page = board["items_page"]
    items = items_page["items"]
    break if items.empty?

    # Process this batch
    puts "\nProcessing batch #{batch_num} (#{items.length} items)..."

    items.each_with_index do |item, index|
      # Your processing logic here
      puts "  [#{index + 1}/#{items.length}] Processing: #{item['name']}"

      # Example: Update each item
      # response = client.column.change_value(...)
    end

    all_items.concat(items)
    cursor = items_page["cursor"]
    break if cursor.nil?

    batch_num += 1
    puts "\nWaiting before next batch..."
    sleep(delay)
  end

  puts "\n" + "=" * 50
  puts "Processed #{all_items.length} items in #{batch_num} batches"

  all_items
end

# Usage
process_in_batches(client, 1234567890, batch_size: 25, delay: 1.0)
```

### Progress Tracking

Track progress for long-running operations:

```ruby
def bulk_operation_with_progress(client, board_id, item_ids,
                                 column_values, delay: 0.3)
  results = {
    updated: [],
    failed: [],
    timing: {}
  }

  total = item_ids.length
  start_time = Time.now

  puts "\n" + "=" * 60
  puts "Starting bulk update of #{total} items"
  puts "=" * 60

  item_ids.each_with_index do |item_id, index|
    item_start = Time.now

    response = client.column.change_multiple_values(
      args: {
        board_id: board_id,
        item_id: item_id,
        column_values: JSON.generate(column_values)
      }
    )

    elapsed = Time.now - item_start

    if response.success?
      item = response.body.dig("data", "change_multiple_column_values")
      results[:updated] << item
      status = "✓"
    else
      results[:failed] << item_id
      status = "✗"
    end

    # Calculate progress
    progress = ((index + 1).to_f / total * 100).round(1)
    elapsed_total = Time.now - start_time
    avg_time = elapsed_total / (index + 1)
    remaining = avg_time * (total - index - 1)

    # Progress bar
    bar_length = 30
    filled = (progress / 100 * bar_length).round
    bar = "█" * filled + "░" * (bar_length - filled)

    puts "[#{bar}] #{progress}% #{status} Item #{index + 1}/#{total}"
    puts "  Time: #{elapsed.round(2)}s | Avg: #{avg_time.round(2)}s | " \
         "ETA: #{remaining.round(0)}s"

    sleep(delay) unless index == total - 1
  end

  total_time = Time.now - start_time
  results[:timing] = {
    total: total_time,
    average: total_time / total,
    items_per_second: total / total_time
  }

  puts "\n" + "=" * 60
  puts "COMPLETED"
  puts "=" * 60
  puts "Updated: #{results[:updated].length}"
  puts "Failed: #{results[:failed].length}"
  puts "Total time: #{total_time.round(2)}s"
  puts "Average: #{results[:timing][:average].round(2)}s per item"
  puts "Speed: #{results[:timing][:items_per_second].round(2)} items/second"
  puts "=" * 60

  results
end

# Usage
item_ids = [987654321, 987654322, 987654323, 987654324, 987654325]
values = { status: { label: "Done" } }

results = bulk_operation_with_progress(client, 1234567890, item_ids, values)
```

### Error Recovery

Save progress and resume after failures:

```ruby
require "json"

def bulk_update_with_checkpoint(client, board_id, item_ids,
                                column_values, checkpoint_file: "checkpoint.json",
                                delay: 0.3)
  # Load checkpoint if exists
  completed = []
  if File.exist?(checkpoint_file)
    checkpoint = JSON.parse(File.read(checkpoint_file))
    completed = checkpoint["completed"] || []
    puts "Resuming from checkpoint: #{completed.length} items already processed"
  end

  # Filter out already completed items
  remaining = item_ids - completed

  if remaining.empty?
    puts "All items already processed!"
    return { updated: completed, failed: [] }
  end

  puts "Processing #{remaining.length} remaining items..."

  results = { updated: completed.dup, failed: [] }

  remaining.each_with_index do |item_id, index|
    response = client.column.change_multiple_values(
      args: {
        board_id: board_id,
        item_id: item_id,
        column_values: JSON.generate(column_values)
      }
    )

    if response.success?
      results[:updated] << item_id
      puts "[#{index + 1}/#{remaining.length}] ✓ Updated: #{item_id}"

      # Save checkpoint after each success
      File.write(checkpoint_file, JSON.generate({
        completed: results[:updated],
        last_updated: Time.now.to_s
      }))
    else
      results[:failed] << item_id
      puts "[#{index + 1}/#{remaining.length}] ✗ Failed: #{item_id}"
    end

    sleep(delay) unless index == remaining.length - 1
  end

  # Clean up checkpoint file when done
  File.delete(checkpoint_file) if File.exist?(checkpoint_file)

  results
end

# Usage
item_ids = (1..100).map { |i| 987654000 + i }
values = { status: { label: "Processed" } }

results = bulk_update_with_checkpoint(
  client,
  1234567890,
  item_ids,
  values,
  checkpoint_file: "bulk_update_checkpoint.json"
)
```

## Best Practices

### Rate Limiting Strategy

Implement smart rate limiting:

```ruby
class RateLimiter
  def initialize(requests_per_second: 2)
    @delay = 1.0 / requests_per_second
    @last_request = Time.now - @delay
  end

  def throttle
    elapsed = Time.now - @last_request
    if elapsed < @delay
      sleep(@delay - elapsed)
    end
    @last_request = Time.now
  end
end

def bulk_update_with_rate_limit(client, board_id, item_ids, column_values)
  limiter = RateLimiter.new(requests_per_second: 3)
  results = { updated: [], failed: [] }

  item_ids.each_with_index do |item_id, index|
    limiter.throttle

    response = client.column.change_multiple_values(
      args: {
        board_id: board_id,
        item_id: item_id,
        column_values: JSON.generate(column_values)
      }
    )

    if response.success?
      item = response.body.dig("data", "change_multiple_column_values")
      results[:updated] << item
      puts "[#{index + 1}/#{item_ids.length}] ✓ #{item['name']}"
    else
      results[:failed] << item_id
      puts "[#{index + 1}/#{item_ids.length}] ✗ #{item_id}"
    end
  end

  results
end

# Usage
item_ids = [987654321, 987654322, 987654323]
values = { status: { label: "Done" } }

results = bulk_update_with_rate_limit(client, 1234567890, item_ids, values)
```
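The throttler above is proactive; it doesn't react if the API still rejects a request. As a complementary sketch (assuming the monday_ruby response object exposes the HTTP status code as `response.status` — verify against your version), you can retry with exponential backoff when a request comes back with HTTP 429:

```ruby
# Retry a request with exponential backoff on HTTP 429.
# Returns on success, on a non-429 failure, or once attempts are exhausted.
def with_backoff(max_attempts: 4, base_delay: 1.0)
  attempt = 0
  loop do
    attempt += 1
    response = yield
    return response unless response.status == 429 && attempt < max_attempts

    wait = base_delay * (2**(attempt - 1)) # 1s, 2s, 4s, ...
    puts "Rate limited (429). Waiting #{wait}s before retry #{attempt}..."
    sleep(wait)
  end
end

# Usage (hypothetical):
# response = with_backoff do
#   client.column.change_multiple_values(args: { ... })
# end
```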
890
+
891
+ ### Transaction-like Patterns
892
+
893
+ Implement rollback for failed operations:
894
+
```ruby
def bulk_update_with_rollback(client, board_id, item_ids, new_values, delay: 0.3)
  # First, get current values
  puts "Backing up current values..."
  backups = {}

  item_ids.each do |item_id|
    response = client.item.query(
      args: { ids: [item_id] },
      select: ["id", "name", { column_values: ["id", "value"] }]
    )

    if response.success?
      item = response.body.dig("data", "items", 0)
      backups[item_id] = item["column_values"]
    end

    sleep(delay * 0.5)
  end

  # Perform updates
  puts "\nUpdating items..."
  results = { updated: [], failed: [] }

  item_ids.each_with_index do |item_id, index|
    response = client.column.change_multiple_values(
      args: {
        board_id: board_id,
        item_id: item_id,
        column_values: JSON.generate(new_values)
      }
    )

    if response.success?
      results[:updated] << item_id
      puts "[#{index + 1}/#{item_ids.length}] ✓ Updated: #{item_id}"
    else
      results[:failed] << item_id
      puts "[#{index + 1}/#{item_ids.length}] ✗ Failed: #{item_id}"

      # Critical failure - rollback
      if results[:failed].length > item_ids.length * 0.3
        puts "\n⚠️ Too many failures (#{results[:failed].length}). Rolling back..."

        results[:updated].each_with_index do |updated_id, rb_index|
          # Restore original values
          original = backups[updated_id]
          next unless original

          restore_values = {}
          original.each do |col|
            restore_values[col["id"]] = JSON.parse(col["value"]) rescue nil
          end

          client.column.change_multiple_values(
            args: {
              board_id: board_id,
              item_id: updated_id,
              column_values: JSON.generate(restore_values.compact)
            }
          )

          puts "[#{rb_index + 1}/#{results[:updated].length}] ↶ Rolled back: #{updated_id}"
          sleep(delay)
        end

        return { updated: [], failed: item_ids, rolled_back: true }
      end
    end

    sleep(delay) unless index == item_ids.length - 1
  end

  results
end

# Usage
item_ids = [987654321, 987654322, 987654323]
values = { status: { label: "Done" } }

results = bulk_update_with_rollback(client, 1234567890, item_ids, values)
```
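
The restore step hinges on converting the backed-up `column_values` array back into the hash shape that `change_multiple_values` expects. A minimal sketch of that transform on plain data (no API calls; the column IDs and values here are illustrative, not from a real board):

```ruby
require "json"

# Illustrative data only: the item query returns "column_values" as an array
# of hashes with an "id" and a JSON-encoded "value" (nil when the column is empty).
backup = [
  { "id" => "status", "value" => '{"label":"Working on it"}' },
  { "id" => "text",   "value" => '"Initial note"' },
  { "id" => "date4",  "value" => nil }
]

# Rebuild the hash change_multiple_values expects, skipping empty columns.
restore_values = backup.each_with_object({}) do |col, acc|
  parsed = col["value"] && JSON.parse(col["value"])
  acc[col["id"]] = parsed unless parsed.nil?
end

puts JSON.generate(restore_values)
# => {"status":{"label":"Working on it"},"text":"Initial note"}
```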

### Optimize API Calls

Minimize requests by using `change_multiple_values`:

```ruby
# ❌ BAD: Multiple API calls per item
def update_item_inefficient(client, board_id, item_id)
  client.column.change_value(
    args: { board_id: board_id, item_id: item_id, column_id: "status", value: '{"label":"Done"}' }
  )

  client.column.change_value(
    args: { board_id: board_id, item_id: item_id, column_id: "date4", value: '{"date":"2024-12-31"}' }
  )

  client.column.change_value(
    args: { board_id: board_id, item_id: item_id, column_id: "text", value: "Completed" }
  )

  # 3 API calls per item!
end

# ✅ GOOD: Single API call per item
def update_item_efficient(client, board_id, item_id)
  values = {
    status: { label: "Done" },
    date4: { date: "2024-12-31" },
    text: "Completed"
  }

  client.column.change_multiple_values(
    args: {
      board_id: board_id,
      item_id: item_id,
      column_values: JSON.generate(values)
    }
  )

  # 1 API call per item - 3x faster!
end
```
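
The single-call version works because the mutation takes its `column_values` argument as one JSON-encoded string, so any number of column updates collapses into one serialization. A standalone sketch of the payload (column IDs are illustrative):

```ruby
require "json"

# Build every column update as a plain Ruby hash...
values = {
  status: { label: "Done" },
  date4: { date: "2024-12-31" },
  text: "Completed"
}

# ...and serialize once; symbol keys become JSON string keys.
payload = JSON.generate(values)
puts payload
# => {"status":{"label":"Done"},"date4":{"date":"2024-12-31"},"text":"Completed"}
```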

### Batch Size Considerations

Choose a workable batch size by starting large and falling back on failure:

```ruby
def adaptive_batch_processing(client, board_id, delay: 0.5)
  batch_sizes = [100, 50, 25] # Try larger batches first
  cursor = nil
  total_processed = 0

  batch_sizes.each do |batch_size|
    puts "Trying batch size: #{batch_size}"

    begin
      response = client.board.items_page(
        board_ids: board_id,
        cursor: cursor,
        limit: batch_size
      )

      if response.success?
        board = response.body.dig("data", "boards", 0)
        items_page = board["items_page"]
        items = items_page["items"]

        puts "✓ Successfully fetched #{items.length} items"
        puts "Using batch size #{batch_size} for remaining pages"

        # Continue with this batch size
        total_processed += items.length
        cursor = items_page["cursor"]

        while cursor
          sleep(delay)

          response = client.board.items_page(
            board_ids: board_id,
            cursor: cursor,
            limit: batch_size
          )

          break unless response.success?

          board = response.body.dig("data", "boards", 0)
          items_page = board["items_page"]
          items = items_page["items"]

          break if items.empty?

          total_processed += items.length
          cursor = items_page["cursor"]

          puts "Processed #{total_processed} items so far..."
        end

        break
      end
    rescue => e
      puts "✗ Batch size #{batch_size} failed: #{e.message}"
      next
    end
  end

  puts "\nTotal processed: #{total_processed} items"
end

# Usage
adaptive_batch_processing(client, 1234567890)
```
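
Batch size interacts directly with the rate-limit delay: fewer, larger pages mean fewer sleeps. A back-of-the-envelope estimator for the sleep-time floor, mirroring the "sleep between pages" pattern above (illustrative arithmetic only; network and processing time are ignored):

```ruby
# requests = ceil(items / batch_size); every request after the first
# is preceded by a `delay`-second sleep.
def estimated_floor_seconds(item_count, batch_size, delay)
  requests = (item_count.to_f / batch_size).ceil
  [requests - 1, 0].max * delay
end

puts estimated_floor_seconds(1000, 100, 0.5) # => 4.5
puts estimated_floor_seconds(1000, 25, 0.5)  # => 19.5
```

Quadrupling the batch size here cuts the mandatory sleep time by more than 4x, which is why the adaptive approach tries large pages first.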

## Complete Example

A production-ready bulk operation that applies all of the practices above:

```ruby
require "monday_ruby"
require "dotenv/load"
require "json"

Monday.configure do |config|
  config.token = ENV["MONDAY_TOKEN"]
end

client = Monday::Client.new

class BulkProcessor
  attr_reader :client, :results

  def initialize(client, delay: 0.3, max_retries: 2)
    @client = client
    @delay = delay
    @max_retries = max_retries
    @results = {
      successful: [],
      failed: [],
      retried: [],
      total_time: 0
    }
  end

  def bulk_update_items(board_id, updates_data)
    puts "\n" + "=" * 60
    puts "Bulk Update Operation"
    puts "=" * 60
    puts "Items to update: #{updates_data.length}"
    puts "Rate limit delay: #{@delay}s"
    puts "Max retries: #{@max_retries}"
    puts "=" * 60 + "\n"

    start_time = Time.now
    failed_updates = []

    # First pass
    updates_data.each_with_index do |update, index|
      process_update(board_id, update, index, updates_data.length) do |success, data|
        if success
          @results[:successful] << data
        else
          failed_updates << update
        end
      end

      sleep(@delay) unless index == updates_data.length - 1
    end

    # Retry failed updates
    retry_count = 0
    while failed_updates.any? && retry_count < @max_retries
      retry_count += 1
      puts "\n" + "-" * 60
      puts "Retry Attempt #{retry_count}/#{@max_retries}"
      puts "-" * 60

      current_failures = failed_updates.dup
      failed_updates.clear

      current_failures.each_with_index do |update, index|
        process_update(board_id, update, index, current_failures.length, retrying: true) do |success, data|
          if success
            @results[:successful] << data
            @results[:retried] << data
          else
            failed_updates << update
          end
        end

        sleep(@delay * 1.5) unless index == current_failures.length - 1
      end
    end

    @results[:failed] = failed_updates
    @results[:total_time] = Time.now - start_time

    print_summary
    @results
  end

  private

  # The keyword is named `retrying` because `retry` is a reserved word in Ruby
  # and cannot be read as a local variable inside the method body.
  def process_update(board_id, update, index, total, retrying: false)
    prefix = retrying ? " [Retry #{index + 1}/#{total}]" : "[#{index + 1}/#{total}]"

    response = @client.column.change_multiple_values(
      args: {
        board_id: board_id,
        item_id: update[:item_id],
        column_values: JSON.generate(update[:values])
      }
    )

    if response.success?
      item = response.body.dig("data", "change_multiple_column_values")
      puts "#{prefix} ✓ Updated: #{item['name']}"
      yield(true, item)
    else
      puts "#{prefix} ✗ Failed: Item #{update[:item_id]}"
      yield(false, update)
    end
  rescue => e
    puts "#{prefix} ✗ Error: #{e.message}"
    yield(false, update)
  end

  def print_summary
    puts "\n" + "=" * 60
    puts "SUMMARY"
    puts "=" * 60
    puts "Successful: #{@results[:successful].length}"
    puts "Retried & succeeded: #{@results[:retried].length}"
    puts "Failed: #{@results[:failed].length}"
    puts "Total time: #{@results[:total_time].round(2)}s"

    if @results[:successful].any?
      avg_time = @results[:total_time] / @results[:successful].length
      puts "Average time per item: #{avg_time.round(2)}s"
    end

    puts "=" * 60 + "\n"
  end
end

# Usage
processor = BulkProcessor.new(client, delay: 0.3, max_retries: 2)

# ⚠️ Replace with your actual board ID, item IDs, and column IDs
updates = [
  {
    item_id: 987654321,
    values: {
      status: { label: "Done" },
      date4: { date: "2024-12-31" },
      text: "Completed successfully"
    }
  },
  {
    item_id: 987654322,
    values: {
      status: { label: "Working on it" },
      date4: { date: "2024-12-15" },
      text: "In progress"
    }
  },
  {
    item_id: 987654323,
    values: {
      status: { label: "Done" },
      date4: { date: "2024-12-20" },
      text: "Review complete"
    }
  }
]

results = processor.bulk_update_items(1234567890, updates)

# Access results
puts "\nSuccessful updates:"
results[:successful].each do |item|
  puts "  • #{item['name']} (ID: #{item['id']})"
end

if results[:failed].any?
  puts "\nFailed updates:"
  results[:failed].each do |update|
    puts "  • Item ID: #{update[:item_id]}"
  end
end
```

## Next Steps

- [Advanced pagination](/guides/advanced/pagination)
- [Error handling patterns](/guides/advanced/errors)
- [Update multiple columns](/guides/columns/update-multiple)
- [Query items efficiently](/guides/items/query)